Preface

My Playbook

1. Jenkins

1.1. Installation

Install
# Start behind a proxy first: Jenkins initialization installs many plugins from the public internet. Once the plugins are installed, drop the proxy and start normally
java -jar -DsocksProxyHost=192.168.1.107 -DsocksProxyPort=1090 jenkins.war
Start
# Run in the background
nohup java -jar jenkins.war 2>&1 &
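The two launch modes above differ only in the proxy flags; a small helper (the function name is my own, not part of Jenkins) can assemble the command line:

```shell
# Assemble the java command used to launch Jenkins, optionally adding
# SOCKS proxy flags for the initial plugin downloads.
build_jenkins_cmd() {
    local proxy_host="$1" proxy_port="$2"
    local cmd="java"
    if [ -n "$proxy_host" ]; then
        cmd="$cmd -DsocksProxyHost=$proxy_host -DsocksProxyPort=$proxy_port"
    fi
    printf '%s\n' "$cmd -jar jenkins.war"
}

build_jenkins_cmd 192.168.1.107 1090   # first boot, behind the proxy
build_jenkins_cmd "" ""                # normal boot, no proxy
```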

1.2. Plugins

1. git parameter

2. Gitlab

2.1. Installation

Documentation link

curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | bash

EXTERNAL_URL="http://192.168.43.65:8000"  yum install -y gitlab-ce
Downloads from the official source are slow; the Tsinghua mirror can be used instead: mirror link
Reconfigure (regenerate the default configuration) and restart the services
gitlab-ctl reconfigure && gitlab-ctl restart

After installation, open: http://192.168.43.65:8000

cat /etc/gitlab/initial_root_password

The default login user is root; you will be prompted to change the password on first login

2.2. Reverse-Proxying GitLab with Nginx

After installing GitLab, expose it to the outside world through a standalone Nginx reverse proxy. Because the Nginx-facing port differs from GitLab's internal port, the GitLab configuration must be adjusted for everything to keep working

2.2.1. GitLab Settings

# Disable the default setting
sed -i 's/^external_url /#external_url /g' /etc/gitlab/gitlab.rb

cat << EOF >> /etc/gitlab/gitlab.rb
# The externally visible GitLab URL
external_url 'https://git.xxx.com:2035'

# Git SSH clone parameters
gitlab_rails['gitlab_ssh_host'] = 'git.xxx.com'

# Disable automatic issuance of free HTTPS certificates
letsencrypt['enable'] = false

# With certificate issuance disabled, the bundled Nginx must be enabled explicitly
nginx['enable'] = true
# Disable the bundled Nginx's HTTPS handling
nginx['redirect_http_to_https'] = false
nginx['listen_https'] = false

# Bundled Nginx listen configuration
nginx['listen_addresses'] = ['127.0.0.1']
nginx['listen_port'] = 8000

git_data_dirs({
    "default" => {
        "path" => "/data/gitlab_data"
    }
})
EOF

# Regenerate the configuration and restart
gitlab-ctl reconfigure && gitlab-ctl restart

# Confirm the bundled Nginx is listening
ss -antpl|grep 8000

2.2.2. Nginx Server Configuration

cat <<EOF> /etc/nginx/conf.d/gitlab.conf
server {
    listen 2035 ssl http2;
    server_name git.xxx.com;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /etc/letsencrypt/live/xxx.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxx.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /etc/letsencrypt/live/xxx.com/dhparam.pem;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    # Rewrite URLs so that pages such as README.md load correctly
    sub_filter 'http://git.xxx.com'  'https://git.xxx.com';
    sub_filter_once off;
    sub_filter_last_modified on;
    # text/html is rewritten by default; also rewrite application/json response bodies
    sub_filter_types application/json;

    location / {
        auth_basic           "Administrator’s Area";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Required: without it, the 302 after creating a project redirects to HTTP port 80
        proxy_set_header Host git.xxx.com:2035;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-Proto https;

        proxy_pass http://127.0.0.1:8000;
    }

    location ~ ^/\.git/? {
        return 404;
    }
}
EOF
Apply the Nginx configuration
nginx -t && nginx -s reload

2.2.3. Setting the HTTP Clone Parameters

Log in to GitLab with an administrator account and navigate to:

Admin Area > Settings > General > Visibility and access controls

Find Custom Git clone URL for HTTP(S) and fill it in as appropriate, e.g. https://git.xxx.com:2035

2.2.4. Finally

External GitLab URL: https://git.xxx.com

SSH clone URL: ssh://git@git.xxx.com/<username>/<project>.git

2.3. GitLab Email

Configuration
cat << EOF >> /etc/gitlab/gitlab.rb
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.qq.com"
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = "mailbox@qq.com"
gitlab_rails['smtp_password'] = "authorization code returned when enabling SMTP"
gitlab_rails['smtp_domain'] = "qq.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = true

user['git_user_email'] = "mailbox@qq.com"
gitlab_rails['gitlab_email_from'] = 'mailbox@qq.com'
EOF

# Regenerate the configuration and restart
gitlab-ctl reconfigure && gitlab-ctl restart
Test
Run gitlab-rails console to test mail delivery; once inside the console, execute:

Notify.test_email('xx@qq.com', 'title', 'content').deliver_now

2.4. Common Commands

# Start
gitlab-ctl start

# Restart
gitlab-ctl restart

# Stop
gitlab-ctl stop

# Tail all logs; press Ctrl-C to exit
gitlab-ctl tail

# Tail the logs under one subdirectory of /var/log/gitlab
gitlab-ctl tail gitlab-rails

# Tail one specific log file
gitlab-ctl tail nginx/gitlab_error.log

# Regenerate the configuration
gitlab-ctl reconfigure

# Check the GitLab configuration
gitlab-rake gitlab:check SANITIZE=true --trace

# Show the version
cat /opt/gitlab/embedded/service/gitlab-rails/VERSION

3. Maven

3.1. Installation

Download
wget -c https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz
Extract
tar xf apache-maven-3.8.6-bin.tar.gz -C /usr/local

mv /usr/local/apache-maven-3.8.6 /usr/local/maven
Put mvn on the PATH
# symlink into a directory that is already on the PATH
ln -s /usr/local/maven/bin/mvn /usr/local/bin/

# check the version
mvn -v
Configure a mirror
mkdir -p ~/.m2
cat << EOF > ~/.m2/settings.xml
<settings>
  <mirrors>
    <mirror>
      <id>aliyun</id>
      <name>Aliyun Central</name>
      <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>
EOF

3.2. Usage

Common commands
3.2.1. Build Lifecycle
mvn clean
Remove build artifacts

mvn compile
Compile the source code of the project

mvn validate
Validate the project is correct and all necessary information is available

mvn test
Run unit tests: test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed

mvn verify
Run any checks on results of integration tests to ensure quality criteria are met

mvn install
Install the package into the local repository, for use as a dependency in other projects locally

mvn deploy
Done in the build environment, copies the final package to the remote repository for sharing with other developers and projects

mvn package
Take the compiled code and package it in its distributable format, such as a JAR

https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html

mvn compile war:war
Build a WAR file.

mvn war:exploded
Create an exploded webapp in a specified directory.

https://maven.apache.org/plugins/maven-war-plugin/plugin-info.html

Start a Spring Boot app
mvn spring-boot:run

Run a jar directly with Java
java -jar target/accessing-data-jpa-0.0.1-SNAPSHOT.jar

4. Git

4.1. Basic Commands

4.1.1. File Operations

Remove a file from the index
git rm --cached <file>
Unstage a file
git restore --staged <file>
Add a remote repository
git remote add foo git@github.com:foo/bar.git
Delete a file from the remote repository only
# Stop tracking the target directory; with --cached the local copy is kept
git rm -r --cached target

4.1.2. Branch Management

List branches
git branch
Create a branch
git branch foo
Delete a branch
git branch --delete foo
Switch branches
git checkout foo
Create a branch and switch to it
git checkout -b foo
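The branch commands above can be exercised end to end in a throwaway repository (paths and branch names here are arbitrary):

```shell
# Create a scratch repo, commit once, then create/switch branches.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo hello > readme.txt
git add readme.txt
git commit -qm init

git branch foo          # create branch foo
git checkout -q foo     # switch to it
git checkout -q -b bar  # create and switch in one step
git branch              # lists bar, foo and the initial branch
```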

5. XXL-JOB

6. Libvirt

6.1. Installation and Configuration

6.1.1. Install the KVM environment

# Archlinux
sudo pacman -S virt-manager qemu-system-x86 vde2 ebtables dnsmasq bridge-utils openbsd-netcat qemu-base virt-viewer
# lsmod | grep kvm

kvm_intel       138567  0
kvm             441119  1 kvm_intel
sudo systemctl enable --now libvirtd

# Enable IP forwarding (note: `sudo echo ... >> file` would not work, because the redirection runs as the unprivileged user)
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Configure the network
sudo virsh net-define /etc/libvirt/qemu/networks/default.xml
sudo virsh net-start default
sudo virsh net-autostart default
Verify
$ ip a
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:d6:75:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever

6.2. Installing a Virtual Machine

6.2.1. Prepare the image file

qemu-img create -f qcow2 /home/ylighgh/libvirt/images/ubuntu-22.04.img 10G

6.2.2. Create the virtual machine

virt-install \
--name ubuntu-22.04 \
--ram 4096 \
--disk path=/home/ylighgh/libvirt/images/ubuntu-22.04.img,size=10 \
--vcpus 2 \
--os-type linux \
--os-variant ubuntu22.04 \
--network bridge=virbr0 \
--console pty,target_type=serial \
--cdrom=/home/ylighgh/Downloads/ubuntu-22.04.1-live-server-amd64.iso \
--graphics vnc,password=123123,port=15424,listen=0.0.0.0

# Create a VM from an existing image file (tpl_win2k12r2.img)
virt-install \
--name tpl_win2k12r2 \
--ram 4048 \
--disk path=/data/libvirt/images/tpl_win2k12r2.img \
--vcpus 4 \
--os-type windows \
--os-variant win2k12r2 \
--network bridge=virbr0 \
--console pty,target_type=serial \
--import \
--graphics vnc,password=aC8W5It9nOyrXchH,port=-1,listen=0.0.0.0

6.3. Common Commands

6.3.1. Delete a virtual machine

virsh destroy foo
virsh undefine foo

6.3.2. Add a disk

# Prepare the disk file
qemu-img create -f qcow2 web-add.qcow2 2G

# Attach temporarily
virsh attach-disk foo /opt/web-add.qcow2 vdb --subdriver qcow2

# Attach persistently
virsh attach-disk foo /opt/web-add.qcow2 vdb --subdriver qcow2  --config

# Detach the disk
virsh detach-disk foo vdb
View disk info
virsh domblklist foo

6.3.3. Snapshots

Create a snapshot
sudo virsh snapshot-create-as \
--domain foo \
--name foo_snapshot \
--description "first snapshot" \
--atomic
List snapshots
virsh snapshot-list --domain foo
Delete a snapshot
virsh snapshot-delete --domain foo foo_snapshot
Revert to a snapshot
virsh snapshot-revert --domain foo --snapshotname foo_snapshot
Autostart the VM on boot
virsh autostart foo

6.3.4. Network Interfaces

Add an interface
virsh attach-interface foo --type bridge --source virbr0

# add --config to persist across reboots
virsh attach-interface foo --type bridge --source virbr0 --config
Remove an interface
# List interfaces
virsh domiflist foo

[yinlei@archlinux ~]$ virsh domiflist foo
 Interface   Type   Source   Model   MAC
------------------------------------------------------
 tap0   bridge   virbr0   virtio   52:54:00:98:7a:ce
 tap1   bridge   virbr0   virtio   52:54:00:cb:80:54
 tap2   bridge   virbr0   virtio   52:54:00:8d:87:ae

# Detach a specific interface by MAC
virsh detach-interface foo bridge 52:54:00:8d:87:ae
Networks
# Stop the default network
virsh net-destroy default

# Start it again
virsh net-start default

6.4. Growing an Image File

Increase the image file size
# grow by 2G
qemu-img resize /home/ylighgh/workspace/libvirt/images/qemu-add.qcow2 +2G

Confirm the size

qemu-img info qemu-add.qcow2

$ qemu-img info qemu-add.qcow2
image: qemu-add.qcow2
file format: qcow2
virtual size: 4 GiB (4294967296 bytes)
disk size: 2.2 MiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Rebuild the partition
TIP

If there is no partition table, skip this step

  1. Inspect the partitions

parted -s /dev/vdb unit s pr

Output

ylighgh@kvm:~$ sudo parted -s /dev/vdb unit s pr
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 8388608s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start  End       Size      Type     File system  Flags
 1      2048s  4194303s  4192256s  primary  ext4

Here 8388608s is the total sector count
  2. Add the partition device mappings

kpartx -a /dev/vdb
  3. Get the mapped device name

ls /dev/mapper/vdb*

ylighgh@kvm:~$ sudo ls /dev/mapper/vdb*
/dev/mapper/vdb1
  4. Check the Ext filesystem

e2fsck -fy /dev/mapper/vdb1
  5. Convert Ext3/4 to Ext2

With its journal removed, Ext3/4 is effectively Ext2:

ext3 → ext2, ext3 - journal = ext2

ext4 → (ext2 + ext4 features), ext4 - journal = ext2 + ext4 features

tune2fs -O ^has_journal /dev/mapper/vdb1
  6. Remove the partition device mappings

kpartx -d /dev/vdb
  7. Delete the partition being grown

parted -s /dev/vdb rm 1
  8. Recreate the partition

Reserve a small amount of space at the end of the image: total sectors - 3000 = 8385608

parted -s /dev/vdb unit s mkpart primary ext4 2048 8385608s
  9. Confirm the partition

parted -s /dev/vdb unit s pr
Output:
# parted -s /dev/vdb unit s pr
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 8388608s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start  End       Size      Type     File system  Flags
 1      2048s  8385608s  8383561s  primary  ext2
  10. Restore the boot flag

If the partition was previously the boot partition, the boot flag must be set again after resizing

parted -s /dev/vdb set 1 boot on
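The end-sector arithmetic above (total sectors minus a small tail reserve) is easy to get wrong by hand; it can be derived instead. The 3000-sector reserve is simply the value used in this walkthrough:

```shell
# Derive the new partition end from the total sector count reported by
# `parted -s /dev/vdb unit s pr` (8388608s in the example above).
total_sectors=8388608
reserve=3000
new_end=$((total_sectors - reserve))
echo "parted -s /dev/vdb unit s mkpart primary ext4 2048s ${new_end}s"
```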
Rebuild the filesystem
TIP

If there is no filesystem, skip this step

Add the partition device mappings
kpartx -a /dev/vdb
Ext2/3/4
  1. Resize the Ext filesystem

resize2fs -f /dev/mapper/vdb1
  2. Add an Ext3 journal back to the filesystem

Ext2 + journal = ext3

tune2fs -j /dev/mapper/vdb1

or

tune2fs -O has_journal /dev/mapper/vdb1
  3. Check the Ext filesystem

e2fsck -fy /dev/mapper/vdb1
Remove the partition device mappings
kpartx -d /dev/vdb
Verify
# parted -s /dev/vdb unit s pr
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 8388608s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start  End       Size      Type     File system  Flags
 1      2048s  8385608s  8383561s  primary  ext4

6.5. libvirtd Fails to Autostart on CentOS 9

Workaround

cat <<EOF> /usr/lib/systemd/system/libvirtd-auto-start.service
[Unit]
Description=Auto start libvirtd service
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl start libvirtd
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable libvirtd-auto-start.service

7. Docker

7.1. Installation

Create the install script
cat << 'EOF' > docker_install.sh
#Uninstall old versions
yum -y remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

#SET UP THE REPOSITORY
yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#INSTALL DOCKER ENGINE
yum -y install docker-ce docker-ce-cli containerd.io
systemctl enable docker
systemctl start docker

#Configure registry mirrors
mkdir -p /etc/docker
cat << EOFF > /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://hub-mirror.c.163.com"
  ],
  "data-root": "/var/lib/docker",
  "storage-driver": "overlay2",
  "dns" : [
    "223.5.5.5",
    "223.6.6.6"
  ]
}
EOFF

#Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

#Reload Docker to apply the configuration
systemctl daemon-reload
systemctl restart docker
EOF
Make the script executable
chmod +x docker_install.sh
Run the script
sh docker_install.sh
Test

Run: docker run hello-world

[ylighgh@docker ~]# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

7.2. Basic Usage

Pull an image
docker pull tomcat
Run
docker run -itd --name tomcat test/tomcat:v1.0 /bin/bash
Remove all containers at once
docker rm -f $(docker ps -qa)
Restart policies (--restart)
	no
		Default: never restart the container when it exits
	on-failure
		Restart only when the container exits abnormally (non-zero exit status)
	on-failure:3
		Restart on abnormal exit, at most 3 times
	always
		Always restart the container when it exits (also gives start-on-boot behavior)
	unless-stopped
		Always restart the container on exit, except containers that were already stopped when the Docker daemon starts
# always is generally the recommended choice
	--restart=always
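A value destined for --restart can be sanity-checked against the set above before composing a docker run command (a local helper sketch, not part of the Docker CLI):

```shell
# Return success only for restart policies Docker accepts.
valid_restart_policy() {
    case "$1" in
        no|always|unless-stopped|on-failure|on-failure:[0-9]*) return 0 ;;
        *) return 1 ;;
    esac
}

valid_restart_policy always && echo "always: ok"
valid_restart_policy sometimes || echo "sometimes: rejected"
```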
Enable autostart on a running container
# docker update --restart=always <container name or ID>
docker update --restart=always <CONTAINER ID>
# e.g. make tomcat restart automatically
docker update --restart=always tomcat
Disable autostart on a container
# docker update --restart=no <container name or ID>
docker update --restart=no <CONTAINER ID>
# e.g. stop restarting tomcat automatically
docker update --restart=no tomcat
Export and import containers
# Export a container (produces a tar archive)
docker export tomcat > tomcat.tar

# Import it as an image
cat tomcat.tar | docker import - test/tomcat:v1
Commit a container as an image
docker commit -m "description" -a "author" <container ID> <repository:tag>

7.3. Private Registry

Pull the Docker Registry image
docker pull registry
Run it
# By default the registry keeps its data in /var/lib/registry inside the container; mapping a volume is recommended so the data is easy to reach from the host
docker run -d -p 5000:5000 -v $HOME/myregistry/:/tmp/registry --privileged=true --restart=always registry
Verify with a GET request
curl -XGET http://localhost:5000/v2/_catalog
Re-tag the image
docker tag myubuntu:v1 192.168.43.205:5000/myubuntu:v1.1
Edit the daemon config to allow plain HTTP
sed -i '2s/$/,/' /etc/docker/daemon.json
sed -i '2 a \"insecure-registries\":[\"192.168.43.205:5000\"]' /etc/docker/daemon.json
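The two sed edits above are line-number sensitive; whatever editing method is used, the intended end state of /etc/docker/daemon.json merges the mirror settings from the install script with the insecure registry, roughly:

```json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://hub-mirror.c.163.com"
  ],
  "insecure-registries": ["192.168.43.205:5000"],
  "data-root": "/var/lib/docker",
  "storage-driver": "overlay2",
  "dns": ["223.5.5.5", "223.6.6.6"]
}
```

After restarting, `docker info` should list the address under Insecure Registries.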
Restart Docker
systemctl daemon-reload
systemctl restart docker
Push an image to the private registry
docker push 192.168.43.205:5000/myubuntu:v1.1
Test pulling it back
docker pull 192.168.43.205:5000/myubuntu:v1.1
Result
[root@nginx ~]# docker images
REPOSITORY                                              TAG       IMAGE ID       CREATED         SIZE
192.168.43.205:5000/myubuntu                            v1.1      8ee699689cc3   2 hours ago     111MB
registry.cn-hangzhou.aliyuncs.com/ylighgh/myubuntu_v1   1.1       f909f58557c6   5 hours ago     178MB
nginx                                                   latest    605c77e624dd   5 months ago    141MB
tomcat                                                  latest    fb5657adc892   6 months ago    680MB
registry                                                latest    b8604a3fe854   7 months ago    26.2MB
ubuntu                                                  latest    ba6acccedd29   8 months ago    72.8MB
hello-world                                             latest    feb5d9fea6a5   9 months ago    13.3kB
redis                                                   6.0.8     16ecd2772934   20 months ago   104MB

7.4. Container Volumes

Mounting

Data worth persisting: logs, configuration files, business data, temporary cache data

docker run -it --privileged=true -v /host/absolute/path:/container/dir <image>
Inherit volumes from another container
docker run -it --privileged=true --volumes-from <container1> --name <container2> <image>

7.5. Application Launch Commands

7.5.1. Tomcat

docker run -d -p 8080:8080 --name mytomcat8 billygoo/tomcat8-jdk8

7.5.2. Redis

Prepare the configuration file
mkdir -p /docker/redis/

wget https://raw.githubusercontent.com/antirez/redis/6.0.8/redis.conf -O /docker/redis/redis.conf
Start
docker run -p 6379:6379 --privileged=true \
-v /docker/redis/redis.conf:/etc/redis/redis.conf \
-v /docker/redis/data:/data \
-v /etc/localtime:/etc/localtime \
-d redis:6.0.8 redis-server /etc/redis/redis.conf

7.5.3. MySQL

Set the default character set
mkdir -p /docker/mysql/conf
cat <<EOF> /docker/mysql/conf/my.cnf
[client]
default_character_set = utf8
[mysqld]
collation_server = utf8_general_ci
character_set_server = utf8
EOF
Start
docker run -p 3306:3306 --privileged=true \
-v /docker/mysql/log:/var/log/mysql \
-v /docker/mysql/data:/var/lib/mysql \
-v /docker/mysql/conf:/etc/mysql/conf.d \
-v /etc/localtime:/etc/localtime \
-e MYSQL_ROOT_PASSWORD=my-secret-pw \
-d mysql:5.7

7.6. MySQL Master/Slave Replication

7.6.1. Master configuration

Prepare directories
mkdir -p /docker/mysql-master/conf
Start mysql-master
docker run -d -p 3307:3306 --name mysql-master --privileged=true \
-v /docker/mysql-master/log:/var/log/mysql \
-v /docker/mysql-master/data:/var/lib/mysql \
-v /docker/mysql-master/conf:/etc/mysql/conf.d \
-v /etc/localtime:/etc/localtime \
-e MYSQL_ROOT_PASSWORD=123456 \
-d mysql:5.7
Configuration file
cat <<EOF> /docker/mysql-master/conf/my.cnf
[client]
default_character_set = utf8
[mysqld]
# Character set
collation_server = utf8_general_ci
character_set_server = utf8
# server_id (must be unique)
server_id=101
# Databases excluded from replication
binlog-ignore-db = mysql
# Binary log of data-changing operations
log-bin=mysql-bin
# Binary log cache size
binlog_cache_size=1M
# Binary log format (mixed, statement, row)
binlog_format=mixed
# Expire binary logs after this many days
expire_logs_days=7
# Skip duplicate-key errors (1062)
slave_skip_errors=1062
EOF

# Restart to apply the configuration
docker restart mysql-master
Database configuration
# Enter the container
docker exec -it mysql-master bash
# Run
mysql -uroot -p123456 -e 'grant replication slave on *.* to slave@"%" identified by "123456";flush privileges;show master status;'
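The change master command on the slave needs the File and Position values printed by show master status above; they can be extracted non-interactively. The sample line below is illustrative output, not from a live server:

```shell
# Pull the File and Position columns out of `show master status`.
# Against a live server: mysql -uroot -p123456 -N -e 'show master status;'
sample_status="mysql-bin.000003 154 mysql"
master_log_file=$(printf '%s\n' "$sample_status" | awk '{print $1}')
master_log_pos=$(printf '%s\n' "$sample_status" | awk '{print $2}')
echo "master_log_file=$master_log_file master_log_pos=$master_log_pos"
```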

7.6.2. Slave configuration

Prepare directories
mkdir -p /docker/mysql-slave/conf
Start mysql-slave
docker run -d -p 3308:3306 --name mysql-slave --privileged=true \
-v /docker/mysql-slave/log:/var/log/mysql \
-v /docker/mysql-slave/data:/var/lib/mysql \
-v /docker/mysql-slave/conf:/etc/mysql/conf.d \
-v /etc/localtime:/etc/localtime \
-e MYSQL_ROOT_PASSWORD=123456 \
-d mysql:5.7
Configuration file
cat <<EOF> /docker/mysql-slave/conf/my.cnf
[client]
default_character_set = utf8
[mysqld]
# Character set
collation_server = utf8_general_ci
character_set_server = utf8
# server_id (must be unique)
server_id=102
# Databases excluded from replication
binlog-ignore-db = mysql
# Binary log of data-changing operations
log-bin=mysql-slave1-bin
# Binary log cache size
binlog_cache_size=1M
# Binary log format (mixed, statement, row)
binlog_format=mixed
# Expire binary logs after this many days
expire_logs_days=7
# Skip duplicate-key errors (1062)
slave_skip_errors=1062
# Relay log
relay_log=mysql-relay-bin
# Write replicated events to this slave's own binary log
log_slave_updates=1
# Make the slave read-only
read_only=1
EOF

# Restart to apply the configuration
docker restart mysql-slave
Database configuration
# Enter the container
docker exec -it mysql-slave bash
# Run (fill in master_log_file and master_log_pos from `show master status` on the master)
mysql -uroot -p123456 -e 'change master to master_host="IP",master_user="slave",master_password="123456",master_port=3307,master_log_file="xxxx",master_log_pos=xxxx;start slave;flush privileges;'

7.7. Dockerfile

Build
docker build -t <image name>:TAG .
CentOS 7 + JDK 11 Dockerfile
FROM centos:centos7
MAINTAINER ylighgh<yssuvu@gmail.com>

ENV MYPATH /usr/local
WORKDIR $MYPATH

RUN yum -y update
RUN yum -y install vim
RUN yum -y install net-tools
RUN mkdir /usr/local/java
ADD jdk-11.0.15.1_linux-x64_bin.tar.gz /usr/local/java
ENV JAVA_HOME /usr/local/java/jdk-11.0.15.1
ENV CLASSPATH $JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib
ENV PATH $JAVA_HOME/bin:$PATH

# Note: only the last CMD takes effect; the two echoes below are overridden
CMD echo $MYPATH
CMD echo "success------ok"
CMD /bin/bash

7.8. Docker Networking

Create a bridge network
docker network create my_network
List networks
docker network ls

[root@nginx ~]# docker network  ls
NETWORK ID     NAME         DRIVER    SCOPE
09b399066c7f   bridge       bridge    local
e6561971bc9b   host         host      local
ba98f0f4f1df   my_network   bridge    local
e50b1a449286   none         null      local
Attach a container to the network
docker run -d -p 8081:8080 --network my_network --name tomcat81 billygoo/tomcat8-jdk8

7.9. Compose

7.9.1. Installation

# Download
curl -SL https://github.com/docker/compose/releases/download/v2.6.1/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
# Make it executable
chmod +x /usr/local/bin/docker-compose
# Symlink
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# Verify
docker-compose --version

7.9.2. Jenkins

mkdir -p /usr/local/docker/jenkins_docker/data
chmod -R 777 /usr/local/docker/jenkins_docker/data
cat <<EOF> /usr/local/docker/jenkins_docker/docker-compose.yml
version: "3.1"
services:
   jenkins:
       image: jenkins/jenkins:2.319.1-lts
       container_name: jenkins
       ports:
          - 8080:8080
          - 50000:50000
       volumes:
          - ./data:/var/jenkins_home/
EOF
cd /usr/local/docker/jenkins_docker && docker-compose up -d

7.9.3. SonarQube

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
mkdir -p /usr/local/docker/sonarqube_docker
chmod -R 777 /usr/local/docker/sonarqube_docker
cat <<EOF> /usr/local/docker/sonarqube_docker/docker-compose.yml
version: "3.1"
services:
  db:
    image: postgres
    container_name: db
    ports:
      - 5432:5432
    networks:
      - sonarnet
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
  sonarqube:
    image: sonarqube:8.9.6-community
    container_name: sonarqube
    depends_on:
      - db
    ports:
      - 9000:9000
    networks:
      - sonarnet
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar
      SONAR_JDBC_USERNAME: sonar
      SONAR_JDBC_PASSWORD: sonar
networks:
  sonarnet:
    driver: bridge
EOF
cd /usr/local/docker/sonarqube_docker && docker-compose up -d

7.10. Kafka

Pull the images
docker pull zookeeper:latest
docker pull wurstmeister/kafka:latest
Start zookeeper
docker run -d --name zookeeper --publish 2181:2181 --volume /etc/localtime:/etc/localtime zookeeper:latest
Start kafka
docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=<zookeeper host>:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<kafka host>:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 wurstmeister/kafka
Create a topic
bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic k8s-apisix-logs
List topics
bin/kafka-topics.sh --list --zookeeper <zookeeper host>:2181
Produce messages
bin/kafka-console-producer.sh --broker-list <kafka host>:9092 --topic test
Consume messages
bin/kafka-console-consumer.sh --bootstrap-server <kafka host>:9092 --topic test --from-beginning

7.11. ELK

ElasticSearch
docker network create elastic
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.13.4
docker run  -d --name es01 --net elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -t docker.elastic.co/elasticsearch/elasticsearch:8.13.4

# Note: the password/CA steps below apply to a security-enabled setup; with xpack.security.enabled=false as above, plain http://localhost:9200 works without credentials
export ELASTIC_PASSWORD="f27lQZGV0BFntOMplyc2"

docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .

curl --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
Kibana
docker pull docker.elastic.co/kibana/kibana:8.13.4
docker run -d --name kibana --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.13.4

8. MySQL

8.1. Installation

Install MySQL (MariaDB)
yum install -y mariadb-server
Back up my.cnf
cp /etc/my.cnf /etc/my.cnf.default
Modify my.cnf
cat << EOF > /etc/my.cnf
[client]
default_character_set = utf8
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
collation_server = utf8_general_ci
character_set_server = utf8
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
max_allowed_packet=20M
max_heap_table_size = 100M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M
tmp_table_size = 100M
# Query cache
#query_cache_limit=4M
#query_cache_type=on
#query_cache_size=2G
#bind-address = 127.0.0.1
# Skip hostname resolution (localhost, foo.com, etc.) to speed up connections
skip-name-resolve
# General query log
general_log=off
general_log_file=/var/log/mariadb/general.log
# Slow query log
slow_query_log=off
slow_query_log_file=/var/log/mariadb/slowquery.log
long_query_time = 5
max_connections = 1000
# Compatibility with legacy MySQL code, e.g. inserting empty strings instead of NULL
sql_mode = ""
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
EOF
Configure mysqldump client parameters
sed -i '16 aquick\nquote-names\nmax_allowed_packet = 100M' /etc/my.cnf.d/mysql-clients.cnf
Create the log files
touch /var/log/mariadb/general.log /var/log/mariadb/slowquery.log
chown mysql:mysql /var/log/mariadb/general.log /var/log/mariadb/slowquery.log
Enable start at boot
systemctl enable mariadb
Start the MySQL service
systemctl start mariadb
Check the service status
# systemctl status mariadb
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-11-29 14:18:12 CST; 1h 7min ago
  Process: 16688 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
  Process: 16653 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 16687 (mysqld_safe)
   CGroup: /system.slice/mariadb.service
           ├─16687 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
           └─17043 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.lo...
Nov 29 14:18:10 iZ6weebcmroarpx8rrxscrZ systemd[1]: Starting MariaDB database server...
Nov 29 14:18:10 iZ6weebcmroarpx8rrxscrZ mariadb-prepare-db-dir[16653]: Database MariaDB is probably initialized in /var/lib/mysql already, nothing is done.
Nov 29 14:18:11 iZ6weebcmroarpx8rrxscrZ mysqld_safe[16687]: 191129 14:18:11 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
Nov 29 14:18:11 iZ6weebcmroarpx8rrxscrZ mysqld_safe[16687]: 191129 14:18:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Nov 29 14:18:12 iZ6weebcmroarpx8rrxscrZ systemd[1]: Started MariaDB database server.
# ss -antpl|grep mysql
LISTEN     0      50     127.0.0.1:3306                     *:*                   users:(("mysqld",pid=17043,fd=14))
Set the root password
mysqladmin -uroot password "geek"
Remove the test database and empty-password users
mysql -uroot -pgeek -e 'show databases;'
mysql -uroot -pgeek -e 'drop database test;'
mysql -uroot -pgeek mysql -e 'delete from db;'
mysql -uroot -pgeek mysql -e 'delete from user where Password="";'
mysql -uroot -pgeek -e 'flush privileges;'

8.2. 常用命令

8.2.1. Creating Users and Restricting Login Hosts

Create a user
# Create an account that can log in from any host
CREATE USER yss@'%' IDENTIFIED BY '123456' ;

# Create an account limited to a host range
CREATE USER yss@'10.0.0.%' IDENTIFIED BY '123456' ;
Change passwords
# A user changing their own password
SET PASSWORD=PASSWORD('456789');

# Root changing another user's password
SET PASSWORD FOR yss@'%'=PASSWORD('456789');

8.2.2. Grants

List the users on this server
SELECT user FROM mysql.user;
Show a user's grants
SHOW GRANTS FOR user_name@'%';
Grant privileges
# Grant to an existing user
GRANT ALL ON db_name.table_name TO yss@'%';

# Create the user and grant in one statement
GRANT ALL ON db_name.table_name TO yss@'%' IDENTIFIED BY '123456';
Revoke privileges
REVOKE [SELECT,DELETE,UPDATE,DROP...] ON db_name.table_name FROM yss@'%';

8.2.3. Backup and Restore

Backup
mysqldump -uusername -p'password' --add-drop-table db_name > db_name_dump.sql
Restore
mysql -uusername -p'password' db_name < db_name_dump.sql
  • Back up several databases: --databases db1 db2 (space-separated)

  • Back up all databases: --all-databases

  • Back up specific tables: db_name table1 table2
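A date-stamped wrapper around the mysqldump invocation above keeps successive backups from overwriting each other (credentials and database names are placeholders):

```shell
# Compose a mysqldump command whose output file carries today's date.
backup_cmd() {
    local db="$1"
    local stamp
    stamp=$(date +%Y%m%d)
    echo "mysqldump -uusername -p'password' --add-drop-table $db > ${db}_${stamp}.sql"
}

backup_cmd db_name
```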

8.2.4. Specifying the Character Set When Creating a Database

CREATE DATABASE db_name DEFAULT CHARACTER SET utf8mb4 DEFAULT COLLATE utf8mb4_unicode_ci;

8.2.5. Character Sets

Show the server's character-set variables
show variables like 'character%';

8.3. MySQL Cluster

Master node

10.0.2.20

Slave node

10.0.2.30

8.3.1. Master-Slave Replication

Master configuration

Modify my.cnf
vim /etc/my.cnf
[mysqld]
# Binary log of data-changing operations
log-bin=mysql-bin
# Do not replicate the mysql system database
binlog-ignore-db = mysql
# Unique node id within the cluster
server-id=20
Apply the configuration
# Restart the database service so the configuration takes effect
systemctl restart mariadb
# On the master, create a slave user for the slave node and grant it replication privileges
mysql -uroot -pgeek -e 'grant replication slave on *.* to slave@"10.0.2.30" identified by "geek";flush privileges;'

Slave configuration

Modify my.cnf
vim /etc/my.cnf
[mysqld]
log-bin=mysql-bin
replicate-ignore-db=mysql
server-id=30
Apply the configuration

On the master, run show master status; and note the master_log_file and master_log_pos values

systemctl restart mariadb

mysql -uroot -pgeek -e 'change master to master_host="10.0.2.20",master_user="slave",master_password="geek",master_log_file="xxxx",master_log_pos=xxxx;start slave;flush privileges;'
Verify that replication is running

On the slave, run: mysql -uroot -pgeek -e 'show slave status\G'. If both Slave_IO_Running and Slave_SQL_Running are Yes, replication is up

[root@ylighgh ~]# mysql -uroot -pgeek -e'show slave status\G'
*************************** 1. row ***************************
            ...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            ...

8.3.2. Master-Master Replication

Node 1 (10.0.2.20) configuration

Modify my.cnf
vim /etc/my.cnf
[mysqld]
log-bin=mysql-bin
binlog-ignore-db=mysql
binlog-ignore-db=information_schema
server-id=20
Apply the configuration

Run show master status; on the other node and note the master_log_file and master_log_pos values

systemctl restart mariadb
mysql -uroot -pgeek -e 'grant replication slave on *.* to slave@"10.0.2.30" identified by "geek";flush privileges;'

mysql -uroot -pgeek -e 'change master to master_host="10.0.2.30",master_user="slave",master_password="geek",master_log_file="xxxx",master_log_pos=xxxx;start slave;flush privileges;'

Node 2 (10.0.2.30) configuration

Modify my.cnf
vim /etc/my.cnf
[mysqld]
log-bin=mysql-bin
binlog-ignore-db=mysql
binlog-ignore-db=information_schema
server-id=30
Apply the configuration

Run show master status; on the other node and note the master_log_file and master_log_pos values

systemctl restart mariadb
mysql -uroot -pgeek -e 'grant replication slave on *.* to slave@"10.0.2.20" identified by "geek";flush privileges;'


mysql -uroot -pgeek -e 'change master to master_host="10.0.2.20",master_user="slave",master_password="geek",master_log_file="xxxx",master_log_pos=xxxx;start slave;flush privileges;'
Verify that replication is running on both nodes

On each node, run: mysql -uroot -pgeek -e 'show slave status\G'. If both Slave_IO_Running and Slave_SQL_Running are Yes, replication is up

[root@ylighgh ~]# mysql -uroot -pgeek -e'show slave status\G'
*************************** 1. row ***************************
            ...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            ...

8.3.3. 多主一从

主节点1

10.0.2.20

主节点2

10.0.2.30

从节点

10.0.2.40

主节点1配置

修改 my.cnf 文件
vim /etc/my.cnf
[mysqld]
log-bin=mysql-bin
binlog-ignore-db=mysql
binlog-ignore-db=information_scheme
server-id=20
重启MySQL
systemctl restart mariadb
修改系统配置
mysql -uroot -pgeek -e 'grant replication slave on *.* to slave@"10.0.2.40" identified by "geek";flush privileges;'

主节点2配置

修改 my.cnf 文件
vim /etc/my.cnf
[mysqld]
log-bin=mysql-bin
binlog-ignore-db=mysql
binlog-ignore-db=information_schema
server-id=30
重启MySQL
systemctl restart mariadb
修改系统配置
mysql -uroot -pgeek -e 'grant replication slave on *.* to slave@"10.0.2.40" identified by "geek";flush privileges;'

从节点配置

修改 my.cnf 文件
vim /etc/my.cnf
[mysqld_multi]
mysqld=/usr/bin/mysqld_safe
mysqladmin=/usr/bin/mysqladmin
log=/tmp/multi.log

[mysqld20]
port=3307
datadir=/var/lib/mysqla/
pid-file=/var/lib/mysqla/mysqld.pid
socket=/var/lib/mysqla/mysql.sock
user=mysql
server-id=40

[mysqld30]
port=3308
datadir=/var/lib/mysqlb/
pid-file=/var/lib/mysqlb/mysqld.pid
socket=/var/lib/mysqlb/mysql.sock
user=mysql
server-id=40
Initialize both databases, creating the mysqla and mysqlb data directories

mysql_install_db --datadir=/var/lib/mysqla --user=mysql
mysql_install_db --datadir=/var/lib/mysqlb --user=mysql

chown -R mysql:mysql /var/lib/mysqla
chown -R mysql:mysql /var/lib/mysqlb
启动MySQL
mysqld_multi --defaults-file=/etc/my.cnf start 20
mysqld_multi --defaults-file=/etc/my.cnf start 30
查看状态
[root@slave2-2022 ~]# ss -antpl|grep mysql
LISTEN     0      50           *:3307                     *:*                   users:(("mysqld",pid=6964,fd=14))
LISTEN     0      50           *:3308                     *:*                   users:(("mysqld",pid=7925,fd=14))
修改从节点配置

Port 3307 (replicates from master 1, 10.0.2.20)

mysql -uroot -P 3307 -S /var/lib/mysqla/mysql.sock

change master to master_user='slave',master_password='geek',master_host='10.0.2.20',master_log_file='mysql-bin.000014',master_log_pos=1312;start slave;

Port 3308 (replicates from master 2, 10.0.2.30)

mysql -uroot -P 3308 -S /var/lib/mysqlb/mysql.sock

change master to master_user='slave',master_password='geek',master_host='10.0.2.30',master_log_file='mysql-bin.000011',master_log_pos=629;start slave;

9. Redis

9.1. 安装配置

安装
yum -y install redis
启动
systemctl start redis
设置开机自启
systemctl enable redis
查看redis状态
[root@master-2022 system]# systemctl status redis
● redis.service - Redis persistent key-value database
   Loaded: loaded (/usr/lib/systemd/system/redis.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/redis.service.d
           └─limit.conf
   Active: active (running) since Mon 2022-05-30 05:31:28 CST; 1s ago
  Process: 22952 ExecStop=/usr/libexec/redis-shutdown (code=exited, status=0/SUCCESS)
 Main PID: 22982 (redis-server)
   CGroup: /system.slice/redis.service
           └─22982 /usr/bin/redis-server 127.0.0.1:6379

May 30 05:31:28 master-2022 systemd[1]: Starting Redis persistent key-val....
May 30 05:31:28 master-2022 systemd[1]: Started Redis persistent key-valu....
Hint: Some lines were ellipsized, use -l to show in full.
[root@master-2022 system]# ss -antpl|grep redis
LISTEN     0      128    127.0.0.1:6379                     *:*                   users:(("redis-server",pid=22982,fd=4))

9.2. 常用命令

9.2.1. 键(Key)

# 设置key值
set name yss

# 获取key对应的value值
get name

# 删除一个已经创建的key
del name

# 设置key值,如果已经存在则返回0
setnx name yss

# 判断key是否存在,存在返回1,不存在返回0
exists name

# 为给定 key 设置过期时间,以秒计
expire name 60

# 查找所有符合给定模式( pattern)的 key
keys rts*

# 删除一个或多个键
del area_info_a area_info_b

# 删除模糊匹配的键
redis-cli --scan --pattern users:* | xargs redis-cli unlink
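The scan-and-unlink pipeline above feeds matched key names to xargs, which appends them as arguments to the second command. The same pipeline shape can be exercised locally, with printf standing in for redis-cli --scan and echo standing in for the delete:

```shell
# printf stands in for `redis-cli --scan --pattern users:*`;
# xargs batches the lines into arguments for the next command
deleted=$(printf 'users:1\nusers:2\nusers:3\n' | xargs echo UNLINK)
echo "$deleted"
```

With a live server, echo UNLINK would be redis-cli unlink as in the original command.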

9.2.2. 字符串(String)

# 同时设置一个或多个 key-value 对
mset key1 value1 [key2 value2]

# 获取所有(一个或多个)给定 key 的值
mget key1 [key2]

# 将 key 中储存的数字值增一
incr key
# 将 key 中储存的数字值减一
decr key

# 将 key 所储存的值加上给定的增量值(increment)
incrby key increment
# key 所储存的值减去给定的减量值(decrement)
decrby key decrement

9.2.3. 哈希表(Hash)

增加字段或设置字段
# 向哈希表中增加字段或设置字段值
hset area_info name "四川"

# 向哈希表中增加多个字段或设置多个字段值
hmset area_info name "四川" level 2
获取字段
# 获取哈希表中指定键的单个字段和值
hget area_info name

# 获取所有给定字段的值
hmget area_info name level

# 获取哈希表中指定键的所有字段和值
hgetall area_info
字段是否存在
# 查看哈希表的指定字段是否存在
hexists area_info name
删除字段
# 删除一个或多个哈希表字段
hdel area_info name level

9.2.4. 列表(List)

# 将一个或多个值插入到列表头部
lpush key_list value1 value2
lpush key_list value3

# 获取列表长度
llen key_list


# Get a range of elements from the list
lrange key_list 0 10

# 移出并获取列表的第一个元素
lpop key_list

9.2.5. 集合(Set)

# 添加一个或多个元素到集合中
sadd key member [members...]

# 获取集合里面所有的元素
smembers key

# 从指定集合中删除指定元素
srem key member [members]

# Remove and return a random member from the set
spop key

# 统计集合中元素的个数
scard key

# 返回指定的差集
sdiff key [key...]

# 返回指定集合的交集
sinter key [key...]

# 获取指定集合的并集
sunion key [key...]
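sdiff, sinter and sunion are plain set algebra. Without a running Redis, the same three operations can be sketched on sorted text files with comm and sort (the member values are made up):

```shell
# Two hypothetical sets, one member per line, sorted (comm needs sorted input)
printf 'a\nb\nc\n' > /tmp/set_a
printf 'b\nc\nd\n' > /tmp/set_b

diff_ab=$(comm -23 /tmp/set_a /tmp/set_b | paste -sd' ' -)  # like SDIFF:  members only in set_a
inter=$(comm -12 /tmp/set_a /tmp/set_b | paste -sd' ' -)    # like SINTER: members in both
union=$(sort -u /tmp/set_a /tmp/set_b | paste -sd' ' -)     # like SUNION: all distinct members
echo "$diff_ab / $inter / $union"
```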

9.3. 高级应用

9.3.1. 密码防护

修改密码
sed -i '480s/#//' /etc/redis.conf
sed -i '480s/foobared/geek/' /etc/redis.conf

# 修改密码之后重启redis
systemctl restart redis

# Log in with the password given on the command line
redis-cli -a PASSWORD

# Or authenticate after connecting
auth PASSWORD

9.3.2. 主从备份

主服务器设置

sed -i '61s/^/# /' /etc/redis.conf

systemctl restart redis

从服务器设置

sed -i '265aslaveof 10.0.2.20 6379' /etc/redis.conf

sed -i '273amasterauth geek' /etc/redis.conf

systemctl restart redis
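The sed 'Na...' form used above appends a line after line N. A self-contained demo of the technique on a throwaway file (GNU sed assumed; the line number here belongs to the scratch file, not the real redis.conf):

```shell
# Three-line stand-in for redis.conf
printf 'line1\n# replication\nline3\n' > /tmp/fake.conf

# Append a directive after line 2, same form as the slaveof/masterauth edits above
sed -i '2aslaveof 10.0.2.20 6379' /tmp/fake.conf

appended=$(sed -n '3p' /tmp/fake.conf)
echo "$appended"
```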

10. RabbitMQ

11. Kafka

12. Nginx

12.1. 安装

增加 Nginx 官方源
cat << EOF > /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/\$releasever/\$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF

EPEL 源中的 nginx.service 由于 KILL 参数问题,启动后无法停止,不建议使用。

安装Nginx
yum install -y nginx
备份Nginx配置文件
echo y|cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.default
修改 nginx.conf
cat << EOF > /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

worker_rlimit_nofile 65535;

events {
    worker_connections 65535;
}

http {
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    log_format  main  escape=json '\$host \$server_port \$remote_addr - \$remote_user [\$time_local] "\$request" '
                      '\$status \$request_time \$body_bytes_sent "\$http_referer" '
                      '"\$http_user_agent" "\$http_x_forwarded_for"';


    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    server_names_hash_bucket_size 128;
    server_name_in_redirect off;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;

    client_header_timeout  3m;
    client_body_timeout    3m;
    client_max_body_size 50m;
    client_body_buffer_size 256k;
    send_timeout           3m;

    gzip  on;
    gzip_min_length  1k;
    gzip_buffers     4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types image/svg+xml application/x-font-woff text/plain text/xml text/css application/xml application/xhtml+xml application/rss+xml application/javascript application/x-javascript text/javascript;
    gzip_vary on;

    proxy_redirect off;
    proxy_set_header Host \$host;
    proxy_set_header X-Real-IP \$remote_addr;
    proxy_set_header REMOTE-HOST \$remote_addr;
    proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    proxy_connect_timeout 60;
    proxy_send_timeout 60;
    proxy_read_timeout 60;
    proxy_buffer_size 256k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_temp_file_write_size 256k;
    proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
    proxy_max_temp_file_size 128m;
    #让代理服务端不要主动关闭客户端的连接,协助处理499返回代码问题
    proxy_ignore_client_abort on;

    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;

    index index.html index.htm index.php default.html default.htm default.php;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}
EOF
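Given the main log format above, individual fields can be pulled out of an access-log line by whitespace position with awk (escape=json changes quoting inside values, not the field layout). A sketch on a single made-up log line:

```shell
# One hypothetical access-log line in the `main` format defined above
line='example.com 80 203.0.113.7 - - [29/Nov/2019:14:02:31 +0800] "GET /index.html HTTP/1.1" 200 0.005 612 "-" "curl/7.64.0" "-"'

# Whitespace-split positions: field 11 is $status, field 12 is $request_time
status=$(printf '%s\n' "$line" | awk '{print $11}')
rt=$(printf '%s\n' "$line" | awk '{print $12}')
echo "$status $rt"
```

The same one-liners work on /var/log/nginx/access.log for quick ad-hoc checks.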
增加默认Host
mkdir /etc/nginx/conf.d

cat << EOF > /etc/nginx/conf.d/default.conf
server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;
    root         /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    error_page 404 /404.html;
        location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
        location = /50x.html {
    }
}
EOF
检测配置
nginx -t && rm -f /var/run/nginx.pid

nginx -t 之后,/var/run/nginx.pid 空文件会一直被保留,而 nginx.service 并不能处理 PIDFile 为空的情况,导致启动失败。

需要手动删除 /var/run/nginx.pid

Observed with nginx/1.16.1.

启动Nginx
systemctl start nginx
查看Nginx状态
# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-11-29 14:02:31 CST; 1h 18min ago
 Main PID: 15759 (nginx)
   CGroup: /system.slice/nginx.service
           ├─15759 nginx: master process /usr/sbin/nginx
           └─17285 nginx: worker process

Nov 29 14:02:31 iZ6weebcmroarpx8rrxscrZ systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 29 14:02:31 iZ6weebcmroarpx8rrxscrZ nginx[15753]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Nov 29 14:02:31 iZ6weebcmroarpx8rrxscrZ nginx[15753]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Nov 29 14:02:31 iZ6weebcmroarpx8rrxscrZ systemd[1]: Failed to parse PID from file /run/nginx.pid: Invalid argument
Nov 29 14:02:31 iZ6weebcmroarpx8rrxscrZ systemd[1]: Started The nginx HTTP and reverse proxy server.


# ss -antpl|grep nginx
LISTEN     0      128          *:80                       *:*                   users:(("nginx",pid=17285,fd=6),("nginx",pid=15759,fd=6))
LISTEN     0      128         :::80                      :::*                   users:(("nginx",pid=17285,fd=7),("nginx",pid=15759,fd=7))
增加开机启动
systemctl enable nginx

12.2. 常用功能

12.2.1. 反向代理

location / {
    proxy_pass http://10.0.2.30;
}

12.2.2. 负载均衡

upstream nameserver {
    server 10.0.2.20:80;
    server 10.0.2.30:80;
}

server{
    location / {
    # 添加反向代理,代理地址填写upstream声明的名字
    proxy_pass http://nameserver;
    # 重写请求头部,保证网站所有页面都能访问成功
    proxy_set_header Host $host;
    }
}
Weighted round-robin: server 10.0.2.20:80 weight=1;

13. Rsync

13.1. 数据传输

下行同步(下载)
rsync -avz 服务器地址:/服务器目录/* /本地目录
上行同步(上传)
rsync -avz /本地目录/* 服务器地址:/服务器目录/
参数解释
-a: 归档模式,递归保留对象属性
-v: 显示同步过程
-z: 在传输时进行压缩
创建数据同步用户
useradd test
echo "testupload" | passwd test --stdin

# 设置acl权限
setfacl -m u:test:rwx /filesrc

13.2. 实时同步

rsync+inotify

inotify安装
wget  http://github.com/downloads/rvoicilas/inotify-tools/inotify-tools-3.14.tar.gz

tar -xf inotify-tools-3.14.tar.gz

cd inotify-tools-3.14

yum -y install gcc*

./configure && make && make install
同步脚本
cat <<EOF> rsync.sh
#!/bin/bash
a="inotifywait -mrq -e create,delete,modify /filesrc"
b="rsync -avz /filesrc/* root@192.168.88.20:/filedest"
\$a | while read directory event file
do
    \$b
done
EOF

chmod +x rsync.sh
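The while read directory event file loop above splits each inotifywait output line into three fields. That parsing can be exercised without inotify by piping in a fake event line:

```shell
# A hypothetical inotifywait output line: watched dir, event name, file name
fake_event='/filesrc/ CREATE test.txt'

parsed=$(printf '%s\n' "$fake_event" | while read -r directory event file; do
    echo "event=$event file=$file"
done)
echo "$parsed"
```

In the real script the loop body runs the rsync command instead of echo.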

13.3. 客户端服务器模式

服务端
cat <<EOF> /etc/rsync.conf
port = 52050
pid file = /var/log/rsync/rsyncd.pid
lock file = /var/log/rsync/rsync.lock
log file = /var/log/rsync/rsyncd.log
use chroot = false
strict modes = false
hosts allow = 127.0.0.1,192.168.122.225
ignore errors = true
read only = true
list = yes
max connections = 10
auth users = whoareyou
secrets file = /var/log/rsync/rsyncd.pass
uid = root
gid = root

[delete]
path = /tmp/.snapshots/create_and_delete-snap/
EOF

cat <<EOF> /var/log/rsync/rsyncd.pass
whoareyou:123456
EOF

# 开启
rsync --daemon

# 验证
suse:~ # ss -antpl|grep 52050
LISTEN 0      5            0.0.0.0:52050      0.0.0.0:*    users:(("rsync",pid=14431,fd=5))
LISTEN 0      5               [::]:52050         [::]:*    users:(("rsync",pid=14431,fd=6))
客户端
cat <<'EOF'> ~/workspace/shell/rsync.sh
#! /bin/bash
echo >> /var/log/rsync.log
echo 同步开始于  `date +%F%t%T` >> /var/log/rsync.log

log_file=/var/log/rsync_download.log
time=`date +%F%t%T`

cmd="rsync -abvcz --port=52050 --progress --delete --backup-dir=/www/master_bak/conf/change/`date +%Y%m%d` --password-file=/var/log/rsync/rsync.pass --log-file=${log_file} whoareyou@rsync_server::delete /www/master_bak/conf/source/"
echo ${time} $cmd>>/var/log/rsync.log  2>&1
$cmd


echo 同步结束于  `date +%F%t%T`>>/var/log/rsync.log

echo >> /var/log/rsync.log
echo >> /var/log/rsync.log
echo >> /var/log/rsync.log
echo >> /var/log/rsync.log
EOF

cat <<EOF> /var/log/rsync/rsync.pass
123456
EOF

14. Zabbix

14.1. 安装

配置zabbix 5.0的源
rpm -Uvh https://mirrors.aliyun.com/zabbix/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm

sed -i 's#http://repo.zabbix.com#https://mirrors.aliyun.com/zabbix#g' /etc/yum.repos.d/zabbix.repo

yum clean all
yum makecache
安装Zabbix-server
yum -y install zabbix-server-mysql zabbix-agent
安装SCL
yum -y install centos-release-scl
安装Zabbix前端环境
sed -i '11s#0#1#' /etc/yum.repos.d/zabbix.repo

yum -y install zabbix-web-mysql-scl zabbix-apache-conf-scl
安装数据库

MariaDB 10.1

创建数据库
mysql -uroot -pgeek -e "create database zabbix character set utf8 collate utf8_bin;create user 'zabbix'@'localhost' identified by 'zabbix';grant all privileges on zabbix.* to 'zabbix'@'localhost';flush privileges;"
导入zabbix初始数据
zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -uzabbix -pzabbix zabbix
修改zabbix配置文件
#配置数据库密码
sed -i '124s# DBPassword=#DBPassword=zabbix#' /etc/zabbix/zabbix_server.conf
sed -i '124s/#DBPassword/DBPassword/' /etc/zabbix/zabbix_server.conf
#配置php时区
sed -i '25s#Europe/Riga#Asia/Shanghai#' /etc/opt/rh/rh-php72/php-fpm.d/zabbix.conf
sed -i '25s/; php/php/' /etc/opt/rh/rh-php72/php-fpm.d/zabbix.conf
启动zabbix服务
systemctl start zabbix-server zabbix-agent httpd rh-php72-php-fpm
设置开机自启
systemctl enable zabbix-server zabbix-agent httpd rh-php72-php-fpm
访问

访问: http://your_ip/zabbix 进行zabbix安装

数据库密码: zabbix

账户: Admin

密码: zabbix

处理乱码

Windows :直接将windows字体文件上传至服务器的 /usr/share/zabbix/assets/fonts 文件夹下

Linux : scp /usr/share/fonts/WindowsFonts/方正粗黑宋简体.ttf root@192.168.43.57:/usr/share/zabbix/assets/fonts/graphfont.ttf

14.2. 卸载

完全卸载Zabbix
# 找到zabbix的安装包
rpm -qa|grep zabbix

# 卸载zabbix
yum -y remove 包名

# 删除文件目录
find / -name zabbix -exec rm -rf {} +

# 删除zabbix数据库
mysql -uroot -pgeek -e 'drop database zabbix;'

14.3. 添加主机

配置zabbix 5.0的源
rpm -Uvh https://mirrors.aliyun.com/zabbix/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm

sed -i 's#http://repo.zabbix.com#https://mirrors.aliyun.com/zabbix#g' /etc/yum.repos.d/zabbix.repo

yum clean all
yum makecache
安装
yum -y install zabbix-agent
配置
sed -i '117s/127.0.0.1/zabbix_server_ip/' /etc/zabbix/zabbix_agentd.conf
启动
systemctl start zabbix-agent
开机自启
systemctl enable zabbix-agent
In the Zabbix web UI: Configuration → Hosts → Create host, then attach the Linux template

15. Inotify (待处理)

16. Iptables

16.1. 安装

安装
yum install -y iptables-services

# 停止firewalld
systemctl  stop  firewalld
# 禁用firewalld
systemctl mask firewalld.service
# iptables 开机自起
systemctl enable iptables.service
# 删除所有的链条和规则
iptables -F

iptables-save >/etc/sysconfig/iptables

16.2. 基本使用

禁用ICMP包
iptables -A INPUT -p icmp -j DROP
查看防火墙规则
iptables -L -n --line-numbers
[root@aliyun ~]# iptables -L -n --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    DROP       icmp --  0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
删除规则
iptables -D INPUT 1
保存防火墙规则
# 任何改动之前先备份
cp /etc/sysconfig/iptables /etc/sysconfig/iptables.bak
iptables-save > /etc/sysconfig/iptables
cat /etc/sysconfig/iptables

17. Tcpdump

18. ElasticStack

18.1. ELK环境配置

CentOS7

安装Java
yum install -y java-11-openjdk java-11-openjdk-devel java-11-openjdk-headless
增加YUM源
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

cat << EOF > /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF

18.1.1. ElasticSearch

安装
yum --disablerepo="*" --enablerepo="elasticsearch" install -y elasticsearch
配置
sed -i '17s/#cluster.name: my-application/cluster.name: myapp/' /etc/elasticsearch/elasticsearch.yml
sed -i '23s/#node.name: node-1/node.name: node-1/' /etc/elasticsearch/elasticsearch.yml
sed -i '56s/#network.host: 192.168.0.1/network.host: 0.0.0.0/' /etc/elasticsearch/elasticsearch.yml
sed -i '70s/#discovery.seed_hosts: .*/discovery.seed_hosts: ["127.0.0.1"]/' /etc/elasticsearch/elasticsearch.yml
sed -i '74s/#cluster.initial_master_nodes: .*/cluster.initial_master_nodes: ["node-1"]/' /etc/elasticsearch/elasticsearch.yml
开机启动
systemctl enable elasticsearch
启动服务
systemctl start elasticsearch

18.1.2. Kibana

安装
yum --disablerepo="*" --enablerepo="elasticsearch" install -y kibana
配置
sed -i '7s/#server.host: "localhost"/server.host: "0.0.0.0"/' /etc/kibana/kibana.yml
sed -i '32s/#elasticsearch.hosts: .*/elasticsearch.hosts: ["http:\/\/0.0.0.0:9200"]/' /etc/kibana/kibana.yml
sed -i '115s/#i18n.locale: "en"/i18n.locale: "en"/' /etc/kibana/kibana.yml
开机启动
systemctl enable kibana
启动服务
systemctl start kibana
最后

如果系统IP为 172.24.109.12,则访问 http://172.24.109.12:5601/

18.1.3. LogStash

安装
yum --disablerepo="*" --enablerepo="elasticsearch" install -y logstash
测试
cat <<EOF> /etc/logstash/conf.d/test.conf
input { stdin { } }

output {
  elasticsearch { hosts => ["127.0.0.1:9200"] }
  stdout { codec => rubydebug }
}
EOF
1. 用命令行方式启动logstash

/usr/share/logstash/bin/logstash  -f /etc/logstash/conf.d/test.conf


2. 通过stdin输入,观察logstash。如输入:“this is a test”,会有如下输出
{
      "@version" => "1",
       "message" => "this is a test",
          "host" => "slave-2022",
    "@timestamp" => 2022-05-26T13:49:06.697Z
}

3. 查看索引 curl -s "127.0.0.1:9200/_cat/indices?v"
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-2022.05.27-000001      jUvsxcK1RVSUv1EVvSZqVw   1   1          1            0      4.6kb          4.6kb

4.发现有一个新索引logstash-2022.05.27-000001,查看新索引的内容
curl -XGET '127.0.0.1:9200/logstash-2022.05.27-000001/_doc/_search/?pretty'


5.删除测试文件
rm -f /etc/logstash/conf.d/test.conf

18.2. Nginx日志分析

Nginx日志格式
log_format  main  '$host $server_port $remote_addr - $remote_user [$time_local] "$request" '
                '$status $request_time $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

18.2.1. FileBeat实时导入

Filebeat会把日志传送到Logstash,Logstash再把日志传到ES

安装
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.4-x86_64.rpm
rpm -vi filebeat-7.17.4-x86_64.rpm
配置
sed -i '28s/false/true/' /etc/filebeat/filebeat.yml
sed -i '32s/*.log/nginx\/access*.log/' /etc/filebeat/filebeat.yml
sed -i '135s/^/#/' /etc/filebeat/filebeat.yml
sed -i '137s/^/#/' /etc/filebeat/filebeat.yml
sed -i '148s/#//' /etc/filebeat/filebeat.yml
sed -i '150s/#//' /etc/filebeat/filebeat.yml
sed -i '67s/#//' /etc/filebeat/filebeat.yml
sed -i '150s/localhost:5044/127.0.0.1:5400/' /etc/filebeat/filebeat.yml
开机启动
systemctl enable filebeat
启动服务
systemctl start filebeat
验证
cat <<EOF> /etc/logstash/conf.d/nginx-es.conf
input {
        beats {
                host => "0.0.0.0"
                port => 5400	# 对应在filebeat的配置中,output到logstash的5400端口
        }
}
filter{
        grok{
                match => { "message" => "%{IPORHOST:host_ip} %{NUMBER:server_port} %{IPORHOST:remote_ip} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] \"%{WORD:request_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:status} %{NUMBER:request_time} %{NUMBER:body_bytes_sent} \"%{DATA:http_referrer}\" \"%{DATA:http_user_agent}\"" }
        }
        date {
                match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]
        }
}
output {
        elasticsearch {
                hosts => ["0.0.0.0:9200"]
                index => "nginx_es-%{+YYYY.MM.dd}"
        }
}
EOF

# 检查配置文件语法是否有错误
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/nginx-es.conf
启动logstash服务
systemctl start logstash
查看索引
curl -s "127.0.0.1:9200/_cat/indices?v"
yellow open   nginx_es-2022.05.27     RPDzQQbARqWXNzaCvxsI2w   1   1         33            0     32.6kb         32.6kb
检查ES内容,发现ES已自动创建索引nginx_es-2022.05.27并且已经有Nginx日志内容
curl -XGET '127.0.0.1:9200/nginx_es-2022.05.27/_doc/_search/?pretty'
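The grok pattern in the pipeline keys off the same log layout. As a quick sanity check outside Logstash, a POSIX extended regex can extract the status code from a line in that format (sample line is made up; GNU sed assumed):

```shell
line='example.com 80 203.0.113.7 - - [29/Nov/2019:14:02:31 +0800] "GET / HTTP/1.1" 404 0.012 153 "-" "curl/7.64.0" "-"'

# Status is the only 3-digit field that directly follows a closing quote
status=$(printf '%s\n' "$line" | sed -E 's/.*" ([0-9]{3}) .*/\1/')
echo "$status"
```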

18.2.2. Nginx日志普通导入

准备日志文件

日志文件存放路径: ~/es_log/nginx_logs

配置文件
mkdir -p ~/es_log/nginx_logs

cat << EOF > ~/es_log/logstash.conf
input{
  file {
    path => "${HOME}/es_log/nginx_logs/access.log*"
    type => "nginx_access"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    stat_interval => "1 second"
  }
}
filter{
        grok{
                match => { "message" => "%{IPORHOST:host_ip} %{NUMBER:server_port} %{IPORHOST:remote_ip} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] \"%{WORD:request_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:status} %{NUMBER:request_time} %{NUMBER:body_bytes_sent} \"%{DATA:http_referrer}\" \"%{DATA:http_user_agent}\"" }
        }
        date {
                match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]
        }
}
output {
        elasticsearch {
                hosts => ["0.0.0.0:9200"]
                index => "nginx_es_log-%{+YYYY.MM.dd}"
        }
        #file  {
        #    path => "/var/log/logstash.log"
        #    codec => json
        #}
}
EOF
导入数据到ES:
/usr/share/logstash/bin/logstash -f ~/es_log/logstash.conf

19. Ceph

19.1. 安装

添加依赖
yum -y install python-setuptools
添加Ceph仓库
CEPH_STABLE_RELEASE=nautilus

cat  << EOF > /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-${CEPH_STABLE_RELEASE}/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-${CEPH_STABLE_RELEASE}/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-${CEPH_STABLE_RELEASE}/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
EOF
创建部署目录
mkdir -p ~/ceph-cluster && cd ~/ceph-cluster
安装ceph-deploy
yum -y install ceph-deploy
创建集群
ceph-deploy new node1 node2 node3 --public-network 10.0.2.0/24 --cluster-network 10.0.2.0/24
安装ceph
# 自动安装
ceph-deploy install node1 node2 node3

# 手动安装
yum -y install ceph ceph-radosgw
设置 MON 和 KEY
ceph-deploy mon create-initial
TIP

After this completes, the following keyring files are generated in the deploy directory ~/ceph-cluster/

  • ceph.bootstrap-mds.keyring

  • ceph.bootstrap-mgr.keyring

  • ceph.bootstrap-osd.keyring

  • ceph.bootstrap-rgw.keyring

  • ceph.client.admin.keyring

将 ceph.client.admin.keyring 拷贝到各个节点上
ceph-deploy --overwrite-conf admin node1 node2 node3
TIP

拷贝之后的文件在节点上的 /etc/ceph/

安装MGR
ceph-deploy mgr create node1 node2 node3
启动OSD
# 擦除硬盘
ceph-deploy disk zap node1 /dev/sdb
ceph-deploy disk zap node2 /dev/sdb
ceph-deploy disk zap node3 /dev/sdb

# 创建osd节点
ceph-deploy osd create node1 --fs-type xfs --data /dev/sdb
ceph-deploy osd create node2 --fs-type xfs --data /dev/sdb
ceph-deploy osd create node3 --fs-type xfs --data /dev/sdb
修改ceph.conf
cat << EOF >> ~/ceph-cluster/ceph.conf
mon_clock_drift_allowed = 2
mon_clock_drift_warn_backoff = 30
EOF

ceph config set mon mon_warn_on_insecure_global_id_reclaim false
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false

ceph-deploy --overwrite-conf config push node{1,2,3}

systemctl restart ceph-mon.target
验证
[root@node1 ceph-cluster]# ceph -s
  cluster:
    id:     a7074991-7a98-42b1-a517-891782210587
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 22s)
    mgr: node1(active, since 2m), standbys: node2, node3
    osd: 3 osds: 3 up (since 119s), 3 in (since 119s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 13 GiB / 16 GiB avail
    pgs:

19.2. 文件存储服务

启用MDS服务
ceph-deploy mds create node1 node2 node3
创建文件存储池

建两个存储池,cephfs_metadata 用于存文件系统元数据,cephfs_data 用于存文件系统数据

ceph osd pool create cephfs_metadata 32 32
ceph osd pool create cephfs_data 32 32

# 启用
ceph osd pool application enable cephfs_metadata cephfs
ceph osd pool application enable cephfs_data cephfs
创建文件系统
ceph fs new koenlifs cephfs_metadata cephfs_data
验证挂载
mkdir -p /ceph/cephfs

# 方式一 mount
mount -t ceph node1:6789,node2:6789,node3:6789:/ /ceph/cephfs -o name=admin,secret=AQAZUhdj3ruKHhAAt5E5YdYhfGXUpnorM0VBDw==


# 方式二 ceph-fuse
yum -y install ceph-fuse

ceph-fuse -n client.admin -m node1:6789,node2:6789,node3:6789 /ceph/cephfs


[root@node2 cephfs]# df -h
Filesystem                                      Size  Used Avail Use% Mounted on
devtmpfs                                        988M     0  988M   0% /dev
tmpfs                                          1000M     0 1000M   0% /dev/shm
tmpfs                                          1000M  8.6M  991M   1% /run
tmpfs                                          1000M     0 1000M   0% /sys/fs/cgroup
/dev/mapper/centos-root                          17G  1.8G   16G  11% /
/dev/sda1                                      1014M  136M  879M  14% /boot
tmpfs                                           200M     0  200M   0% /run/user/0
tmpfs                                          1000M   52K 1000M   1% /var/lib/ceph/osd/ceph-1
10.0.2.10:6789,10.0.2.20:6789,10.0.2.30:6789:/  3.8G     0  3.8G   0% /ceph/cephfs
取消挂载
umount /ceph/cephfs
TIP

secret 在 /etc/ceph/ceph.client.admin.keyring

20. Btrfs

20.1. 基本使用

写入文件系统
mkfs.btrfs /path/to/device

# 将多个硬盘合并成一个文件系统
mkfs.btrfs -L mydata /dev/vdd /dev/vde
挂载
mount /dev/vdc1 /backup/

# 指定选项挂载(man 5 btrfs)
mount -o option1,option2 /dev/vdc1 /backup/
查看大小
btrfs filesystem show /mydata

suse:~ # btrfs filesystem show /mydata/
Label: 'mydata'  uuid: f38e8548-402c-42be-8f1e-4bcd4d275cf5
        Total devices 2 FS bytes used 192.00KiB
        devid    1 size 2.00GiB used 272.00MiB path /dev/vdd
        devid    2 size 2.00GiB used 264.00MiB path /dev/vde
扩缩容
btrfs filesystem resize 2:[-|+|max]1G /mydata/
suse:~ # btrfs filesystem resize 2:-1G /mydata/
Resize device id 2 (/dev/vde) from 2.00GiB to 1.00GiB
suse:~ # btrfs filesystem show /mydata/
Label: 'mydata'  uuid: f38e8548-402c-42be-8f1e-4bcd4d275cf5
        Total devices 2 FS bytes used 192.00KiB
        devid    1 size 2.00GiB used 272.00MiB path /dev/vdd
        devid    2 size 1.00GiB used 264.00MiB path /dev/vde
添加/删除磁盘至btrfs
btrfs device add /dev/vdf /mydata

btrfs device delete /dev/vdf /mydata
转化
# 将ext4转换为btrfs
btrfs-convert /path/to/device

# 将btrfs回退至之前的文件系统
btrfs-convert -r /path/to/device
创建子卷
btrfs subvolume create xxx
查看子卷信息
btrfs subvolume show xxx
创建子卷快照
btrfs subvolume snapshot -r xxx xxx-bak
挂载子卷
mount -o [subvol=xxx|subvolid=xxx] /dev/vdd /mnt/logs

20.2. 增量备份

  • /data (source side)

  • /backup/data (target side)

初始化
# 测试文件
dd if=/dev/zero of=/data/test bs=1G count=1

btrfs sub snap -r /data /data/bkp_data && sync

btrfs send /data/bkp_data | btrfs receive /backup
增量备份
# 测试文件
touch /data/test2

btrfs subvolume snapshot -r /data /data/bkp_data-2 && sync

btrfs send -p /data/bkp_data /data/bkp_data-2 | btrfs receive /backup
最后
btrfs sub del /data/bkp_data
mv /data/bkp_data-2 /data/bkp_data
btrfs sub del /backup/bkp_data
mv /backup/bkp_data-2 /backup/bkp_data
发送到远程目录
btrfs send /data/bkp_data | ssh root@ubuntu 'btrfs receive /backup'

21. Linux

21.1. 系统服务

21.1.1. 计划任务

  • 一次性计划任务

    • 命令: at 时间

    • 查看计划任务 : at -l

    • 删除任务: atrm 任务序号

    • 非交互式创建计划任务: echo "systemctl start httpd" | at 20:30

  • 周期性计划任务

    • 命令: crontab -e

    • 查看计划任务 : crontab -l

计划任务格式为:

分 时 日 月 星期 命令

假设在每周一、三、五的凌晨3 点25 分,都需要使用tar 命令把某个网站的数据目录进行打包处理,使其作为一个备份文件

命令: 25 3 * * 1,3,5 /usr/bin/tar -zcvf backup.tar.gz /var/www

TIP

crond 服务的计划任务参数中,所有命令一定要用绝对路径的方式来写,如果不知道绝对路径,请用 whereis 命令进行查询
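The five cron fields can be sanity-checked mechanically. A small sketch that expands a day-of-week list like 1,3,5 and a */20 minute step the way cron interprets them:

```shell
# Expand a comma list, as in the `1,3,5` day-of-week field
days=$(echo '1,3,5' | tr ',' ' ')

# Expand a */20 step over the minute range 0-59
minutes=$(seq 0 59 | awk '$1 % 20 == 0' | paste -sd' ' -)

echo "days=$days minutes=$minutes"
```

So 25 3 * * 1,3,5 fires at 03:25 on Mon/Wed/Fri, and */20 in the minute field fires at minutes 0, 20 and 40 of every hour.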

21.2. SSH

21.2.1. 无密钥登陆(Client)

生成密钥对
ssh-keygen -N "" -f ~/.ssh/yss

# 生成带邮箱的密钥对
ssh-keygen -N "" -f ~/.ssh/yss -C xxxx@xxx.com
上传公钥文件
ssh-copy-id -i $HOME/.ssh/yss.pub root@IP
配置SSH客户端私钥
touch ~/.ssh/config

chmod 755 ~/.ssh/config
cat << EOF >> ~/.ssh/config
Host IP
    IdentityFile ~/.ssh/yss
EOF

chmod 400 ~/.ssh/config

21.2.2. 无密钥登陆(Server)

生成密钥对
ssh-keygen -N "" -f ~/.ssh/foo -C foo

会在 ~/.ssh/ 目录下生成两个文件

  • 私钥文件 foo

  • 公钥文件 foo.pub

将公钥内容添加到authorized_keys文件中
cat ~/.ssh/foo.pub >> ~/.ssh/authorized_keys
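Appending with >> adds a duplicate entry every time it is rerun. A guarded variant, demonstrated on a throwaway file (the key string is a placeholder, not a real public key):

```shell
authfile=/tmp/fake_authorized_keys
key='ssh-rsa AAAA...placeholder foo'
: > "$authfile"   # start from an empty file

# Append only if the exact line is not already present; run twice to show idempotence
grep -qxF "$key" "$authfile" || echo "$key" >> "$authfile"
grep -qxF "$key" "$authfile" || echo "$key" >> "$authfile"

count=$(grep -cxF "$key" "$authfile")
echo "$count"
```

The same guard works verbatim against ~/.ssh/authorized_keys.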
将私钥文件发送至客户机
scp root@192.168.1.1:~/.ssh/foo ~/.ssh/
客户机配置
touch ~/.ssh/config

chmod 755 ~/.ssh/config
cat << EOF >> ~/.ssh/config
Host IP
    IdentityFile ~/.ssh/foo
EOF

chmod 400 ~/.ssh/config

21.3. 时间同步

#设置硬件时钟调整为与本地时钟一致
timedatectl set-local-rtc 1
#设置时区为上海
timedatectl set-timezone Asia/Shanghai

#安装ntpdate
yum -y install ntpdate
#同步时间
ntpdate -u  cn.ntp.org.cn
#同步完成后,date命令查看时间是否正确
date

同步时间后可能部分服务器过一段时间又会出现偏差,因此最好设置crontab来定时同步时间,方法如下

# Install cron (the crontab command is provided by the cronie package)
yum -y install cronie
#创建crontab任务
crontab -e
#添加定时任务
*/20 * * * * /usr/sbin/ntpdate cn.ntp.org.cn > /dev/null 2>&1
#重启crontab
service crond reload

21.4. Proxychains

安装
# 获取源码
git clone https://github.com/rofl0r/proxychains-ng

# 编译和安装
cd proxychains-ng
./configure --prefix=/usr --sysconfdir=/etc
make && make install && make install-config

# 删除文件
cd .. && rm -rf proxychains-ng

22. CentOS

22.1. Python38

# 下载
wget https://www.python.org/ftp/python/3.8.1/Python-3.8.1.tgz
tar -zxvf Python-3.8.1.tgz

# 进入文件夹
cd Python-3.8.1

# 配置安装位置
./configure --prefix=/usr/local/python3

# 安装
make && make install

# 添加python3的软链接
ln -s /usr/local/python3/bin/python3.8 /usr/bin/python3

# 添加 pip3 的软链接
ln -s /usr/local/python3/bin/pip3.8 /usr/bin/pip3

# 设置pip源
pip3 install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple pip
pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

22.2. PHP7

增加SCL源
yum install -y centos-release-scl
安装PHP7.2
yum install -y rh-php72 \
    rh-php72-php  \
    rh-php72-php-bcmath \
    rh-php72-php-fpm \
    rh-php72-php-gd \
    rh-php72-php-intl \
    rh-php72-php-mbstring \
    rh-php72-php-mysqlnd \
    rh-php72-php-opcache \
    rh-php72-php-pdo \
    rh-php72-php-pecl-apcu \
    rh-php72-php-xmlrpc \
    rh-php72-php-devel
进入 rh-php72 环境
scl enable rh-php72 bash
备份php.ini
cp /etc/opt/rh/rh-php72/php.ini /etc/opt/rh/rh-php72/php.ini.default
修改php.ini
# 启用 '<? ... ?>' 代码风格
sed -i '197s/short_open_tag = Off/short_open_tag = On/' /etc/opt/rh/rh-php72/php.ini

# 禁止一些危险性高的函数
sed -i '314s/disable_functions =/disable_functions = system,exec,shell_exec,passthru,set_time_limit,ini_alter,dl,openlog,syslog,readlink,symlink,link,leak,popen,escapeshellcmd,virtual,socket_create,mail,eval/' /etc/opt/rh/rh-php72/php.ini

# 配置中国时区
sed -i '902s#;date.timezone =#date.timezone = Asia/Shanghai#' /etc/opt/rh/rh-php72/php.ini
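The line-addressed sed 'Ns#old#new#' edits above are easy to verify on a scratch file first. A sketch of the same uncomment-and-set pattern (GNU sed assumed; the line number belongs to the scratch file, not the real php.ini):

```shell
# Two-line stand-in for an ini file with a commented-out setting
printf '; settings\n;date.timezone =\n' > /tmp/fake.ini

# Uncomment line 2 and set a value, mirroring the php.ini timezone edit above
sed -i '2s#;date.timezone =#date.timezone = Asia/Shanghai#' /tmp/fake.ini

tz=$(sed -n '2p' /tmp/fake.ini)
echo "$tz"
```

Using # as the sed delimiter avoids escaping the / in Asia/Shanghai.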
增加开机启动
systemctl enable rh-php72-php-fpm
启动 PHP-FPM 服务
systemctl start rh-php72-php-fpm
查看 PHP-FPM 服务状态
# systemctl status rh-php72-php-fpm
● rh-php72-php-fpm.service - The PHP FastCGI Process Manager
   Loaded: loaded (/usr/lib/systemd/system/rh-php72-php-fpm.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-11-29 13:36:03 CST; 1h 56min ago
 Main PID: 15360 (php-fpm)
   Status: "Processes active: 0, idle: 6, Requests: 56, slow: 0, Traffic: 0req/sec"
   CGroup: /system.slice/rh-php72-php-fpm.service
           ├─15360 php-fpm: master process (/etc/opt/rh/rh-php72/php-fpm.conf)
           ├─15361 php-fpm: pool www
           ├─15362 php-fpm: pool www
           ├─15363 php-fpm: pool www
           ├─15364 php-fpm: pool www
           ├─15365 php-fpm: pool www
           └─17211 php-fpm: pool www

Nov 29 13:36:03 iZ6weebcmroarpx8rrxscrZ systemd[1]: Starting The PHP FastCGI Process Manager...
Nov 29 13:36:03 iZ6weebcmroarpx8rrxscrZ systemd[1]: Started The PHP FastCGI Process Manager.

# ss -antpl|grep php-fpm
LISTEN     0      128    127.0.0.1:9000                     *:*                   users:(("php-fpm",pid=17211,fd=9),("php-fpm",pid=15365,fd=9),("php-fpm",pid=15364,fd=9),("php-fpm",pid=15363,fd=9),("php-fpm",pid=15362,fd=9),("php-fpm",pid=15361,fd=9),("php-fpm",pid=15360,fd=7))

22.3. Systemd

管理目录为 /usr/lib/systemd/system

22.3.1. Service

service文件
[Unit]
Description=SD Cloud Check In

[Service]
Type=simple
ExecStart=/usr/local/bin/python38 /root/workspace/66yunCheckIn/app.py

[Install]
WantedBy=multi-user.target

参数说明

[Unit]

[Unit]区块通常是配置文件的第一个区块,用来定义 Unit 的元数据,以及配置与其他 Unit 的关系

Description:简短描述
Documentation:文档地址
Requires:当前 Unit 依赖的其他 Unit,如果它们没有运行,当前 Unit 会启动失败
Wants:与当前 Unit 配合的其他 Unit,如果它们没有运行,当前 Unit 不会启动失败
BindsTo:与Requires类似,它指定的 Unit 如果退出,会导致当前 Unit 停止运行
Before:如果该字段指定的 Unit 也要启动,那么必须在当前 Unit 之后启动
After:如果该字段指定的 Unit 也要启动,那么必须在当前 Unit 之前启动
Conflicts:这里指定的 Unit 不能与当前 Unit 同时运行
Condition...:当前 Unit 运行必须满足的条件,否则不会运行
Assert...:当前 Unit 运行必须满足的条件,否则会报启动失败

[Service]

[Service]区块用来定义 Service 的配置,只有 Service 类型的 Unit 才有这个区块

Type:定义启动时的进程行为。它有以下几种值。
Type=simple:默认值,执行ExecStart指定的命令,启动主进程
Type=forking:以 fork 方式从父进程创建子进程,创建后父进程会立即退出
Type=oneshot:一次性进程,Systemd 会等当前服务退出,再继续往下执行
Type=dbus:当前服务通过D-Bus启动
Type=notify:当前服务启动完毕,会通知Systemd,再继续往下执行
Type=idle:若有其他任务执行完毕,当前服务才会运行
ExecStart:启动当前服务的命令
ExecStartPre:启动当前服务之前执行的命令
ExecStartPost:启动当前服务之后执行的命令
ExecReload:重启当前服务时执行的命令
ExecStop:停止当前服务时执行的命令
ExecStopPost:停止当前服务之后执行的命令
RestartSec:自动重启当前服务间隔的秒数
Restart:定义何种情况 Systemd 会自动重启当前服务,可能的值包括always(总是重启)、on-success、on-failure、on-abnormal、on-abort、on-watchdog
TimeoutSec:定义 Systemd 停止当前服务之前等待的秒数
Environment:指定环境变量
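其中 Environment 可以出现多次,每行定义一个或多个变量,含空格的值需要整体加引号;示例片段(示意写法,myapp 为假设的程序名,非本文某个具体服务):

```
[Service]
# 含空格的值整体加引号
Environment="JAVA_OPTS=-Xms256m -Xmx512m"
Environment=PORT=8080
# systemd 会在 ExecStart 中展开 $PORT
ExecStart=/usr/local/bin/myapp --port $PORT
```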

[Install]

[Install]通常是配置文件的最后一个区块,用来定义如何启动,以及是否开机启动

WantedBy:它的值是一个或多个 Target,当前 Unit 激活时(enable)符号链接会放入/etc/systemd/system目录下面以 Target 名 + .wants后缀构成的子目录中
RequiredBy:它的值是一个或多个 Target,当前 Unit 激活时,符号链接会放入/etc/systemd/system目录下面以 Target 名 + .requires后缀构成的子目录中
Alias:当前 Unit 可用于启动的别名
Also:当前 Unit 激活(enable)时,会被同时激活的其他 Unit

22.3.2. Timer

Timer文件
[Unit]
Description=SD Cloud CheckIn

[Timer]
OnCalendar=*-*-* 09:00:00

[Install]
WantedBy=multi-user.target

参数说明

[Timer]

[Timer]部分定制定时器

OnActiveSec:定时器生效后,多少时间开始执行任务
OnBootSec:系统启动后,多少时间开始执行任务
OnStartupSec:Systemd 进程启动后,多少时间开始执行任务
OnUnitActiveSec:该单元上次执行后,等多少时间再次执行
OnUnitInactiveSec: 定时器上次关闭后多少时间,再次执行
OnCalendar:基于绝对时间,而不是相对时间执行
AccuracySec:如果因为各种原因,任务必须推迟执行,推迟的最大秒数,默认是60秒
Unit:真正要执行的任务,默认是同名的带有.service后缀的单元
Persistent:如果设置了该字段,即使定时器到时没有启动,也会自动执行相应的单元
WakeSystem:如果系统休眠,是否自动唤醒系统

OnCalendar详细说明

    minutely → *-*-* *:*:00
      hourly → *-*-* *:00:00
       daily → *-*-* 00:00:00
     monthly → *-*-01 00:00:00
      weekly → Mon *-*-* 00:00:00
      yearly → *-01-01 00:00:00
   quarterly → *-01,04,07,10-01 00:00:00
semiannually → *-01,07-01 00:00:00

22.3.3. 使用

启动service
# 修改服务文件后重载服务
systemctl daemon-reload
# 启动 checkin.service
systemctl start checkin
# 查看服务状态
systemctl status checkin
# 增加开机自启
systemctl enable checkin
启动timer
# 启动 checkin.timer
systemctl start checkin.timer
# 查看所有已启用的定时器
systemctl list-timers
# 增加开机自启
systemctl enable checkin.timer
# 重启 checkin.timer
systemctl restart checkin.timer

22.4. CentOS9初始化

dnf install -y iproute rsync epel-release vim-enhanced wget curl

dnf install -y dnf-plugins-core
dnf config-manager --set-enabled crb


#禁用SELINUX,必须重启才能生效
echo SELINUX=disabled > /etc/selinux/config
echo SELINUXTYPE=targeted >> /etc/selinux/config

#最大可以打开的文件
echo "*               soft   nofile            65535" >> /etc/security/limits.conf
echo "*               hard   nofile            65535" >> /etc/security/limits.conf

# ssh登录时,登录ip会被反向解析为域名,导致ssh登录缓慢
sed -i "s/#UseDNS yes/UseDNS no/" /etc/ssh/sshd_config
sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/" /etc/ssh/sshd_config
sed -i "s/GSSAPICleanupCredentials yes/GSSAPICleanupCredentials no/" /etc/ssh/sshd_config
sed -i "s/#MaxAuthTries 6/MaxAuthTries 10/" /etc/ssh/sshd_config
# server每隔30秒发送一次请求给client,然后client响应,从而保持连接
sed -i "s/#ClientAliveInterval 0/ClientAliveInterval 30/" /etc/ssh/sshd_config
# server发出请求后,客户端无响应的次数达到上限(此处设为10)就自动断开连接,正常情况下client都会响应
sed -i "s/#ClientAliveCountMax 3/ClientAliveCountMax 10/" /etc/ssh/sshd_config

# 支持gbk文件显示
echo "set fencs=utf-8,gbk" >> /etc/vimrc

# 设定系统时区
yes|cp /usr/share/zoneinfo/Asia/Chongqing /etc/localtime

# 时间同步
dnf install -y systemd-timesyncd
systemctl enable systemd-timesyncd
systemctl start systemd-timesyncd

# 如果是x86_64系统,排除32位包
echo "exclude=*.i386 *.i586 *.i686" >> /etc/yum.conf

systemctl stop firewalld
systemctl disable firewalld
systemctl mask firewalld

dnf install iptables iptables-services iptables-utils -y

# 解决高版本SSH无法连接低版本SSH的问题
update-crypto-policies --set LEGACY

# 创建swap分区
dd if=/dev/zero of=/var/.swapfile count=16 bs=1G
chmod 600 /var/.swapfile
mkswap /var/.swapfile
swapon /var/.swapfile
echo "/var/.swapfile swap swap defaults 0 0" >> /etc/fstab

22.5. NetworkManager

设置IP地址
nmcli c modify enp1s0 ipv4.addresses 192.168.122.120/24 ipv4.method manual
nmcli c modify enp1s0 ipv4.gateway 192.168.122.1
nmcli c modify enp1s0 ipv6.method ignore
nmcli c modify enp1s0 ipv4.dns "223.5.5.5 223.6.6.6"
nmcli c modify enp1s0 connection.autoconnect yes
nmcli c modify eno1 ethernet.cloned-mac-address 90:b1:1c:4f:d9:e5
修改网卡名
nmcli c modify 'enp1s0' connection.id eno1
nmcli c reload
nmcli c up eno1
修改设备网卡名
nmcli c modify eno1 connection.interface-name eno1

创建网桥

  • 为了防止创建网桥失败导致主机断网,在创建之前先后台运行恢复脚本,网桥创建成功后再手动 kill 掉它

# 增加恢复脚本
cat <<EOF> update_network_delayed.sh
#!/bin/bash

sleep 120

nmcli c delete br1
nmcli c delete eno1

nmcli c add type ethernet autoconnect yes con-name eno1 ifname eno1
nmcli c modify eno1 ipv4.addresses 221.236.30.3/24 ipv4.method manual
nmcli c modify eno1 ipv4.gateway 221.236.30.1
nmcli c modify eno1 ipv4.dns "61.139.2.69,223.5.5.5"

nmcli c modify eno1 ethernet.cloned-mac-address 90:b1:1c:4f:d9:e5

nmcli c up eno1

echo "Network configuration has been updated."

EOF
# 创建br1网桥
nmcli c add type bridge autoconnect yes con-name br1 ifname br1
nmcli c modify br1 bridge.stp no
nmcli c modify br1 ipv6.method ignore
nmcli c modify br1 ipv4.addresses 221.236.30.3/24 ipv4.method manual
nmcli c modify br1 ipv4.gateway 221.236.30.1
nmcli c modify br1 ipv4.dns "223.5.5.5 223.6.6.6"
cat <<EOF> br1.sh
#!/bin/bash

nmcli c delete eno1
nmcli c add type ethernet autoconnect yes con-name eno1 ifname eno1 master br1
nmcli c up br1
EOF
# 先开启恢复脚本,再创建网桥
sh update_network_delayed.sh &
sh br1.sh

23. Java

23.1. Java_web

23.1.1. JDK

源码包安装
创建目录
mkdir /usr/java
解压文件
tar -xf jdk-11.0.15.1_linux-x64_bin.tar.gz -C /usr/java
设置环境变量
cat <<EOF>> /etc/profile
# set java environment
export JAVA_HOME=/usr/java/jdk-11.0.15.1
# JDK 9+ 已移除 tools.jar/dt.jar,无需再设置 CLASSPATH
export PATH=\$JAVA_HOME/bin:\$PATH
EOF
加载环境变量
source /etc/profile
查看Java版本号
[yss@master-2022 workspace]# java -version
java version "11.0.15.1" 2022-04-22 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.15.1+2-LTS-10)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.15.1+2-LTS-10, mixed mode)

23.1.2. Tomcat

源码包安装
wget https://mirrors.aliyun.com/apache/tomcat/tomcat-8/v8.5.78/bin/apache-tomcat-8.5.78.tar.gz
解压文件夹
tar -xf apache-tomcat-8.5.78.tar.gz
重命名Tomcat目录
mv apache-tomcat-8.5.78 /usr/local/tomcat/
设置文件的所属用户
useradd www
chown -R www:www /usr/local/tomcat/
/usr/local/tomcat/ 目录说明
bin:存放Tomcat的一些脚本文件,包含启动和关闭Tomcat服务脚本。
conf:存放Tomcat服务器的各种全局配置文件,其中最重要的是server.xml和web.xml。
webapps:Tomcat的主要Web发布目录,默认情况下把Web应用文件放于此目录。
logs:存放Tomcat执行时的日志文件。
备份默认文件
mv /usr/local/tomcat/conf/server.xml /usr/local/tomcat/conf/server.xml_bak
修改配置文件
cat <<EOF> /usr/local/tomcat/conf/server.xml
<?xml version="1.0" encoding="UTF-8"?>
<Server port="8006" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/>
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/>
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener"/>
<Listener className="org.apache.catalina.core.AprLifecycleListener"/>
<GlobalNamingResources>
<Resource name="UserDatabase" auth="Container"
 type="org.apache.catalina.UserDatabase"
 description="User database that can be updated and saved"
 factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
 pathname="conf/tomcat-users.xml"/>
</GlobalNamingResources>
<Service name="Catalina">
<Connector port="8080"
 protocol="HTTP/1.1"
 connectionTimeout="20000"
 redirectPort="8443"
 maxThreads="1000"
 minSpareThreads="20"
 acceptCount="1000"
 maxHttpHeaderSize="65536"
 debug="0"
 disableUploadTimeout="true"
 useBodyEncodingForURI="true"
 enableLookups="false"
 URIEncoding="UTF-8"/>
<Engine name="Catalina" defaultHost="localhost">
<Realm className="org.apache.catalina.realm.LockOutRealm">
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
  resourceName="UserDatabase"/>
</Realm>
<Host name="localhost" appBase="/data/wwwroot/default" unpackWARs="true" autoDeploy="true">
<Context path="" docBase="/data/wwwroot/default" debug="0" reloadable="false" crossContext="true"/>
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" />
</Host>
</Engine>
</Service>
</Server>
EOF
设置JVM内存参数
cat <<EOF> /usr/local/tomcat/bin/setenv.sh
JAVA_OPTS='-Djava.security.egd=file:/dev/./urandom -server -Xms256m -Xmx496m -Dfile.encoding=UTF-8'
EOF
增加系统服务进程
cat <<EOF > /lib/systemd/system/tomcat.service
[Unit]
Description=Tomcat
After=syslog.target

[Service]
Type=forking
ExecStart=/usr/local/tomcat/bin/catalina.sh start
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s QUIT \$MAINPID
PrivateTmp=true
User=root
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
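注意:未加引号的 heredoc(<<EOF)会在写入时展开 $ 变量,所以 service 文件中的 $MAINPID 要写成 \$MAINPID,或者改用 <<'EOF' 整体禁止展开。两者的区别可以这样验证:

```shell
name=world

# 未引用:$name 在写入时被 shell 展开
cat <<EOF > /tmp/unquoted.txt
hello $name
EOF

# 引用定界符:内容原样写入
cat <<'EOF' > /tmp/quoted.txt
hello $name
EOF

cat /tmp/unquoted.txt /tmp/quoted.txt
```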
启动tomcat服务
systemctl start tomcat
设置开机自启
systemctl enable tomcat
查看tomcat状态
[root@master-2022 bin]# ss -antpl|grep 8080
LISTEN     0      128          *:8080                     *:*                   users:(("java",pid=10900,fd=40))

[root@master-2022 bin]# systemctl status tomcat
● tomcat.service - Tomcat
   Loaded: loaded (/usr/lib/systemd/system/tomcat.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-05-25 21:01:04 CST; 5s ago
  Process: 13510 ExecStart=/usr/local/tomcat/bin/catalina.sh start (code=exited, status=0/SUCCESS)
 Main PID: 13522 (java)
   CGroup: /system.slice/tomcat.service
           └─13522 /usr/bin/java -Djava.util.logging.config.file=/usr...

May 25 21:01:04 master-2022 systemd[1]: Starting Tomcat...
May 25 21:01:04 master-2022 catalina.sh[13510]: Tomcat started.
May 25 21:01:04 master-2022 systemd[1]: Started Tomcat.
JavaWeb环境测试
# 创建网站根目录
mkdir -p /data/wwwroot/default
echo 'Tomcat test' > /data/wwwroot/default/index.jsp
chown -R www:www /data/wwwroot
配置nginx配置文件
cat << EOF > /etc/nginx/conf.d/tomcat.conf
upstream nameserver {
    server 10.0.2.20:8080;
}
server{
    listen       9980 default_server;
    server_name  tomcat.master.com;
    location / {
    proxy_pass http://nameserver;
    proxy_set_header Host \$host;
    }
}
EOF

# 重启nginx
nginx -t && nginx -s reload
测试
[root@master-2022 bin]# curl -i 127.0.0.1:9980
HTTP/1.1 200
Set-Cookie: JSESSIONID=68ABB16FBD9C3E8C65A2C89668EDEF74; Path=/; HttpOnly
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 12
Date: Wed, 25 May 2022 13:03:21 GMT

Tomcat test

24. PostgreSQL

25. Let’s Encrypt


27. PureFTPd

27.1. 安装

27.1.1. CentOS

安装
yum install -y epel-release

yum install -y pure-ftpd
配置
cat << EOF > /etc/pure-ftpd/pure-ftpd.conf
AllowAnonymousFXP no
AllowUserFXP no
AnonymousCanCreateDirs no
AnonymousCantUpload yes
AnonymousOnly no
AntiWarez yes
AutoRename no
BrokenClientsCompatibility yes
ChrootEveryone yes
CreateHomeDir no
CustomerProof no
Daemonize yes
DisplayDotFiles yes
DontResolve yes
IPV4Only yes
LimitRecursion 10000 8
MaxClientsNumber 200
MaxClientsPerIP 8
MaxDiskUsage 99
MaxIdleTime 15
MaxLoad 4
MinUID 45
PureDB /etc/pure-ftpd/pureftpd.pdb
NoAnonymous yes
NoChmod no
ProhibitDotFilesRead no
ProhibitDotFilesWrite no
SyslogFacility ftp
Umask 077:077
VerboseLog no
PassivePortRange 52000 52050
#加密通信
#0代表明文,默认值
#2代表控制链接加密但数据链接不加密
#3代表所有链接都加密
TLS 0
EOF

systemctl start pure-ftpd
systemctl enable pure-ftpd


mkdir -p /var/ftp

groupadd ftpgroup

useradd ftpuser -d /var/ftp -G ftpgroup -s /bin/false
新增FTP用户
yum install -y pwgen
生成FTP随机密码
# pure-pw useradd 会要求输入两次密码确认,因此把密码写入两行
ftp_password=`pwgen -s 20`
echo $ftp_password > pw.txt
echo $ftp_password >> pw.txt
cat pw.txt

pure-pw useradd ftptest -u ftpuser -g ftpgroup -d /var/ftp -m < pw.txt
rm -f pw.txt
pure-pw mkdb
pure-pw show ftptest

chown -R ftptest:ftpuser /var/ftp
测试
yum install -y lftp

lftp  -u ftptest,XKeJVhlCXfHdQmessy4f localhost <<EOF
mkdir test
ls
rmdir test
ls
quit
EOF
日志配置
vim /etc/rsyslog.conf

*.info;mail.none;authpriv.none;cron.none;ftp.none  /var/log/messages
# Pure-FTPd日志
ftp.*                              -/var/log/pureftpd/pureftpd.log



mkdir -p /var/log/pureftpd/ && touch /var/log/pureftpd/pureftpd.log

systemctl restart rsyslog.service
日志轮替
# 引用定界符,避免 $(...) 在写入时被展开
cat << 'EOF' > /etc/logrotate.d/pureftpd
/var/log/pureftpd/pureftpd.log {
        monthly
        missingok
        compress
        dateext
        rotate 1
        create 640 root root
        sharedscripts
        postrotate
        /bin/kill -HUP $(/bin/cat /var/run/syslogd.pid 2>/dev/null) &>/dev/null
        endscript
}
EOF

27.1.2. Ubuntu

安装
apt-get install pure-ftpd
配置
mkdir -p /var/ftp

groupadd ftpgroup

useradd ftpuser -d /var/ftp -G ftpgroup -s /bin/false

pure-pw useradd ftptest -u ftpuser -g ftpgroup -d /var/ftp

pure-pw mkdb

cd /etc/pure-ftpd/auth && ln -s /etc/pure-ftpd/conf/PureDB puredb

echo yes > /etc/pure-ftpd/conf/Daemonize
echo yes > /etc/pure-ftpd/conf/NoAnonymous
echo yes > /etc/pure-ftpd/conf/ChrootEveryone
echo yes > /etc/pure-ftpd/conf/IPV4Only
echo yes > /etc/pure-ftpd/conf/ProhibitDotFilesWrite
echo yes > /etc/pure-ftpd/conf/BrokenClientsCompatibility
echo 50 > /etc/pure-ftpd/conf/MaxClientsNumber
echo 5 > /etc/pure-ftpd/conf/MaxClientsPerIP
echo no > /etc/pure-ftpd/conf/VerboseLog
echo yes > /etc/pure-ftpd/conf/DisplayDotFiles
echo yes > /etc/pure-ftpd/conf/NoChmod
echo no > /etc/pure-ftpd/conf/AnonymousOnly
echo no > /etc/pure-ftpd/conf/PAMAuthentication
echo no > /etc/pure-ftpd/conf/UnixAuthentication
echo /etc/pure-ftpd/pureftpd.pdb > /etc/pure-ftpd/conf/PureDB
echo yes > /etc/pure-ftpd/conf/DontResolve
echo 15 > /etc/pure-ftpd/conf/MaxIdleTime
echo 2000 8 > /etc/pure-ftpd/conf/LimitRecursion
echo yes > /etc/pure-ftpd/conf/AntiWarez
echo no > /etc/pure-ftpd/conf/AnonymousCanCreateDirs
echo 4 > /etc/pure-ftpd/conf/MaxLoad
echo no > /etc/pure-ftpd/conf/AllowUserFXP
echo no > /etc/pure-ftpd/conf/AllowAnonymousFXP
echo no > /etc/pure-ftpd/conf/AutoRename
echo yes > /etc/pure-ftpd/conf/AnonymousCantUpload
echo 80 > /etc/pure-ftpd/conf/MaxDiskUsage
echo yes > /etc/pure-ftpd/conf/CustomerProof
echo 0 > /etc/pure-ftpd/conf/TLS
echo 45 > /etc/pure-ftpd/conf/MinUID

systemctl start pure-ftpd
systemctl enable pure-ftpd
日志配置
vim /etc/rsyslog.d/50-default.conf

auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none;ftp.none         -/var/log/syslog
#cron.*                         /var/log/cron.log
#daemon.*                       -/var/log/daemon.log
kern.*                          -/var/log/kern.log
#lpr.*                          -/var/log/lpr.log
mail.*                          -/var/log/mail.log
#user.*                         -/var/log/user.log
ftp.*                           -/var/log/pureftpd/pureftpd.log


mkdir -p /var/log/pureftpd/ && touch /var/log/pureftpd/pureftpd.log

systemctl restart rsyslog.service
日志轮替
# 引用定界符,避免 $(...) 在写入时被展开
cat << 'EOF' > /etc/logrotate.d/pureftpd
/var/log/pureftpd/pureftpd.log {
        su syslog adm
        monthly
        missingok
        compress
        dateext
        rotate 1
        create 640 syslog adm
        sharedscripts
        postrotate
        /bin/kill -HUP $(/bin/cat /var/run/rsyslogd.pid 2>/dev/null) &>/dev/null
        endscript
}
EOF


28. Harbor

28.1. 安装

下载
wget -c https://github.com/goharbor/harbor/releases/download/v2.3.4/harbor-offline-installer-v2.3.4.tgz
解压
tar xf harbor-offline-installer-v2.3.4.tgz -C /usr/local
修改配置
# 进入文件夹
cd /usr/local/harbor
# 备份默认配置文件
cp -a harbor.yml.tmpl harbor.yml

# 修改地址
sed -i '5s/reg.mydomain.com/192.168.0.150/' harbor.yml

# 注释https
sed -i '13s/^/#/' harbor.yml
sed -i '15s/^/#/' harbor.yml
sed -i '17s/^/#/' harbor.yml
sed -i '18s/^/#/' harbor.yml

# 查看harbor默认密码
sed -n '34,34p' harbor.yml


# 安装
./install.sh


# 修改配置daemon文件
sed -i '2s/$/,/' /etc/docker/daemon.json
sed -i '2 a \"insecure-registries\":[\"192.168.0.150:80\"]' /etc/docker/daemon.json

# 重启docker
systemctl daemon-reload
systemctl restart docker
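用 sed 按行号修改 JSON 容易因文件格式变化出错;若机器上装有 python3,可以按结构化方式合并配置。下面在临时文件上演示(/tmp/daemon.json 为演示路径,实际文件是 /etc/docker/daemon.json):

```shell
# 构造一个演示用的 daemon.json
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
EOF

# 以 JSON 结构方式增加 insecure-registries,不依赖行号
python3 - <<'EOF'
import json
path = '/tmp/daemon.json'
with open(path) as f:
    conf = json.load(f)
conf['insecure-registries'] = ['192.168.0.150:80']
with open(path, 'w') as f:
    json.dump(conf, f, indent=2)
EOF

cat /tmp/daemon.json
```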

28.2. 使用

推送镜像
# 登陆docker仓库
docker login -u admin -p Harbor12345 192.168.0.150:80

# 修改镜像tag,格式为: harbor地址/仓库名/镜像名:版本号

docker tag 64f770dda7e1 192.168.0.150:80/repo/mytest:v2.0.0

# 推送镜像
docker push 192.168.0.150:80/repo/mytest:v2.0.0

# 拉取镜像
docker pull 192.168.0.150:80/repo/mytest:v2.0.0

28.3. Nginx反代Harbor

28.3.1. Nginx设置

cat <<EOF> /etc/nginx/conf.d/harbor.ylighgh.xyz.conf

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name harbor.ylighgh.xyz;
    root /data/web/ylighgh.xyz;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /etc/letsencrypt/live/ylighgh.xyz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ylighgh.xyz/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /etc/letsencrypt/live/ylighgh.xyz/dhparam.pem;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    # nginx: Specifies a file with trusted CA certificates in the PEM format used to verify client certificates and OCSP responses if ssl_stapling is enabled.
    # certbot: If you’re using OCSP stapling with Nginx >= 1.3.7, chain.pem should be provided as the ssl_trusted_certificate to validate OCSP responses.
    ssl_trusted_certificate /etc/letsencrypt/live/ylighgh.xyz/chain.pem;
    client_max_body_size    0;

    location / {
        proxy_pass http://192.168.10.237:81;
    }

    location /v2/ {

    proxy_pass http://192.168.10.237:81/v2/;
    proxy_set_header Host \$host;
    proxy_set_header X-Real-IP \$remote_addr;
    proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto \$scheme;
    proxy_buffering off;
    proxy_request_buffering off;

    }
}
EOF

28.3.2. 重载Nginx

nginx -s reload

29. buildkit

k8s 1.24 起移除了 dockershim,默认使用 containerd 作为容器运行时,因此需要安装 buildkit,配合 nerdctl 构建镜像。

29.1. 安装

下载
wget -c https://github.com/moby/buildkit/releases/download/v0.12.2/buildkit-v0.12.2.linux-amd64.tar.gz
解压
tar xf buildkit-v0.12.2.linux-amd64.tar.gz -C /usr/local
配置systemd
cat <<EOF> /etc/systemd/system/buildkit.service
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit

[Service]
ExecStart=/usr/local/bin/buildkitd --oci-worker=false --containerd-worker=true

[Install]
WantedBy=multi-user.target
EOF
重载systemd
systemctl daemon-reload
启动buildkit
systemctl start buildkit
设为开机自动
systemctl enable buildkit

30. kubectl

30.1. 安装

增加镜像源
cat <<EOF> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
EOF
安装
yum install -y kubectl

30.2. 使用

获取namespaces
kubectl get namespaces

kubectl get ns
获取指定命名空间的deployments
kubectl get deployments.apps -n kube-ops

# 输出更详细的信息
kubectl get deployments.apps -n kube-ops -o wide

kubectl get deploy -n kube-ops -o wide
格式化输出
# 只获取deployments名称
kubectl get deployments.apps -o go-template --template='{{ range .items}}{{printf "%s\n"  .metadata.name }}{{ end }}' -n kube-ops

# 获取deployments名称和副本数
kubectl get deployments.apps -o go-template --template='{{ range .items}}{{printf "%s %d\n"  .metadata.name .spec.replicas }}{{ end }}' -n kube-ops

# 获取副本数大于0的deployments名称
kubectl get deployments.apps -o go-template --template='{{range .items}}{{if gt .spec.replicas 0}}{{.metadata.name}} {{.spec.replicas}}{{"\n"}}{{end}}{{end}}' -n kube-ops
创建service
# expose将一个资源包括Pod、Service、Deployment等公开为一个新的Service
kubectl expose deployment deployname --port=81 --type=NodePort --target-port=80 --name=service-name

为 deployname 发布一个服务:--port 为服务暴露的端口,--type 为服务类型,--target-port 为后端 Pod 的端口;port 提供了集群内部访问服务的入口,即 ClusterIP:port。

修改镜像
# 将一个deployname的image改为镜像为1.0的image
kubectl set image deploy deployname containername=containername:1.0
删除资源
kubectl delete po podname --now
回滚镜像
# 回滚到上个版本
kubectl rollout undo deployment <deployment-name>

# 回滚到指定版本
kubectl rollout history deployment <deployment-name>
kubectl rollout undo deployment <deployment-name> --to-revision=<revision-number>
扩缩容
kubectl scale deployment deployname --replicas=newnumber
重启应用
kubectl rollout restart deployment deployname
设置环境变量
kubectl set env deployment/hephaestus KEY=VALUE -n pro