Base Environment Installation
一、Docker Installation
sudo yum update -y
sudo vi /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
sudo yum install -y docker-ce docker-ce-cli containerd.io
docker --version
- Edit the Docker configuration file:
sudo vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://sczyqqhj.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "insecure-registries": ["10.0.5.68:8088"]
}
Note: set "insecure-registries" to your Harbor address and port. JSON does not allow comments, so do not put a # line inside the file.
sudo systemctl restart docker
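Because a malformed daemon.json prevents dockerd from starting, it is worth validating the file before the restart. A minimal sketch, using a scratch copy (point CONF at /etc/docker/daemon.json on a real host):

```shell
# Validate daemon.json before restarting Docker.
# CONF is a scratch copy here; use /etc/docker/daemon.json on a real host.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "registry-mirrors": ["https://sczyqqhj.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "insecure-registries": ["10.0.5.68:8088"]
}
EOF
if python3 -m json.tool "$CONF" > /dev/null; then
  echo "daemon.json OK"
else
  echo "daemon.json is not valid JSON" >&2
fi
```

A stray comment line or trailing comma is the most common reason `systemctl restart docker` fails after editing this file.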
二、K8S Installation
Tip: if K8S has never been installed on these machines, steps 1, 2, and 3 can be skipped.
1. Stop all K8S services
systemctl stop kubelet
2. Uninstall the existing Kubernetes components:
yum remove -y kubeadm kubectl kubelet kubernetes-cni
3. Clean up leftover data
rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd
4. Base environment settings
# Set each machine's own hostname
hostnamectl set-hostname xxxx
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Allow iptables to inspect bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
# 10.0.9.100 is the master node's IP; configure the IP and hostname mappings on every server (append the entries below to /etc/hosts)
echo "10.0.9.100 cluster-endpoint" >> /etc/hosts
10.0.9.100 zymdm-app01
10.0.9.101 zymdm-app02
10.0.9.102 zymdm-app03
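Re-running the setup can leave duplicate lines in /etc/hosts; a sketch of an idempotent append, using a scratch file (use /etc/hosts on a real node):

```shell
# Append each mapping only if it is not already present.
# HOSTS is a scratch file here; use /etc/hosts on a real node.
HOSTS=$(mktemp)
add_hosts() {
  while read -r entry; do
    grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
  done <<'EOF'
10.0.9.100 cluster-endpoint
10.0.9.100 zymdm-app01
10.0.9.101 zymdm-app02
10.0.9.102 zymdm-app03
EOF
}
add_hosts
add_hosts   # second run adds nothing
```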
# Label the nodes by running these on the master node (<master-node-name> is the master's hostname, <worker-node-name> a worker's hostname)
kubectl label node <master-node-name> role=master
kubectl label node <worker-node-name> role=worker
5. Install specific versions of the components (kubelet, kubeadm, kubectl): use the commands below to install 1.23.17
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
sudo yum install -y kubeadm-1.23.17 kubelet-1.23.17 kubectl-1.23.17 --disableexcludes=kubernetes
6. Initialize the cluster
6.1 Pull the required images
# Pull the images from the remote Harbor (before pulling, log in with: docker login 10.0.5.68:8088, then enter the username and password)
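The individual pulls listed below can also be generated from one list; a sketch that echoes the commands rather than running them (running them requires access to the Harbor at 10.0.5.68:8088):

```shell
# Build the pull command for every control-plane image from one list.
# Drop the "echo" to actually pull (requires docker login 10.0.5.68:8088 first).
REGISTRY=10.0.5.68:8088/k8s_containers
cmds=$(for img in kube-apiserver:v1.23.17 kube-scheduler:v1.23.17 \
                  kube-controller-manager:v1.23.17 kube-proxy:v1.23.17 \
                  pause:3.6 etcd:3.5.6-0 coredns:v1.8.6; do
  echo "docker pull $REGISTRY/$img"
done)
echo "$cmds"
```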
docker pull 10.0.5.68:8088/k8s_containers/kube-apiserver:v1.23.17
docker pull 10.0.5.68:8088/k8s_containers/kube-scheduler:v1.23.17
docker pull 10.0.5.68:8088/k8s_containers/kube-controller-manager:v1.23.17
docker pull 10.0.5.68:8088/k8s_containers/kube-proxy:v1.23.17
docker pull 10.0.5.68:8088/k8s_containers/pause:3.6
docker pull 10.0.5.68:8088/k8s_containers/etcd:3.5.6-0
docker pull 10.0.5.68:8088/k8s_containers/coredns:v1.8.6
6.2 Initialize the master node
kubeadm init \
--apiserver-advertise-address=10.0.5.68 \
--control-plane-endpoint=cluster-endpoint \
--image-repository 10.0.5.68:8088/k8s_containers \
--kubernetes-version v1.23.17 \
--service-cidr=10.97.0.0/16 \
--pod-network-cidr=10.244.0.0/16
-----Notes-----
10.0.5.68 is the master node's IP.
10.0.5.68:8088/k8s_containers is the address and project name of the privately deployed Harbor.
v1.23.17 is the Kubernetes version, i.e. the version of the control-plane components in 10.0.5.68:8088/k8s_containers:
kube-apiserver:v1.23.17, kube-scheduler:v1.23.17, kube-controller-manager:v1.23.17, kube-proxy:v1.23.17, pause:3.6, etcd:3.5.6-0, coredns:v1.8.6
6.3 Set up $HOME/.kube/config (run on the master node only)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
6.4 Run the join command generated by kubeadm init on each worker node
kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3
Note: the join token expires after 24 hours by default; if it has expired, regenerate the full command on the master with kubeadm token create --print-join-command.
6.5 Worker nodes must first pull the kube-proxy, coredns, and pause images
docker login 10.0.5.68:8088
docker pull 10.0.5.68:8088/k8s_containers/pause:3.6
docker pull 10.0.5.68:8088/k8s_containers/coredns:v1.8.6
docker pull 10.0.5.68:8088/k8s_containers/kube-proxy:v1.23.17
6.6 The images Calico needs have already been loaded into Docker, and the corresponding Calico manifest has been modified; pulling from the public registry tends to fail
Info: the test-environment servers already have tarballs of the three images below. Copy the tarballs to the server and run docker load -i calico-node.tar, docker load -i calico-cni.tar, and docker load -i calico-kube-controllers.tar to produce them.
calico/kube-controllers:v3.25.0
calico/cni:v3.25.0
calico/node:v3.25.0
6.7 Install the Calico network plugin (the manifest has already been modified and placed under /home/jlbyw)
Run kubectl apply -f calico.yaml on the master node only.
三、Kuboard Installation
sudo docker run -d \
--restart=unless-stopped \
--name=kuboard \
-p 80:80/tcp \
-p 10081:10081/tcp \
-e KUBOARD_ENDPOINT="http://10.0.9.103:80" \
-e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
-v /root/kuboard-data:/data \
swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3
# Do not use 127.0.0.1 or localhost as the internal IP
# Kuboard does not need to be on the same network segment as K8S; the Kuboard Agent can even reach the Kuboard Server through a proxy
# KUBOARD_ENDPOINT is the internal IP

四、Jenkins Installation
The steps below install via Docker; launching from the WAR file also works.
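The --group-add 993 value in the command below is the docker group's GID on this particular host; on another machine it should be looked up rather than copied. A sketch (assumes a docker group exists on the host):

```shell
# Look up the docker group's GID instead of hard-coding 993.
# (GID itself is a readonly bash variable, hence the different name.)
DOCKER_GID=$(getent group docker | cut -d: -f3)
echo "docker group GID: ${DOCKER_GID:-not found}"
```

Without the correct GID, the Jenkins container cannot use the mounted /var/run/docker.sock.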
docker run --name jenkins-instance -d -p 8089:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  -v /data/jenkins_home:/var/jenkins_home \
  --group-add 993 jenkins/jenkins:2.493

五、Redis Installation
1. Download and install Redis
sudo yum update -y
sudo yum install epel-release -y
sudo yum install redis -y
sudo systemctl start redis
sudo systemctl enable redis
sudo systemctl status redis
redis-cli ping
sudo vi /etc/redis.conf
# in redis.conf, set:
bind 0.0.0.0
requirepass your_password
sudo systemctl restart redis
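The two edits above can also be applied non-interactively with sed; a sketch on a scratch copy (the patterns assume the stock CentOS redis.conf defaults, bind 127.0.0.1 and a commented requirepass; use /etc/redis.conf on a real host):

```shell
# Apply the bind/requirepass changes without opening an editor.
# CONF is a scratch file seeded with the stock defaults; use /etc/redis.conf on a real host.
CONF=$(mktemp)
printf 'bind 127.0.0.1\n# requirepass foobared\n' > "$CONF"
sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/' "$CONF"
sed -i 's/^# requirepass foobared/requirepass your_password/' "$CONF"
cat "$CONF"
```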
sudo firewall-cmd --zone=public --add-port=6379/tcp --permanent
sudo firewall-cmd --reload
2. Change the Redis data directory
ls -ld /data
sudo chown redis:redis /data
sudo chmod 770 /data
ps aux | grep redis
mount | grep /data
If /data is mounted read-only (ro), remount it read-write: sudo mount -o remount,rw /data

六、RabbitMQ Installation
1. Copy the erlang and rabbitmq RPM packages onto the server
2. Install erlang: sudo rpm -ivh erlang-*.rpm
3. Install rabbitmq: sudo rpm -ivh rabbitmq-server-*.rpm
4. Start and enable the rabbitmq service:
sudo systemctl start rabbitmq-server
sudo systemctl enable rabbitmq-server
5. Check the service status: sudo systemctl status rabbitmq-server
6. Enable the management plugin: sudo rabbitmq-plugins enable rabbitmq_management
7. Create a new administrator user: sudo rabbitmqctl add_user admin your_password
8. Give the user the administrator role: sudo rabbitmqctl set_user_tags admin administrator
9. Grant permissions: sudo rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
10. Open the management UI: http://<server IP>:15672

七、ELK Installation
1. Elasticsearch installation
- Import the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
- Create the repository: create a file named elasticsearch.repo under /etc/yum.repos.d/ with the content:
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
- Install: sudo yum install --enablerepo=elasticsearch elasticsearch
- Start: sudo systemctl start elasticsearch
- Edit /etc/elasticsearch/elasticsearch.yml and set:
network.host: 0.0.0.0
http.port: 9200
xpack.security.enabled: false          # make sure the security module is disabled
xpack.security.http.ssl.enabled: false # make sure HTTP SSL is disabled
- Restart: sudo systemctl restart elasticsearch
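The elasticsearch.yml edits above can likewise be scripted; a sketch appending the overrides to a scratch file (use /etc/elasticsearch/elasticsearch.yml on a real host, and note that disabling xpack security is only appropriate for closed test environments):

```shell
# Append the listening and security overrides to elasticsearch.yml.
# YML is a scratch file here; use /etc/elasticsearch/elasticsearch.yml on a real host.
YML=$(mktemp)
cat >> "$YML" <<'EOF'
network.host: 0.0.0.0
http.port: 9200
xpack.security.enabled: false
xpack.security.http.ssl.enabled: false
EOF
cat "$YML"
```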
2. Logstash installation
- Import the public signing key:
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
- Create the repository: create a file named logstash.repo under /etc/yum.repos.d/ with the content:
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
- Install: sudo yum install logstash
- Start: sudo systemctl start logstash
3. Kibana installation
- Import the public signing key:
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
- Create the repository: create a file named kibana.repo under /etc/yum.repos.d/ with the content:
[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
- Install: sudo yum install kibana
- Start: sudo systemctl start kibana
- Edit: /etc/kibana/kibana.yml
- Restart Kibana: sudo systemctl restart kibana
八、Harbor Installation
- Download the Harbor installer package
- Extract it:
tar xzvf harbor-offline-installer-v2.8.0.tgz
cd harbor
- Configure the YAML: vim harbor.yml
- Start Harbor: sudo ./install.sh
- Open Harbor: http://<hostname or IP>
- The password is the one set in harbor.yml (harbor_admin_password); if none was set, the default admin password is Harbor12345.