k8s Series: Multi-Master Cluster Deployment



Background

1. Every article in this k8s series assumes an offline (air-gapped) intranet deployment.

2. All commands are executed as the root user.

3. The k8s version is 1.22.2.

4. This is a high-availability deployment scheme suitable for production environments.

k8s Series Index

k8s Series Tutorials: Index

Example Machines

All machines run Linux.

IP              Role
192.168.31.1    master01
192.168.31.2    master02
192.168.31.3    master03
192.168.31.4    node01
192.168.31.5    node02
192.168.31.6    load-balancer VIP

I. Preparation

Follow preparation steps 1, 2, and 3 from the previous article.

II. Three-Master Hot Standby

Use Nginx + Keepalived to host the load-balancer virtual IP (VIP).

1. Install and Deploy Nginx

Install on all three master machines:

# Extract the Nginx source package and cd into it. Note: the stream module must be enabled at compile time.
./configure --with-stream
make && make install

Configuration

vim /usr/local/nginx/conf/nginx.conf

# Add the following; the stream block sits at the same level as the http block.
stream {
    upstream k8s-apiserver {
        server 192.168.31.1:6443;
        server 192.168.31.2:6443;
        server 192.168.31.3:6443;
    }
    server {
        listen 16443;
        proxy_pass k8s-apiserver;
    }
}

# Sync the config file to the other two masters
scp /usr/local/nginx/conf/nginx.conf  master02:/usr/local/nginx/conf/nginx.conf
scp /usr/local/nginx/conf/nginx.conf  master03:/usr/local/nginx/conf/nginx.conf

Start Nginx on all three machines:


cd /usr/local/nginx/sbin
./nginx

2. Install and Deploy Keepalived to Host the VIP

Install on all three machines:

mkdir /etc/keepalived
# Extract the source package and cd into it
./configure --prefix=/usr/local/keepalived  --sysconf=/etc
make && make install


# Start (once the configuration below is in place)
service keepalived start

# Enable at boot
systemctl enable keepalived.service

Master01 configuration

vim /etc/keepalived/keepalived.conf
# Contents:
global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc  # email notification settings; changing them is optional
   smtp_server 192.168.200.1  # email notification settings; changing them is optional
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 50
    nopreempt
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.31.6/24
    }
}

Master02 configuration

vim /etc/keepalived/keepalived.conf
# Contents:
global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    nopreempt
    priority 80
    advert_int 1
    virtual_ipaddress {
        192.168.31.6/24
    }
}

Master03 configuration

vim /etc/keepalived/keepalived.conf
# Contents:
global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    nopreempt
    priority 60
    advert_int 1
    virtual_ipaddress {
        192.168.31.6/24
    }
}
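Two hedged notes on the configurations above (not in the original article): keepalived honors nopreempt only when state is BACKUP on every node, so on Master01 the option is effectively ignored; and as written, the VIP follows machine liveness only, not Nginx itself. A common addition is a vrrp_script health check so the VIP also moves when Nginx dies, sketched below.

```
# Hypothetical addition to each machine's keepalived.conf:
vrrp_script check_nginx {
    script "/usr/bin/pgrep -x nginx"   # non-zero exit when nginx is down
    interval 2
    weight -30                         # drop priority so a backup takes over
}

vrrp_instance VI_1 {
    # ... existing settings from above ...
    track_script {
        check_nginx
    }
}
```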

3. Verification

Visit http://192.168.31.6 ; seeing the Nginx welcome page confirms the VIP is live. (This checks Keepalived's VIP on port 80; the apiserver proxy itself listens on port 16443.)

III. Master Cluster Initialization

Run on Master01:

kubeadm init \
--apiserver-advertise-address=192.168.31.1 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--control-plane-endpoint=192.168.31.6:16443   # the VIP plus the Nginx stream listen port
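After a successful init, kubeadm prints a few follow-up steps for pointing kubectl at the new cluster. A sketch of those steps, guarded so it is a no-op on a machine that has not been initialized:

```shell
# Make kubectl on Master01 talk to the new cluster.
mkdir -p "$HOME/.kube"
# admin.conf only exists on an initialized control-plane node.
if [ -f /etc/kubernetes/admin.conf ]; then
  cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
  chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
```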

Copy the cluster PKI files from Master01 to Master02 and Master03.

# First, create the target directory on Master02 and Master03
mkdir -p /etc/kubernetes/pki/etcd/
# Then, on Master01, send the files:

scp /etc/kubernetes/pki/ca.crt master02:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/sa.key master02:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/ca.key master02:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.pub master02:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt  master02:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key  master02:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt master02:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key master02:/etc/kubernetes/pki/etcd/ca.key

scp /etc/kubernetes/pki/ca.crt master03:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/sa.key master03:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/ca.key master03:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.pub master03:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt  master03:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key  master03:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt master03:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key master03:/etc/kubernetes/pki/etcd/ca.key
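The sixteen scp commands above can also be generated with a small loop. The sketch below is a dry run that only prints the commands; pipe the output to sh (with passwordless SSH to master02/03 set up) to actually copy:

```shell
# Build the list of scp commands for the shared control-plane certificates.
PKI=/etc/kubernetes/pki
FILES="ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key"
CMDS=$(for host in master02 master03; do
  for f in $FILES; do
    echo "scp $PKI/$f $host:$PKI/$f"
  done
done)
echo "$CMDS"   # dry run; pipe to sh to execute
```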

Join Master02 and Master03 as control-plane nodes (use the exact command printed by the initialization on Master01):

kubeadm join 192.168.31.6:16443 --token z8mb13.estz0k7ya7459vop \
      --discovery-token-ca-cert-hash sha256:4e0fc0dd6f329c0803ebdec49eda26dd49f0018d7b522a1b4fa2159f75ef3991 \
      --control-plane

IV. Join the Worker Nodes

Use the join command printed during cluster initialization:

kubeadm join 192.168.31.6:16443 --token z8mb13.estz0k7ya7459vop \
        --discovery-token-ca-cert-hash sha256:4e0fc0dd6f329c0803ebdec49eda26dd49f0018d7b522a1b4fa2159f75ef3991
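If the token from the original init output has expired (default TTL is 24 hours), a fresh join command can be printed on any control-plane node with `kubeadm token create --print-join-command`. The discovery hash itself is just the SHA-256 of the cluster CA's public key; the computation is sketched below against a throwaway self-signed certificate so the snippet is self-contained (on a real master, point CA_CRT at /etc/kubernetes/pki/ca.crt):

```shell
# Demo CA for illustration; replace with /etc/kubernetes/pki/ca.crt
# on a real control-plane node.
CA_CRT=/tmp/demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out "$CA_CRT" -subj "/CN=kubernetes-ca" -days 1 2>/dev/null

# kubeadm's discovery hash = sha256 of the CA public key in DER form.
HASH=$(openssl x509 -pubkey -in "$CA_CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```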

V. Deploy the Flannel Network Plugin

# Run on master01
cd /home/k8s
kubectl apply -f kube-flannel.yml
chmod a+x flannel
cp flannel /opt/cni/bin/
scp flannel master02:/opt/cni/bin/
scp flannel master03:/opt/cni/bin/
scp flannel node01:/opt/cni/bin/
scp flannel node02:/opt/cni/bin/

VI. Common Issues and Other Operations

k8s Series: Single-Master Deployment