1. Cluster Planning
| Hostname | IP Address | Description |
|---|---|---|
| k8s-master01~03 | 192.168.10.101~103 | master nodes × 3 |
| / | 192.168.10.100 | keepalived virtual IP (does not occupy a machine) |
| k8s-node01~02 | 192.168.10.104~105 | worker nodes × 2 |
Note: replace these subnets consistently with your own; the Pod subnet, Service subnet, and host subnet must not overlap!
| Item | Value |
|---|---|
| Operating system | Rocky Linux 9.5 |
| Kubernetes version | 1.31.7 |
| Etcd version | 3.5.21 |
| Containerd version | 1.7.27 |
| Pod CIDR | 172.16.0.0/16 |
| Service CIDR | 10.96.0.0/16 |
Note:
- The host subnet, K8s Service CIDR, and Pod CIDR must not overlap; for details see the course material "Must-read before installation: cluster subnet planning".
- The VIP (virtual IP) must not collide with any IP already in use on your office network; ping it first, and only use it if it does not respond. The VIP must be on the same LAN as your hosts (do not reuse the VIP from this document)!
- On public clouds the VIP is the cloud load balancer's address, e.g. Alibaba Cloud internal SLB/NLB or Tencent Cloud internal ELB. In that case keepalived and haproxy are not needed.
- On a private cloud, ask the administrator whether VIPs are supported!
2. Basic Configuration
2.1 Base Environment Setup
- Set the hostname on every node (adjust for each node)
hostnamectl set-hostname k8s-master01
- Configure /etc/hosts on all nodes
vi /etc/hosts
Append:
192.168.10.101 k8s-master01
192.168.10.102 k8s-master02
192.168.10.103 k8s-master03
192.168.10.104 k8s-node01
192.168.10.105 k8s-node02
- Configure the yum repositories on all nodes
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
-i.bak \
/etc/yum.repos.d/*.repo
# Refresh the cache
yum makecache
- Install the required tools on all nodes
yum install -y wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git rsyslog
- On all nodes, disable firewalld, dnsmasq, and SELinux, and enable rsyslog
systemctl disable --now firewalld
systemctl disable --now dnsmasq
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
systemctl enable --now rsyslog
- Disable swap on all nodes
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
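To verify, free should now report zero swap:
free -h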
- Install ntpdate on all nodes (on Rocky 9 the ntpdate command is provided by the ntpsec package from EPEL)
yum -y install epel-release
yum -y config-manager --set-enabled epel
yum -y install ntpsec
- Sync the time on all nodes and set the Asia/Shanghai timezone
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# Add to crontab
crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
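crontab -e is interactive; a non-interactive sketch that appends the same entry (assumes no duplicate entry exists yet):
(crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com') | crontab -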
- Configure limits on all nodes
ulimit -SHn 65535
vi /etc/security/limits.conf
Append the following at the end:
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
- Upgrade the system on all nodes and reboot (optional for a lab environment)
yum -y update
- Set up passwordless SSH from Master01 to the other nodes
All configuration files and certificates are generated on Master01 during the installation, and the cluster is administered from Master01 as well.
ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Note: in public cloud environments you may need to run kubectl from a node that is not a master.
- Download all the source files on Master01
cd /root/ ; git clone https://gitee.com/bairanping/k8s-ha-install.git
2.2 Kernel Configuration
- Install ipvsadm on all nodes
yum install -y ipvsadm ipset sysstat conntrack libseccomp
- Load the IPVS modules on all nodes
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
- On all nodes, create ipvs.conf so the modules are loaded automatically at boot
vi /etc/modules-load.d/ipvs.conf
Content:
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
- Start the service on all nodes (errors here can be ignored)
systemctl enable --now systemd-modules-load.service
- Apply kernel tuning on all nodes
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
- Apply the settings on all nodes
sysctl --system
- After the kernel configuration, reboot all nodes, then check that the modules were loaded automatically
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
Expected output:
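Instead of eyeballing lsmod, a small loop can flag missing modules (a sketch; extend the list to match ipvs.conf above):
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  lsmod | grep -q "^$m " && echo "$m loaded" || echo "$m MISSING"
done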
2.3 Installing the High-Availability Components
Note:
- If you are not building a high-availability cluster, haproxy and keepalived are not needed.
- On public clouds, use the provider's load balancer (e.g. Alibaba Cloud SLB/NLB, Tencent Cloud ELB) instead of haproxy and keepalived, since most public clouds do not support keepalived.
- Install HAProxy and KeepAlived on all master nodes
yum install -y keepalived haproxy
- Configure HAProxy on all master nodes; adjust the backend server IPs and ports below for your environment
# Back up the original file
cd /etc/haproxy
cp haproxy.cfg haproxy.cfg.bak
# Edit the file
vi haproxy.cfg
Content:
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
frontend k8s-master
bind 0.0.0.0:16443
bind 127.0.0.1:16443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.10.101:6443 check
server k8s-master02 192.168.10.102:6443 check
server k8s-master03 192.168.10.103:6443 check
- Configure the keepalived.conf file on all master nodes; adjust the interface name, source IP, priority, and VIP below for your environment
Master01 configuration
vi /etc/keepalived/keepalived.conf
Content:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens160
mcast_src_ip 192.168.10.101
virtual_router_id 51
priority 101
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.10.100
}
track_script {
chk_apiserver
}
}
Master02 configuration
vi /etc/keepalived/keepalived.conf
Content:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
mcast_src_ip 192.168.10.102
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.10.100
}
track_script {
chk_apiserver
}
}
Master03 configuration
vi /etc/keepalived/keepalived.conf
Content:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
mcast_src_ip 192.168.10.103
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.10.100
}
track_script {
chk_apiserver
}
}
- Create the KeepAlived health-check script on all master nodes
vi /etc/keepalived/check_apiserver.sh
Content:
#!/bin/bash
# Check three times, one second apart, whether haproxy is running.
# If it is down for all three checks, stop keepalived so that the
# VIP fails over to another master.
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
- Start the haproxy and keepalived services on all master nodes
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
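With haproxy running, the health-check script can be exercised by hand; an exit code of 0 means haproxy was detected:
bash /etc/keepalived/check_apiserver.sh; echo $?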
Important: if keepalived and haproxy are installed, you must verify that keepalived actually works.
- Test the VIP from all nodes
# Check that the VIP answers ping
ping 192.168.10.100 -c 4
# Check that the VIP port is reachable
telnet 192.168.10.100 16443
Note:
- If the VIP does not answer ping and telnet never shows a ']', the VIP is not usable and you must not continue; troubleshoot keepalived first, e.g. the firewall and SELinux, the haproxy and keepalived status, and the listening ports (an nc alternative to telnet is shown after these notes).
- On all nodes the firewall must be disabled and inactive: systemctl status firewalld
- On all nodes SELinux must be disabled: getenforce
- On the master nodes check haproxy and keepalived: systemctl status keepalived haproxy
- On the master nodes check the listening ports: netstat -lntp
- If all of the above look fine, check whether:
  1. it is a public cloud machine
  2. it is a private cloud machine (e.g. OpenStack)
  Public clouds generally do not support keepalived, and private clouds may have restrictions too; ask your private cloud administrator.
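If telnet is unavailable, nc gives an equivalent port check (a sketch; assumes the nmap-ncat package is installed):
nc -zv 192.168.10.100 16443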
3. Installing the Runtime
For Kubernetes below 1.24, either Docker or Containerd can be used; from 1.24 on, Containerd is the recommended runtime and Docker is no longer recommended.
3.1 Installing Containerd
- Configure the package repository on all nodes
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- Install containerd on all nodes (if it was installed previously, reinstall to update it)
yum install -y containerd
# Check the version
ctr --version
- Configure the kernel modules required by Containerd on all nodes
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
- Load the modules on all nodes
modprobe -- overlay
modprobe -- br_netfilter
- Configure the sysctl settings required by Containerd on all nodes
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
- Apply the sysctl settings on all nodes
sysctl --system
- Generate the Containerd configuration file on all nodes
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
- On all nodes, switch Containerd to the systemd cgroup driver and a mirrored pause image
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's#registry.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
- Start Containerd on all nodes and enable it at boot
systemctl daemon-reload
systemctl enable --now containerd
# Check that the cri and overlayfs plugins are healthy
ctr plugin ls
Expected output:
- Configure the runtime endpoint for the crictl client on all nodes (optional)
# Download the package
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.1/crictl-v1.27.1-linux-amd64.tar.gz
# Extract and install
tar zxvf crictl-v1.27.1-linux-amd64.tar.gz -C /usr/local/bin
# Configure crictl
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Verify the installation
crictl --version
Expected output:
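Beyond the version string, crictl can confirm that it actually reaches containerd (standard cri-tools subcommands; exact output varies by version):
crictl info
crictl ps -a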
4. Installing the k8s Components and etcd
- Download the Kubernetes package on Master01 (change 1.31.x to the latest version you see)
wget https://dl.k8s.io/v1.31.7/kubernetes-server-linux-amd64.tar.gz
Latest releases: github.com/kubernetes/…
- Download the etcd package on Master01 (note the version)
wget https://github.com/etcd-io/etcd/releases/download/v3.5.21/etcd-v3.5.21-linux-amd64.tar.gz
Latest releases: github.com/etcd-io/etc…
- Extract the Kubernetes binaries on Master01
tar -zxvf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
- Extract the etcd binaries on Master01 (note the version)
tar -zxvf etcd-v3.5.21-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.21-linux-amd64/etcd{,ctl}
- Check the versions on Master01
kubelet --version
etcdctl version
Expected output:
- Send the binaries from Master01 to the other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do
echo $NODE
scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/
scp /usr/local/bin/etcd* $NODE:/usr/local/bin/
done
for NODE in $WorkNodes; do
echo $NODE
scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/
done
- Check out the installation files on Master01
Note: on Master01 switch to the manual-installation-v1.31.x branch (for other versions use the matching .x branch; there is no need to match the exact patch version).
cd /root/k8s-ha-install && git checkout manual-installation-v1.31.x
5. Generating Certificates
Note: this is the most critical part of a binary installation; one wrong step ruins everything, so make sure every step is correct.
- Download the certificate tools on Master01 (if the download fails, get them from the Baidu network drive instead)
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
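A quick sanity check that the binary runs:
cfssl version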
5.1 Etcd Certificates
- Create the etcd certificate directory on all master nodes
mkdir -p /etc/etcd/ssl
- Generate the etcd CA certificate on Master01
cd /root/k8s-ha-install/pki
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
- On Master01, sign the etcd certificate from its CSR file (the CSR configures the domains, organization, and unit)
Note: adjust the hostnames and IP addresses below (the IPs of the hosts where etcd is deployed).
If you may need to scale etcd later, reserve extra IP addresses here now; otherwise scaling out will not be possible without reissuing the certificate.
cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.10.101,192.168.10.102,192.168.10.103 \
-profile=kubernetes etcd-csr.json | cfssljson \
-bare /etc/etcd/ssl/etcd
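To confirm the hostnames and IPs actually made it into the certificate, inspect its SANs with openssl:
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'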
- Copy the certificates from Master01 to the other master nodes
MasterNodes='k8s-master02 k8s-master03'
for NODE in $MasterNodes; do
echo $NODE
ssh $NODE "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
done
done
5.2 K8s Component Certificates
- Create the kubernetes directories on all nodes
mkdir -p /etc/kubernetes/pki
- Generate the kubernetes CA certificate on Master01
cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
5.2.1 APIServer Certificate
Note: 10.96.0.1 is the first IP of the k8s Service CIDR; if you change the Service CIDR, change 10.96.0.1 accordingly.
192.168.10.100 is the load balancer VIP, and the IPs after it are the hosts where the apiserver is deployed.
- Generate the apiserver certificate on Master01
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,192.168.10.100,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.10.101,192.168.10.102,192.168.10.103 \
-profile=kubernetes apiserver-csr.json | cfssljson \
-bare /etc/kubernetes/pki/apiserver
- Generate the apiserver aggregation (front-proxy) certificates on Master01
# Generate the front-proxy CA certificate
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# Generate the aggregation client certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson \
-bare /etc/kubernetes/pki/front-proxy-client
Note: ignore the warning in the output.
5.2.2 ControllerManager Certificate
- Generate the controller-manager certificate on Master01
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes manager-csr.json | cfssljson \
-bare /etc/kubernetes/pki/controller-manager
- On Master01, set the cluster entry in the kubeconfig file
Note: change the IP and port to the load balancer VIP, i.e. the apiserver endpoint.
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.10.100:16443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
- On Master01, set the context entry in the kubeconfig file
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
- On Master01, set the user entry in the kubeconfig file
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
- On Master01, set the default context in the kubeconfig file
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
5.2.3 Scheduler Certificate
- Generate the scheduler certificate on Master01
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes scheduler-csr.json | cfssljson \
-bare /etc/kubernetes/pki/scheduler
- On Master01, set the cluster entry in the kubeconfig file
Note: change the IP and port to the load balancer VIP, i.e. the apiserver endpoint.
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.10.100:16443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
- On Master01, set the user entry in the kubeconfig file
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
- On Master01, set the context entry in the kubeconfig file
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
- On Master01, set the default context in the kubeconfig file
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
5.2.4 Admin Certificate
- Generate the admin certificate on Master01
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson \
-bare /etc/kubernetes/pki/admin
- On Master01, set the cluster entry in the kubeconfig file
Note: change the IP and port to the load balancer VIP, i.e. the apiserver endpoint.
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.10.100:16443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
- On Master01, set the user entry in the kubeconfig file
kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
- On Master01, set the context entry in the kubeconfig file
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
- On Master01, set the default context in the kubeconfig file
kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
5.2.5 Creating the ServiceAccount Key Pair
- On Master01, create a key pair used to sign ServiceAccount tokens
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
- Send the certificates from Master01 to the other master nodes
for NODE in k8s-master02 k8s-master03; do
echo $NODE
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
- List the certificate files on Master01
ls /etc/kubernetes/pki/
ls /etc/kubernetes/pki/ |wc -l
Expected output:
6. Configuring the Kubernetes Components
6.1 Etcd Configuration
Note: the etcd configuration is largely the same on every master; change the hostname and IP addresses per master node.
6.1.1 Master01
Create the etcd.config.yml file on Master01
Note: adjust the hostname and IP addresses below for this node.
vi /etc/etcd/etcd.config.yml
Content:
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.10.101:2380'
listen-client-urls: 'https://192.168.10.101:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.10.101:2380'
advertise-client-urls: 'https://192.168.10.101:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.10.101:2380,k8s-master02=https://192.168.10.102:2380,k8s-master03=https://192.168.10.103:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
6.1.2 Master02
Create the etcd.config.yml file on Master02
Note: adjust the hostname and IP addresses below for this node.
vi /etc/etcd/etcd.config.yml
Content:
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.10.102:2380'
listen-client-urls: 'https://192.168.10.102:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.10.102:2380'
advertise-client-urls: 'https://192.168.10.102:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.10.101:2380,k8s-master02=https://192.168.10.102:2380,k8s-master03=https://192.168.10.103:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
6.1.3 Master03
Create the etcd.config.yml file on Master03
Note: adjust the hostname and IP addresses below for this node.
vi /etc/etcd/etcd.config.yml
Content:
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.10.103:2380'
listen-client-urls: 'https://192.168.10.103:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.10.103:2380'
advertise-client-urls: 'https://192.168.10.103:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.10.101:2380,k8s-master02=https://192.168.10.102:2380,k8s-master03=https://192.168.10.103:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
6.1.4 Starting etcd
- Create the etcd.service file on all master nodes
vi /usr/lib/systemd/system/etcd.service
Content:
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
- On all master nodes, create the etcd certificate directory under the kubernetes PKI and link in the certificates
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
- Start the etcd service on all master nodes
systemctl daemon-reload
systemctl enable --now etcd
- Check the etcd status
export ETCDCTL_API=3
etcdctl \
--endpoints="192.168.10.101:2379,192.168.10.102:2379,192.168.10.103:2379" \
--cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
--cert=/etc/kubernetes/pki/etcd/etcd.pem \
--key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status \
--write-out=table
Expected output:
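Besides the status table, endpoint health reports whether each member responds (same etcdctl flags as above):
etcdctl \
  --endpoints="192.168.10.101:2379,192.168.10.102:2379,192.168.10.103:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health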
6.2 APIServer Configuration
6.2.1 Master01
Note: this document uses 10.96.0.0/16 as the k8s Service CIDR; it must not overlap the host or Pod subnets, so adjust it as needed.
Create the kube-apiserver.service file on Master01
vi /usr/lib/systemd/system/kube-apiserver.service
Content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.10.101 \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.10.101:2379,https://192.168.10.102:2379,https://192.168.10.103:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
6.2.2 Master02
Note: this document uses 10.96.0.0/16 as the k8s Service CIDR; it must not overlap the host or Pod subnets, so adjust it as needed.
Create the kube-apiserver.service file on Master02
vi /usr/lib/systemd/system/kube-apiserver.service
Content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.10.102 \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.10.101:2379,https://192.168.10.102:2379,https://192.168.10.103:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
6.2.3 Master03
Note: this document uses 10.96.0.0/16 as the k8s Service CIDR; it must not overlap the host or Pod subnets, so adjust it as needed.
Create the kube-apiserver.service file on Master03
vi /usr/lib/systemd/system/kube-apiserver.service
Content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.10.103 \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.10.101:2379,https://192.168.10.102:2379,https://192.168.10.103:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
6.2.4 Starting the apiserver
- Start the kube-apiserver service on all master nodes
systemctl daemon-reload
systemctl enable --now kube-apiserver
- Check the kube-apiserver status on all master nodes
systemctl status kube-apiserver
Expected output:
Note: these messages in the system log can be ignored.
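The apiserver can also be probed directly; /healthz is readable anonymously through the default system:public-info-viewer RBAC role and should return "ok" (a sketch; run on any master):
curl -k https://127.0.0.1:6443/healthz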
6.3 ControllerManager Configuration
Note: this document uses 172.16.0.0/16 as the Pod CIDR; it must not overlap the host or k8s Service subnets, so adjust it as needed.
- Create the kube-controller-manager.service file on all master nodes (the configuration is identical on every master)
vi /usr/lib/systemd/system/kube-controller-manager.service
Content:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=172.16.0.0/16 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
- Start the kube-controller-manager service on all master nodes
systemctl daemon-reload
systemctl enable --now kube-controller-manager
- Check the status on all master nodes
systemctl status kube-controller-manager
Expected output:
6.4 Scheduler Configuration
- Create the kube-scheduler.service file on all master nodes (the configuration is identical on every master)
vi /usr/lib/systemd/system/kube-scheduler.service
Content:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--leader-elect=true \
--authentication-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
- Start the kube-scheduler service on all master nodes
systemctl daemon-reload
systemctl enable --now kube-scheduler
- Check the status on all master nodes
systemctl status kube-scheduler
Expected output:
7. TLS Bootstrapping Configuration
- On Master01, create the bootstrap credentials and set up the kubeconfig file
Note: change the IP and port to the load balancer VIP, i.e. the apiserver endpoint.
cd /root/k8s-ha-install/bootstrap
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.10.100:16443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
- Query the cluster status from Master01
Note: continue only if the cluster status can be queried; otherwise troubleshoot the k8s components first (any output is fine; minor differences do not matter).
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
kubectl get cs
Expected output:
- Create the bootstrap resources from Master01
kubectl create -f bootstrap.secret.yaml
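To confirm the token landed, list the bootstrap secrets (the secret name follows the bootstrap-token-&lt;token-id&gt; convention, so it should match the c8ad9c token set above):
kubectl get secret -n kube-system | grep bootstrap-token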
8. Node Configuration
8.1 Copying Certificates
- Copy the certificates from Master01 to the other nodes
cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
echo $NODE
ssh $NODE mkdir -p /etc/kubernetes/pki
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
Expected output:
8.2 Kubelet Configuration
- Create the kubelet directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
- Configure the kubelet.service file on all nodes
vi /usr/lib/systemd/system/kubelet.service
Content:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
- Configure the kubelet.service drop-in file on all nodes (this could also be written directly into kubelet.service)
vi /etc/systemd/system/kubelet.service.d/10-kubelet.conf
Content:
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
- Create the kubelet configuration file on all nodes
Note: if you changed the k8s Service CIDR, update the clusterDNS setting in kubelet-conf.yml to the tenth address of the Service CIDR, e.g. 10.96.0.10.
vi /etc/kubernetes/kubelet-conf.yml
Content:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
- Start the kubelet service on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
- Check on all nodes that kubelet is running normally
tail -200f /var/log/messages
At this point it is normal for the system log /var/log/messages to show only the two kinds of messages below; they go away once calico is installed. If there are many errors, or large amounts of unfamiliar error output, the kubelet configuration is wrong and needs to be checked.
- Check the cluster status from Master01 (Ready or NotReady are both fine at this stage)
kubectl get node
Expected output:
8.3 kube-proxy Configuration
- On Master01, create the kube-proxy certificate and set up the kubeconfig file
Note: change the IP and port to the load balancer VIP, i.e. the apiserver endpoint.
cd /root/k8s-ha-install/pki
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson \
-bare /etc/kubernetes/pki/kube-proxy
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.10.100:16443 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context system:kube-proxy@kubernetes \
--cluster=kubernetes \
--user=system:kube-proxy \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context system:kube-proxy@kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
- Send the kubeconfig from Master01 to the other nodes
for NODE in k8s-master02 k8s-master03; do
echo $NODE
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
for NODE in k8s-node01 k8s-node02; do
echo $NODE
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
- Add the kube-proxy.service file on all nodes
vi /usr/lib/systemd/system/kube-proxy.service
Content:
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
- Add the kube-proxy.yaml file on all nodes
Note: if you changed the cluster Pod CIDR, update the clusterCIDR setting in kube-proxy.yaml to your own Pod CIDR.
vi /etc/kubernetes/kube-proxy.yaml
Content:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
- Start the kube-proxy service on all nodes
systemctl daemon-reload
systemctl enable --now kube-proxy
- Check on all nodes that kube-proxy is running normally
tail -200f /var/log/messages
At this point it is normal for the system log /var/log/messages to show only the two kinds of messages below; they go away once calico is installed.
9. Installing Calico
- On all nodes, stop NetworkManager from managing Calico's network interfaces, to avoid conflicts or interference
cat >>/etc/NetworkManager/conf.d/calico.conf<<EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali
EOF
systemctl daemon-reload
systemctl restart NetworkManager
- Configure the calico.yaml file on Master01
Note: replace the Pod CIDR placeholder in calico.yaml with your own Pod CIDR.
cd /root/k8s-ha-install/calico/
sed -i "s#POD_CIDR#172.16.0.0/16#g" calico.yaml
Check that the CIDR is your own Pod CIDR:
grep "IPV4POOL_CIDR" calico.yaml -A 1
After the change it should show your Pod CIDR:
- Deploy the calico components from Master01
kubectl apply -f calico.yaml
- Check the pod status from Master01
kubectl get pod -n kube-system
Expected output:
If a container is unhealthy, inspect it with kubectl describe or kubectl logs:
kubectl describe -n kube-system pod pod_name
kubectl logs -f -n kube-system pod_name
kubectl logs -f -n kube-system pod_name -c upgrade-ipam
Log output:
10. Installing CoreDNS
- Configure the coredns.yaml file on Master01
Note: if you changed the k8s Service CIDR, set the coredns service IP to the tenth IP of the Service CIDR.
cd /root/k8s-ha-install/
COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" CoreDNS/coredns.yaml
- Deploy the coredns component from Master01
kubectl create -f CoreDNS/coredns.yaml
- Check the pod status from Master01
kubectl get pod -n kube-system
Expected output:
11. Installing Metrics Server
In recent Kubernetes versions, system resource collection goes through Metrics Server, which gathers memory, disk, CPU, and network usage for nodes and Pods.
- Install the metrics-server components from Master01
cd /root/k8s-ha-install/metrics-server
kubectl create -f .
- Check the service status from Master01
kubectl get pod -n kube-system -l k8s-app=metrics-server
Expected output:
- Check node and pod resource usage from Master01
kubectl top node
kubectl top pod -A
Expected output:
12. Shell Completion
- Install bash-completion
yum install -y bash-completion
- Source the completion script from ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc
- Add the completion script to the /etc/bash_completion.d directory
kubectl completion bash >/etc/bash_completion.d/kubectl
- Source the script so it takes effect
source /usr/share/bash-completion/bash_completion
13. Cluster Validation
- All nodes are healthy, with status Ready
kubectl get node
Expected output:
- All Pods are Running, with no frequent restarts
kubectl get pod -A
Expected output:
- The cluster subnets have no conflicts
# Check the node subnet
kubectl get node -o wide
# Check the service subnet
kubectl get svc -A
# Check the pod subnet
kubectl get pod -A -o wide | grep coredns
Expected output:
- Resources can be created normally
kubectl create deploy cluster-test --image=registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools -- sleep 3600
Expected output:
- Pods must be able to resolve Services (same namespace and across namespaces)
# Exec into the pod created in the previous step to test (use your actual pod name)
kubectl exec -it cluster-test-5dbf5c5d-w9n6s -- bash
nslookup kubernetes
nslookup kube-dns.kube-system
Expected output:
- Every node must be able to reach the kubernetes svc on 443 and the kube-dns svc on 53
curl https://10.96.0.1:443 -k
curl http://10.96.0.10:53 -k
Expected output:
- Pods must be able to communicate with each other (same namespace and across namespaces)
- Pods must be able to communicate with each other (same node and across nodes); see the spot-check below
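A way to spot-check the last two items (a sketch; pod names and IPs come from your own cluster, and it assumes the debug-tools image includes ping):
# List pod IPs and the nodes they run on
kubectl get pod -A -o wide
# From the test pod, ping another pod's IP, preferably one on a different node
kubectl exec -it cluster-test-5dbf5c5d-w9n6s -- ping -c 3 <other-pod-ip>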