29. Installing Kubernetes v1.22 with kubeadm -- Cluster Deployment (Part 2)


7. Cluster Initialization

Official initialization documentation:

kubernetes.io/docs/setup/…

7.1 Initializing a highly available master from the command line

Initialize the Master01 node. After initialization, the corresponding certificates and configuration files are generated under /etc/kubernetes; the other Master nodes can then join Master01:

[root@k8s-master01 ~]# kubeadm init --apiserver-advertise-address=172.31.3.101 --control-plane-endpoint=172.31.3.188 --apiserver-bind-port=6443 --kubernetes-version=v1.22.8 --pod-network-cidr=192.168.0.0/12 --service-cidr=10.96.0.0/12 --service-dns-domain=example.local --image-repository=harbor.raymonds.cc/google_containers --ignore-preflight-errors=swap
[init] Using Kubernetes version: v1.22.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.101 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01.example.local localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01.example.local localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.539425 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.example.local as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5lvne8.u7abib9ephwv9s4e
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8 

If initialization fails, reset and then initialize again with the following command:

kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube

A successful initialization produces a Token, which the other nodes use when joining the cluster, so record the token printed by the successful init:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8

Configure an environment variable on the Master01 node for accessing the Kubernetes cluster:

[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

[root@k8s-master01 ~]# source /root/.bashrc

#Alternatively, the kubeconfig can be set up with the following commands
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
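
Optionally, confirm that kubectl can reach the API server (a quick check, for example):

root@k8s-master01:~# kubectl cluster-info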

Check the node status:

root@k8s-master01:~# kubectl get nodes 
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   2m22s   v1.22.8

With this installation method, all system components run as containers in the kube-system namespace, so the Pod status can be checked. Note that the node stays NotReady and the CoreDNS Pods stay Pending until a CNI plugin is installed (Calico is deployed in section 11):

root@k8s-master01:~# kubectl get pods -n kube-system -o wide
NAME                                                 READY   STATUS    RESTARTS   AGE     IP             NODE                         NOMINATED NODE   READINESS GATES
coredns-566c4546cf-bs5gn                             0/1     Pending   0          2m44s   <none>         <none>                       <none>           <none>
coredns-566c4546cf-zfs4j                             0/1     Pending   0          2m44s   <none>         <none>                       <none>           <none>
etcd-k8s-master01.example.local                      1/1     Running   0          2m43s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-apiserver-k8s-master01.example.local            1/1     Running   0          2m43s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-controller-manager-k8s-master01.example.local   1/1     Running   0          2m43s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-proxy-zcnkm                                     1/1     Running   0          2m44s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-scheduler-k8s-master01.example.local            1/1     Running   0          2m43s   172.31.3.101   k8s-master01.example.local   <none>           <none>

7.2 Initializing a highly available master from a configuration file

Create the kubeadm-config.yaml configuration file on the Master01 node as follows:

Master01: (# Note: if this is not a highly available cluster, change 172.31.3.188:6443 to master01's address, and set kubernetesVersion to match the kubeadm version on your server, v1.22.8 here; check it with kubeadm version)

Note

In the file below, the host network segment, the podSubnet, and the serviceSubnet must not overlap.

root@k8s-master01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.8", GitCommit:"7061dbbf75f9f82e8ab21f9be7e8ffcaae8e0d44", GitTreeState:"clean", BuildDate:"2022-03-16T14:08:54Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master01 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.3.101 #master01 IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01.example.local #master01 hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.31.3.188 #VIP address
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.31.3.188:6443 #address proxied by haproxy
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: harbor.raymonds.cc/google_containers #harbor image repository address
kind: ClusterConfiguration
kubernetesVersion: v1.22.8 #change to your version number
networking:
  dnsDomain: example.local #dnsdomain
  podSubnet: 192.168.0.0/12 #pod network CIDR
  serviceSubnet: 10.96.0.0/12 #service network CIDR
scheduler: {}

Update the kubeadm configuration file to the current format:

root@k8s-master01:~# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

root@k8s-master01:~# cat new.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.3.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.31.3.188
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.31.3.188:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: harbor.raymonds.cc/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.22.8
networking:
  dnsDomain: example.local
  podSubnet: 192.168.0.0/12
  serviceSubnet: 10.96.0.0/12
scheduler: {}
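
Optionally, the required images can be pulled in advance using the configuration file, as also suggested by the preflight output, for example:

root@k8s-master01:~# kubeadm config images pull --config /root/kubeadm-config.yaml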

Initialize the Master01 node. After initialization, the corresponding certificates and configuration files are generated under /etc/kubernetes; the other Master nodes can then join Master01:

#If the node has already been initialized, reset the cluster with the command below before initializing again
root@k8s-master01:~# kubeadm reset

[root@k8s-master01 ~]# kubeadm init --config /root/kubeadm-config.yaml  --upload-certs
[init] Using Kubernetes version: v1.22.8
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.101 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.546267 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
b3202c8538092768b83473fc065d2f58450cf284d84d380cc44c033c7bc057f8
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7t2weq.bjbawausm0jaxury
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:c5c3516af6157181f5d6b922b96d05461a13b4abbc8a5210f68bc3aebac729b1 \
	--control-plane --certificate-key b3202c8538092768b83473fc065d2f58450cf284d84d380cc44c033c7bc057f8

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:c5c3516af6157181f5d6b922b96d05461a13b4abbc8a5210f68bc3aebac729b1 

If initialization fails, reset and then initialize again with the following command:

kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube

A successful initialization produces a Token, which the other nodes use when joining the cluster, so record the token printed by the successful init:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:c5c3516af6157181f5d6b922b96d05461a13b4abbc8a5210f68bc3aebac729b1 \
	--control-plane --certificate-key b3202c8538092768b83473fc065d2f58450cf284d84d380cc44c033c7bc057f8

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:c5c3516af6157181f5d6b922b96d05461a13b4abbc8a5210f68bc3aebac729b1 

Configure an environment variable on the Master01 node for accessing the Kubernetes cluster:

[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

[root@k8s-master01 ~]# source /root/.bashrc

Check the node status:

root@k8s-master01:~# kubectl get nodes 
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   4m32s   v1.22.8

With this installation method, all system components run as containers in the kube-system namespace. Check the Pod status:

root@k8s-master01:~# kubectl get pods -n kube-system -o wide
NAME                                                 READY   STATUS    RESTARTS   AGE     IP             NODE                         NOMINATED NODE   READINESS GATES
coredns-566c4546cf-bs5gn                             0/1     Pending   0          4m46s   <none>         <none>                       <none>           <none>
coredns-566c4546cf-zfs4j                             0/1     Pending   0          4m46s   <none>         <none>                       <none>           <none>
etcd-k8s-master01.example.local                      1/1     Running   0          4m45s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-apiserver-k8s-master01.example.local            1/1     Running   0          4m45s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-controller-manager-k8s-master01.example.local   1/1     Running   0          4m45s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-proxy-zcnkm                                     1/1     Running   0          4m46s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-scheduler-k8s-master01.example.local            1/1     Running   0          4m45s   172.31.3.101   k8s-master01.example.local   <none>           <none>

8. Highly Available Master

If the cluster was initialized from the configuration file (with --upload-certs), there is no need to upload the certificates again. If it was initialized from the command line, run the following command to upload certificates on the current master; they are used when adding new control-plane nodes:

root@k8s-master01:~# kubeadm init phase upload-certs --upload-certs
I0416 11:35:45.723752   10201 version.go:255] remote version is much newer: v1.23.5; falling back to: stable-1.22
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
55e91c7c877526d2cb3ac62d9f16eb456ebc4a29bf594b3dd11eeda3aee89817

Add master02:

[root@k8s-master02 ~]# kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8 \
	--control-plane --certificate-key 55e91c7c877526d2cb3ac62d9f16eb456ebc4a29bf594b3dd11eeda3aee89817
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.102 172.31.3.188]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02.example.local localhost] and IPs [172.31.3.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02.example.local localhost] and IPs [172.31.3.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master02.example.local as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master02.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

root@k8s-master01:~# kubectl get nodes 
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   7m19s   v1.22.8
k8s-master02.example.local   NotReady   control-plane,master   50s     v1.22.8

9. Node Configuration

Node (worker) nodes mainly run the company's business applications. In production it is not recommended to run Pods other than the system components on the Master nodes; in a test environment, Master nodes may be allowed to run Pods to save resources.
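
For a test environment, if Pods should also be scheduled on the Master nodes, the NoSchedule taint can be removed, for example (optional, not required for this deployment):

kubectl taint nodes --all node-role.kubernetes.io/master-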

Add node01:

[root@k8s-node01 ~]# kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-master01:~# kubectl get nodes 
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   8m21s   v1.22.8
k8s-master02.example.local   NotReady   control-plane,master   112s    v1.22.8
k8s-node01.example.local     NotReady   <none>                 22s     v1.22.8

Add node02:

[root@k8s-node02 ~]# kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-master01:~# kubectl get nodes 
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   9m15s   v1.22.8
k8s-master02.example.local   NotReady   control-plane,master   2m46s   v1.22.8
k8s-node01.example.local     NotReady   <none>                 76s     v1.22.8
k8s-node02.example.local     NotReady   <none>                 24s     v1.22.8

10. Adding a New Master and Node After Token Expiration

Note: the following steps are only needed if the Token produced by the init command above has expired; if it has not expired, they are not required.

#Generate a new token after the Token has expired:
root@k8s-master01:~# kubeadm token create --print-join-command
kubeadm join 172.31.3.188:6443 --token vnodv1.ef6ym3xgrm0vf3o3 --discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8  

#A Master also needs a newly generated --certificate-key
root@k8s-master01:~# kubeadm init phase upload-certs  --upload-certs
I0416 11:41:04.681653   11733 version.go:255] remote version is much newer: v1.23.5; falling back to: stable-1.22
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
61ec119b0959758ab0dea5fc6f07d66917381c189a985604422a6460fccef069
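
Combining the newly generated token and certificate key, the join command for an additional control-plane node would look like the following (assembled from the values above for illustration):

kubeadm join 172.31.3.188:6443 --token vnodv1.ef6ym3xgrm0vf3o3 \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8 \
	--control-plane --certificate-key 61ec119b0959758ab0dea5fc6f07d66917381c189a985604422a6460fccef069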

Add master03:

[root@k8s-master03 ~]# kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8 \
	--control-plane --certificate-key 61ec119b0959758ab0dea5fc6f07d66917381c189a985604422a6460fccef069
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master03.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.103 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master03.example.local localhost] and IPs [172.31.3.103 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master03.example.local localhost] and IPs [172.31.3.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master03.example.local as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master03.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

root@k8s-master01:~# kubectl get nodes 
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   13m     v1.22.8
k8s-master02.example.local   NotReady   control-plane,master   6m36s   v1.22.8
k8s-master03.example.local   NotReady   control-plane,master   29s     v1.22.8
k8s-node01.example.local     NotReady   <none>                 5m6s    v1.22.8
k8s-node02.example.local     NotReady   <none>                 4m14s   v1.22.8

Add node03:

[root@k8s-node03 ~]# kubeadm join 172.31.3.188:6443 --token 5lvne8.u7abib9ephwv9s4e \
	--discovery-token-ca-cert-hash sha256:0ba069db7fa539826c5ef602fcbf2c0ca0e31f71f3fb2b621f961000fc844cc8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-master01:~# kubectl get nodes 
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   14m     v1.22.8
k8s-master02.example.local   NotReady   control-plane,master   7m31s   v1.22.8
k8s-master03.example.local   NotReady   control-plane,master   84s     v1.22.8
k8s-node01.example.local     NotReady   <none>                 6m1s    v1.22.8
k8s-node02.example.local     NotReady   <none>                 5m9s    v1.22.8
k8s-node03.example.local     NotReady   <none>                 22s     v1.22.8

11. Installing the Calico Component

[root@k8s-master01 ~]# cat calico-etcd.yaml
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # The keys below should be uncommented and the values populated with the base64
  # encoded contents of each file that would be associated with the TLS data.
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"

  # Configure the MTU to use for workload interfaces and tunnels.
  # By default, MTU is auto-detected, and explicitly setting this field should not be required.
  # You can override auto-detection by providing a non-zero value.
  veth_mtu: "0"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "log_file_path": "/var/log/calico/cni/cni.log",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

---
# Source: calico/templates/calico-kube-controllers-rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Pods are monitored for changing labels.
  # The node controller monitors Kubernetes nodes.
  # Namespace and serviceaccount labels are used for policy.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
      - get
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---

---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  # EndpointSlices are used for Service-based network policy rule
  # enforcement.
  - apiGroups: ["discovery.k8s.io"]
    resources:
      - endpointslices
    verbs:
      - watch 
      - list
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: docker.io/calico/cni:v3.21.4
          command: ["/opt/cni/bin/install"]
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: docker.io/calico/pod2daemon-flexvol:v3.21.4
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.21.4
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the VXLAN tunnel device.
            - name: FELIX_VXLANMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the Wireguard tunnel device.
            - name: FELIX_WIREGUARDMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/calico-node
                - -shutdown
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
            timeoutSeconds: 10
          volumeMounts:
            # For maintaining CNI plugin API credentials.
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
              readOnly: false
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
            - name: policysync
              mountPath: /var/run/nodeagent
            # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
            # parent directory.
            - name: sysfs
              mountPath: /sys/fs/
              # Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
              # If the host is known to mount that filesystem already then Bidirectional can be omitted.
              mountPropagation: Bidirectional
            - name: cni-log-dir
              mountPath: /var/log/calico/cni
              readOnly: true
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: sysfs
          hostPath:
            path: /sys/fs/
            type: DirectoryOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Used to access CNI logs.
        - name: cni-log-dir
          hostPath:
            path: /var/log/calico/cni
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-kube-controllers
          image: docker.io/calico/kube-controllers:v3.21.4
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          livenessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -l
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
            periodSeconds: 10
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0440

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---

# This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml

---
# Source: calico/templates/kdd-crds.yaml

Modify the following locations in calico-etcd.yaml:

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml 
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"

[root@k8s-master01 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"#g' calico-etcd.yaml

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml 
  etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml 
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

[root@k8s-master01 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`

[root@k8s-master01 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml 
  etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeHYyQmF5LzVQWE5UNFM5NGxRcFhEczM0QW95WEg2YkM0bHhzdXM1TXUrUVBCMS9qCkpIMWY0aFdMcG96SzRGUlVCK0hLcWtiTGNpdU1sMnF4bUdsRnNraXlDTzEwNndmV1pVTmpLZVQ1OGFSNGFRWnUKTUNIeG9LUU9tdDdKaTJueExEa0Y3VU0wZUF2Y3h3WVlIaW12dFRIaXduVldma0EvVUdya01BM2dQNTcxeVMydgpGQTdwbmUyVDRTdzZpejFxdjlrOUo5eklCblVqTXpMbHpjT1Jnd3FyWnVZZFhTVEt6aEpzaDF1SzdvRjJMb21jCmhJOWN2K2tzelJhUkF1VlJoZHlTelRsSUhXbHdZR3dGSTZBUXo5QlZUelNpY0NobE4vdnhYVGJ1blFvVVBYR3oKN2hEK2hYM2tGR3hzaWNuLzhUOHhLMGUyeEIxb0hmTkJWMGFxVFFJREFRQUJBb0lCQVFDS0hsR2pSYVE1TEFDUApyL1NCZHJEZ3ZNUWxlTkk0alBaK1Nlckthd0ZGOVZFZExFZ3IvcjM3M2pxVmV1b2lkVVh5UnAzMlBMZGY0cXh1Ck9RZFdJOVBWb0xHNmtwOXh0M252eGF0aDIrWkUvSVQ3V1JSSVluY2xxbG5jMDBDUGZSU0NQbjVIcGpzWURCdk4KSlFTK2d1N0wxUGx4UWlLcjlRbzlWa1JXdW5ycGpNQXV0RDlBeVlabUZzZExjdm9QN3ZDZ1JyRzBUTnlYMGxSVQp5MHZEOEQ0RG9RTi85eTFVdjJBeUtmaGlKWndkckcyRlQ0Mm1xWHhGdGJqdHJ3TXdRK3g2ZkRnWmxtNTA5OGgxCmRTNVBqUkpwT0hqQXJFZXpzQ0w2L1R1aGFTM1ZCdWNyRjJSWDlzTWN3NElVL1NRaGNRSnY0VDRFMkF3OVprSDMKUWw5OXprRmRBb0dCQU1yWENwUllHdDBpeFpaNC93a1dtOTZvSXlzMjY4eUx6UUcveUxkM0o1WlBjWUs0ai90SQo4SDJUdlovTUZqRjh2VGY2dHNydVU5Y0NJV0VMcE5SRC90RUJlRWh0K0RXREIrRFRUQ25JUy82emVHbDlUa3BECktKSlh2eC90cFhOeFJDUzNqNjdwVWhNVzFEYkFoV25iVWt6V25FQUEyazM3N1h0N0FsRkJQeUFUQW9HQkFQc2sKTFRDSHZCMFlxSnFjWEdtaDA4NHRpaFYxWXdwTWVkdkxzWW1iaXhxY1ZJdDdTM00wQUszRXd1WEwrc3dkRDN3VQpzak5ZVWkrSTIzWVZGWUZNWXZwSnF3V3Z5aW45V3M0WXQzOVp0Z25ldkw5d1ZPSGROYy84K2lwVllMenJMSVhDCjBSeUhlam9qaU9FbWw1MGRVUGJ3L2psMitiYkFjTTN3bm01WDRCZ2ZBb0dBSG1semVwNWpHTGkzNjBHazYxWHEKZE0yOHlDUHA0cmNyTzVIbnBDcEhVekFDRXRsZExRM1lrTnR2Mk4rZWpEanBuZFVIUnhISW5Eb0RpeXRWd1VXWQpsUG1sY1NhYnlsb2srajV6OHVmSERSOFhkS21HN1hkNFE4aklMUjlTYVNhWVV2RDZwL3p2ZzFFdTdDNmRVaXZDCkFTZEVNZ3h1cnJhUStxeU13bVlIOHVVQ2dZQjJXR2JmSHFkN3dwbjVPY09tbDNhN0N1U3VGZ0lVNFZMdlMzNmYKRE9neEl3Z0lMcEt3ZUFuMTJPbks1aU9LT2d1SStpdTdYRnhtUUFUREVVNXYvcklKcGR0c3M1ajl6b0VTNjd6VQp5MDZvT2drUjRYcHVjUktocjJHbnYxSDExSkRIZTdXVXp4YSsvTzNNdzZYZ2lPY1ZaeDAxc1NZZGczUGs2QVdRCmsrSnJHd0tCZ1FDOTg3c3ZLemU5dHNXTmpYc0s2NkVHcFVNeDNqSDNIQzlCWVNlbTJHNG5XREdycGR6MkJlNVMKZ29NQ1YzZVVXOWVjL0J0WWxIOXQxWnB4cUxZNE9CZVNxS0oxU0FDeEQzZVFpSnhpZCswUTE0emtSY0sreDhlZwp3dmVlUnZBUTdxcCsramx6MTZGdVd0aVJVUTJwWlYxcGp0NjhFbldPVUFrcDY1WHZmTHJ4YkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURhekNDQWxPZ0F3SUJBZ0lJTWg2UzcrTW5Mbzh3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSFpYUmpaQzFqWVRBZUZ3MHlNakEwTVRZeE1UTXdNRGhhRncweU16QTBNVFl4TVRNd01EaGFNQ1V4SXpBaApCZ05WQkFNVEdtczRjeTF0WVhOMFpYSXdNUzVsZUdGdGNHeGxMbXh2WTJGc01JSUJJakFOQmdrcWhraUc5dzBCCkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXh2MkJheS81UFhOVDRTOTRsUXBYRHMzNEFveVhINmJDNGx4c3VzNU0KdStRUEIxL2pKSDFmNGhXTHBveks0RlJVQitIS3FrYkxjaXVNbDJxeG1HbEZza2l5Q08xMDZ3ZldaVU5qS2VUNQo4YVI0YVFadU1DSHhvS1FPbXQ3SmkybnhMRGtGN1VNMGVBdmN4d1lZSGltdnRUSGl3blZXZmtBL1VHcmtNQTNnClA1NzF5UzJ2RkE3cG5lMlQ0U3c2aXoxcXY5azlKOXpJQm5Vak16TGx6Y09SZ3dxclp1WWRYU1RLemhKc2gxdUsKN29GMkxvbWNoSTljditrc3pSYVJBdVZSaGR5U3pUbElIV2x3WUd3Rkk2QVF6OUJWVHpTaWNDaGxOL3Z4WFRidQpuUW9VUFhHejdoRCtoWDNrRkd4c2ljbi84VDh4SzBlMnhCMW9IZk5CVjBhcVRRSURBUUFCbzRHeE1JR3VNQTRHCkExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0RBWUQKVlIwVEFRSC9CQUl3QURBZkJnTlZIU01FR0RBV2dCVGFNT3FVSnJEbzM0ZTZxdEtQVDdybnhaZVdMakJPQmdOVgpIUkVFUnpCRmdocHJPSE10YldGemRHVnlNREV1WlhoaGJYQnNaUzVzYjJOaGJJSUpiRzlqWVd4b2IzTjBod1NzCkh3Tmxod1IvQUFBQmh4QUFBQUFBQUFBQUFBQUFBQUFBQUFBQk1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWIKdVRlV2c1czltYnV3OU5Ram9JWXk4azBqaFlzNU9YL3lKV1cxeFk2dmhKTVBsTmt6MDNRelpzN0dRbitrMGFDQQpPZVZLRTgwYS9NRFNZRVdFSnBFSm5oZEV1cDVJdXhiMnpGVlZXc24wS0JzS0RKcTZOTDV3ZW9uT0JVYjRwWWlhCnB4dERwdWFYUUVaQ0Q5WG9tek5sUWRoYTdSZnJ6RmZSSFhKd0hvKy81RHVVRTRGRkFXZTZHaDRCQjZrdlpwOFkKdHp1eFduYmxpSHFvL3lTdUQ4NVFKR1VGWlplZHFQeDFyTjZvMUVIUFRWRVBGOVJ2R3E2dk1IU093WjhGcjBHVgpXMFd1ak1DK3U0dExaQVF1ckRscUNRM0JyS2NqbTU3Z0Q5RUVxWUgzT2dvWmxrejRjQzFHQitwMTd2d3pVWVF0CmZUdk8wck4xL3poSGxXKzI1VW9DCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5VENDQWQyZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkbGRHTmsKTFdOaE1CNFhEVEl5TURReE5qRXhNekF3T0ZvWERUTXlNRFF4TXpFeE16QXdPRm93RWpFUU1BNEdBMVVFQXhNSApaWFJqWkMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1TejMzOWo5SVR3Clo5ZWpqOGt3aGpRa0ZOdWVPM05XeUV5UmhNYTFFOGVmSlpJVkFmSlQrVEx0dFJLYUlVQUJvMW5zVG02SktHV2MKWjc4UW5JWlpnMDlTMHZWTDhlQTZBRGc1NklweGE2QlJBRTZGbWw2NFdyT3RIZWJzeVFSa0xHcHcrZm9ERjkvdQo3N3FQWEtxUlFqOUlZWURxRUIzUGNuckVVcXhwRXhPUDRUL1lNazRNaUpPa050WkR5YzRqZ2dsME9XZWk4aXlGCmJZR3R6dkl3YW1nTWJ5ZnJzZFl0Y0tjQ0FGTk5LNFNnSnlBbDUxOC9QMndxRU02ck8xQ0pxWEY0bHhPYzV0MkUKT2haOTVvRnZ3bnFRNjFXVXZoejlDOFVZME9mcVF1eDdmekFNZGpFMXhMTWVZMFBraTRvWjlpcnc1VEhVV3pqZgoxdkVyM0VVT3JhVUNBd0VBQWFOV01GUXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGTm93NnBRbXNPamZoN3FxMG85UHV1ZkZsNVl1TUJJR0ExVWRFUVFMTUFtQ0IyVjAKWTJRdFkyRXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ2RtbDljQjlmaW9wSExKR1hOZEFWR2Jic1Z3d1NrRApUMjdTVjhESU84cDJnK2NSU3RYczdoU1lCSEhMQjdNK3prOGJzWHA0UWkxd3h3OXh4T0YwRldLOXgwMlMyN0VzClhZS2R0MEdnWExBMWo5N05UWG5xVGVNRHRDcFhOOUd4L25VeG9pZUEwemxCV05pOVJCbVdFVlBJNHdSdTJmN0MKTnlORWJJek43SW5yL2hFZU9ValBkWThFZGVxdjZmL3FwSWV4dElGU3ltclRONmpWTFU1a1NkYndKamhYT0F4NQp1MWE1QW1zVGloR2JwK3BmbEVRcE5Hc0MwWnpvbGRCRVRIanFMT2RwUS9DWUVQcDFjU1FMcHVmTVE0NWgvZk1KClM4MUliSUZGOXNyalVwLzZsN0dEQ2IyTXl4T0JDQ1dNT0g3aGwrWjFpQk50cU5vd29FQ1JuR3c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
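A quick sanity check (sketch): each embedded value should decode back to the corresponding PEM file, for example the CA:

# decode the embedded etcd-ca value and compare it with the original certificate
diff <(grep 'etcd-ca:' calico-etcd.yaml | awk '{print $2}' | base64 -d) /etc/kubernetes/pki/etcd/ca.crt && echo "etcd-ca matches"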

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml 
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

[root@k8s-master01 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml 
  etcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"

[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# echo $POD_SUBNET
192.168.0.0/12
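As a cross-check, the same subnet is recorded in the kubeadm-config ConfigMap written during kubeadm init:

kubectl -n kube-system get cm kubeadm-config -o yaml | grep podSubnet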

# Note: the following step changes the CALICO_IPV4POOL_CIDR subnet in calico-etcd.yaml to your own Pod subnet, i.e. it replaces 192.168.x.x/16 with your cluster's Pod CIDR and uncomments the setting:

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml 
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"

[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml 
            - name: CALICO_IPV4POOL_CIDR
              value: 192.168.0.0/12

[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml 
          image: docker.io/calico/cni:v3.21.4
          image: docker.io/calico/pod2daemon-flexvol:v3.21.4
          image: docker.io/calico/node:v3.21.4
          image: docker.io/calico/kube-controllers:v3.21.4

Download the Calico images and push them to Harbor

[root@k8s-master01 ~]# vim download_calico_image.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_calico_image.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' calico-etcd.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"Downloading Calico images"${END}
    for i in ${images};do 
        docker pull registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Calico images downloaded"${END}
}

images_download

[root@k8s-master01 ~]# bash download_calico_image.sh
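A quick check (sketch) that the retag and push worked: the Harbor-tagged images remain in the local Docker cache, and the registry tag list should contain v3.21.4. The credentials below are placeholders, replace them with a real Harbor account:

docker images | grep v3.21.4
curl -s -u <harbor_user>:<harbor_password> https://harbor.raymonds.cc/v2/google_containers/node/tags/list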

[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' calico-etcd.yaml 
[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml 
          image: harbor.raymonds.cc/google_containers/cni:v3.21.4
          image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
          image: harbor.raymonds.cc/google_containers/node:v3.21.4
          image: harbor.raymonds.cc/google_containers/kube-controllers:v3.21.4

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

#Check the Pod status
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-7dd7f59c79-p4ngh             1/1     Running   0             47s
calico-node-9m775                                    1/1     Running   0             47s
calico-node-mvbsp                                    1/1     Running   0             47s
calico-node-ph4p4                                    1/1     Running   0             47s
calico-node-tmgb2                                    1/1     Running   0             47s
calico-node-twxn6                                    1/1     Running   0             47s
calico-node-v4kmn                                    1/1     Running   0             47s
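Optionally verify that a calico-node Pod was scheduled on every one of the six nodes:

kubectl get pod -n kube-system -o wide | grep calico-node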

#Check the cluster status
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE     VERSION
k8s-master01.example.local   Ready    control-plane,master   22m     v1.22.8
k8s-master02.example.local   Ready    control-plane,master   16m     v1.22.8
k8s-master03.example.local   Ready    control-plane,master   9m56s   v1.22.8
k8s-node01.example.local     Ready    <none>                 14m     v1.22.8
k8s-node02.example.local     Ready    <none>                 13m     v1.22.8
k8s-node03.example.local     Ready    <none>                 8m54s   v1.22.8

12.Metrics Server Deployment

In newer versions of Kubernetes, system resource collection is handled by Metrics-server, which can be used to gather memory, disk, CPU and network usage for nodes and Pods.
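kubectl top is only a front end for the aggregated metrics.k8s.io API that Metrics-server registers; once the Deployment below is running, the raw data can also be queried directly, for example:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | head -c 300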

[root@k8s-master01 ~]# cat components.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Copy front-proxy-ca.crt from the Master01 node to all worker nodes

[root@k8s-master01 ~]#  for i in k8s-node01 k8s-node02 k8s-node03;do scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt ; done
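An optional check (sketch) that the certificate really landed on each node with the same content:

md5sum /etc/kubernetes/pki/front-proxy-ca.crt
for i in k8s-node01 k8s-node02 k8s-node03;do ssh $i 'md5sum /etc/kubernetes/pki/front-proxy-ca.crt'; done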

Modify the following content:

[root@k8s-master01 ~]# vim components.yaml
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
#add the following lines
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt #the kubeadm certificate file is front-proxy-ca.crt
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra- 
...
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
#add the following lines
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki 
...
      volumes:
      - emptyDir: {}
        name: tmp-dir
#add the following lines
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki 
...  
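After the manual edits, a client-side dry run catches YAML or field mistakes before anything is sent to the cluster:

kubectl apply --dry-run=client -f components.yaml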

Download the Metrics-server image and change its image address

[root@k8s-master01 ~]# grep "image:" components.yaml 
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2

[root@k8s-master01 ~]# cat download_metrics_images.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_metrics_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' components.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"Downloading Metrics-server image"${END}
    for i in ${images};do 
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Metrics-server image downloaded"${END}
}

images_download

[root@k8s-master01 ~]# bash download_metrics_images.sh

[root@k8s-master01 ~]# docker images|grep metrics
harbor.raymonds.cc/google_containers/metrics-server            v0.5.2    f73640fb5061   5 months ago    64.3MB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' components.yaml

[root@k8s-master01 ~]# grep "image:" components.yaml 
        image: harbor.raymonds.cc/google_containers/metrics-server:v0.5.2

Install Metrics-server

[root@k8s-master01 ~]# kubectl apply -f components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the status

[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep metrics
metrics-server-69c457f5f-7gtq9                       1/1     Running   0             31s

[root@k8s-master01 ~]# kubectl top node
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01.example.local   228m         11%    1727Mi          45%       
k8s-master02.example.local   175m         8%     1293Mi          34%       
k8s-master03.example.local   175m         8%     1424Mi          37%       
k8s-node01.example.local     98m          4%     984Mi           25%       
k8s-node02.example.local     87m          4%     960Mi           25%       
k8s-node03.example.local     92m          4%     1045Mi          27%
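
The same data is available per Pod, and the aggregated APIService should now report Available=True; for example (sketch):

kubectl top pod -n kube-system --sort-by=memory
kubectl get apiservice v1beta1.metrics.k8s.io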