21. Installing Kubernetes v1.21 with kubeadm -- Cluster Deployment (Part 2)



7. Cluster Initialization

Official initialization documentation:

kubernetes.io/docs/setup/…

7.1 Initializing the highly available master from the command line

Initialize the Master01 node. After initialization, the corresponding certificates and configuration files are generated under /etc/kubernetes; the other master nodes then simply join Master01:

[root@k8s-master01 ~]# kubeadm init --apiserver-advertise-address=172.31.3.101 --control-plane-endpoint=172.31.3.188 --apiserver-bind-port=6443 --kubernetes-version=v1.21.8 --pod-network-cidr=192.168.0.0/12 --service-cidr=10.96.0.0/12 --service-dns-domain=example.local --image-repository=harbor.raymonds.cc/google_containers --ignore-preflight-errors=swap
[init] Using Kubernetes version: v1.21.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.101 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01.example.local localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01.example.local localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.035186 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.example.local as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: esyrqi.3xy56alsdmb6pc20
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.31.3.188:6443 --token esyrqi.3xy56alsdmb6pc20 \
	--discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token esyrqi.3xy56alsdmb6pc20 \
	--discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279  

If initialization fails, reset the node and initialize again with the following commands:

kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
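
kubeadm reset does not clear iptables rules that a previous attempt may have left behind; if needed, they can be flushed manually before re-running the initialization (an optional, hedged extra step):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X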

After a successful initialization, a token is generated for other nodes to use when joining the cluster, so record the token value from the init output:

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.31.3.188:6443 --token esyrqi.3xy56alsdmb6pc20 \
	--discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token esyrqi.3xy56alsdmb6pc20 \
	--discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279 
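
If the join information was not recorded, it can be recovered later on Master01; a minimal sketch:

#List the existing bootstrap tokens
[root@k8s-master01 ~]# kubeadm token list
#Print a fresh worker join command (token plus discovery hash)
[root@k8s-master01 ~]# kubeadm token create --print-join-command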

Configure environment variables on the Master01 node for accessing the Kubernetes cluster:

[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

[root@k8s-master01 ~]# source /root/.bashrc

#Alternatively, the kubeconfig can be set up with the following commands
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
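
Optionally, kubectl command completion can be enabled as well (a sketch assuming the bash-completion package is already installed):

[root@k8s-master01 ~]# echo 'source <(kubectl completion bash)' >> /root/.bashrc
[root@k8s-master01 ~]# source /root/.bashrc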

Check the node status:

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
k8s-master01.example.local   NotReady   control-plane,master   71s   v1.21.8

With the kubeadm-based installation, all system components run as containers in the kube-system namespace, so you can now check the Pod status. Note that the node stays NotReady and the CoreDNS Pods stay Pending until a CNI network plugin (Calico, installed later) is deployed:

[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                                 READY   STATUS    RESTARTS   AGE   IP             NODE                         NOMINATED NODE   READINESS GATES
coredns-78db7484ff-9zfrs                             0/1     Pending   0          91s   <none>         <none>                       <none>           <none>
coredns-78db7484ff-m6hhp                             0/1     Pending   0          91s   <none>         <none>                       <none>           <none>
etcd-k8s-master01.example.local                      1/1     Running   0          85s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-apiserver-k8s-master01.example.local            1/1     Running   0          85s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-controller-manager-k8s-master01.example.local   1/1     Running   0          85s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-proxy-n6gmv                                     1/1     Running   0          91s   172.31.3.101   k8s-master01.example.local   <none>           <none>
kube-scheduler-k8s-master01.example.local            1/1     Running   0          85s   172.31.3.101   k8s-master01.example.local   <none>           <none>

7.2 Initializing the highly available master from a configuration file

Create the kubeadm-config.yaml configuration file on the Master01 node as follows:

Master01: (# Note: if this is not a highly available cluster, change 172.31.3.188:6443 to master01's own address. Also set kubernetesVersion to match the kubeadm version installed on your server, which you can check with kubeadm version.)

Note

In the file below, the host network, the podSubnet range, and the serviceSubnet range must not overlap.

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.14", GitCommit:"57a3aa3f13699cf3db9c52d228c18db94fa81876", GitTreeState:"clean", BuildDate:"2021-12-15T14:51:22Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master01 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.3.101 #Master01's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01.example.local #Master01's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.31.3.188 #VIP address
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.31.3.188:6443 #address fronted by the haproxy load balancer
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: harbor.raymonds.cc/google_containers #Harbor image repository
kind: ClusterConfiguration
kubernetesVersion: v1.21.8 #set to your Kubernetes version
networking:
  dnsDomain: example.local #dnsdomain
  podSubnet: 192.168.0.0/12 #Pod network range
  serviceSubnet: 10.96.0.0/12 #Service network range
scheduler: {}

Migrate the kubeadm configuration file to the current format:

root@k8s-master01:~# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

root@k8s-master01:~# cat new.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.3.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.31.3.188
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.31.3.188:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: harbor.raymonds.cc/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.8
networking:
  dnsDomain: example.local
  podSubnet: 192.168.0.0/12
  serviceSubnet: 10.96.0.0/12
scheduler: {}
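
Optionally, the images required by this configuration can be listed and pre-pulled before initialization (a hedged extra step that only shortens the init run):

[root@k8s-master01 ~]# kubeadm config images list --config /root/new.yaml
[root@k8s-master01 ~]# kubeadm config images pull --config /root/new.yaml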

Initialize the Master01 node with the configuration file. After initialization, the corresponding certificates and configuration files are generated under /etc/kubernetes; the other master nodes then simply join Master01:

[root@k8s-master01 ~]# kubeadm init --config /root/kubeadm-config.yaml  --upload-certs
[init] Using Kubernetes version: v1.21.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.101 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.045056 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
dc421f527c89e2c7de6ca5a5a97f5dc6d340d917166c1b25f0402e4f0e2a5dda
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7t2weq.bjbawausm0jaxury
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:c3accb0162e710461f5f0b4e800ec9e3a4c37445c92b9f272bf4c8f7ef0c0132 \
	--control-plane --certificate-key dc421f527c89e2c7de6ca5a5a97f5dc6d340d917166c1b25f0402e4f0e2a5dda

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:c3accb0162e710461f5f0b4e800ec9e3a4c37445c92b9f272bf4c8f7ef0c0132 

If initialization fails, reset the node and initialize again with the following commands:

kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube

After a successful initialization, a token is generated for other nodes to use when joining the cluster, so record the token value from the init output:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:c3accb0162e710461f5f0b4e800ec9e3a4c37445c92b9f272bf4c8f7ef0c0132 \
	--control-plane --certificate-key dc421f527c89e2c7de6ca5a5a97f5dc6d340d917166c1b25f0402e4f0e2a5dda

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:c3accb0162e710461f5f0b4e800ec9e3a4c37445c92b9f272bf4c8f7ef0c0132 

Configure environment variables on the Master01 node for accessing the Kubernetes cluster:

[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

[root@k8s-master01 ~]# source /root/.bashrc

Check the node status:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
k8s-master01   NotReady   control-plane,master   113s   v1.21.8

With the kubeadm-based installation, all system components run as containers in the kube-system namespace; check the Pod status:

[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE    IP             NODE           NOMINATED NODE   READINESS GATES
coredns-78db7484ff-6qndp               0/1     Pending   0          2m7s   <none>         <none>         <none>           <none>
coredns-78db7484ff-snvkx               0/1     Pending   0          2m7s   <none>         <none>         <none>           <none>
etcd-k8s-master01                      1/1     Running   0          2m3s   172.31.3.101   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01            1/1     Running   0          2m3s   172.31.3.101   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01   1/1     Running   0          2m3s   172.31.3.101   k8s-master01   <none>           <none>
kube-proxy-ngpjq                       1/1     Running   0          2m7s   172.31.3.101   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01            1/1     Running   0          2m3s   172.31.3.101   k8s-master01   <none>           <none>

8. Highly Available Masters

If the cluster was initialized from the configuration file (with --upload-certs), the certificates do not need to be uploaded again. If it was initialized from the command line, run the following command on the current master to upload the certificates and generate the certificate key used when adding new control-plane nodes:

[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs
I0304 16:19:48.009205   17314 version.go:254] remote version is much newer: v1.23.4; falling back to: stable-1.21
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
b1af381629ca5280af2a7aedc740e214fb1dc6080ef350ad20c369a9d59f8a77

Add master02:

[root@k8s-master02 ~]# kubeadm join 172.31.3.188:6443 --token esyrqi.3xy56alsdmb6pc20 \
  --discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279 \
  --control-plane --certificate-key b1af381629ca5280af2a7aedc740e214fb1dc6080ef350ad20c369a9d59f8a77
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.102 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02.example.local localhost] and IPs [172.31.3.102 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02.example.local localhost] and IPs [172.31.3.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master02.example.local as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master02.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   5m16s   v1.21.8
k8s-master02.example.local   NotReady   control-plane,master   48s     v1.21.8

9. Node Configuration

The worker nodes are where business applications are deployed. In production it is not recommended to schedule anything other than system components on the master nodes; in a test environment you may allow Pods on the masters to save resources.
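
For a test environment where the masters should also run workload Pods, the control-plane taint can be removed; a hedged example (do not do this in production):

#The trailing "-" removes the node-role.kubernetes.io/master:NoSchedule taint from every node that has it
kubectl taint nodes --all node-role.kubernetes.io/master-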

Add node01:

[root@k8s-node01 ~]# kubeadm join 172.31.3.188:6443 --token esyrqi.3xy56alsdmb6pc20 \
  --discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   6m11s   v1.21.8
k8s-master02.example.local   NotReady   control-plane,master   103s    v1.21.8
k8s-node01.example.local     NotReady   <none>                 21s     v1.21.8

Add node02:

[root@k8s-node02 ~]# kubeadm join 172.31.3.188:6443 --token esyrqi.3xy56alsdmb6pc20 \
  --discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   6m54s   v1.21.8
k8s-master02.example.local   NotReady   control-plane,master   2m26s   v1.21.8
k8s-node01.example.local     NotReady   <none>                 64s     v1.21.8
k8s-node02.example.local     NotReady   <none>                 14s     v1.21.8

10. Adding New Masters and Nodes After the Token Expires

Note: the following steps are only needed if the token generated by the init command above has expired; if it has not expired, skip them.

#Generate a new token after the old one expires:
[root@k8s-master01 ~]# kubeadm token create --print-join-command
kubeadm join 172.31.3.188:6443 --token kav27k.4zmu1vcmnth8j45e --discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279  

#Masters also need a newly generated --certificate-key
[root@k8s-master01 ~]# kubeadm init phase upload-certs  --upload-certs
I0304 16:25:25.571014   18616 version.go:254] remote version is much newer: v1.23.4; falling back to: stable-1.21
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
53b9704f2898759fe2fa94705d0f366eff051db714ff92252e3ac2437a2a2e73
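
If the --discovery-token-ca-cert-hash value has also been lost, it can be recomputed from the cluster CA certificate on any master; a sketch using openssl:

[root@k8s-master01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'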

Add master03:

[root@k8s-master03 ~]# kubeadm join 172.31.3.188:6443 --token kav27k.4zmu1vcmnth8j45e \
     --discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279 \
     --control-plane --certificate-key 53b9704f2898759fe2fa94705d0f366eff051db714ff92252e3ac2437a2a2e73
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master03.example.local localhost] and IPs [172.31.3.103 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master03.example.local localhost] and IPs [172.31.3.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master03.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.103 172.31.3.188]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master03.example.local as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master03.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   12m     v1.21.8
k8s-master02.example.local   NotReady   control-plane,master   7m40s   v1.21.8
k8s-master03.example.local   NotReady   control-plane,master   2m24s   v1.21.8
k8s-node01.example.local     NotReady   <none>                 6m18s   v1.21.8
k8s-node02.example.local     NotReady   <none>                 5m28s   v1.21.8

Add node03:

[root@k8s-node03 ~]# kubeadm join 172.31.3.188:6443 --token kav27k.4zmu1vcmnth8j45e \
     --discovery-token-ca-cert-hash sha256:c6f88b9432f28dcb4b8b246fa307b9b6336d5837aae7ffd1949c7410674e2279
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   13m     v1.21.8
k8s-master02.example.local   NotReady   control-plane,master   8m52s   v1.21.8
k8s-master03.example.local   NotReady   control-plane,master   3m36s   v1.21.8
k8s-node01.example.local     NotReady   <none>                 7m30s   v1.21.8
k8s-node02.example.local     NotReady   <none>                 6m40s   v1.21.8
k8s-node03.example.local     NotReady   <none>                 22s     v1.21.8

11. Installing the Calico Component

[root@k8s-master01 ~]# cat calico-etcd.yaml
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # The keys below should be uncommented and the values populated with the base64
  # encoded contents of each file that would be associated with the TLS data.
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"
  # Configure the MTU to use for workload interfaces and tunnels.
  # - If Wireguard is enabled, set to your network MTU - 60
  # - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
  # - Otherwise, if IPIP is enabled, set to your network MTU - 20
  # - Otherwise, if not using any encapsulation, set to your network MTU.
  veth_mtu: "1440"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

---
# Source: calico/templates/calico-kube-controllers-rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Pods are monitored for changing labels.
  # The node controller monitors Kubernetes nodes.
  # Namespace and serviceaccount labels are used for policy.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
      - get
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---

---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: docker.io/calico/cni:v3.15.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: docker.io/calico/pod2daemon-flexvol:v3.15.3
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.15.3
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the VXLAN tunnel device.
            - name: FELIX_VXLANMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the Wireguard tunnel device.
            - name: FELIX_WIREGUARDMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-kube-controllers
          image: docker.io/calico/kube-controllers:v3.15.3
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml

---
# Source: calico/templates/kdd-crds.yaml

Modify the following locations in calico-etcd.yaml:

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml 
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"

[root@k8s-master01 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"#g' calico-etcd.yaml

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml 
  etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml 
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

[root@k8s-master01 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`

[root@k8s-master01 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml 
  etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdkpwdldKZjdpRzhpNFJGeVhXVXlFRFU0dlR5UHprV1IxVE9jaUVpQ0J0N1BsbUJWCmkyOEZVdmZTTFpBbDdZRmJsNTg0STJJV0Y5MU1FOElIRnp4clZyckZ5eVZKdEtwSFZiWEt1OHdUaW81S0FpSEMKUGFrTE43amtrTXRERmJjbDFyaEVVaUxURVRJVFNUYlBmMmE3NU4zc2krTFU5L1packdmQ3hQcTMvQ1NlQjVwdwpDRktzQzhaVi9rbTRoK21LU3pYcmVVbGgyT2tDa3RYWEtReE9Qa1ptTzZDUHB5WkVlV05TL1c2VlArZytZd1I0Ck5KTVE0eUhzODNYTXByMEl4UFk2RVNYNEdVQTQrdVk4VmR6d1Y5aHFKWlcrdy9yMzlSd1VEY2ZhSlA5eXRGZisKVlMwR0dyL053TUd3b0ZZQWJOeGRGUHZJL3prdFRERkJQREZ6cVFJREFRQUJBb0lCQUJ6QWttNzRKSUdGSjlVVgordEJnS0FTdWlHclkrN2RmaGI3eDhsQVlkYklrYjVNbU5vUmVOWHFUaXpnay9KTTdvRUg2Sk8zSCswUkNHV0g5CnQyVUVjZnl6MW9tRXNyclhKcTdiV3YvTU9jSnF0TCtrYzk5QWtSUTZuS1d5UnhUZGFlaFZDUjFZYjhMMFZscFgKLzhRVlhsbWl0M2dQNlpXdnViWDl6NFNHRUZ4ZzJYSmNydWF3algxU1ZGTnRPN2xrU2tqaWgzYjRTb2wvamNNZwo3UExvUUxaOGNvbm5XaUJtUEExRWYzc3N2enBtbkd3M09KRXNJdEhVMEFyQ3VPQ2RxYllHaFpHYWZjdmhmWU1PCnJhKzFIUTg4Tys5VS9ScGkrTVNvelRnUDRTOGtZL1pNbXhmUXAwT2k4d1FRZC9RbmRJMWRuYW5MQy80RlBoSzgKNkVTVFRVMENnWUVBNzdzR0FtZFd2RFhrY1ZVM3hKQk1yK3pyZWV3Nlhla0F0ZGVsU3pBMU81RENWYmQvcEgrNApmOXppd1o4K1dWRC8xWm5wV2NlZ0JBQ0lPVzNsUnUvelJva2NSeXFpRFVGcE5ET0xwWjFEYXJaNzVCekhwQ2QyCjQrNldUdkNDMEVnR0k5enkrYWpKOE5ESjhwcFRPUjR0NDhOR3FTaHorMUdONkRaNU15elpBUmNDZ1lFQXlXY2wKeC9kWkMrVmRCV01uOVpkU0pQcHF1RHQwbVFzS044NG9vSlBIRVhydWphWnd6M3pMcWp0NnJsU1M3SDk2THZWeApaYkxvY1UyQ1hLVVdRaU1vNmhYc1cwa2NmaEJPU0xFQmMrT3o2M0tLWW0zdzl6M3dkYlhGQ1ZPUCtzdlh5bE90CmNkRWNnK1Z2aGZQK0w2VTVZN0d6OW9IL1NnME93d1hPdXk2K3FUOENnWUVBcCsrbUdBejRUOFNaRVdPWE81V3kKZ3hNL0todjREMDFvZC9wbkNyTHN0NXVDNTdVeUw3UmhOUUV4d0YyanVjSHFWbUlKZkNGQjBVdm1JZ1VBTnA5bApGcVo2THNpSTJTeFhYSUEzZFg4amVSLzR6aVh6SE9XZ2ZhL25qOGtnZW5QYUNVbUExTEFQTnltc0xzMDVPNndPCmpaMkFaSU80Sy9oSHBzSnlTUTFEdjJVQ2dZRUFvMHNPUnVNMVAzL251OFo1VDVZdzgrcFZQS3A0RHQzMG11cDcKNWpYcTRURmEyVjVwZU5FbUVBL0ptQzdhTVFYcWVzaGwrSjdsOTNkd2lzMFBEdkNTNjdoNnVraTg0VGszUDVqRQpKTUlwem13LzV5NWNnUm1uTE1rRHlGd0lFTC9WWmlZU0tvWHhLTCtOZkg0blNWb2MvY2ZHc2NjVXhXVnc0bzZDCjN5RTNWT0VDZ1lBNXFHL0t2amxhV3liQndYY3pXVmZWeTJ4VTMwZVNQWVVqWTlUdUR0ZGJqbHFFeTlialZsZzUKWldRb0dKcTVFbjF1YXpUcnc3QlFja2VjaE1zRzBrZkRZSzhZbC9UMThGemxBWDh3TzJaZGlOQnJYVjhGMnRKaQpPYmJwZU45Y0l2ZkVpcjgwOHBVcC9ac05zQWpjMzBERU82THVPblA2VlpmQ1R2Wit4VVJodmc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURYVENDQWtXZ0F3SUJBZ0lJRDc0VzRkNnJ0MFl3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSFpYUmpaQzFqWVRBZUZ3MHlNakF4TVRVd05qTTJNVEJhRncweU16QXhNVFV3TmpNMk1UQmFNQ1V4SXpBaApCZ05WQkFNVEdtczRjeTF0WVhOMFpYSXdNUzVsZUdGdGNHeGxMbXh2WTJGc01JSUJJakFOQmdrcWhraUc5dzBCCkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXZKcHZXSmY3aUc4aTRSRnlYV1V5RURVNHZUeVB6a1dSMVRPY2lFaUMKQnQ3UGxtQlZpMjhGVXZmU0xaQWw3WUZibDU4NEkySVdGOTFNRThJSEZ6eHJWcnJGeXlWSnRLcEhWYlhLdTh3VAppbzVLQWlIQ1Bha0xON2pra010REZiY2wxcmhFVWlMVEVUSVRTVGJQZjJhNzVOM3NpK0xVOS9aWnJHZkN4UHEzCi9DU2VCNXB3Q0ZLc0M4WlYva200aCttS1N6WHJlVWxoMk9rQ2t0WFhLUXhPUGtabU82Q1BweVpFZVdOUy9XNlYKUCtnK1l3UjROSk1RNHlIczgzWE1wcjBJeFBZNkVTWDRHVUE0K3VZOFZkendWOWhxSlpXK3cvcjM5UndVRGNmYQpKUDl5dEZmK1ZTMEdHci9Od01Hd29GWUFiTnhkRlB2SS96a3RUREZCUERGenFRSURBUUFCbzRHak1JR2dNQTRHCkExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0h3WUQKVlIwakJCZ3dGb0FVQ2ZkNk5va2FXeFJOZlN2Umw4ajk5bU52aUhrd1RnWURWUjBSQkVjd1JZSWFhemh6TFcxaApjM1JsY2pBeExtVjRZVzF3YkdVdWJHOWpZV3lDQ1d4dlkyRnNhRzl6ZEljRXJCOERaWWNFZndBQUFZY1FBQUFBCkFBQUFBQUFBQUFBQUFBQUFBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZFNRRnlNckFMQWJqcTRDc29QMUoKTThTTU1aRCthMGV6U29xM01EUmJCQWhqWlhEaU5uczMvMWo2aUhTcDUvaWJ6NGRjQnRsaW1HWHk0ek03MGtvcwo5R0JBZzVwaXJXYVFEcWtVSFkxYjdkWUlZSWN4YW9vQWtQeEVoSlZOYTBKYlFyb21qTnJiTVh4MlVsUjVtRGU2CnFMYUtsVDh4WC9zVStSelRxN1VBckxhOWIzWkZvN2V5UkhzZFBUODY3QnZCQnZkNEdMOElxWDdzbVd0VUhLVEkKQWZLMUQrQ3BEUGxNUWE3M1FOOGhvQVRPNTV2ckVjeEFIeDh2VDJ5VUYrYjZaVjJnQm43Z3hJSUNxVUF6OGhWagpTdzMxUVEvTHZ6ME1HZlQ3dFQ0NE52dyt3aHExZVJyNXJ2enRQcml5MUFvSDB1a2hiZ3VEbGExdElOa2FKc01XClRnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQWNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkbGRHTmsKTFdOaE1CNFhEVEl5TURFeE5UQTJNell4TUZvWERUTXlNREV4TXpBMk16WXhNRm93RWpFUU1BNEdBMVVFQXhNSApaWFJqWkMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxRSldKZmVNT3N3CmN4NDZmaUExaWREekU3c05DSk1JRnFXODRxWUdZSEF2ekxBZ2JSS0dXRHdnYi8vM01ZUUhuanhpb1lEN1BjVXUKYnZRRlN0cmtmYW1IWHpaMlAxd0dzRThrSkEvOVhaTStTNWttL0M0UHgxSjFoSHNyYUhjR21wUWYxM3ZCS2IrbgpvdDFHK2lERkZ2ZmdNWVd1U1FvL1M4WGFMZDZTcmZyeTFWOUQzek0zaUF4OGkrVzF3bE41b1hqc0RyRW5XcUFRCmxzVmVteWMxQkZRR0FjSTJLL0dzcXNlUmlUM1dCZ2RhV2JST1RMby83RWoycDdGNHdNcHRiT0kvam56UjM5WkUKSnZ6ZHpvUmJWQlh3NTFXY3cvNFZxOW5aaXJESEY3TWRZVEQ4RXIwRFovd2tYa1FsZ3VidTNBRjNDZEVsSS9wTAoxK1BhaFhvUFZpRUNBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGQW4zZWphSkdsc1VUWDByMFpmSS9mWmpiNGg1TUEwR0NTcUdTSWIzRFFFQkN3VUEKQTRJQkFRQ0lCaGtiQzZ5OTFqNVAvbzB0NjJLeWlDMWdWelJCcHB0NWwvSXZabDRDRVJwVHVWSzJMTkxPZitGbwpMbGVUWlBZTmVWVFVkc2ZYdlVCekYvelpsSjJ6OVdBRUhTbk5Ba0haQVQ4N0tzSGZuRksyQi9NeFFuSEFkMWMzCkNHdzBxQ3RvUVBLdFI1U2UwUngrQUxQSE9iaUEwRG5uN3JESVhuTnBtdkx6VFliY1JTbnVhRTk1cFIwVVBPYzQKWTd5Ulg4MkttRWkxQVR6UEZBNXp2NFg4VnFMbVB2MFNnSjZiRVl1RnM3TUhScFErTkFRZlRBaktLQzg2d3J0QQpUbWlxeUVJU1RtQk03cVliOTl3OWRsWlBVcDIwNS9jZDBmY3ZudTNQQlJRRDhCVWdrOEhtRnhtNG1iZE9wdW9KCktzT05rbVBlNm5ZcDV2dGNiUndKWnlsSzJOdGkKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
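The base64 blobs above must decode back to the PEM files kubeadm generated. As an optional sanity check (a sketch, assuming the etcd CA sits at kubeadm's default path /etc/kubernetes/pki/etcd/ca.crt; no output from diff means the decoded Secret value and the on-disk file are identical):

[root@k8s-master01 ~]# diff <(grep 'etcd-ca:' calico-etcd.yaml | awk '{print $2}' | base64 -d) /etc/kubernetes/pki/etcd/ca.crt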

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml 
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

[root@k8s-master01 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml 
  etcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"

[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# echo $POD_SUBNET
192.168.0.0/12
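As a cross-check, the same Pod subnet is recorded in the kubeadm-config ConfigMap created during cluster initialization (assuming the default ConfigMap name; its ClusterConfiguration data key holds the networking block):

[root@k8s-master01 ~]# kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep podSubnet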

# Note: the following step changes the subnet under CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod subnet, i.e. replaces 192.168.x.x/16 with your cluster's Pod CIDR, and uncomments the two lines:

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml 
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"

[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml 
            - name: CALICO_IPV4POOL_CIDR
              value: 192.168.0.0/12

[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml 
          image: docker.io/calico/cni:v3.15.3
          image: docker.io/calico/pod2daemon-flexvol:v3.15.3
          image: docker.io/calico/node:v3.15.3
          image: docker.io/calico/kube-controllers:v3.15.3

Download the Calico images and push them to Harbor

[root@k8s-master01 ~]# vim download_calico_image.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_calico_image.sh
#URL:           raymond.blog.csdn.net
#Description:   Download Calico images and push them to a private Harbor registry
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' calico-etcd.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Calico镜像"${END}
    for i in ${images};do 
        docker pull registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Calico镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_calico_image.sh

[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' calico-etcd.yaml 
[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml 
          image: harbor.raymonds.cc/google_containers/cni:v3.15.3
          image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
          image: harbor.raymonds.cc/google_containers/node:v3.15.3
          image: harbor.raymonds.cc/google_containers/kube-controllers:v3.15.3
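Before applying, the edited manifest can optionally be validated client-side; --dry-run=client parses and prints the objects without creating anything on the cluster:

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml --dry-run=client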

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml 
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

# Check the Calico pod status
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-5d8579866c-fmqvq             1/1     Running   0          96s
calico-node-4x957                                    1/1     Running   0          96s
calico-node-m9wll                                    1/1     Running   0          96s
calico-node-ph8vb                                    1/1     Running   0          96s
calico-node-pzf57                                    1/1     Running   0          96s
calico-node-qmn42                                    1/1     Running   0          96s
calico-node-t89ws                                    1/1     Running   0          96s
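If any calico-node pod is still initializing, you can block until the DaemonSet rollout completes instead of re-running kubectl get (optional):

[root@k8s-master01 ~]# kubectl -n kube-system rollout status ds/calico-node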

# Check the cluster node status
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
k8s-master01.example.local   Ready    control-plane,master   26m   v1.21.8
k8s-master02.example.local   Ready    control-plane,master   22m   v1.21.8
k8s-master03.example.local   Ready    control-plane,master   17m   v1.21.8
k8s-node01.example.local     Ready    <none>                 21m   v1.21.8
k8s-node02.example.local     Ready    <none>                 20m   v1.21.8
k8s-node03.example.local     Ready    <none>                 13m   v1.21.8

12.Metrics-server Deployment

In recent versions of Kubernetes, system resource metrics are collected by Metrics-server, which gathers CPU and memory usage for nodes and Pods and exposes them through the Metrics API (consumed by kubectl top and the Horizontal Pod Autoscaler).

[root@k8s-master01 ~]# cat components.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
        - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --metric-resolution=30s
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
            - containerPort: 4443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            periodSeconds: 10
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
        - emptyDir: {}
          name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Copy front-proxy-ca.crt from the Master01 node to all worker nodes

[root@k8s-master01 ~]#  for i in k8s-node01 k8s-node02 k8s-node03;do scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt ; done
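To confirm the certificate landed on every node, the same host list can be reused over ssh (a quick, optional check):

[root@k8s-master01 ~]# for i in k8s-node01 k8s-node02 k8s-node03;do ssh $i 'ls -l /etc/kubernetes/pki/front-proxy-ca.crt'; done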

Modify the following (kubeadm does not sign the kubelets' serving certificates by default, so metrics-server needs --kubelet-insecure-tls to scrape them, and the requestheader flags must point at kubeadm's front-proxy CA):

[root@k8s-master01 ~]# vim components.yaml
...
    spec:
      containers:
        - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --metric-resolution=30s
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
# Add the following:
            - --kubelet-insecure-tls
            - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt # note: with kubeadm, the certificate file is front-proxy-ca.crt
            - --requestheader-username-headers=X-Remote-User
            - --requestheader-group-headers=X-Remote-Group
            - --requestheader-extra-headers-prefix=X-Remote-Extra-
...
          volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
# Add the following:
            - name: ca-ssl
              mountPath: /etc/kubernetes/pki
...
      volumes:
        - emptyDir: {}
          name: tmp-dir
# Add the following:
        - name: ca-ssl
          hostPath:
            path: /etc/kubernetes/pki
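Before moving on, grep for the newly added flags and the ca-ssl mount to confirm all three edits took effect:

[root@k8s-master01 ~]# grep -E "kubelet-insecure-tls|front-proxy-ca.crt|ca-ssl" components.yaml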

Download the image and point the manifest at the private registry

[root@k8s-master01 ~]# grep "image:" components.yaml 
          image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1

[root@k8s-master01 ~]# cat download_metrics_images.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_metrics_images.sh
#URL:           raymond.blog.csdn.net
#Description:   Download the Metrics-server image and push it to a private Harbor registry
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' components.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Metrics镜像"${END}
    for i in ${images};do 
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Metrics镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_metrics_images.sh

[root@k8s-master01 ~]# docker images|grep metrics
harbor.raymonds.cc/google_containers/metrics-server            v0.4.1              9759a41ccdf0        14 months ago       60.5MB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' components.yaml

[root@k8s-master01 ~]# grep "image:" components.yaml 
          image: harbor.raymonds.cc/google_containers/metrics-server:v0.4.1

Install metrics-server

[root@k8s-master01 ~]# kubectl apply -f components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the status

[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep metrics
metrics-server-9787b55bd-5gjbf                       1/1     Running   0          52s
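kubectl top only works once the aggregated API endpoint is reachable; the AVAILABLE column of the APIService registered above should read True:

[root@k8s-master01 ~]# kubectl get apiservice v1beta1.metrics.k8s.io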

[root@k8s-master01 ~]# kubectl top node
W0304 16:48:25.800197   28234 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01.example.local   156m         7%     1544Mi          40%       
k8s-master02.example.local   148m         7%     1288Mi          33%       
k8s-master03.example.local   135m         6%     1260Mi          32%       
k8s-node01.example.local     66m          3%     782Mi           20%       
k8s-node02.example.local     73m          3%     833Mi           21%       
k8s-node03.example.local     61m          3%     802Mi           21%  

[root@k8s-master01 ~]# kubectl top node --use-protocol-buffers
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01.example.local   145m         7%     1547Mi          40%       
k8s-master02.example.local   138m         6%     1289Mi          33%       
k8s-master03.example.local   122m         6%     1259Mi          32%       
k8s-node01.example.local     61m          3%     785Mi           20%       
k8s-node02.example.local     68m          3%     833Mi           21%       
k8s-node03.example.local     57m          2%     803Mi           21%
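
Pod-level metrics work the same way, and the raw Metrics API behind kubectl top can be queried directly (the second command returns a NodeMetricsList as JSON):

[root@k8s-master01 ~]# kubectl top pod -n kube-system --use-protocol-buffers
[root@k8s-master01 ~]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes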