23. K8s Operations: Cluster Upgrade -- Upgrading a kubeadm v1.21 Installation


1. Upgrading a kubeadm-based installation

To upgrade a k8s cluster you must first upgrade kubeadm to the target k8s version; in other words, kubeadm is the admission ticket for a k8s upgrade.

1.1 Upgrade preparation

Upgrade the components on every k8s master node: the control-plane services kube-controller-manager, kube-apiserver, kube-scheduler and kube-proxy are all upgraded to the new version.

1.1.1 Verify the current k8s master version

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.8", GitCommit:"4a3b558c52eb6995b3c5c1db5e54111bd0645a64", GitTreeState:"clean", BuildDate:"2021-12-15T14:50:58Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}

1.1.2 Verify the current k8s node versions

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE    VERSION
k8s-master01.example.local   Ready    control-plane,master   116m   v1.21.8
k8s-master02.example.local   Ready    control-plane,master   111m   v1.21.8
k8s-master03.example.local   Ready    control-plane,master   106m   v1.21.8
k8s-node01.example.local     Ready    <none>                 110m   v1.21.8
k8s-node02.example.local     Ready    <none>                 109m   v1.21.8
k8s-node03.example.local     Ready    <none>                 103m   v1.21.8

1.2 Upgrade the k8s master nodes

Upgrade the k8s master nodes one at a time.

1.2.1 View the upgrade plan

[root@k8s-master01 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.8
[upgrade/versions] kubeadm version: v1.21.8
I0304 18:14:07.428546   97176 version.go:254] remote version is much newer: v1.23.4; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.10
[upgrade/versions] Latest version in the v1.21 series: v1.21.10

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     6 x v1.21.8   v1.21.10

Upgrade to the latest version in the v1.21 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.8    v1.21.10
kube-controller-manager   v1.21.8    v1.21.10
kube-scheduler            v1.21.8    v1.21.10
kube-proxy                v1.21.8    v1.21.10
CoreDNS                   v1.8.0     v1.8.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.21.10

Note: Before you can perform this upgrade, you have to update kubeadm to v1.21.10.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

1.2.2 Upgrade each k8s master node

master01

#CentOS
[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates | grep 1.21.*
kubeadm.x86_64                       1.21.8-0                        @kubernetes
kubeadm.x86_64                       1.21.0-0                        kubernetes 
kubeadm.x86_64                       1.21.1-0                        kubernetes 
kubeadm.x86_64                       1.21.2-0                        kubernetes 
kubeadm.x86_64                       1.21.3-0                        kubernetes 
kubeadm.x86_64                       1.21.4-0                        kubernetes 
kubeadm.x86_64                       1.21.5-0                        kubernetes 
kubeadm.x86_64                       1.21.6-0                        kubernetes 
kubeadm.x86_64                       1.21.7-0                        kubernetes 
kubeadm.x86_64                       1.21.8-0                        kubernetes 
kubeadm.x86_64                       1.21.9-0                        kubernetes 
kubeadm.x86_64                       1.21.10-0                       kubernetes  

#Ubuntu
root@k8s-master01:~# apt-cache madison kubeadm |grep 1.21.*
   kubeadm | 1.21.10-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.9-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.8-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.7-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.21.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages

#Install socat on ha01 and ha02 (the HAProxy load balancer hosts)
#CentOS
[root@k8s-ha01 ~]# yum -y install socat

#Ubuntu
root@k8s-ha01:~# apt -y install socat

#Take master01 out of rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/172.31.3.101" | socat stdio /var/lib/haproxy/haproxy.sock"
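Before touching the node, you can confirm the backend really dropped into maintenance by querying the HAProxy runtime API over the same admin socket (a sketch; it assumes the socket path and backend name used above):

# the srv_admin_state column is non-zero while a server is administratively disabled (MAINT)
ssh -o StrictHostKeyChecking=no root@k8s-lb \
    'echo "show servers state kubernetes-6443" | socat stdio /var/lib/haproxy/haproxy.sock'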

#CentOS
[root@k8s-master01 ~]# yum -y install kubeadm-1.21.10 kubelet-1.21.10 kubectl-1.21.10

#Ubuntu
root@k8s-master01:~# apt -y install kubeadm=1.21.10-00 kubelet=1.21.10-00 kubectl=1.21.10-00

[root@k8s-master01 ~]# kubeadm config images list --kubernetes-version v1.21.10
k8s.gcr.io/kube-apiserver:v1.21.10
k8s.gcr.io/kube-controller-manager:v1.21.10
k8s.gcr.io/kube-scheduler:v1.21.10
k8s.gcr.io/kube-proxy:v1.21.10
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
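If you prefer not to mirror the images into harbor by hand (the script below does exactly that), kubeadm itself can pre-pull them from a mirror registry; this is only a sketch, and the Aliyun mirror here should be replaced with whatever registry your nodes can reach:

kubeadm config images pull --kubernetes-version v1.21.10 \
    --image-repository registry.aliyuncs.com/google_containers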

[root@k8s-master01 ~]# cat download_kubeadm_images_1.21-2.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_kubeadm_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

KUBEADM_VERSION=1.21.10
images=$(kubeadm config images list --kubernetes-version=v${KUBEADM_VERSION} | awk -F "/"  '{print $NF}')
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Kubeadm镜像"${END}
    for i in ${images};do 
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Kubeadm镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_kubeadm_images_1.21-2.sh 

[root@k8s-master01 ~]# kubeadm upgrade apply v1.21.10
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.10"
[upgrade/versions] Cluster version: v1.21.8
[upgrade/versions] kubeadm version: v1.21.10
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.10"...
Static pod: kube-apiserver-k8s-master01.example.local hash: 54ccbfaa343b01755fa5046d37885372
Static pod: kube-controller-manager-k8s-master01.example.local hash: f5cb290d0dd5336442c0cb3d7d633dd2
Static pod: kube-scheduler-k8s-master01.example.local hash: 933634761590238c2e4a0e136463fc14
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master01.example.local hash: 1815773c45f159b04fa1cc3dd5470d86
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests023185240"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-03-04-18-26-36/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master01.example.local hash: 54ccbfaa343b01755fa5046d37885372
... (same line repeated while waiting for the kubelet to restart the component) ...
Static pod: kube-apiserver-k8s-master01.example.local hash: e5735eec9cf199a0fab1f50892705433
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-03-04-18-26-36/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master01.example.local hash: f5cb290d0dd5336442c0cb3d7d633dd2
... (same line repeated while waiting for the kubelet to restart the component) ...
Static pod: kube-controller-manager-k8s-master01.example.local hash: fed309dfdc6c6282806ff6363ede3287
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-03-04-18-26-36/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master01.example.local hash: 933634761590238c2e4a0e136463fc14
... (same line repeated while waiting for the kubelet to restart the component) ...
Static pod: kube-scheduler-k8s-master01.example.local hash: 4ddfb2d81fc02c53b95b8ba31b7558e4
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.10". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet

#Bring master01 back into rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/172.31.3.101" | socat stdio /var/lib/haproxy/haproxy.sock"
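Before moving on to master02, it is worth confirming that the static control-plane pods on master01 are actually running the new images (a quick check; the tier=control-plane label and the spec.nodeName field selector are standard for kubeadm static pods):

kubectl -n kube-system get pods -l tier=control-plane \
    --field-selector spec.nodeName=k8s-master01.example.local \
    -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image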

master02

#Take master02 out of rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/172.31.3.102" | socat stdio /var/lib/haproxy/haproxy.sock"

#CentOS
[root@k8s-master02 ~]# yum -y install kubeadm-1.21.10 kubelet-1.21.10 kubectl-1.21.10

#Ubuntu
root@k8s-master02:~# apt -y install kubeadm=1.21.10-00 kubelet=1.21.10-00 kubectl=1.21.10-00

[root@k8s-master02 ~]# cat download_kubeadm_images_1.21-3.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_kubeadm_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

KUBEADM_VERSION=1.21.10
images=$(kubeadm config images list --kubernetes-version=v${KUBEADM_VERSION} | awk -F "/"  '{print $NF}')
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Kubeadm镜像"${END}
    for i in ${images};do 
        docker pull ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Kubeadm镜像下载完成"${END}
}

images_download

[root@k8s-master02 ~]# bash download_kubeadm_images_1.21-3.sh 

[root@k8s-master02 ~]# kubeadm upgrade apply v1.21.10
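Running kubeadm upgrade apply again works here because the target version is already recorded in the cluster. The upstream documentation instead uses the shorter command below on the remaining control-plane nodes; it upgrades the local static Pod manifests and kubelet configuration without repeating the full apply flow (either path should lead to the same result):

kubeadm upgrade node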

[root@k8s-master02 ~]# systemctl daemon-reload
[root@k8s-master02 ~]# systemctl restart kubelet

#Bring master02 back into rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/172.31.3.102" | socat stdio /var/lib/haproxy/haproxy.sock"

master03

#Take master03 out of rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/172.31.3.103" | socat stdio /var/lib/haproxy/haproxy.sock"

#CentOS
[root@k8s-master03 ~]# yum -y install kubeadm-1.21.10 kubelet-1.21.10 kubectl-1.21.10

#Ubuntu
root@k8s-master03:~# apt -y install kubeadm=1.21.10-00 kubelet=1.21.10-00 kubectl=1.21.10-00

[root@k8s-master03 ~]# bash download_kubeadm_images_1.21-3.sh 

[root@k8s-master03 ~]# kubeadm upgrade apply v1.21.10

[root@k8s-master03 ~]# systemctl daemon-reload
[root@k8s-master03 ~]# systemctl restart kubelet

#Bring master03 back into rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/172.31.3.103" | socat stdio /var/lib/haproxy/haproxy.sock"

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE    VERSION
k8s-master01.example.local   Ready    control-plane,master   140m   v1.21.10
k8s-master02.example.local   Ready    control-plane,master   135m   v1.21.10
k8s-master03.example.local   Ready    control-plane,master   130m   v1.21.10
k8s-node01.example.local     Ready    <none>                 134m   v1.21.8
k8s-node02.example.local     Ready    <none>                 133m   v1.21.8
k8s-node03.example.local     Ready    <none>                 127m   v1.21.8

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.10", GitCommit:"a7a32748b5c60445c4c7ee904caf01b91f2dbb71", GitTreeState:"clean", BuildDate:"2022-02-16T11:22:49Z", GoVersion:"go1.16.14", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master01 ~]# kubectl  version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.10", GitCommit:"a7a32748b5c60445c4c7ee904caf01b91f2dbb71", GitTreeState:"clean", BuildDate:"2022-02-16T11:24:04Z", GoVersion:"go1.16.14", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.10", GitCommit:"a7a32748b5c60445c4c7ee904caf01b91f2dbb71", GitTreeState:"clean", BuildDate:"2022-02-16T11:18:16Z", GoVersion:"go1.16.14", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master01 ~]# kubelet  --version
Kubernetes v1.21.10

1.3 Upgrade calico

[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -O

[root@k8s-master01 ~]# vim calico-etcd.yaml
...
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: OnDelete #change this: with OnDelete calico does not roll out automatically; a pod is only updated after it is deleted and recreated (as done below)
  template:
    metadata:
      labels:
        k8s-app: calico-node
...
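If calico is already deployed, the same update strategy can also be set with a live patch instead of editing the manifest; applying the edited manifest later achieves the same thing, so this is just an alternative sketch:

kubectl -n kube-system patch daemonset calico-node \
    --type merge -p '{"spec":{"updateStrategy":{"type":"OnDelete"}}}'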

Modify the following places in calico-etcd.yaml:

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml 
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"

[root@k8s-master01 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"#g' calico-etcd.yaml

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml 
  etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml 
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

[root@k8s-master01 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`

[root@k8s-master01 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
  etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdzFyRE5lanVxbnVDRVBEa01xdjlmcVNWeHhmSzQvQ0wrY2Z3YnY5WHovSS9ScHZmCmNtRjdUM0wzYnBDNS9JMjVuVlZ6N2plb2JUZnN2RFhDWjVIQml1MkYrSzR1SHdna3dtREJYWjZzSUd6clNSS1oKRTdldWhxN1F3SWVqOWtTcTN2RjA5d1NsbWJJeFJaNTZmMXgvYXVMcnFpNUZnanBmWUhMWit3MVdBcUM3WWtVWgoraHoxN3pFRjVnaWlYWE9ucGR1c2VCZDZoVHd1T2JwNmg0clpEb0IrY1VMMXplOTNPTFJGbDBpL2ppRzFzMGw2CmlUaUgwU3d1SFlXQ1NXNWZ5NzBjVDFseUduWG5jNUFwOGRYZWt5dnZYRFBzNEVnVVcvbUczWlpoalAreC9DWTcKNGp6M2dRZVFHYitSMUdRb1djdlh4UGZneGsxK2IwK01ZUjFmVXdJREFRQUJBb0lCQUYwWTQrN05FdkFyNjlBbQovSmtwWGFUOHltUVc4cG11Q1FjQVFaU2tHelQrUFNscEh4TmpZV0I3YVc5SGlWclNMNkxMRm5Sd0VkUDYwdGJlCng4YVRyNmlGaVZMNXJ3RWE0R25Cc21Uck9SdzZ5K1lHOXV4dW5MMlNrZWt1dXZTaHhNeDZSVU55ODNoTGN5KzYKVnFaYmJsMkJ4czFUUDh6UUJLUHlGKytNYTNEVVV1RnhNZ0h3cnFOYVA5bzdOaW9ycTFzZTdHV2F1dHNXSW1HZwpLZjJDREU5SVlHbGQyd1pnWCtoVWhQOU1UcWVKVGJrQVJiUEVsS1BLUDBSeUxaNi9tcE42K0VONGovS0NHeW9PCmNmTzUrazlpUlpwYytldnhpelZkNXNyWER2azBlK1pyU3Y2eUhtV0hpaUxCMm9PN0hkMUVKbjhQa09scE1ISjcKU0hoTzBBRUNnWUVBN29DMFZicmVQNXhZRjUyalI2RUtDUkVLR0lWcExrTTBuNUpSRUdpNElXcWNyWEJ1eGhIYwpQWVgzdnJKRWM5Nm8va28rZTRVNk94cWFISjZRSGNVODFCRTcyc0d0RkY2WHlnNlRkSjRJd2ZIenpKNjlDK2JtCmRhSlNqbG1UeE9GOEhNSkpjdUt3RGRxOFlLNlRHZzN0MXJTcVNtczMzV1BxdG9zbW5Takp0cThDZ1lFQTBhK3kKTGxIWWl5U2NVSG0vY3hYWEZSVkNUWjJqaWo0MzF4MXNkNFE3amc0MlQ0dmNCUElsUmJGQjdRYjVLallZUjZkYQp2cGJMV0hLbE1OblUrM1dvcnJnNjNWNlZwa3lCck1VcnNSemlQaUIxM1lXVENsZjUwdDJERVZ5dS9aWDZPc2FuCjY4MDJwRFc0YnhKcmNPam9aM3BjUm9Fcy96N0RGKzArZStseWlwMENnWUVBdXR2WGJmdDBPUDR5L24yZytYT3cKT3g1QWZLbTVtR2RMQ1dKSFpNWEd6VmVMM1U3ald3ZVBPQnlIMTc0dloyQ2hvbWxrdnIzSXU1bkIrSDQ2aHppSwp5ZE9ldzJ0T1FWRkROeWxvV2N1ZkxPUjFrSEVseC9kbHcvQWpJaWdJWUE0UmdTNnZBUFdkM1p6c1RnczRjUWRNCnVoVGQvbVEyWnB2cnZvMFMrYnFGSHowQ2dZRUFnVnN3UXQ3L0JhZktQdU04dGxTczRUYkNObnVmWGpOUDQ0Y2wKV1AzY2Q2QlE1UFhVLzhBYU9rcEY3Mkd6Nk5TQ1dnSG1PMWx2ak5yOUNZdjRsa0JabFovVndLY1BEdzUzbVF2eQpEa3RSVHg1YldCT0ZTSVpKZWtwcEJ4YjBaVUJXcEZmVlUrUy9ac0kxUzJCRG85NHJNVnNNL2ZuR3RwZ1RadmxXCjZMNTFpUWtDZ1lBUkVRSElYTmhlYW1RSFE1TEpicEZzMFltSzRVZDluL2w1Vng1MEdUbG0vUEx5VlBWWU9TUWUKenYyYS96RHY2dVJ6ZGROU0tpSkFMVUJDZG5RSDRraklBWGg3NDBTQXNGUDZraW4zNm11RDB4RTlEODBOMlNyMgpDL3hQWHdINWp0Ry9jUkdHZGU4SGdjQTg4NkFKYkMyenlxYURpY3h1ejRQcll4Z2dPNG9iTmc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURRVENDQWltZ0F3SUJBZ0lJR2VNdy9ETExMK2t3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSFpYUmpaQzFqWVRBZUZ3MHlNakF4TWpZeE16TTFOVGRhRncweU16QXhNall4TXpNMU5UZGFNQmN4RlRBVApCZ05WQkFNVERHczRjeTF0WVhOMFpYSXdNVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBTU5hd3pYbzdxcDdnaER3NURLci9YNmtsY2NYeXVQd2kvbkg4RzcvVjgveVAwYWIzM0poZTA5eTkyNlEKdWZ5TnVaMVZjKzQzcUcwMzdMdzF3bWVSd1lydGhmaXVMaDhJSk1KZ3dWMmVyQ0JzNjBrU21STzNyb2F1ME1DSApvL1pFcXQ3eGRQY0VwWm15TVVXZWVuOWNmMnJpNjZvdVJZSTZYMkJ5MmZzTlZnS2d1MkpGR2ZvYzllOHhCZVlJCm9sMXpwNlhickhnWGVvVThMam02ZW9lSzJRNkFmbkZDOWMzdmR6aTBSWmRJdjQ0aHRiTkplb2s0aDlFc0xoMkYKZ2tsdVg4dTlIRTlaY2hwMTUzT1FLZkhWM3BNcjcxd3o3T0JJRkZ2NWh0MldZWXovc2Z3bU8rSTg5NEVIa0JtLwprZFJrS0ZuTDE4VDM0TVpOZm05UGpHRWRYMU1DQXdFQUFhT0JsVENCa2pBT0JnTlZIUThCQWY4RUJBTUNCYUF3CkhRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQjhHQTFVZEl3UVlNQmFBRk1Ha3dwKy8KZGdBTjRHdnRDT0ZrbE5OSnBCSkNNRUFHQTFVZEVRUTVNRGVDREdzNGN5MXRZWE4wWlhJd01ZSUpiRzlqWVd4bwpiM04waHdTc0h3Tmxod1IvQUFBQmh4QUFBQUFBQUFBQUFBQUFBQUFBQUFBQk1BMEdDU3FHU0liM0RRRUJDd1VBCkE0SUJBUUFMOE53a0I5aUprc0RnOGp6dFRTTjB4U3pyaXQyK242M2QrV0dGS3l1K2d6Z2pTLzZaOXhYQkpZN3YKL2c1SEZrUnpxTmJXTDdoV0dtY1ZPUGJpQmNpZnJtcmpFNUFMdzhPNmZBVGg2V3RtaVN4RlRwa1Nhc3R5OW82RApJcGlmYzhSTS8rSS9EVWdTQXQ3ZzFucUJodjlxdnFSRWNiM1J1SmRYWTJjNi90LzNZb3gzTUFmVzNJaUVDNUorCkNTSXl2UUtmUDlBWVlXK2F4Y1dQelhXNzEwUVdNTnozZXVQMzJqZENkanBzbFVLNldpaHJQYjdnaURTdDdFVFYKWk5EeEh4NUp3WXlpYmFxbGQzQUlicFhNRmxnY2NubWttM0pwWnIrTUI4bGlGYThHZlU5L005N1ZueXFZN0huNgpDNkdXTWlJNWFvc0lGaE9INUJ3NFFNa0NzSXlvCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQWNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkbGRHTmsKTFdOaE1CNFhEVEl5TURFeU5qRXpNelUxTjFvWERUTXlNREV5TkRFek16VTFOMW93RWpFUU1BNEdBMVVFQXhNSApaWFJqWkMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5OcnNlMnd1aG45CnVKazdpOEZLN1A5SHU1YlIwOEY5R0VQTEtBelg0NHdKSE45QjZ1MTBrZmNoL0dlRTBremErQjhJejQ1UjRrdmUKbVpVYS9XbmVXdjNxRm1odURiTEdvU1B1Ykl1aXh1aTkwdllzbjBaRUJCV2dMbWdRNkdHNnd5OWFNbG55VGlWYworOTdSNTg2b3dMVGRTU3NiNjd2c0w0U2U0U2lXOHdTQTQ2K3FXSEJKNHc5Q2s2QXljam9vbDBMbXREVkJ1QlpqCjlNeWdDbUE4M3lkTnV4eUhDSGJpM2FRdkovVUNyQnoyNk5zYTVha1NlMlRQNGJ1US9PWjBIYnhsNUE5NXIyeGgKNkM1NGx3cHFLeTkxb2craWQ1ZlZMRFVVdDR0d1pvd0dITnZxMWRrRnI3VjA2SDJjdXo0eXlMajQ0a0xPNk9LMgo4OGplaWhBREhiY0NBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGTUdrd3ArL2RnQU40R3Z0Q09Ga2xOTkpwQkpDTUEwR0NTcUdTSWIzRFFFQkN3VUEKQTRJQkFRQTI5SWlldmxmdk9MSmFCOGlNR3ZLTmVXMExGZkhkTGRoeUl3T2dDeThjaHNaTVR5SjhXdzFrUUVKUgozbTY0MGVHK2UvOTE0QmE5Wk5iL0NQMkN1eHA2MkQrVWl0U0FjS001NWtTNURuVEVrcURwbVdETjdLTjhISk1QCkcwdlRXYnNrVTVicXJqb0JVQVNPNUsxeDl4WENSUDU2elBVZ3E5QTY4SmM4N1Mya29PNk56Mm53ZE9zc042TW0KRzFNQmdHQ2lqQXB3MDZJM2NuT1ExcFFhVk1RNVovT0tDSEoyTFFFUFJISVZqb2E4clBBcmNyYXFGUnpPeTk4agpOc3FxcWYvNDhMamVwZDZvOFlZc08zRng2M3c2YmhaOG94WDFxT090WTRlQ0pPeWRTZkRMR21tYkMrc1ozZlJiCjU0RkVLQ1RWKzhqQjBYNmZJYjl2OHg3WU5MNFgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
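As an optional sanity check (assuming openssl is available on the host), the embedded CA can be decoded back out to confirm it matches the etcd CA on disk:

grep 'etcd-ca:' calico-etcd.yaml | awk '{print $2}' | base64 -d | openssl x509 -noout -subject -enddate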

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml 
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

[root@k8s-master01 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml 
  etcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"

[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# echo $POD_SUBNET
192.168.0.0/12

# Note: the step below changes the CALICO_IPV4POOL_CIDR value in calico-etcd.yaml to your own Pod subnet, i.e. replaces 192.168.x.x/16 with your cluster's Pod CIDR, and uncomments those two lines:

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml 
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"

[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml 
            - name: CALICO_IPV4POOL_CIDR
              value: 192.168.0.0/12

[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml 
          image: docker.io/calico/cni:v3.22.1
          image: docker.io/calico/pod2daemon-flexvol:v3.22.1
          image: docker.io/calico/node:v3.22.1
          image: docker.io/calico/kube-controllers:v3.22.1

Download the calico images and push them to harbor:

[root@k8s-master01 ~]# cat download_calico_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_calico_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' calico-etcd.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Calico镜像"${END}
    for i in ${images};do 
        docker pull registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Calico镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_calico_images.sh

[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' calico-etcd.yaml 
[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml 
          image: harbor.raymonds.cc/google_containers/cni:v3.22.1
          image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1
          image: harbor.raymonds.cc/google_containers/node:v3.22.1
          image: harbor.raymonds.cc/google_containers/kube-controllers:v3.22.1

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml 
secret/calico-etcd-secrets configured
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

[root@k8s-master01 ~]# vim calico-etcd.yaml
...
apiVersion: policy/v1
...
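The same fix can be scripted; this one-liner assumes the PodDisruptionBudget is the only remaining policy/v1beta1 object in calico-etcd.yaml:

sed -i 's@apiVersion: policy/v1beta1@apiVersion: policy/v1@g' calico-etcd.yaml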

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml 
secret/calico-etcd-secrets unchanged
configmap/calico-config unchanged
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers unchanged
serviceaccount/calico-kube-controllers unchanged
poddisruptionbudget.policy/calico-kube-controllers configured


#Take master01 out of rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/172.31.3.101" | socat stdio /var/lib/haproxy/haproxy.sock"

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master01
calico-node-4x957                                    1/1     Running   0          150m    172.31.3.101      k8s-master01.example.local   <none>           <none>

[root@k8s-master01 ~]# kubectl get pod calico-node-4x957 -n kube-system -o yaml |grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3

#The images have not been upgraded yet (expected with the OnDelete strategy)

[root@k8s-master01 ~]# kubectl delete pod calico-node-4x957 -n kube-system
pod "calico-node-4x957" deleted
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master01
calico-node-dl9z8                                    1/1     Running   0          19s     172.31.3.101      k8s-master01.example.local   <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-dl9z8 -n kube-system -o yaml |grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1
    image: calico/node:v3.22.1
    image: calico/cni:v3.22.1
    image: calico/pod2daemon-flexvol:v3.22.1

#Bring master01 back into rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/172.31.3.101" | socat stdio /var/lib/haproxy/haproxy.sock"

#Take master02 out of rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/172.31.3.102" | socat stdio /var/lib/haproxy/haproxy.sock"

[root@k8s-master01 ~]#  kubectl get pod -n kube-system -o wide |grep calico |grep master02
calico-node-t89ws                                    1/1     Running   0          153m    172.31.3.102      k8s-master02.example.local   <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-t89ws  -n kube-system -o yaml |grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-t89ws -n kube-system
pod "calico-node-t89ws" deleted
[root@k8s-master01 ~]#  kubectl get pod -n kube-system -o wide |grep calico |grep master02
calico-node-29xv5                                    0/1     Init:0/2   0          4s      172.31.3.102      k8s-master02.example.local   <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-29xv5  -n kube-system -o yaml |grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1

#Bring master02 back into rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/172.31.3.102" | socat stdio /var/lib/haproxy/haproxy.sock"

#Take master03 out of rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "disable server kubernetes-6443/172.31.3.103" | socat stdio /var/lib/haproxy/haproxy.sock"

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master03
calico-node-m9wll                                    1/1     Running   0          157m    172.31.3.103      k8s-master03.example.local   <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-m9wll -n kube-system -o yaml |grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-m9wll -n kube-system
pod "calico-node-m9wll" deleted
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master03
calico-node-cnpb9                                    0/1     PodInitializing   0          8s      172.31.3.103      k8s-master03.example.local   <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-cnpb9 -n kube-system -o yaml |grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1

#Bring master03 back into rotation at the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo "enable server kubernetes-6443/172.31.3.103" | socat stdio /var/lib/haproxy/haproxy.sock"
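At this point only the worker nodes should still be running the old calico images; a quick way to see which calico-node pods remain on the old version (assuming the standard k8s-app=calico-node label from the manifest):

kubectl -n kube-system get pods -l k8s-app=calico-node \
    -o custom-columns=NODE:.spec.nodeName,IMAGE:.spec.containers[0].image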

1.4 Upgrade the k8s worker nodes

[root@k8s-master01 ~]# kubectl drain k8s-node01.example.local --delete-emptydir-data --force --ignore-daemonsets
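drain cordons the node and evicts its pods: --ignore-daemonsets leaves DaemonSet pods such as calico-node and kube-proxy in place, while --delete-emptydir-data and --force allow pods with emptyDir volumes or without a controller to be evicted. What is still left on the drained node can be checked with (a sketch):

kubectl get pods -A -o wide --field-selector spec.nodeName=k8s-node01.example.local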

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS                     ROLES                  AGE    VERSION
k8s-master01.example.local   Ready                      control-plane,master   3h8m   v1.21.10
k8s-master02.example.local   Ready                      control-plane,master   3h4m   v1.21.10
k8s-master03.example.local   Ready                      control-plane,master   178m   v1.21.10
k8s-node01.example.local     Ready,SchedulingDisabled   <none>                 3h2m   v1.21.8
k8s-node02.example.local     Ready                      <none>                 3h1m   v1.21.8
k8s-node03.example.local     Ready                      <none>                 175m   v1.21.8

#CentOS
[root@k8s-node01 ~]# yum -y install kubeadm-1.21.10 kubelet-1.21.10

#Ubuntu
root@k8s-node01:~# apt -y install kubeadm=1.21.10-00 kubelet=1.21.10-00
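The upstream upgrade procedure also runs the command below on each worker before restarting kubelet; it refreshes the local kubelet configuration for the target version. For a patch-level upgrade like this one the new packages are usually sufficient, but running it keeps the node aligned with the documented flow:

kubeadm upgrade node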

[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl restart kubelet

[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node01
kube-system            calico-node-qmn42                                    1/1     Running   0          164m    172.31.3.108      k8s-node01.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-qmn42 -n kube-system -o yaml |grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-qmn42 -n kube-system 
pod "calico-node-qmn42" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node01
kube-system            calico-node-x7xw5                                    1/1     Running   0          25s     172.31.3.108      k8s-node01.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-x7xw5 -n kube-system -o yaml |grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1

[root@k8s-master01 ~]# kubectl uncordon k8s-node01.example.local
node/k8s-node01.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE     VERSION
k8s-master01.example.local   Ready    control-plane,master   3h12m   v1.21.10
k8s-master02.example.local   Ready    control-plane,master   3h7m    v1.21.10
k8s-master03.example.local   Ready    control-plane,master   3h2m    v1.21.10
k8s-node01.example.local     Ready    <none>                 3h6m    v1.21.10
k8s-node02.example.local     Ready    <none>                 3h5m    v1.21.8
k8s-node03.example.local     Ready    <none>                 179m    v1.21.8

[root@k8s-master01 ~]# kubectl drain k8s-node02.example.local --delete-emptydir-data --force --ignore-daemonsets

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS                     ROLES                  AGE     VERSION
k8s-master01.example.local   Ready                      control-plane,master   3h13m   v1.21.10
k8s-master02.example.local   Ready                      control-plane,master   3h8m    v1.21.10
k8s-master03.example.local   Ready                      control-plane,master   3h3m    v1.21.10
k8s-node01.example.local     Ready                      <none>                 3h7m    v1.21.10
k8s-node02.example.local     Ready,SchedulingDisabled   <none>                 3h6m    v1.21.8
k8s-node03.example.local     Ready                      <none>                 3h      v1.21.8

#CentOS
[root@k8s-node02 ~]# yum -y install kubeadm-1.21.10 kubelet-1.21.10

#Ubuntu
root@k8s-node02:~# apt -y install kubeadm=1.21.10-00 kubelet=1.21.10-00

[root@k8s-node02 ~]# systemctl daemon-reload
[root@k8s-node02 ~]# systemctl restart kubelet

[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node02 | tail -n1
kube-system            calico-node-ph8vb                                    0/1     Running   0          169m    172.31.3.109      k8s-node02.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod  calico-node-ph8vb -n kube-system -o yaml| grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-ph8vb -n kube-system 
pod "calico-node-ph8vb" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node02 | tail -n1
kube-system            calico-node-wfrxd                                    1/1     Running   0          33s     172.31.3.109      k8s-node02.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod  calico-node-wfrxd -n kube-system -o yaml| grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1

[root@k8s-master01 ~]# kubectl uncordon k8s-node02.example.local
node/k8s-node02.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE     VERSION
k8s-master01.example.local   Ready    control-plane,master   3h16m   v1.21.10
k8s-master02.example.local   Ready    control-plane,master   3h12m   v1.21.10
k8s-master03.example.local   Ready    control-plane,master   3h6m    v1.21.10
k8s-node01.example.local     Ready    <none>                 3h10m   v1.21.10
k8s-node02.example.local     Ready    <none>                 3h9m    v1.21.10
k8s-node03.example.local     Ready    <none>                 3h3m    v1.21.8

[root@k8s-master01 ~]# kubectl drain k8s-node03.example.local --delete-emptydir-data --force --ignore-daemonsets

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS                     ROLES                  AGE     VERSION
k8s-master01.example.local   Ready                      control-plane,master   3h17m   v1.21.10
k8s-master02.example.local   Ready                      control-plane,master   3h13m   v1.21.10
k8s-master03.example.local   Ready                      control-plane,master   3h7m    v1.21.10
k8s-node01.example.local     Ready                      <none>                 3h11m   v1.21.10
k8s-node02.example.local     Ready                      <none>                 3h10m   v1.21.10
k8s-node03.example.local     Ready,SchedulingDisabled   <none>                 3h4m    v1.21.8

#CentOS
[root@k8s-node03 ~]# yum -y install kubeadm-1.21.10 kubelet-1.21.10

#Ubuntu
root@k8s-node03:~# apt -y install kubeadm=1.21.10-00 kubelet=1.21.10-00

[root@k8s-node03 ~]# systemctl daemon-reload
[root@k8s-node03 ~]# systemctl restart kubelet

[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node03 | tail -n1
kube-system            calico-node-pzf57                                    0/1     Running   0          173m    172.31.3.110      k8s-node03.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-pzf57 -n kube-system -o yaml| grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.raymonds.cc/google_containers/node:v3.15.3
    image: harbor.raymonds.cc/google_containers/cni:v3.15.3
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-pzf57 -n kube-system 
pod "calico-node-pzf57" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node03 | tail -n1
kube-system            calico-node-clg2c                                    1/1     Running   0          24s     172.31.3.110      k8s-node03.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-clg2c -n kube-system -o yaml| grep "image:"
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
  - image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1
    image: harbor.raymonds.cc/google_containers/node:v3.22.1
    image: harbor.raymonds.cc/google_containers/cni:v3.22.1
    image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.22.1

[root@k8s-master01 ~]# kubectl uncordon k8s-node03.example.local
node/k8s-node03.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE     VERSION
k8s-master01.example.local   Ready    control-plane,master   3h20m   v1.21.10
k8s-master02.example.local   Ready    control-plane,master   3h15m   v1.21.10
k8s-master03.example.local   Ready    control-plane,master   3h10m   v1.21.10
k8s-node01.example.local     Ready    <none>                 3h14m   v1.21.10
k8s-node02.example.local     Ready    <none>                 3h13m   v1.21.10
k8s-node03.example.local     Ready    <none>                 3h7m    v1.21.10
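
With every node reporting v1.21.10, you can optionally confirm that the kube-proxy DaemonSet was rolled to the same version (a quick sketch, assuming the default DaemonSet name in kube-system):

[root@k8s-master01 ~]# kubectl -n kube-system get ds kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'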

1.5 Upgrade metrics-server

github.com/kubernetes-…

[root@k8s-master01 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

#Modify the file as shown below
[root@k8s-master01 ~]# vim components.yaml
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
#Add the following lines
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-  
...
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
#Add the following lines
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
...
      volumes:
      - emptyDir: {}
        name: tmp-dir
#Add the following lines
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki   
...
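
A quick grep can confirm the added flags and the ca-ssl mount made it into the manifest (assuming the edits above were applied verbatim):

[root@k8s-master01 ~]# grep -E "kubelet-insecure-tls|requestheader|ca-ssl" components.yaml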

[root@k8s-master01 ~]# grep "image:" components.yaml 
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1

[root@k8s-master01 ~]# cat download_metrics_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_metrics_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' components.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Metrics镜像"${END}
    for i in ${images};do 
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Metrics镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_metrics_images.sh

[root@k8s-master01 ~]# docker images |grep metrics
harbor.raymonds.cc/google_containers/metrics-server            v0.6.1              f73640fb5061        8 weeks ago         64.3MB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' components.yaml 

[root@k8s-master01 ~]# grep "image:" components.yaml 
        image: harbor.raymonds.cc/google_containers/metrics-server:v0.6.1

[root@k8s-master01 ~]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the status:

[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep metrics
metrics-server-75c8898f9f-fkmd8                      1/1     Running   0          36s
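
Before querying node metrics, you can also confirm that the APIService registered by components.yaml is available (the AVAILABLE column should show True):

[root@k8s-master01 ~]# kubectl get apiservice v1beta1.metrics.k8s.io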

[root@k8s-master01 ~]# kubectl top node --use-protocol-buffers
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01.example.local   167m         8%     2456Mi          64%       
k8s-master02.example.local   207m         10%    1470Mi          38%       
k8s-master03.example.local   162m         8%     1359Mi          35%       
k8s-node01.example.local     104m         5%     1064Mi          27%       
k8s-node02.example.local     75m          3%     1032Mi          27%       
k8s-node03.example.local     103m         5%     962Mi           25%
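
Pod-level metrics can be checked the same way, with the same --use-protocol-buffers flag (optional check):

[root@k8s-master01 ~]# kubectl top pod -n kube-system --use-protocol-buffers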

1.6 Upgrade dashboard

Official GitHub address: github.com/kubernetes/…

The latest dashboard version can be checked on the official GitHub page.

root@k8s-master01:~# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

[root@k8s-master01 ~]# vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort #add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30005 #add this line
  selector:
    k8s-app: kubernetes-dashboard
...
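
Note that 30005 sits inside the default NodePort range (30000-32767). A quick grep can confirm both added lines (assuming they were added exactly as above):

[root@k8s-master01 ~]# grep -E "NodePort|nodePort" recommended.yaml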

[root@k8s-master01 ~]# grep "image:" recommended.yaml 
          image: kubernetesui/dashboard:v2.5.0
          image: kubernetesui/metrics-scraper:v1.0.7

[root@k8s-master01 ~]# cat download_dashboard_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_dashboard_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' recommended.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Dashboard镜像"${END}
    for i in ${images};do 
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Dashboard镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_dashboard_images.sh

root@k8s-master01:~# docker images |grep -E "(dashboard|metrics-scraper)"
harbor.raymonds.cc/google_containers/dashboard                 v2.5.0     57446aa2002e   4 weeks ago     223MB
harbor.raymonds.cc/google_containers/metrics-scraper           v1.0.7     7801cfc6d5c0   8 months ago    34.4MB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) kubernetesui(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' recommended.yaml 

[root@k8s-master01 ~]# grep "image:" recommended.yaml 
          image: harbor.raymonds.cc/google_containers/dashboard:v2.5.0
          image: harbor.raymonds.cc/google_containers/metrics-scraper:v1.0.7

[root@k8s-master01 ~]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
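
Optionally verify that the dashboard pods are running and that the Service exposes NodePort 30005 (both names come from the manifest applied above):

[root@k8s-master01 ~]# kubectl get pod,svc -n kubernetes-dashboard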

Create the admin user with admin.yaml:

[root@k8s-master01 ~]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

[root@k8s-master01 ~]# kubectl apply -f admin.yaml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

https://172.31.3.101:30005

[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-tzh6x
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: c79c4e80-0d34-488d-950a-cdd18f4f687e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InhHSlloc2pJTWNxdVVhSlNrR0UwY3pLd1JLSk1IeThBcDVPWjZ4dW5OMU0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXR6aDZ4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjNzljNGU4MC0wZDM0LTQ4OGQtOTUwYS1jZGQxOGY0ZjY4N2UiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.5DQGMa9Xfq1VtPFrnvfZoytYGT1kmxUj5XqmkCAM51_G5sckAhE79QBXRwBNrCJyQriMfeh3iuvCweKqyfUZUV7lAh6zopZlY2VvEge40ch_9mtSX0t2i-kces4W1ZT7bZfkX20XS6fcshgsC1IJJEeNtsi-EXWKUiC1p-6KpAwQmdWzfHGhfKZZQVgShbq6uHKjaqEfqvVH2aLxHZbmdLXU8JKwegg4arRygeRejHWDc2DLLoMHBfCC46dA_Ntjex9gFTkcwGH4HlN9-WpF94gHMHSo4Rj7To5Q9OLap1BC70sElSbY5juf4orDcLJVtrFMIwmVizUkyTle8jQlgQ
