Notes on Upgrading a Highly Available Cluster with kubeadm (Three-Node Master)


Things to note before upgrading:

  1. When performing a minor-version upgrade of the kubelet, the node must be drained. This matters on control-plane nodes because CoreDNS Pods or other critical workloads may be running there.
  2. After the upgrade, the hashes of the static pod containers change, so those containers are restarted.
  3. The kubeadm version used to drive the upgrade must be greater than or equal to the target version (a pre-flight sketch for this check follows the list).
  4. The official position is that skipping versions during an upgrade is not supported, though the docs are not explicit about how big a jump that refers to; what I have tested is upgrading from 1.18.8 to 1.19.9 (exactly one minor version), which went through smoothly.
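
As a quick sanity check for item 3, something along these lines can run before anything touches the cluster (a minimal bash sketch; `v1.19.9` is this upgrade's target):

```bash
# Pre-flight: the kubeadm driving the upgrade must be >= the target version
TARGET="v1.19.9"
CURRENT="$(kubeadm version -o short)"   # e.g. v1.19.9
# sort -V orders version strings; the last line is the newer version
if [ "$(printf '%s\n%s\n' "$TARGET" "$CURRENT" | sort -V | tail -n1)" != "$CURRENT" ]; then
    echo "kubeadm $CURRENT is older than target $TARGET - upgrade kubeadm first" >&2
    exit 1
fi
```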

Check the current version information in this environment

`[root@dm01 ~]# kubectl get nodes`
NAME   STATUS   ROLES    AGE    VERSION
dm01   Ready    master   181d   v1.18.8
dm02   Ready    master   181d   v1.18.8
dm03   Ready    master   181d   v1.18.8

`[root@dm01 ~]# kubectl version`
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

The upgrade follows this basic flow: upgrade the first control-plane node, then the remaining control-plane nodes, and finally the worker nodes.

Upgrade the first control-plane node

Save the kubeadm config to a file and edit the fields flagged below:

`[root@dm01 ~]# kubeadm config view > kubeadm-config.yaml`
`[root@dm01 ~]# cat kubeadm-config.yaml `
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.11:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/k8sxio  # changed to the Alibaba Cloud mirror; the default registry is not reachable from mainland China without a proxy
kind: ClusterConfiguration
kubernetesVersion: v1.19.9    # changed to the version we are upgrading to
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
Copy the kubeadm config file to the other control-plane nodes as well:
`[root@dm01 ~]# scp kubeadm-config.yaml dm02:/root/`
kubeadm-config.yaml                                                                                              100%  521   511.2KB/s   00:00    
`[root@dm01 ~]# scp kubeadm-config.yaml dm03:/root/`
kubeadm-config.yaml                                                                                              100%  521    42.9KB/s   00:00    
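
With more control-plane nodes this gets repetitive; a small loop does the same thing (a sketch; the host names are this cluster's):

```bash
# Distribute the saved kubeadm config to every other control-plane node
for host in dm02 dm03; do
    scp kubeadm-config.yaml "${host}:/root/"
done
```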

Drain the node and mark it unschedulable to prepare for the upgrade

`[root@dm01 ~]# kubectl drain dm01 --ignore-daemonsets`
node/dm01 cordoned
WARNING: ignoring DaemonSet-managed Pods: istio-system/istio-ingressgateway-jvtf6, kube-system/kube-flannel-ds-qrzbm, kube-system/kube-proxy-5xvbm, test008/nginx-d26b8
evicting pod kube-system/coredns-84b99c4749-6j894

pod/coredns-84b99c4749-6j894 evicted
node/dm01 evicted

`[root@dm03 ~]# kubectl get nodes` 
NAME   STATUS                     ROLES    AGE    VERSION
dm01   Ready,SchedulingDisabled   master   181d   v1.18.8
dm02   Ready                      master   181d   v1.18.8
dm03   Ready                      master   181d   v1.18.8

Download and install kubeadm and kubectl 1.19.9

`[root@dm01 ~]# yum list kubeadm --showduplicates`

`[root@dm01 ~]# yum install kubeadm-1.19.9 kubectl-1.19.9`
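
Before going further it is worth confirming the new versions actually landed (both flags are standard):

```bash
# Confirm the upgraded tooling before driving the upgrade with it
kubeadm version -o short          # expect v1.19.9
kubectl version --client --short  # expect Client Version: v1.19.9
```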

Review the upgrade plan

`[root@dm01 ~]# kubeadm upgrade plan`
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.8
[upgrade/versions] kubeadm version: v1.19.9
I0326 10:39:13.933676   12080 version.go:255] remote version is much newer: v1.20.5; falling back to: stable-1.19
[upgrade/versions] Latest stable version: v1.19.9
[upgrade/versions] Latest stable version: v1.19.9
[upgrade/versions] Latest version in the v1.18 series: v1.18.17
[upgrade/versions] Latest version in the v1.18 series: v1.18.17

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.18.8   v1.18.17

Upgrade to the latest version in the v1.18 series:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.8   v1.18.17
kube-controller-manager   v1.18.8   v1.18.17
kube-scheduler            v1.18.8   v1.18.17
kube-proxy                v1.18.8   v1.18.17
CoreDNS                   1.7.0     1.7.0
etcd                      3.4.3-0   3.4.3-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.18.17

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.18.8   v1.19.9

Upgrade to the latest stable version:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.8   v1.19.9
kube-controller-manager   v1.18.8   v1.19.9
kube-scheduler            v1.18.8   v1.19.9
kube-proxy                v1.18.8   v1.19.9
CoreDNS                   1.7.0     1.7.0
etcd                      3.4.3-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.19.9

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

First use --dry-run to preview what the upgrade would do:

`[root@dm01 ~]# kubeadm upgrade apply v1.19.9 --config kubeadm-config.yaml --dry-run`   // note: pass the config file saved earlier via --config; it carries the previous cluster configuration plus the modified image repository
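
kubeadm can also show exactly how the static pod manifests would change, which complements the dry run (standard subcommand, nothing is applied):

```bash
# Diff the current static pod manifests against what v1.19.9 would write
kubeadm upgrade diff v1.19.9
```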

Once the dry run looks good, the upgrade can proceed. Pull the required images ahead of time:

`[root@dm01 ~]# kubeadm config images pull --config kubeadm-config.yaml `
W0326 10:42:34.394143   14217 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.19.9
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.19.9
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.19.9
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.19.9
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/k8sxio/coredns:1.7.0
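
Assuming Docker is the container runtime on these nodes, a quick listing confirms the images are local (not part of the official procedure, just a check):

```bash
# Verify the freshly pulled control-plane images
docker images | grep registry.aliyuncs.com/k8sxio
```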

Then run the actual upgrade:

`[root@dm01 ~]# kubeadm upgrade apply v1.19.9 --config kubeadm-config.yaml`
[upgrade/config] Making sure the configuration is correct:
W0326 10:45:24.222769   15899 common.go:94] WARNING: Usage of the --config flag with kubeadm config types for reconfiguring the cluster during upgrade is not recommended!
W0326 10:45:24.348052   15899 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.9"
[upgrade/versions] Cluster version: v1.18.8
[upgrade/versions] kubeadm version: v1.19.9
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.9"...
Static pod: kube-apiserver-dm01 hash: c96cc89dd2a469ab3cb99cefab0c8272
Static pod: kube-controller-manager-dm01 hash: 0ddc97d4db314c02f481301f85e32349
Static pod: kube-scheduler-dm01 hash: dc2f9c30c2d972efbe2ce45cf611390e
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-45-29/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: d7d8e22d7a5881f06ab297fe7e173b67
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests344416903"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-45-29/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-dm01 hash: c96cc89dd2a469ab3cb99cefab0c8272
Static pod: kube-apiserver-dm01 hash: c96cc89dd2a469ab3cb99cefab0c8272
Static pod: kube-apiserver-dm01 hash: f183b8963cde0fb805d7171f5af486b8
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-45-29/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-dm01 hash: 0ddc97d4db314c02f481301f85e32349
Static pod: kube-controller-manager-dm01 hash: 6bacc4dd0c352be2b3c3b40e6a9f92c2
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-45-29/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-dm01 hash: dc2f9c30c2d972efbe2ce45cf611390e
Static pod: kube-scheduler-dm01 hash: 357b7cba3cee5370b9c9a360984db687
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.9". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
// Seeing this SUCCESS message means the upgrade went through.
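
To double-check that the control-plane static pods on dm01 are really running the new images, their image tags can be queried (a small verification sketch; kubeadm names static pods `<component>-<node>`):

```bash
# Print the image used by each control-plane static pod on dm01
for c in kube-apiserver kube-controller-manager kube-scheduler etcd; do
    kubectl -n kube-system get pod "${c}-dm01" \
        -o jsonpath='{.spec.containers[0].image}{"\n"}'
done
```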

Upgrade kubelet and restart it

`[root@dm01 ~]# yum install kubelet-1.19.9-0`
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * elrepo: mirrors.tuna.tsinghua.edu.cn
 * epel: hkg.mirror.rackspace.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package kubelet.x86_64 0:1.19.2-0 will be updated
---> Package kubelet.x86_64 0:1.19.9-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================================================================================
 Package                           Arch                             Version                             Repository                            Size
===================================================================================================================================================
Updating:
 kubelet                           x86_64                           1.19.9-0                            kubernetes                            20 M

Transaction Summary
===================================================================================================================================================
Upgrade  1 Package

Total download size: 20 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
91b94430e5a7b65479ba816cf352514c857cc21bc4cd2c5019d76d62610c60ab-kubelet-1.19.9-0.x86_64.rpm                                |  20 MB  00:00:01     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubelet-1.19.9-0.x86_64                                                                                                         1/2 
  Cleanup    : kubelet-1.19.2-0.x86_64                                                                                                         2/2 
  Verifying  : kubelet-1.19.9-0.x86_64                                                                                                         1/2 
  Verifying  : kubelet-1.19.2-0.x86_64                                                                                                         2/2 

Updated:
  kubelet.x86_64 0:1.19.9-0                                                                                                                        

Complete!

Restart kubelet:
`[root@dm01 ~]# systemctl daemon-reload`
`[root@dm01 ~]# systemctl restart kubelet`

Check the versions; the first control-plane node is now upgraded

`[root@dm01 ~]# kubelet --version`
Kubernetes v1.19.9
`[root@dm01 ~]# kubectl get nodes`
NAME   STATUS                      ROLES    AGE    VERSION
dm01   Ready,SchedulingDisabled    master   181d   v1.19.9  // this node now reports the target version
dm02   Ready                       master   181d   v1.18.8
dm03   Ready                       master   181d   v1.18.8

Finally, make the upgraded node schedulable again. Note that this is an uncordon, not another drain:

`[root@dm01 ~]# kubectl uncordon dm01`

Upgrade the remaining control-plane nodes

Note: the procedure is the same on the remaining control-plane nodes, so only one is shown here.

Drain it and mark it unschedulable in the same way; the node being upgraded this time is dm03:

`[root@dm03 ~]# kubectl drain dm03 --ignore-daemonsets`

Download and install kubectl and kubeadm

`[root@dm03 ~]# yum install kubeadm-1.19.9 kubectl-1.19.9`

Pull the images first, just as before

`[root@dm03 ~]# kubeadm config images pull --config kubeadm-config.yaml`

Run the upgrade command

Note: because the configuration for the apiserver and the other components was already uploaded to the cluster's ConfigMap when the first master was upgraded, the remaining masters effectively just pull the new manifests and restart the relevant components; that is why `kubeadm upgrade node` is used here instead of `kubeadm upgrade apply`.
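
This can be confirmed by inspecting the ClusterConfiguration stored in the cluster (the same ConfigMap the upgrade log points at):

```bash
# The upgraded cluster configuration lives in the kubeadm-config ConfigMap
kubectl -n kube-system get cm kubeadm-config -o yaml
```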

`[root@dm03 ~]# kubeadm upgrade node` 
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.19.9"...
Static pod: kube-apiserver-dm03 hash: d652bc8e100387e8a97f80472f0ba79c
Static pod: kube-controller-manager-dm03 hash: 0ddc97d4db314c02f481301f85e32349
Static pod: kube-scheduler-dm03 hash: dc2f9c30c2d972efbe2ce45cf611390e
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-58-09/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: ca24216d24f4ae1163e30bc0ab353715
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests278745616"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-58-09/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-dm03 hash: d652bc8e100387e8a97f80472f0ba79c
Static pod: kube-apiserver-dm03 hash: d652bc8e100387e8a97f80472f0ba79c
Static pod: kube-apiserver-dm03 hash: d652bc8e100387e8a97f80472f0ba79c
Static pod: kube-apiserver-dm03 hash: 3def9f877b88fdbec114fd3304820a2c
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-58-09/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-dm03 hash: 0ddc97d4db314c02f481301f85e32349
Static pod: kube-controller-manager-dm03 hash: 6bacc4dd0c352be2b3c3b40e6a9f92c2
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-58-09/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-dm03 hash: dc2f9c30c2d972efbe2ce45cf611390e
Static pod: kube-scheduler-dm03 hash: 357b7cba3cee5370b9c9a360984db687
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Install the new kubelet and restart it

`[root@dm03 ~]# yum install kubelet-1.19.9-0`
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * elrepo: hkg.mirror.rackspace.com
 * epel: hkg.mirror.rackspace.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package kubelet.x86_64 0:1.19.2-0 will be updated
---> Package kubelet.x86_64 0:1.19.9-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================================================================================
 Package                           Arch                             Version                             Repository                            Size
===================================================================================================================================================
Updating:
 kubelet                           x86_64                           1.19.9-0                            kubernetes                            20 M

Transaction Summary
===================================================================================================================================================
Upgrade  1 Package

Total download size: 20 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
91b94430e5a7b65479ba816cf352514c857cc21bc4cd2c5019d76d62610c60ab-kubelet-1.19.9-0.x86_64.rpm                                |  20 MB  00:00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubelet-1.19.9-0.x86_64                                                                                                         1/2 
  Cleanup    : kubelet-1.19.2-0.x86_64                                                                                                         2/2 
  Verifying  : kubelet-1.19.9-0.x86_64                                                                                                         1/2 
  Verifying  : kubelet-1.19.2-0.x86_64                                                                                                         2/2 

Updated:
  kubelet.x86_64 0:1.19.9-0                                                                                                                        

Complete!
`[root@dm03 ~]# systemctl daemon-reload`
`[root@dm03 ~]# systemctl restart kubelet `

Make the node schedulable again

`[root@dm03 ~]# kubectl uncordon dm03`
node/dm03 uncordoned

Finally, confirm that all nodes have been upgraded

`[root@dm03 ~]# kubectl get nodes `
NAME   STATUS   ROLES    AGE    VERSION
dm01   Ready    master   181d   v1.19.9
dm02   Ready    master   181d   v1.19.9
dm03   Ready    master   181d   v1.19.9
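
It is also worth confirming that the components upgraded alongside the control plane (kube-proxy, CoreDNS) were rolled as well (the resource names below are the kubeadm defaults):

```bash
# kube-proxy runs as a DaemonSet, CoreDNS as a Deployment
kubectl -n kube-system get ds kube-proxy \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n kube-system get deploy coredns \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```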

Upgrade the worker nodes

Since this environment has no dedicated worker nodes, the concrete steps are not shown here. In fact, after the masters are upgraded, worker nodes need nothing special: the only component that has to be upgraded on a worker is the kubelet.

1. First run `kubeadm upgrade node` on the worker node; this pulls the up-to-date kubelet configuration from the cluster.

2. Then install the new kubelet and restart it.

3. As with the masters, do not forget to drain the worker before upgrading and to uncordon it once the upgrade is done (the whole sequence is consolidated in the sketch after this list).
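
Putting the three steps together, a worker upgrade might look like this (a minimal sketch assuming the same yum repo as above; `node01` is a hypothetical worker name, and the kubectl commands run from any machine with cluster-admin access):

```bash
# Drain the worker first (run wherever kubectl is configured)
kubectl drain node01 --ignore-daemonsets

# On the worker: upgrade kubeadm, then pull the new kubelet config
yum install -y kubeadm-1.19.9
kubeadm upgrade node

# Still on the worker: upgrade kubelet and restart it
yum install -y kubelet-1.19.9
systemctl daemon-reload && systemctl restart kubelet

# Make the worker schedulable again
kubectl uncordon node01
```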

For more detail, see the official kubeadm upgrade documentation.