1. Upgrading a kubeadm-Based Installation
To upgrade a k8s cluster, you must first upgrade kubeadm itself to the target k8s version; in other words, kubeadm is the admission ticket for a k8s upgrade.
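kubeadm only moves a cluster one minor version at a time (v1.20.x to v1.21.x is allowed, v1.20.x straight to v1.22.x is not). That rule can be sketched as a quick shell check; the `minor_gap` helper below is purely illustrative, not part of kubeadm:

```shell
#!/bin/bash
# minor_gap FROM TO: print the minor-version distance between two
# Kubernetes versions, e.g. minor_gap v1.20.14 v1.21.9 -> 1
minor_gap() {
    local from="${1#v}" to="${2#v}"
    local from_minor to_minor
    from_minor="${from#*.}"; from_minor="${from_minor%%.*}"
    to_minor="${to#*.}";     to_minor="${to_minor%%.*}"
    echo $(( to_minor - from_minor ))
}

# the upgrade path used later in this article: v1.20.14 -> v1.21.9
if [ "$(minor_gap v1.20.14 v1.21.9)" -le 1 ]; then
    echo "upgrade path OK"
fi
```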
1.1 Upgrade Preparation
On every k8s master node, upgrade the control-plane components: kube-controller-manager, kube-apiserver, kube-scheduler, and kube-proxy.
1.1.1 Verify the current k8s master version
[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.14", GitCommit:"57a3aa3f13699cf3db9c52d228c18db94fa81876", GitTreeState:"clean", BuildDate:"2021-12-15T14:51:22Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
1.1.2 Verify the current k8s node versions
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 20h v1.20.14
k8s-master02.example.local Ready control-plane,master 20h v1.20.14
k8s-master03.example.local Ready control-plane,master 20h v1.20.14
k8s-node01.example.local Ready <none> 20h v1.20.14
k8s-node02.example.local Ready <none> 20h v1.20.14
k8s-node03.example.local Ready <none> 20h v1.20.14
1.2 Upgrade the k8s Master Nodes
Upgrade each k8s master node in turn.
1.2.1 Review the upgrade plan
[root@k8s-master01 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.14
[upgrade/versions] kubeadm version: v1.21.9
W0125 18:50:01.026004 119208 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get "https://storage.googleapis.com/kubernetes-release/release/stable.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0125 18:50:01.026067 119208 version.go:103] falling back to the local client version: v1.21.9
[upgrade/versions] Target version: v1.21.9
[upgrade/versions] Latest version in the v1.20 series: v1.20.15
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 6 x v1.20.14 v1.20.15
Upgrade to the latest version in the v1.20 series:
COMPONENT CURRENT TARGET
kube-apiserver v1.20.14 v1.20.15
kube-controller-manager v1.20.14 v1.20.15
kube-scheduler v1.20.14 v1.20.15
kube-proxy v1.20.14 v1.20.15
CoreDNS 1.7.0 v1.8.0
etcd 3.4.13-0 3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.20.15
_____________________________________________________________________
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 6 x v1.20.14 v1.21.9
Upgrade to the latest stable version:
COMPONENT CURRENT TARGET
kube-apiserver v1.20.14 v1.21.9
kube-controller-manager v1.20.14 v1.21.9
kube-scheduler v1.20.14 v1.21.9
kube-proxy v1.20.14 v1.21.9
CoreDNS 1.7.0 v1.8.0
etcd 3.4.13-0 3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.21.9
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
1.2.2 Upgrade the k8s master nodes
master01
#CentOS
[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates | grep 1.20.*
Repository base is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
kubeadm.x86_64 1.20.0-0 kubernetes
kubeadm.x86_64 1.20.1-0 kubernetes
kubeadm.x86_64 1.20.2-0 kubernetes
kubeadm.x86_64 1.20.4-0 kubernetes
kubeadm.x86_64 1.20.5-0 kubernetes
kubeadm.x86_64 1.20.6-0 kubernetes
kubeadm.x86_64 1.20.7-0 kubernetes
kubeadm.x86_64 1.20.8-0 kubernetes
kubeadm.x86_64 1.20.9-0 kubernetes
kubeadm.x86_64 1.20.10-0 kubernetes
kubeadm.x86_64 1.20.11-0 kubernetes
kubeadm.x86_64 1.20.12-0 kubernetes
kubeadm.x86_64 1.20.13-0 kubernetes
kubeadm.x86_64 1.20.14-0 kubernetes
kubeadm.x86_64 1.20.15-0 kubernetes
#Ubuntu
root@k8s-master01:~# apt-cache madison kubeadm |grep 1.20.*
kubeadm | 1.20.15-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.14-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.13-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.12-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.11-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.10-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.9-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.8-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.7-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
#Install socat on ha01 and ha02
#CentOS
[root@k8s-ha01 ~]# yum -y install socat
#Ubuntu
root@k8s-ha01:~# apt -y install socat
#Take master01 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "disable server kubernetes-6443/172.31.3.101" | socat stdio /var/lib/haproxy/haproxy.sock'
#CentOS
[root@k8s-master01 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15
#Ubuntu
root@k8s-master01:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
[root@k8s-master01 ~]# kubeadm config images list --kubernetes-version v1.20.15
k8s.gcr.io/kube-apiserver:v1.20.15
k8s.gcr.io/kube-controller-manager:v1.20.15
k8s.gcr.io/kube-scheduler:v1.20.15
k8s.gcr.io/kube-proxy:v1.20.15
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
[root@k8s-master01 ~]# cat download_kubeadm_images_1.20-2.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_kubeadm_images.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KUBEADM_VERSION=1.20.15
images=$(kubeadm config images list --kubernetes-version=v${KUBEADM_VERSION} | awk -F "/" '{print $NF}')
HARBOR_DOMAIN=harbor.raymonds.cc
images_download(){
    ${COLOR}"Start downloading the kubeadm images"${END}
    for i in ${images};do
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"kubeadm images downloaded"${END}
}
images_download
[root@k8s-master01 ~]# bash download_kubeadm_images_1.20-2.sh
[root@k8s-master01 ~]# kubeadm upgrade apply v1.20.15
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.15"
[upgrade/versions] Cluster version: v1.20.14
[upgrade/versions] kubeadm version: v1.20.15
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.20.15"...
Static pod: kube-apiserver-k8s-master01.example.local hash: 2580447932d6e4d10decb9e9e70593ce
Static pod: kube-controller-manager-k8s-master01.example.local hash: 55914312d14f1d04628f11b3b59de589
Static pod: kube-scheduler-k8s-master01.example.local hash: 0f18c3e9299e8083d6aff52dd9fcab53
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master01.example.local hash: 658c8782b9bd9c52da0b94be666192d6
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests401207376"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-01-25-19-07-06/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master01.example.local hash: 2580447932d6e4d10decb9e9e70593ce
...(line repeated while kubeadm waits for the static pod hash to change)...
Static pod: kube-apiserver-k8s-master01.example.local hash: 9b2053cdff6353cc35c3abf3a2e091b2
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-01-25-19-07-06/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master01.example.local hash: 55914312d14f1d04628f11b3b59de589
...(line repeated while kubeadm waits for the static pod hash to change)...
Static pod: kube-controller-manager-k8s-master01.example.local hash: 4d05e725d6cdb548bee78744c22e0fb8
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-01-25-19-07-06/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master01.example.local hash: 0f18c3e9299e8083d6aff52dd9fcab53
...(line repeated while kubeadm waits for the static pod hash to change)...
Static pod: kube-scheduler-k8s-master01.example.local hash: aeca07c98134c1fa9650088b47670d3c
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.15". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet
#Bring master01 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "enable server kubernetes-6443/172.31.3.101" | socat stdio /var/lib/haproxy/haproxy.sock'
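The disable/enable pattern above repeats for every master, so it is easy to typo an IP. A tiny helper that only builds the haproxy runtime-API string keeps the commands uniform; `haproxy_cmd` is a hypothetical helper for this article's setup (backend `kubernetes-6443`, socket `/var/lib/haproxy/haproxy.sock`), not a haproxy built-in:

```shell
#!/bin/bash
# haproxy_cmd STATE IP: build the haproxy runtime-API command that
# takes a backend server out of (or puts it back into) rotation.
haproxy_cmd() {
    echo "$1 server kubernetes-6443/$2"
}

# usage against the load balancer from this setup (k8s-lb):
#   ssh root@k8s-lb "echo '$(haproxy_cmd disable 172.31.3.102)' \
#       | socat stdio /var/lib/haproxy/haproxy.sock"
haproxy_cmd disable 172.31.3.101
```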
master02
#Take master02 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "disable server kubernetes-6443/172.31.3.102" | socat stdio /var/lib/haproxy/haproxy.sock'
#CentOS
[root@k8s-master02 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15
#Ubuntu
root@k8s-master02:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
[root@k8s-master02 ~]# cat download_kubeadm_images_1.20-3.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_kubeadm_images.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KUBEADM_VERSION=1.20.15
images=$(kubeadm config images list --kubernetes-version=v${KUBEADM_VERSION} | awk -F "/" '{print $NF}')
HARBOR_DOMAIN=harbor.raymonds.cc
images_download(){
    ${COLOR}"Start downloading the kubeadm images"${END}
    for i in ${images};do
        docker pull ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"kubeadm images downloaded"${END}
}
images_download
[root@k8s-master02 ~]# bash download_kubeadm_images_1.20-3.sh
[root@k8s-master02 ~]# kubeadm upgrade apply v1.20.15
[root@k8s-master02 ~]# systemctl daemon-reload
[root@k8s-master02 ~]# systemctl restart kubelet
#Bring master02 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "enable server kubernetes-6443/172.31.3.102" | socat stdio /var/lib/haproxy/haproxy.sock'
master03
#Take master03 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "disable server kubernetes-6443/172.31.3.103" | socat stdio /var/lib/haproxy/haproxy.sock'
#CentOS
[root@k8s-master03 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15
#Ubuntu
root@k8s-master03:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
[root@k8s-master03 ~]# bash download_kubeadm_images_1.20-3.sh
[root@k8s-master03 ~]# kubeadm upgrade apply v1.20.15
[root@k8s-master03 ~]# systemctl daemon-reload
[root@k8s-master03 ~]# systemctl restart kubelet
#Bring master03 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "enable server kubernetes-6443/172.31.3.103" | socat stdio /var/lib/haproxy/haproxy.sock'
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 4d v1.20.15
k8s-master02.example.local Ready control-plane,master 4d v1.20.15
k8s-master03.example.local Ready control-plane,master 4d v1.20.15
k8s-node01.example.local Ready <none> 4d v1.20.14
k8s-node02.example.local Ready <none> 4d v1.20.14
k8s-node03.example.local Ready <none> 4d v1.20.14
[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:26:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:27:39Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:23:01Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.20.15
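After each batch of upgrades it is worth confirming that every node reports the expected version before moving on. The awk filter below is a small illustrative check (`check_versions` is not a kubectl feature) that can be fed the output of `kubectl get nodes`:

```shell
#!/bin/bash
# check_versions TARGET: read 'kubectl get nodes' output on stdin and
# print any node whose VERSION column differs from TARGET.
check_versions() {
    awk -v t="$1" 'NR > 1 && $NF != t { print $1 " still at " $NF; bad = 1 }
                   END { exit bad }'
}

# dry run against a captured sample (nodes not yet upgraded);
# a non-zero exit signals a mismatch, so mask it here
printf '%s\n' \
  'NAME STATUS ROLES AGE VERSION' \
  'k8s-master01.example.local Ready control-plane,master 4d v1.20.15' \
  'k8s-node01.example.local Ready <none> 4d v1.20.14' \
  | check_versions v1.20.15 || true
```

In practice you would pipe the live output: `kubectl get nodes | check_versions v1.20.15`.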
1.3 Upgrade Calico
[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -O
[root@k8s-master01 ~]# vim calico-etcd.yaml
...
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: OnDelete    # change this: with OnDelete, calico will not rolling-update; a pod is only replaced (and thus upgraded) after its kubelet is restarted
  template:
    metadata:
      labels:
        k8s-app: calico-node
...
Modify calico-etcd.yaml in the following places:
[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml
etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
[root@k8s-master01 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"#g' calico-etcd.yaml
[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml
etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"
[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
# etcd-key: null
# etcd-cert: null
# etcd-ca: null
[root@k8s-master01 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdzFyRE5lanVxbnVDRVBEa01xdjlmcVNWeHhmSzQvQ0wrY2Z3YnY5WHovSS9ScHZmCmNtRjdUM0wzYnBDNS9JMjVuVlZ6N2plb2JUZnN2RFhDWjVIQml1MkYrSzR1SHdna3dtREJYWjZzSUd6clNSS1oKRTdldWhxN1F3SWVqOWtTcTN2RjA5d1NsbWJJeFJaNTZmMXgvYXVMcnFpNUZnanBmWUhMWit3MVdBcUM3WWtVWgoraHoxN3pFRjVnaWlYWE9ucGR1c2VCZDZoVHd1T2JwNmg0clpEb0IrY1VMMXplOTNPTFJGbDBpL2ppRzFzMGw2CmlUaUgwU3d1SFlXQ1NXNWZ5NzBjVDFseUduWG5jNUFwOGRYZWt5dnZYRFBzNEVnVVcvbUczWlpoalAreC9DWTcKNGp6M2dRZVFHYitSMUdRb1djdlh4UGZneGsxK2IwK01ZUjFmVXdJREFRQUJBb0lCQUYwWTQrN05FdkFyNjlBbQovSmtwWGFUOHltUVc4cG11Q1FjQVFaU2tHelQrUFNscEh4TmpZV0I3YVc5SGlWclNMNkxMRm5Sd0VkUDYwdGJlCng4YVRyNmlGaVZMNXJ3RWE0R25Cc21Uck9SdzZ5K1lHOXV4dW5MMlNrZWt1dXZTaHhNeDZSVU55ODNoTGN5KzYKVnFaYmJsMkJ4czFUUDh6UUJLUHlGKytNYTNEVVV1RnhNZ0h3cnFOYVA5bzdOaW9ycTFzZTdHV2F1dHNXSW1HZwpLZjJDREU5SVlHbGQyd1pnWCtoVWhQOU1UcWVKVGJrQVJiUEVsS1BLUDBSeUxaNi9tcE42K0VONGovS0NHeW9PCmNmTzUrazlpUlpwYytldnhpelZkNXNyWER2azBlK1pyU3Y2eUhtV0hpaUxCMm9PN0hkMUVKbjhQa09scE1ISjcKU0hoTzBBRUNnWUVBN29DMFZicmVQNXhZRjUyalI2RUtDUkVLR0lWcExrTTBuNUpSRUdpNElXcWNyWEJ1eGhIYwpQWVgzdnJKRWM5Nm8va28rZTRVNk94cWFISjZRSGNVODFCRTcyc0d0RkY2WHlnNlRkSjRJd2ZIenpKNjlDK2JtCmRhSlNqbG1UeE9GOEhNSkpjdUt3RGRxOFlLNlRHZzN0MXJTcVNtczMzV1BxdG9zbW5Takp0cThDZ1lFQTBhK3kKTGxIWWl5U2NVSG0vY3hYWEZSVkNUWjJqaWo0MzF4MXNkNFE3amc0MlQ0dmNCUElsUmJGQjdRYjVLallZUjZkYQp2cGJMV0hLbE1OblUrM1dvcnJnNjNWNlZwa3lCck1VcnNSemlQaUIxM1lXVENsZjUwdDJERVZ5dS9aWDZPc2FuCjY4MDJwRFc0YnhKcmNPam9aM3BjUm9Fcy96N0RGKzArZStseWlwMENnWUVBdXR2WGJmdDBPUDR5L24yZytYT3cKT3g1QWZLbTVtR2RMQ1dKSFpNWEd6VmVMM1U3ald3ZVBPQnlIMTc0dloyQ2hvbWxrdnIzSXU1bkIrSDQ2aHppSwp5ZE9ldzJ0T1FWRkROeWxvV2N1ZkxPUjFrSEVseC9kbHcvQWpJaWdJWUE0UmdTNnZBUFdkM1p6c1RnczRjUWRNCnVoVGQvbVEyWnB2cnZvMFMrYnFGSHowQ2dZRUFnVnN3UXQ3L0JhZktQdU04dGxTczRUYkNObnVmWGpOUDQ0Y2wKV1AzY2Q2QlE1UFhVLzhBYU9rcEY3Mkd6Nk5TQ1dnSG1PMWx2ak5yOUNZdjRsa0JabFovVndLY1BEdzUzbVF2eQpEa3RSVHg1YldCT0ZTSVpKZWtwcEJ4YjBaVUJXcEZmVlUrUy9ac0kxUzJCRG85NHJNVnNNL2ZuR3RwZ1RadmxXCjZMNTFpUWtDZ1lBUkVRSElYTmhlYW1RSFE1TEpicE
ZzMFltSzRVZDluL2w1Vng1MEdUbG0vUEx5VlBWWU9TUWUKenYyYS96RHY2dVJ6ZGROU0tpSkFMVUJDZG5RSDRraklBWGg3NDBTQXNGUDZraW4zNm11RDB4RTlEODBOMlNyMgpDL3hQWHdINWp0Ry9jUkdHZGU4SGdjQTg4NkFKYkMyenlxYURpY3h1ejRQcll4Z2dPNG9iTmc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURRVENDQWltZ0F3SUJBZ0lJR2VNdy9ETExMK2t3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSFpYUmpaQzFqWVRBZUZ3MHlNakF4TWpZeE16TTFOVGRhRncweU16QXhNall4TXpNMU5UZGFNQmN4RlRBVApCZ05WQkFNVERHczRjeTF0WVhOMFpYSXdNVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBTU5hd3pYbzdxcDdnaER3NURLci9YNmtsY2NYeXVQd2kvbkg4RzcvVjgveVAwYWIzM0poZTA5eTkyNlEKdWZ5TnVaMVZjKzQzcUcwMzdMdzF3bWVSd1lydGhmaXVMaDhJSk1KZ3dWMmVyQ0JzNjBrU21STzNyb2F1ME1DSApvL1pFcXQ3eGRQY0VwWm15TVVXZWVuOWNmMnJpNjZvdVJZSTZYMkJ5MmZzTlZnS2d1MkpGR2ZvYzllOHhCZVlJCm9sMXpwNlhickhnWGVvVThMam02ZW9lSzJRNkFmbkZDOWMzdmR6aTBSWmRJdjQ0aHRiTkplb2s0aDlFc0xoMkYKZ2tsdVg4dTlIRTlaY2hwMTUzT1FLZkhWM3BNcjcxd3o3T0JJRkZ2NWh0MldZWXovc2Z3bU8rSTg5NEVIa0JtLwprZFJrS0ZuTDE4VDM0TVpOZm05UGpHRWRYMU1DQXdFQUFhT0JsVENCa2pBT0JnTlZIUThCQWY4RUJBTUNCYUF3CkhRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQjhHQTFVZEl3UVlNQmFBRk1Ha3dwKy8KZGdBTjRHdnRDT0ZrbE5OSnBCSkNNRUFHQTFVZEVRUTVNRGVDREdzNGN5MXRZWE4wWlhJd01ZSUpiRzlqWVd4bwpiM04waHdTc0h3Tmxod1IvQUFBQmh4QUFBQUFBQUFBQUFBQUFBQUFBQUFBQk1BMEdDU3FHU0liM0RRRUJDd1VBCkE0SUJBUUFMOE53a0I5aUprc0RnOGp6dFRTTjB4U3pyaXQyK242M2QrV0dGS3l1K2d6Z2pTLzZaOXhYQkpZN3YKL2c1SEZrUnpxTmJXTDdoV0dtY1ZPUGJpQmNpZnJtcmpFNUFMdzhPNmZBVGg2V3RtaVN4RlRwa1Nhc3R5OW82RApJcGlmYzhSTS8rSS9EVWdTQXQ3ZzFucUJodjlxdnFSRWNiM1J1SmRYWTJjNi90LzNZb3gzTUFmVzNJaUVDNUorCkNTSXl2UUtmUDlBWVlXK2F4Y1dQelhXNzEwUVdNTnozZXVQMzJqZENkanBzbFVLNldpaHJQYjdnaURTdDdFVFYKWk5EeEh4NUp3WXlpYmFxbGQzQUlicFhNRmxnY2NubWttM0pwWnIrTUI4bGlGYThHZlU5L005N1ZueXFZN0huNgpDNkdXTWlJNWFvc0lGaE9INUJ3NFFNa0NzSXlvCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQWNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkbGRHTmsKTFdOaE1CNFhEVEl5TURFeU5qRXpNelUxTjFvWERUTXlNREV5TkRFek16VTFOMW93RWpFUU1BNEdBMVVFQXhNSApaWFJqWkMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5OcnNlMnd1aG45CnVKazdpOEZLN1A5SHU1YlIwOEY5R0VQTEtBelg0NHdKSE45QjZ1MTBrZmNoL0dlRTBremErQjhJejQ1UjRrdmUKbVpVYS9XbmVXdjNxRm1odURiTEdvU1B1Ykl1aXh1aTkwdllzbjBaRUJCV2dMbWdRNkdHNnd5OWFNbG55VGlWYworOTdSNTg2b3dMVGRTU3NiNjd2c0w0U2U0U2lXOHdTQTQ2K3FXSEJKNHc5Q2s2QXljam9vbDBMbXREVkJ1QlpqCjlNeWdDbUE4M3lkTnV4eUhDSGJpM2FRdkovVUNyQnoyNk5zYTVha1NlMlRQNGJ1US9PWjBIYnhsNUE5NXIyeGgKNkM1NGx3cHFLeTkxb2craWQ1ZlZMRFVVdDR0d1pvd0dITnZxMWRrRnI3VjA2SDJjdXo0eXlMajQ0a0xPNk9LMgo4OGplaWhBREhiY0NBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGTUdrd3ArL2RnQU40R3Z0Q09Ga2xOTkpwQkpDTUEwR0NTcUdTSWIzRFFFQkN3VUEKQTRJQkFRQTI5SWlldmxmdk9MSmFCOGlNR3ZLTmVXMExGZkhkTGRoeUl3T2dDeThjaHNaTVR5SjhXdzFrUUVKUgozbTY0MGVHK2UvOTE0QmE5Wk5iL0NQMkN1eHA2MkQrVWl0U0FjS001NWtTNURuVEVrcURwbVdETjdLTjhISk1QCkcwdlRXYnNrVTVicXJqb0JVQVNPNUsxeDl4WENSUDU2elBVZ3E5QTY4SmM4N1Mya29PNk56Mm53ZE9zc042TW0KRzFNQmdHQ2lqQXB3MDZJM2NuT1ExcFFhVk1RNVovT0tDSEoyTFFFUFJISVZqb2E4clBBcmNyYXFGUnpPeTk4agpOc3FxcWYvNDhMamVwZDZvOFlZc08zRng2M3c2YmhaOG94WDFxT090WTRlQ0pPeWRTZkRMR21tYkMrc1ozZlJiCjU0RkVLQ1RWKzhqQjBYNmZJYjl2OHg3WU5MNFgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
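A quick sanity check on the values just written into the Secret: the `base64 | tr -d '\n'` step must produce single-line data that still decodes back to the original PEM, since that is the form Kubernetes expects in a Secret. A minimal round-trip illustration on dummy data (not the article's real certificates):

```shell
#!/bin/bash
# Secret data must be single-line base64; verify that stripping the
# newlines with tr still round-trips to the original bytes.
pem='-----BEGIN TEST-----
dGVzdA==
-----END TEST-----'
enc=$(printf '%s' "$pem" | base64 | tr -d '\n')
dec=$(printf '%s' "$enc" | base64 -d)
[ "$dec" = "$pem" ] && echo "round-trip OK"
```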
[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml
etcd_ca: "" # "/calico-secrets/etcd-ca"
etcd_cert: "" # "/calico-secrets/etcd-cert"
etcd_key: "" # "/calico-secrets/etcd-key"
[root@k8s-master01 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml
etcd_ca: "/calico-secrets/etcd-ca" # "/calico-secrets/etcd-ca"
etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
etcd_key: "/calico-secrets/etcd-key" # "/calico-secrets/etcd-key"
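The three `sed` substitutions above can be dry-run against a throwaway fragment before touching the real manifest. This is a minimal sketch; `/tmp/calico-frag.yaml` is a hypothetical test file, not the actual calico-etcd.yaml:

```shell
# Create a fragment with the same placeholder lines as calico-etcd.yaml (hypothetical file).
cat > /tmp/calico-frag.yaml <<'EOF'
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"
EOF

# Apply the same substitutions as in the article, one expression per key.
sed -i -e 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g' \
       -e 's#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g' \
       -e 's#etcd_key: ""#etcd_key: "/calico-secrets/etcd-key"#g' /tmp/calico-frag.yaml

# Every line should now point at the mounted secret path.
grep -c '/calico-secrets/' /tmp/calico-frag.yaml   # prints 3
```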
[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# echo $POD_SUBNET
192.168.0.0/12
# Note: the step below changes the subnet under CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod subnet, i.e. replaces 192.168.x.x/16 with your cluster's subnet, and uncomments the lines:
[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml
- name: CALICO_IPV4POOL_CIDR
value: 192.168.0.0/12
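The `POD_SUBNET` extraction works on any line of the form `--cluster-cidr=<subnet>`: `awk -F=` takes everything after the last `=`. A quick local check against a hypothetical fragment of kube-controller-manager.yaml:

```shell
# Simulate the relevant lines from kube-controller-manager.yaml (hypothetical fragment).
cat > /tmp/kcm-frag.yaml <<'EOF'
    - --allocate-node-cidrs=true
    - --cluster-cidr=192.168.0.0/12
EOF

# Same extraction as in the article: last '='-separated field of the matching line.
POD_SUBNET=$(grep cluster-cidr= /tmp/kcm-frag.yaml | awk -F= '{print $NF}')
echo "$POD_SUBNET"   # 192.168.0.0/12
```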
[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
image: docker.io/calico/cni:v3.21.4
image: docker.io/calico/pod2daemon-flexvol:v3.21.4
image: docker.io/calico/node:v3.21.4
image: docker.io/calico/kube-controllers:v3.21.4
Download the calico images and push them to harbor
[root@k8s-master01 ~]# cat download_calico_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_calico_images.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
# Collect the image names (last path component) referenced in calico-etcd.yaml
images=$(awk -F "/" '/image:/{print $NF}' calico-etcd.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"Downloading Calico images"${END}
    for i in ${images};do
        # Pull from the mirror registry, retag for the local harbor, then push
        docker pull registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Calico images downloaded"${END}
}

images_download
[root@k8s-master01 ~]# bash download_calico_images.sh
[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' calico-etcd.yaml
[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/kube-controllers:v3.21.4
[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
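After applying the manifest, the rollout can be watched instead of polled by hand. A minimal sketch; it needs kubectl access to the cluster, so the helper is only defined here, not executed:

```shell
# Wait for calico to finish rolling out (hypothetical helper).
wait_calico() {
    kubectl -n kube-system rollout status ds/calico-node --timeout=300s &&
    kubectl -n kube-system rollout status deploy/calico-kube-controllers --timeout=300s
}
# Usage: wait_calico
```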
#Take master01 out of the haproxy backend
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "disable server kubernetes-6443/172.31.3.101" | socat stdio /var/lib/haproxy/haproxy.sock'
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master01
calico-node-q4dg7 1/1 Running 0 65m 172.31.3.101 k8s-master01 <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-q4dg7 -n kube-system -o yaml |grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
#The image has not been upgraded yet
[root@k8s-master01 ~]# kubectl delete pod calico-node-q4dg7 -n kube-system
pod "calico-node-q4dg7" deleted
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master01
calico-node-xngd8 0/1 PodInitializing 0 4s 172.31.3.101 k8s-master01 <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-xngd8 -n kube-system -o yaml |grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
#Bring master01 back into the haproxy backend
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "enable server kubernetes-6443/172.31.3.101" | socat stdio /var/lib/haproxy/haproxy.sock'
#Take master02 out of the haproxy backend
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "disable server kubernetes-6443/172.31.3.102" | socat stdio /var/lib/haproxy/haproxy.sock'
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master02
calico-node-v8nqp 1/1 Running 0 69m 172.31.3.102 k8s-master02.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-v8nqp -n kube-system -o yaml |grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-v8nqp -n kube-system
pod "calico-node-v8nqp" deleted
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master02
calico-node-n76qk 1/1 Running 0 27s 172.31.3.102 k8s-master02.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-n76qk -n kube-system -o yaml |grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
#Bring master02 back into the haproxy backend
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "enable server kubernetes-6443/172.31.3.102" | socat stdio /var/lib/haproxy/haproxy.sock'
#Take master03 out of the haproxy backend
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "disable server kubernetes-6443/172.31.3.103" | socat stdio /var/lib/haproxy/haproxy.sock'
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master03
calico-node-4mdv6 1/1 Running 0 71m 172.31.3.103 k8s-master03.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-4mdv6 -n kube-system -o yaml |grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-4mdv6 -n kube-system
pod "calico-node-4mdv6" deleted
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master03
calico-node-qr67n 0/1 Init:0/2 0 4s 172.31.3.103 k8s-master03.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-qr67n -n kube-system -o yaml |grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
#Bring master03 back into the haproxy backend
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb 'echo "enable server kubernetes-6443/172.31.3.103" | socat stdio /var/lib/haproxy/haproxy.sock'
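The per-master sequence above (disable the backend in haproxy, delete the old calico-node pod so the DaemonSet recreates it with the new image, re-enable the backend) is identical for all three masters, so it can be wrapped in a helper. This is a hedged sketch assuming the hostnames and the k8s-lb haproxy socket used in this article; it needs cluster and ssh access, so it is only defined here, not run:

```shell
# Refresh calico-node on one master sitting behind the haproxy VIP (hypothetical helper).
refresh_calico_master() {
    local ip=$1 node=$2
    ssh -o StrictHostKeyChecking=no root@k8s-lb \
        "echo 'disable server kubernetes-6443/${ip}' | socat stdio /var/lib/haproxy/haproxy.sock"
    # Delete the old calico-node pod on this node; the DaemonSet recreates it with the new image.
    kubectl -n kube-system get pod -o wide \
        | awk -v n="$node" '/calico-node/ && $0 ~ n {print $1}' \
        | xargs -r kubectl -n kube-system delete pod
    # Give the replacement pod time to become Ready before re-enabling the backend.
    kubectl -n kube-system rollout status ds/calico-node --timeout=300s
    ssh -o StrictHostKeyChecking=no root@k8s-lb \
        "echo 'enable server kubernetes-6443/${ip}' | socat stdio /var/lib/haproxy/haproxy.sock"
}
# Usage: refresh_calico_master 172.31.3.101 k8s-master01
```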
1.4 Upgrade the k8s node versions
Upgrade the worker nodes one at a time: drain the node, upgrade kubeadm and kubelet, restart kubelet, then uncordon.
[root@k8s-master01 ~]# kubectl drain k8s-node01.example.local --delete-emptydir-data --force --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 4d v1.20.15
k8s-master02.example.local Ready control-plane,master 4d v1.20.15
k8s-master03.example.local Ready control-plane,master 4d v1.20.15
k8s-node01.example.local Ready,SchedulingDisabled <none> 4d v1.20.14
k8s-node02.example.local Ready <none> 4d v1.20.14
k8s-node03.example.local Ready <none> 4d v1.20.14
#CentOS
[root@k8s-node01 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15
#Ubuntu
root@k8s-node01:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00
[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl restart kubelet
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node01
kube-system calico-node-fzgq7 1/1 Running 0 46m 172.31.3.108 k8s-node01.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-fzgq7 -n kube-system -o yaml |grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-fzgq7 -n kube-system
pod "calico-node-fzgq7" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node01
kube-system calico-node-dqk5p 0/1 Init:1/2 0 7s 172.31.3.108 k8s-node01.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-dqk5p -n kube-system -o yaml |grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
[root@k8s-master01 ~]# kubectl uncordon k8s-node01.example.local
node/k8s-node01.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 4d v1.20.15
k8s-master02.example.local Ready control-plane,master 4d v1.20.15
k8s-master03.example.local Ready control-plane,master 4d v1.20.15
k8s-node01.example.local Ready <none> 4d v1.20.15
k8s-node02.example.local Ready <none> 4d v1.20.14
k8s-node03.example.local Ready <none> 4d v1.20.14
[root@k8s-master01 ~]# kubectl drain k8s-node02.example.local --delete-emptydir-data --force --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 4d v1.20.15
k8s-master02.example.local Ready control-plane,master 4d v1.20.15
k8s-master03.example.local Ready control-plane,master 4d v1.20.15
k8s-node01.example.local Ready <none> 4d v1.20.15
k8s-node02.example.local Ready,SchedulingDisabled <none> 4d v1.20.14
k8s-node03.example.local Ready <none> 4d v1.20.14
#CentOS
[root@k8s-node02 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15
#Ubuntu
root@k8s-node02:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00
[root@k8s-node02 ~]# systemctl daemon-reload
[root@k8s-node02 ~]# systemctl restart kubelet
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node02 | tail -n1
kube-system calico-node-ktmc9 1/1 Running 0 48m 172.31.3.109 k8s-node02.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-ktmc9 -n kube-system -o yaml| grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-ktmc9 -n kube-system
pod "calico-node-ktmc9" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node02 | tail -n1
kube-system calico-node-p8czc 0/1 PodInitializing 0 8s 172.31.3.109 k8s-node02.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-p8czc -n kube-system -o yaml| grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
[root@k8s-master01 ~]# kubectl uncordon k8s-node02.example.local
node/k8s-node02.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 4d v1.20.15
k8s-master02.example.local Ready control-plane,master 4d v1.20.15
k8s-master03.example.local Ready control-plane,master 4d v1.20.15
k8s-node01.example.local Ready <none> 4d v1.20.15
k8s-node02.example.local Ready <none> 4d v1.20.15
k8s-node03.example.local Ready <none> 4d v1.20.14
[root@k8s-master01 ~]# kubectl drain k8s-node03.example.local --delete-emptydir-data --force --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 4d v1.20.15
k8s-master02.example.local Ready control-plane,master 4d v1.20.15
k8s-master03.example.local Ready control-plane,master 4d v1.20.15
k8s-node01.example.local Ready <none> 4d v1.20.15
k8s-node02.example.local Ready <none> 4d v1.20.15
k8s-node03.example.local Ready,SchedulingDisabled <none> 4d v1.20.14
#CentOS
[root@k8s-node03 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15
#Ubuntu
root@k8s-node03:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00
[root@k8s-node03 ~]# systemctl daemon-reload
[root@k8s-node03 ~]# systemctl restart kubelet
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node03 | tail -n1
kube-system calico-node-922s8 1/1 Running 0 51m 172.31.3.110 k8s-node03.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-922s8 -n kube-system -o yaml| grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-922s8 -n kube-system
pod "calico-node-922s8" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node03 | tail -n1
kube-system calico-node-j9f2s 0/1 Init:0/2 0 5s 172.31.3.110 k8s-node03.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-j9f2s -n kube-system -o yaml| grep "image:"
f:image: {}
f:image: {}
f:image: {}
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
- image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
image: harbor.raymonds.cc/google_containers/node:v3.21.4
image: harbor.raymonds.cc/google_containers/cni:v3.21.4
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.21.4
[root@k8s-master01 ~]# kubectl uncordon k8s-node03.example.local
node/k8s-node03.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 4d v1.20.15
k8s-master02.example.local Ready control-plane,master 4d v1.20.15
k8s-master03.example.local Ready control-plane,master 4d v1.20.15
k8s-node01.example.local Ready <none> 4d v1.20.15
k8s-node02.example.local Ready <none> 4d v1.20.15
k8s-node03.example.local Ready <none> 4d v1.20.15
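The same drain → upgrade kubelet → restart → uncordon cycle repeats for every worker, so it can also be scripted. A sketch assuming the CentOS package names and hostnames from this article; it needs cluster and ssh access, so the helper is only defined, not executed:

```shell
# Drain, upgrade and uncordon one worker node (hypothetical helper, CentOS flavour).
upgrade_node() {
    local node=$1                      # e.g. k8s-node01.example.local
    kubectl drain "$node" --delete-emptydir-data --force --ignore-daemonsets
    # Upgrade kubeadm/kubelet on the node and restart the kubelet service.
    ssh "root@${node}" 'yum -y install kubeadm-1.20.15 kubelet-1.20.15 &&
                        systemctl daemon-reload && systemctl restart kubelet'
    kubectl uncordon "$node"
}
# Usage: upgrade_node k8s-node01.example.local
```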
1.5 Upgrade metrics-server
[root@k8s-master01 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
#Modify it as follows
[root@k8s-master01 ~]# vim components.yaml
...
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls #add this line
...
[root@k8s-master01 ~]# grep "image:" components.yaml
image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
[root@k8s-master01 ~]# cat download_metrics_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_metrics_images.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
# Collect the image names (last path component) referenced in components.yaml
images=$(awk -F "/" '/image:/{print $NF}' components.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"Downloading Metrics images"${END}
    for i in ${images};do
        # Pull from the mirror registry, retag for the local harbor, then push
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Metrics images downloaded"${END}
}

images_download
[root@k8s-master01 ~]# bash download_metrics_images.sh
[root@k8s-master01 ~]# docker images |grep metrics
harbor.raymonds.cc/google_containers/metrics-server v0.5.2 f73640fb5061 8 weeks ago 64.3MB
[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' components.yaml
[root@k8s-master01 ~]# grep "image:" components.yaml
image: harbor.raymonds.cc/google_containers/metrics-server:v0.5.2
[root@k8s-master01 ~]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Check the status
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep metrics
metrics-server-545b8b99c6-25csw 1/1 Running 0 45s
[root@k8s-master01 ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01.example.local 152m 7% 1066Mi 27%
k8s-master02.example.local 136m 6% 1002Mi 26%
k8s-master03.example.local 143m 7% 1127Mi 29%
k8s-node01.example.local 65m 3% 651Mi 17%
k8s-node02.example.local 83m 4% 700Mi 18%
k8s-node03.example.local 76m 3% 666Mi 17%
1.6 Upgrade dashboard
Official GitHub address: github.com/kubernetes/…
The latest dashboard release can be found on the official dashboard GitHub page.
root@k8s-master01:~# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
[root@k8s-master01 ~]# vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort #add this line
ports:
- port: 443
targetPort: 8443
nodePort: 30005 #add this line
selector:
k8s-app: kubernetes-dashboard
...
[root@k8s-master01 ~]# grep "image:" recommended.yaml
image: kubernetesui/dashboard:v2.4.0
image: kubernetesui/metrics-scraper:v1.0.7
[root@k8s-master01 ~]# cat download_dashboard_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_dashboard_images.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
# Collect the image names (last path component) referenced in recommended.yaml
images=$(awk -F "/" '/image:/{print $NF}' recommended.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"Downloading Dashboard images"${END}
    for i in ${images};do
        # Pull from the mirror registry, retag for the local harbor, then push
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Dashboard images downloaded"${END}
}

images_download
[root@k8s-master01 ~]# bash download_dashboard_images.sh
[root@k8s-master01 ~]# docker images |grep -E "(dashboard|metrics-scraper)"
harbor.raymonds.cc/kubernetesui/dashboard v2.4.0 72f07539ffb5 2 months ago 221MB
harbor.raymonds.cc/kubernetesui/metrics-scraper v1.0.7 7801cfc6d5c0 7 months ago 34.4MB
[root@k8s-master01 ~]# sed -ri 's@(.*image:) kubernetesui(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' recommended.yaml
[root@k8s-master01 ~]# grep "image:" recommended.yaml
image: harbor.raymonds.cc/google_containers/dashboard:v2.4.0
image: harbor.raymonds.cc/google_containers/metrics-scraper:v1.0.7
[root@k8s-master01 ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
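With `type: NodePort` and `nodePort: 30005` set above, the UI answers HTTPS on port 30005 of any node. A quick check sketch; the helper needs a reachable cluster node, so it is only defined here, and the default IP is just an example from this article:

```shell
# Probe the dashboard NodePort over TLS on a node IP (hypothetical helper).
check_dashboard() {
    local node_ip=${1:-172.31.3.101}
    # -k skips certificate verification (the dashboard uses a self-signed cert).
    curl -sk -o /dev/null -w '%{http_code}\n' "https://${node_ip}:30005/"
}
# Usage: check_dashboard 172.31.3.101
```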
Create the admin user manifest admin.yaml
[root@k8s-master01 ~]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
[root@k8s-master01 ~]# kubectl apply -f admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-mlzc8
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 8e8d6838-f344-4701-85d3-21e39205a77c
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZMdGRxbV9rX1hsQ0dtT2J1dHlDd1lwQVJORnpKY21Yc0JKYlVXaGlfaG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLW1semM4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4ZThkNjgzOC1mMzQ0LTQ3MDEtODVkMy0yMWUzOTIwNWE3N2MiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.dFe6Y-rRNEYWvVK-VNphz4N_tkCNHCG_uRt9iNhdCmtYcD5yy21iYcDjAWMVvmFuyn0QDnUlquPyl3WoASVc91BOKWNgdNkOrFEFKoP32YdgaurnRBkXMDgkAUJXQT-2vekO56UiQtoxK87DVSmFksTAFXlc7zw1VJRE1g10ZiNVTcl-omOiMPvdk5RIjs-Uk859p70_O1oC8Ep-JzBYWCilX2ymNUNNeh4lyt1Fo8Li4N0JLwzQLJgfHfjoQwpd4Irj2agMQ-BW4xT70HsJW4cUt1sJ29cnO1RfhxM8-w-6wBPnGwkJTSre4GfMrjnJoVFN2cbjQg4N0ud_MQMXcw
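Instead of reading the token out of `kubectl describe`, it can be pulled non-interactively with jsonpath (the secret data is base64-encoded). A minimal sketch assuming the same admin-user ServiceAccount in kube-system; it needs cluster access, so the helper is only defined here:

```shell
# Print the bearer token of the admin-user ServiceAccount (hypothetical helper).
get_admin_token() {
    local secret
    secret=$(kubectl -n kube-system get secret | awk '/admin-user/{print $1}')
    kubectl -n kube-system get secret "$secret" -o jsonpath='{.data.token}' | base64 -d
}
# Usage: get_admin_token   (paste the output into the dashboard login page)
```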