- docker
- harbor
- 03 K8S Installation Preparation
0. IP Planning
| Node | Pod IP range | In-cluster Service IP range |
|---|---|---|
| master | --pod-network-cidr=172.30.0.0/16 | --service-cidr=10.254.0.0/16 |
| node1 | | |
| node2 | | |
1. kubeadm Installation
1.1 Install kubeadm/kubectl/kubelet 1.22.2 [master/node1/node2]
- kubeadm: the one-command bootstrap tool for a Kubernetes cluster; it simplifies installation by deploying the core components and add-ons as pods
- kubelet: the node agent that runs on every node; the cluster operates each node's containers through it. Because it must manipulate host resources directly, it is not placed in a pod but installed as a system service (see the quick check after this list)
- kubectl: the Kubernetes command-line tool; it connects to the api-server to carry out all kinds of operations against the cluster
- kubernetes-cni: Kubernetes' virtual network device; it creates a virtual cni0 bridge on the host for pod-to-pod communication, similar in role to docker0
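Since kubelet is the only one of these that runs as a system service, a quick sanity check after the installation in 1.1 is to ask systemd for its unit and confirm the client binaries respond; a minimal sketch (commands only, the unit path comes from the rpm):
# kubelet is driven by systemd, not by Kubernetes itself
systemctl cat kubelet | head -n 5
# kubeadm and kubectl are plain client binaries
kubeadm version -o short
kubectl version --client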
[root@VM-16-14-centos ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@VM-16-6-centos ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@VM-16-4-centos ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
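The same repo file is typed three times above; since ansible is already configured for these hosts (it is used below), the file could instead be written once locally and pushed to all of them — a sketch, assuming a local ./kubernetes.repo with the content above and the master,node1,node2 inventory groups used later:
ansible master,node1,node2 -m copy -a "src=./kubernetes.repo dest=/etc/yum.repos.d/kubernetes.repo"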
[root@VM-16-4-centos ~]# yum info kubeadm
Repository epel is listed more than once in the configuration
Last metadata expiration check: 0:00:18 ago on Sun 24 Oct 2021 13:26:36.
Available Packages
Name         : kubeadm
Version      : 1.22.2
Release      : 0
Architecture : x86_64
Size         : 9.3 M
Source       : kubelet-1.22.2-0.src.rpm
Repository   : kubernetes
Summary      : Command-line utility for administering a Kubernetes cluster.
URL          : https://kubernetes.io
License      : ASL 2.0
Description  : Command-line utility for administering a Kubernetes cluster.
[root@VM-16-4-centos ~]# yum info kubelet
Repository epel is listed more than once in the configuration
Last metadata expiration check: 0:00:38 ago on Sun 24 Oct 2021 13:26:36.
Available Packages
Name         : kubelet
Version      : 1.22.2
Release      : 0
Architecture : x86_64
Size         : 23 M
Source       : kubelet-1.22.2-0.src.rpm
Repository   : kubernetes
Summary      : Container cluster management
URL          : https://kubernetes.io
License      : ASL 2.0
Description  : The node agent of Kubernetes, the container cluster manager.
[root@VM-16-4-centos ~]# yum info kubectl
Repository epel is listed more than once in the configuration
Last metadata expiration check: 0:00:45 ago on Sun 24 Oct 2021 13:26:36.
Available Packages
Name         : kubectl
Version      : 1.22.2
Release      : 0
Architecture : x86_64
Size         : 9.6 M
Source       : kubelet-1.22.2-0.src.rpm
Repository   : kubernetes
Summary      : Command-line utility for interacting with a Kubernetes cluster.
URL          : https://kubernetes.io
License      : ASL 2.0
Description  : Command-line utility for interacting with a Kubernetes cluster.
➜ ~ ansible master,node1,node2 -m command -a "yum install -y kubelet kubeadm kubectl"
➜ ~ ansible master,node1,node2 -m command -a "systemctl enable kubelet"
node1 | CHANGED | rc=0 >>
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
node2 | CHANGED | rc=0 >>
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
master | CHANGED | rc=0 >>
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
➜ ~ ansible master,node1,node2 -m command -a "systemctl start kubelet"
node2 | CHANGED | rc=0 >>
node1 | CHANGED | rc=0 >>
master | CHANGED | rc=0 >>
➜ ~ ansible master,node1,node2 -m command -a "shutdown -r now"
- Installed:
- cri-tools-1.13.0-0.x86_64
- kubeadm-1.22.2-0.x86_64
- kubectl-1.22.2-0.x86_64
- kubelet-1.22.2-0.x86_64
- kubernetes-cni-0.8.7-0.x86_64
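To confirm the packages landed consistently on all three machines, the same ansible pattern used above works; a quick sketch with the same inventory groups:
ansible master,node1,node2 -m command -a "kubeadm version -o short"
ansible master,node1,node2 -m command -a "kubelet --version"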
2. Prepare the Images [master/node1/node2]
[root@VM-16-14-centos ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.2
k8s.gcr.io/kube-controller-manager:v1.22.2
k8s.gcr.io/kube-scheduler:v1.22.2
k8s.gcr.io/kube-proxy:v1.22.2
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
[root@VM-16-14-centos ~]# cd /data
[root@VM-16-14-centos data]# vim kubeadm_config_images_list.sh
[root@VM-16-14-centos data]# cat kubeadm_config_images_list.sh
KUBE_VERSION=v1.22.2
PAUSE_VERSION=3.5
CORE_DNS_VERSION=1.8.4
CORE_DNS_TAG=v1.8.4  # k8s.gcr.io uses a leading "v" in the CoreDNS tag; Docker Hub does not
ETCD_VERSION=3.5.0-0
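# pull the same images from mirrors that are reachable where k8s.gcr.io is not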
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull coredns/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
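# re-tag them with the k8s.gcr.io names that kubeadm expects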
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag coredns/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns/coredns:$CORE_DNS_TAG
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
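# remove the intermediate mirror tags; only the k8s.gcr.io names remain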
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi coredns/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
[root@VM-16-14-centos data]# ./kubeadm_config_images_list.sh
[root@VM-16-14-centos data]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-apiserver v1.22.2 e64579b7d886 5 weeks ago 128MB
k8s.gcr.io/kube-controller-manager v1.22.2 5425bcbd23c5 5 weeks ago 122MB
k8s.gcr.io/kube-proxy v1.22.2 873127efbc8a 5 weeks ago 104MB
k8s.gcr.io/kube-scheduler v1.22.2 b51ddc1014b0 5 weeks ago 52.7MB
k8s.gcr.io/etcd 3.5.0-0 004811815584 4 months ago 295MB
k8s.gcr.io/coredns/coredns v1.8.4 8d147537fb7d 4 months ago 47.6MB
k8s.gcr.io/pause 3.5 ed210e3e4a5b 7 months ago 683kB
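Before moving on to kubeadm init, it is worth checking that every required image now resolves locally; a small sketch that compares the list from kubeadm against what docker holds:
for img in $(kubeadm config images list 2>/dev/null); do
    docker image inspect "$img" >/dev/null 2>&1 || echo "missing: $img"
done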
[root@VM-16-14-centos data]
- Running this on node1/node2 matters because otherwise, if the pod images cannot be pulled there, the nodes will join successfully but never reach Ready (their pods stay in an abnormal state)
[root@VM-16-14-centos data]# scp kubeadm_config_images_list.sh root@node1:/data/
[root@VM-16-14-centos data]# scp kubeadm_config_images_list.sh root@node2:/data/
[root@VM-16-6-centos data]# ./kubeadm_config_images_list.sh
[root@VM-16-4-centos data]# ./kubeadm_config_images_list.sh
3. Create the Master Node [master]
[root@VM-16-14-centos data]# kubeadm init \
> --apiserver-advertise-address=0.0.0.0 \
> --apiserver-bind-port=6443 \
> --kubernetes-version=v1.22.2 \
> --pod-network-cidr=172.30.0.0/16 \
> --service-cidr=10.254.0.0/16 \
> --image-repository=k8s.gcr.io \
> --ignore-preflight-errors=swap \
> --token-ttl=0
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vm-16-14-centos] and IPs [10.254.0.1 10.206.16.14]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vm-16-14-centos] and IPs [10.206.16.14 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vm-16-14-centos] and IPs [10.206.16.14 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.002716 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vm-16-14-centos as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vm-16-14-centos as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uucwxx.9yq2odrhblb62e82
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.206.16.14:6443 --token uucwxx.9yq2odrhblb62e82 \
--discovery-token-ca-cert-hash sha256:652734c803d12a3f11c3fe0f5890cbb340a0ca9d198fa1d34bad76c97fc92993
[root@VM-16-14-centos data]#
[root@VM-16-14-centos data]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
uucwxx.9yq2odrhblb62e82 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
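Because init was run with --token-ttl=0, this token never expires (TTL <forever> above). If the join command is ever lost anyway, a fresh one can be generated on the master at any time:
kubeadm token create --print-join-command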
[root@VM-16-14-centos data]# mkdir -p $HOME/.kube
[root@VM-16-14-centos data]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@VM-16-14-centos data]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-mz7mv 0/1 Pending 0 2m3s
coredns-78fcd69978-qspg4 0/1 Pending 0 2m3s
etcd-vm-16-14-centos 1/1 Running 0 2m17s
kube-apiserver-vm-16-14-centos 1/1 Running 0 2m17s
kube-controller-manager-vm-16-14-centos 1/1 Running 0 2m17s
kube-proxy-nvqvg 1/1 Running 0 2m3s
kube-scheduler-vm-16-14-centos 1/1 Running 0 2m17s
[root@VM-16-14-centos data]#
# kubectl get nodes only stops erroring here once kubeadm init has completed.
[root@VM-16-14-centos data]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm-16-14-centos Ready control-plane,master 60m v1.22.2
vm-16-4-centos NotReady <none> 8m3s v1.22.2
vm-16-6-centos NotReady <none> 13m v1.22.2
[root@VM-16-14-centos data]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-78fcd69978-mz7mv 1/1 Running 0 60m 172.30.0.2 vm-16-14-centos <none> <none>
kube-system coredns-78fcd69978-qspg4 1/1 Running 0 60m 172.30.0.3 vm-16-14-centos <none> <none>
kube-system etcd-vm-16-14-centos 1/1 Running 0 60m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-apiserver-vm-16-14-centos 1/1 Running 0 60m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-controller-manager-vm-16-14-centos 1/1 Running 0 60m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-flannel-ds-cljlp 0/1 Init:0/2 0 13m 10.206.16.6 vm-16-6-centos <none> <none>
kube-system kube-flannel-ds-kx55s 1/1 Running 0 53m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-flannel-ds-qs42h 0/1 Init:0/2 0 8m8s 10.206.16.4 vm-16-4-centos <none> <none>
kube-system kube-proxy-7nxvd 0/1 ContainerCreating 0 8m8s 10.206.16.4 vm-16-4-centos <none> <none>
kube-system kube-proxy-nvqvg 1/1 Running 0 60m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-proxy-t2vnp 0/1 ContainerCreating 0 13m 10.206.16.6 vm-16-6-centos <none> <none>
kube-system kube-scheduler-vm-16-14-centos 1/1 Running 0 60m 10.206.16.14 vm-16-14-centos <none> <none>
4. Deploy the Network Add-on
[root@VM-16-14-centos data]# mkdir flannel && cd flannel
[root@VM-16-14-centos flannel]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@VM-16-14-centos flannel]# vim kube-flannel.yml
"Network": "10.244.0.0/16",
change to
"Network": "172.30.0.0/16",
[root@VM-16-14-centos flannel]# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@VM-16-14-centos flannel]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-mz7mv 0/1 Pending 0 6m55s
coredns-78fcd69978-qspg4 0/1 Pending 0 6m55s
etcd-vm-16-14-centos 1/1 Running 0 7m9s
kube-apiserver-vm-16-14-centos 1/1 Running 0 7m9s
kube-controller-manager-vm-16-14-centos 1/1 Running 0 7m9s
kube-flannel-ds-kx55s 1/1 Running 0 34s
kube-proxy-nvqvg 1/1 Running 0 6m55s
kube-scheduler-vm-16-14-centos 1/1 Running 0 7m9s
[root@VM-16-14-centos flannel]
[root@VM-16-14-centos flannel]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm-16-14-centos Ready control-plane,master 7m59s v1.22.2
[root@VM-16-14-centos flannel]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-mz7mv 1/1 Running 0 7m57s
coredns-78fcd69978-qspg4 1/1 Running 0 7m57s
etcd-vm-16-14-centos 1/1 Running 0 8m11s
kube-apiserver-vm-16-14-centos 1/1 Running 0 8m11s
kube-controller-manager-vm-16-14-centos 1/1 Running 0 8m11s
kube-flannel-ds-kx55s 1/1 Running 0 96s
kube-proxy-nvqvg 1/1 Running 0 7m57s
kube-scheduler-vm-16-14-centos 1/1 Running 0 8m11s
5. Add the Worker Nodes [node1/node2]
5.1 First join error: kubelet never becomes healthy
[root@VM-16-6-centos ~]# kubeadm join 10.206.16.14:6443 --token uucwxx.9yq2odrhblb62e82 --discovery-token-ca-cert-hash sha256:652734c803d12a3f11c3fe0f5890cbb340a0ca9d198fa1d34bad76c97fc92993
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
First delete the configuration files left behind by the failed attempt:
[root@VM-16-6-centos ~]# rm -rf /etc/kubernetes/kubelet.conf /etc/kubernetes/pki/ca.crt
[root@VM-16-6-centos ~]# kubeadm join 10.206.16.14:6443 --token uucwxx.9yq2odrhblb62e82 --discovery-token-ca-cert-hash sha256:652734c803d12a3f11c3fe0f5890cbb340a0ca9d198fa1d34bad76c97fc92993
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable
[ERROR FileAvailable
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
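Deleting individual files by hand works, but kubeadm ships its own rollback for a half-finished join; a cleaner alternative sketch before retrying:
# wipe everything the failed join wrote (certs, kubeconfigs, manifests), then join again
kubeadm reset -f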
5.2 Second join error: cgroup driver mismatch, and the fix
[root@VM-16-6-centos ~]
The error:
Oct 24 17:27:44 VM-16-6-centos kubelet[50195]: E1024 17:27:44.457318 50195 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
Oct 24 17:27:44 VM-16-6-centos systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 24 17:27:44 VM-16-6-centos systemd[1]: kubelet.service: Failed with result 'exit-code'.
[root@VM-16-6-centos ~]# docker info|grep "Cgroup Driver"
Cgroup Driver: cgroupfs
[root@VM-16-6-centos ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
change to
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
[root@VM-16-6-centos ~]# systemctl daemon-reload
[root@VM-16-6-centos ~]# systemctl restart docker
[root@VM-16-6-centos ~]# systemctl restart kubelet
[root@VM-16-6-centos ~]# docker info|grep "Cgroup Driver"
Cgroup Driver: systemd
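Editing the unit file works, as shown; the same fix is more commonly made through Docker's daemon.json instead (use one mechanism or the other, not both, or dockerd refuses to start). A sketch, assuming no /etc/docker/daemon.json exists yet:
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker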
[root@VM-16-6-centos ~]# kubeadm join 10.206.16.14:6443 --token uucwxx.9yq2odrhblb62e82 --ignore-preflight-errors=Swap --discovery-token-ca-cert-hash sha256:652734c803d12a3f11c3fe0f5890cbb340a0ca9d198fa1d34bad76c97fc92993
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@VM-16-6-centos ~]#
[root@VM-16-14-centos flannel]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm-16-14-centos Ready control-plane,master 50m v1.22.2
vm-16-6-centos NotReady <none> 3m14s v1.22.2
5.3 Joining node2
[root@VM-16-4-centos ~]# docker info|grep "Cgroup Driver"
Cgroup Driver: cgroupfs
[root@VM-16-4-centos ~]# vim /lib/systemd/system/docker.service
[root@VM-16-4-centos ~]# systemctl daemon-reload
[root@VM-16-4-centos ~]# systemctl restart docker
[root@VM-16-4-centos ~]# docker info|grep "Cgroup Driver"
Cgroup Driver: systemd
[root@VM-16-4-centos ~]# kubeadm join 10.206.16.14:6443 --token uucwxx.9yq2odrhblb62e82 --ignore-preflight-errors=Swap --discovery-token-ca-cert-hash sha256:652734c803d12a3f11c3fe0f5890cbb340a0ca9d198fa1d34bad76c97fc92993
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@VM-16-4-centos ~]#
[root@VM-16-14-centos flannel]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm-16-14-centos Ready control-plane,master 53m v1.22.2
vm-16-4-centos NotReady <none> 59s v1.22.2
vm-16-6-centos NotReady <none> 6m18s v1.22.2
[root@VM-16-14-centos flannel]#
6. Cluster Status
- In a public-cloud environment it takes roughly 20 minutes for the nodes to converge to Ready
[root@VM-16-14-centos data]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm-16-14-centos Ready control-plane,master 71m v1.22.2
vm-16-4-centos Ready <none> 19m v1.22.2
vm-16-6-centos Ready <none> 24m v1.22.2
[root@VM-16-14-centos data]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-78fcd69978-mz7mv 1/1 Running 0 71m 172.30.0.2 vm-16-14-centos <none> <none>
kube-system coredns-78fcd69978-qspg4 1/1 Running 0 71m 172.30.0.3 vm-16-14-centos <none> <none>
kube-system etcd-vm-16-14-centos 1/1 Running 0 71m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-apiserver-vm-16-14-centos 1/1 Running 0 71m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-controller-manager-vm-16-14-centos 1/1 Running 0 71m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-flannel-ds-cljlp 1/1 Running 0 24m 10.206.16.6 vm-16-6-centos <none> <none>
kube-system kube-flannel-ds-kx55s 1/1 Running 0 64m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-flannel-ds-qs42h 1/1 Running 0 19m 10.206.16.4 vm-16-4-centos <none> <none>
kube-system kube-proxy-7nxvd 1/1 Running 1 (2m2s ago) 19m 10.206.16.4 vm-16-4-centos <none> <none>
kube-system kube-proxy-nvqvg 1/1 Running 0 71m 10.206.16.14 vm-16-14-centos <none> <none>
kube-system kube-proxy-t2vnp 1/1 Running 0 24m 10.206.16.6 vm-16-6-centos <none> <none>
kube-system kube-scheduler-vm-16-14-centos 1/1 Running 0 71m 10.206.16.14 vm-16-14-centos <none> <none>
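The <none> under ROLES on the workers is purely cosmetic; if desired, they can be labeled from the master — a sketch:
kubectl label node vm-16-4-centos node-role.kubernetes.io/worker=
kubectl label node vm-16-6-centos node-role.kubernetes.io/worker=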