Deploying a Kubernetes v1.21.2 Cluster with kubeadm (Single Master Node)




This article documents how to deploy a single-master Kubernetes v1.21.2 cluster with kubeadm, consisting of one master and two worker nodes.

一、Pre-deployment planning

| Host | OS | IP |
| --- | --- | --- |
| k8s-master | CentOS 7.8 | 192.168.56.101 |
| k8s-node1 | CentOS 7.8 | 192.168.56.102 |
| k8s-node2 | CentOS 7.8 | 192.168.56.103 |

二、Set hostnames and the time zone

# master node (192.168.56.101):
hostnamectl set-hostname k8s-master

# node1 (192.168.56.102):
hostnamectl set-hostname k8s-node1

# node2 (192.168.56.103):
hostnamectl set-hostname k8s-node2

# Set the time zone (all nodes)
timedatectl set-timezone Asia/Shanghai

三、Basic configuration (all nodes)

# Update the /etc/hosts file
cat >> /etc/hosts << EOF
192.168.56.101 k8s-master
192.168.56.102 k8s-node1
192.168.56.103 k8s-node2
EOF

# Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
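# Optionally, a quick check that swap is fully off (the Swap line should show all zeros):
free -m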

四、Configure time synchronization

  1. ####Install chrony (all nodes)
yum install -y chrony
  2. ####Configure chrony (all nodes)
# Enable chronyd at boot and restart it:
systemctl enable chronyd && systemctl restart chronyd

# Enable network time synchronization
timedatectl set-ntp true
  3. ####Verify the configuration

Run chronyc sources on every node; a line beginning with ^* indicates the node is synchronized with a time server.
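For reference, a minimal check (the exact output format depends on your chrony version):

chronyc sources
## A line starting with ^* marks the source currently selected for synchronization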

五、Adjust iptables-related kernel parameters (all nodes)

# Some users on RHEL/CentOS 7 have reported traffic being routed incorrectly because iptables was bypassed. Create /etc/sysctl.d/k8s.conf with the following content:

cat <<EOF >  /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the configuration
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
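Note that modprobe only loads br_netfilter for the current boot. To have it loaded automatically after a reboot, one option (a small sketch; the file name is just a convention read by systemd-modules-load) is:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF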

六、Load the IPVS kernel modules (all nodes)

IPVS is already part of the mainline kernel, but enabling IPVS for kube-proxy requires the following kernel modules to be loaded. Run the following script on every Kubernetes node:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make the script executable, run it, and verify the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

七、Install the ipvsadm management tools (all nodes)

yum install ipset ipvsadm -y

八、Install and start Docker (all nodes)

  1. ####Configure the Docker yum repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  2. ####Install a specific version; here 3:20.10.7-3.el7
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-3:20.10.7-3.el7.x86_64
  3. ####Configure a Docker registry mirror, set the cgroup driver to systemd, and allow the insecure private registry 192.168.56.101:5000 (you can package your own applications as Docker images, push them to this private registry, and have Kubernetes pull them from there for deployment)
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "insecure-registries": ["192.168.56.101:5000"],
  "registry-mirrors": ["https://qtytpky9.mirror.aliyuncs.com"]
}
EOF
  4. ####By default Docker stores images and containers under /var/lib/docker. If the disk holding that directory is small, it can fill up quickly, so you can relocate the data to a larger disk via a symlink (skip this step if you do not need to move the storage). Here the data is moved to /home/data/docker using ln -s.
## Link the Docker data directory into /home with ln -s
mkdir /home/data/docker -p
ln -s /home/data/docker /var/lib/docker
  5. ####Start Docker
systemctl start docker && systemctl enable docker

## Output like the following indicates Docker started successfully
[root@k8s-node2 ~]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE

[root@k8s-node1 ~]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE

[root@k8s-master ~]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
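You can also confirm that the daemon.json settings took effect, for example by checking the cgroup driver:

docker info | grep -i "cgroup driver"
## Expected: Cgroup Driver: systemd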

九、Install kubeadm, kubelet, and kubectl (all nodes)

  1. ####Configure the kubernetes.repo source; the official repository is not reachable from mainland China, so the Aliyun yum mirror is used here
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  2. ####Install the specified versions of kubelet, kubeadm, and kubectl on all nodes
yum list kubeadm --showduplicates | sort -r
yum install -y kubelet-1.21.2-0 kubeadm-1.21.2-0 kubectl-1.21.2-0
  3. ####Similarly, kubelet stores its data under /var/lib/kubelet by default. If you need to relocate it, use the same ln -s approach (skip this step if you are not moving the storage). Here /home/data/kubelet is used.
# Relocate the kubelet data directory
mkdir /home/data/kubelet -p
ln -s /home/data/kubelet /var/lib/kubelet
  4. ####Enable the kubelet service at boot. There is no need to run systemctl start kubelet; kubeadm will start it automatically during cluster bootstrap.
systemctl enable kubelet
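To confirm the expected versions are in place, a quick sanity check:

kubeadm version -o short
kubelet --version
kubectl version --client --short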

十、Deploy the master node (k8s-master, 192.168.56.101); at least 2 CPU cores are required

  1. ####Run kubeadm init on the master node, using the --image-repository flag to pull the required images from the Aliyun registry
kubeadm init \
    --apiserver-advertise-address=192.168.56.101 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.21.2 \
    --pod-network-cidr=10.244.0.0/16

The initialization proceeds as follows ---->>>>

[root@k8s-master ~]# kubeadm init \
>     --apiserver-advertise-address=192.168.56.101 \
>     --image-repository registry.aliyuncs.com/google_containers \
>     --kubernetes-version v1.21.2 \
>     --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.8.0: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/coredns:v1.8.0 not found: manifest unknown: manifest unknown
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

With the Aliyun registry specified, initialization fails with "failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.8.0" because the image does not exist there. The workaround is to pull coredns:1.8.0 from the official repository and retag it, as follows >>>>

## Pull coredns/coredns:1.8.0
docker pull coredns/coredns:1.8.0

## Retag it as registry.aliyuncs.com/google_containers/coredns:v1.8.0
docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0

## Remove the downloaded coredns/coredns:1.8.0 tag
docker rmi coredns/coredns:1.8.0

## Check with docker images
[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.2    106ff58d4308   2 weeks ago     126MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.2    ae24db9aa2cc   2 weeks ago     120MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.2    f917b8c8f55b   2 weeks ago     50.6MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.21.2    a6ebd1c1ad98   2 weeks ago     131MB
registry.aliyuncs.com/google_containers/pause                     3.4.1      0f8457a4c2ec   5 months ago    683kB
registry.aliyuncs.com/google_containers/coredns                   v1.8.0     296a6d5035e2   8 months ago    42.5MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   10 months ago   253MB

With the registry.aliyuncs.com/google_containers/coredns:v1.8.0 image in place, rerun the initialization command >>>>

kubeadm init \
    --apiserver-advertise-address=192.168.56.101 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.21.2 \
    --pod-network-cidr=10.244.0.0/16

The output below shows that k8s-master initialized successfully ---->>>>

[root@k8s-master ~]# kubeadm init \
>     --apiserver-advertise-address=192.168.56.101 \
>     --image-repository registry.aliyuncs.com/google_containers \
>     --kubernetes-version v1.21.2 \
>     --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.003816 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ke30dh.l86n2xkuk5qrupj4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.101:6443 --token ke30dh.l86n2xkuk5qrupj4 \
        --discovery-token-ca-cert-hash sha256:44b9476594e0a59d439646c1312da06f237cee5ac9eef92b8d4e1b8f002ce6c9 

Make a note of the kubeadm join command in the initialization output; it is needed when joining the worker nodes.

Notes on the initialization phases:
[preflight] kubeadm runs pre-flight checks.
[kubelet-start] Generates the kubelet configuration file /var/lib/kubelet/config.yaml.
[certs] Generates the various tokens and certificates.
[kubeconfig] Generates the kubeconfig files that the kubelet needs to communicate with the master.
[control-plane] Installs the master components, pulling their Docker images from the specified registry.
[bootstrap-token] Generates the token recorded for later use with kubeadm join when adding nodes to the cluster.
[addons] Installs the kube-proxy and CoreDNS add-ons.
Finally, the output confirms that the master initialized successfully, shows how to configure kubectl for a regular user, how to install a Pod network, and how to register other nodes with the cluster.

十一、Configure kubectl (k8s-master, 192.168.56.101)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After configuring kubectl, run kubectl get pods --all-namespaces. The two coredns pods are Pending because no network plugin has been installed yet; the next step installs one.

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-59d64cd4d4-2jjdc             0/1     Pending   0          3m20s
kube-system   coredns-59d64cd4d4-55qp5             0/1     Pending   0          3m20s
kube-system   etcd-k8s-master                      1/1     Running   0          3m28s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          3m28s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          3m28s
kube-system   kube-proxy-ppwrq                     1/1     Running   0          3m20s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          3m28s

十二、Deploy the network plugin (k8s-master, 192.168.56.101)

Either Calico or Flannel can be used as the network plugin; Calico is used here.

  1. ####Download calico.yaml
wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml
  2. ####Edit calico.yaml and add a line imagePullPolicy: IfNotPresent under every image entry, so Kubernetes stops re-downloading images that already exist locally (see the sketch below).
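For illustration, one container entry looks roughly like this after the edit (the exact image tag depends on the calico.yaml you actually downloaded):

        - name: calico-node
          image: docker.io/calico/node:v3.19.1
          imagePullPolicy: IfNotPresent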
  3. ####With multiple network interfaces, Calico can auto-detect the wrong interface and break pod networking; fix this by explicitly specifying the interface (NIC name)

In calico.yaml, locate

- name: CLUSTER_TYPE
  value: "k8s,bgp" 

and add the following two lines below it:

- name: IP_AUTODETECTION_METHOD
  value: "interface=enp0s3" 
  4. ####Run kubectl apply -f calico.yaml to deploy it
kubectl apply -f calico.yaml

This produces a warning; following the hint, change policy/v1beta1 in calico.yaml to policy/v1 (as sketched below) and re-apply:
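For reference, the object affected is the PodDisruptionBudget in the manifest; after the edit its header reads roughly as follows (a sketch based on the v3.19 manifest):

apiVersion: policy/v1        ## was: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: calico-kube-controllers
  namespace: kube-system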

kubectl apply -f calico.yaml

  5. ####Check again with kubectl get pods --all-namespaces

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-78d6f96c7b-cfg4l   0/1     ContainerCreating   0          5m7s
kube-system   calico-node-v8696                          0/1     PodInitializing     0          5m8s
kube-system   coredns-59d64cd4d4-2jjdc                   0/1     ContainerCreating   0          21m
kube-system   coredns-59d64cd4d4-55qp5                   0/1     ContainerCreating   0          21m
kube-system   etcd-k8s-master                            1/1     Running             0          21m
kube-system   kube-apiserver-k8s-master                  1/1     Running             0          21m
kube-system   kube-controller-manager-k8s-master         1/1     Running             0          21m
kube-system   kube-proxy-ppwrq                           1/1     Running             0          21m
kube-system   kube-scheduler-k8s-master                  1/1     Running             0          21m

This shows the images are still being pulled; be patient. Check again after a while and everything is Running:

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-78d6f96c7b-cfg4l   1/1     Running   0          10m
kube-system   calico-node-v8696                          1/1     Running   0          10m
kube-system   coredns-59d64cd4d4-2jjdc                   1/1     Running   0          26m
kube-system   coredns-59d64cd4d4-55qp5                   1/1     Running   0          26m
kube-system   etcd-k8s-master                            1/1     Running   0          26m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          26m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          26m
kube-system   kube-proxy-ppwrq                           1/1     Running   0          26m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          26m

At this point k8s-master is fully deployed; next, deploy the two worker nodes.

十三、Deploy the worker nodes (k8s-node1 and k8s-node2, 192.168.56.102 and 192.168.56.103)

Simply run kubeadm join on k8s-node1 and k8s-node2; the command can be found in the successful kubeadm init output above.

[root@k8s-node1 ~]# kubeadm join 192.168.56.101:6443 --token ke30dh.l86n2xkuk5qrupj4 \
>         --discovery-token-ca-cert-hash sha256:44b9476594e0a59d439646c1312da06f237cee5ac9eef92b8d4e1b8f002ce6c9 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node2 ~]# kubeadm join 192.168.56.101:6443 --token ke30dh.l86n2xkuk5qrupj4 \
>         --discovery-token-ca-cert-hash sha256:44b9476594e0a59d439646c1312da06f237cee5ac9eef92b8d4e1b8f002ce6c9 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

十四、Check the cluster components

At this point the cluster is essentially deployed; check it with kubectl get pods --all-namespaces.

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-78d6f96c7b-cfg4l   1/1     Running    0          22m
kube-system   calico-node-884h8                          0/1     Init:0/3   0          4m25s
kube-system   calico-node-8s5tq                          0/1     Init:2/3   0          4m22s
kube-system   calico-node-v8696                          1/1     Running    0          22m
kube-system   coredns-59d64cd4d4-2jjdc                   1/1     Running    0          38m
kube-system   coredns-59d64cd4d4-55qp5                   1/1     Running    0          38m
kube-system   etcd-k8s-master                            1/1     Running    0          38m
kube-system   kube-apiserver-k8s-master                  1/1     Running    0          38m
kube-system   kube-controller-manager-k8s-master         1/1     Running    0          38m
kube-system   kube-proxy-d2vpx                           1/1     Running    0          4m25s
kube-system   kube-proxy-ppwrq                           1/1     Running    0          38m
kube-system   kube-proxy-tkgm8                           1/1     Running    0          4m22s
kube-system   kube-scheduler-k8s-master                  1/1     Running    0          38m

Two calico pods are still not Running; this can be ignored since they are pulling images. After waiting a while and checking again, everything is Running, and all cluster components are fully operational.

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-78d6f96c7b-cfg4l   1/1     Running   0          47m
kube-system   calico-node-7lqg4                          1/1     Running   0          4m18s
kube-system   calico-node-df5qv                          1/1     Running   0          4m18s
kube-system   calico-node-mswc7                          1/1     Running   0          4m18s
kube-system   coredns-59d64cd4d4-2jjdc                   1/1     Running   0          62m
kube-system   coredns-59d64cd4d4-55qp5                   1/1     Running   0          62m
kube-system   etcd-k8s-master                            1/1     Running   0          63m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          63m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          63m
kube-system   kube-proxy-d2vpx                           1/1     Running   0          29m
kube-system   kube-proxy-ppwrq                           1/1     Running   0          62m
kube-system   kube-proxy-tkgm8                           1/1     Running   0          29m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          63m
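As a final check, confirm that all three nodes report Ready:

kubectl get nodes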

At this point the entire cluster is up and running.

Additional notes

一、Removing a node

  1. Taking k8s-node2 as an example, run the following on the master node:
kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node2

Before removal:

[root@k8s-master ~]# kubectl get nodes --all-namespaces
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   67m   v1.21.2
k8s-node1    Ready    <none>                 33m   v1.21.2
k8s-node2    Ready    <none>                 33m   v1.21.2

After removal:

[root@k8s-master ~]# kubectl get nodes --all-namespaces
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   68m   v1.21.2
k8s-node1    Ready    <none>                 34m   v1.21.2

Finally, run the following on k8s-node2 to reset it:

kubeadm reset -f

二、Joining a node when the token has been lost

  1. ###Generate a token that never expires; skip this step if a valid token already exists
kubeadm token create --ttl 0
  2. ###List the tokens with kubeadm token list
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
ke30dh.l86n2xkuk5qrupj4   22h         2021-07-06T12:59:17+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
  3. ###Get the SHA-256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Result:
44b9476594e0a59d439646c1312da06f237cee5ac9eef92b8d4e1b8f002ce6c9
  4. ####Finally, run kubeadm join
kubeadm join 192.168.56.101:6443 --token ke30dh.l86n2xkuk5qrupj4 --discovery-token-ca-cert-hash sha256:44b9476594e0a59d439646c1312da06f237cee5ac9eef92b8d4e1b8f002ce6c9
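Alternatively, kubeadm can print a ready-to-use join command (token plus CA hash) in a single step:

kubeadm token create --print-join-command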

三、Allowing pods to be scheduled on the master node

By default Kubernetes only schedules pods on the worker nodes. If the master has sufficient resources, or you want pods to run on the master as well, do the following.

  1. ###Allow pods to be scheduled on the k8s-master node
kubectl taint node k8s-master node-role.kubernetes.io/master-

Result:
[root@k8s-master ~]# kubectl taint node k8s-master node-role.kubernetes.io/master-
node/k8s-master untainted
  2. ###To disallow pod scheduling on the k8s-master node again
kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule

Result:
[root@k8s-master ~]# kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule
node/k8s-master tainted

四、Enabling IPVS mode for kube-proxy

For the differences between the iptables and IPVS modes in Kubernetes, see the article "K8S中iptables和ipvs区别".

kube-proxy uses iptables mode by default; to enable IPVS mode, do the following:

# Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
kubectl edit cm kube-proxy -n kube-system
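For reference, the relevant fragment of config.conf after the edit looks roughly like this (an excerpt, not the full configuration):

kind: KubeProxyConfiguration
mode: "ipvs"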

# Then restart the kube-proxy pods on each node:
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

# Check the logs
kubectl logs kube-proxy-2696f -n kube-system
If the log prints "Using ipvs Proxier", IPVS mode is enabled.
