Installing a Kubernetes 1.26.0 Cluster with kubeadm


Reference: blog.frognew.com/2023/01/kub…

Machine Preparation

Prepare at least two machines: one master and one worker.

Role     Hostname    Configuration
master   test01v     OS: Linux (CentOS 7; other distributions also work, the steps are similar; see the official docs). CPU >= 2 cores, RAM >= 2 GB.
worker   test02v     OS: Linux (CentOS 7; other distributions also work, the steps are similar; see the official docs). CPU >= 2 cores, RAM >= 2 GB.

The system used in this article: Linux ${test01v_hostname} 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

System and Dependency Configuration

Run the following steps on all nodes.

System settings

Configure /etc/hosts

[root@test01v ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
${test01v_ip} ${test01v_hostname}
${test02v_ip} ${test02v_hostname}

[root@test02v ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
${test01v_ip} ${test01v_hostname}
${test02v_ip} ${test02v_hostname}
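With concrete (hypothetical) addresses substituted for the placeholders, the appended entries look like this; the sketch below builds them in a scratch file rather than touching /etc/hosts:

```shell
# Hypothetical IPs for illustration; use your real addresses.
test01v_ip=192.168.1.101; test01v_hostname=test01v
test02v_ip=192.168.1.102; test02v_hostname=test02v
hosts=$(mktemp)                      # demo file standing in for /etc/hosts
printf '%s %s\n' "$test01v_ip" "$test01v_hostname" >> "$hosts"
printf '%s %s\n' "$test02v_ip" "$test02v_hostname" >> "$hosts"
cat "$hosts"
```

The same two entries go into /etc/hosts on every node, so each machine can resolve both hostnames.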

Verify that the MAC address and product_uuid are unique on every node

[root@test01v ~]# ifconfig -a    # check the MAC address
[root@test01v ~]# cat /sys/class/dmi/id/product_uuid    # check the product_uuid

Disable the firewall

[root@test01v ~]# systemctl stop firewalld.service
[root@test01v ~]# systemctl disable firewalld.service
[root@test01v ~]# firewall-cmd --state

Disable SELinux

Edit /etc/selinux/config, set SELINUX=disabled, then reboot.
[root@test01v ~]# sestatus
SELinux status:                 disabled 
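The edit can also be scripted. A sketch, run against a scratch copy here (on a real node the target is /etc/selinux/config; running setenforce 0 additionally disables enforcement immediately, without waiting for the reboot):

```shell
cfg=$(mktemp)                                    # demo copy of /etc/selinux/config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"                          # -> SELINUX=disabled
```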

Disable swap

Edit /etc/fstab and comment out the swap entry, then reboot.
[root@test01v ~]# vim /etc/fstab 
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0

[root@test01v ~]# swapoff -a
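Commenting out the swap entry can likewise be scripted; this sketch applies the substitution to a scratch copy (the real target is /etc/fstab), matching any line that has swap as a field:

```shell
fstab=$(mktemp)                                  # demo copy of /etc/fstab
echo '/dev/mapper/cl-swap swap swap defaults 0 0' > "$fstab"
sed -i '/\sswap\s/ s/^/#/' "$fstab"              # comment out swap entries
cat "$fstab"                                     # -> #/dev/mapper/cl-swap swap swap defaults 0 0
```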

containerd settings

Create the containerd.conf module-load file

# create the file
[root@test01v ~]# cat << EOF > /etc/modules-load.d/containerd.conf
> overlay
> br_netfilter
> EOF

# load the modules now
[root@test01v ~]# modprobe overlay
[root@test01v ~]# modprobe br_netfilter

kubernetes-cri settings

Note: these kernel parameters are tuned to meet the requirements of the Kubernetes cluster and the container runtime.

Create the 99-kubernetes-cri.conf file

The two bridge-nf-call parameters make the kernel's bridge module pass bridged packets through iptables/ip6tables, which Kubernetes network policies and security rules rely on; net.ipv4.ip_forward enables IPv4 packet forwarding so cluster nodes can forward pod traffic between interfaces.

#create the file (sysctl.d comments must be on their own lines, so the explanations above are kept out of the file itself)
[root@test01v ~]# cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> user.max_user_namespaces=28633
> EOF

# apply
[root@test01v ~]# sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces = 28633

Prerequisites for enabling ipvs

Without this, kube-proxy falls back to iptables mode even if its configuration enables ipvs mode.
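For reference, ipvs mode itself is requested through kube-proxy's own configuration. With kubeadm this can be done by appending a KubeProxyConfiguration document (a documented kubeadm config kind) to the kubeadm.yaml used later in this article; a minimal sketch:

```yaml
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```

Without such a document, kube-proxy keeps its default iptables mode even when the kernel modules below are loaded.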

Since ipvs is part of the mainline kernel, the only prerequisite for enabling ipvs in kube-proxy is loading the following kernel modules:

[root@test01v ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF

[root@test01v ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  2 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          137239  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

This ensures the required modules are loaded automatically after a node reboot. Verify that they are loaded with lsmod | grep -e ip_vs -e nf_conntrack_ipv4.
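On systemd-based systems, an equivalent way to persist the module list is a file under /etc/modules-load.d/, which systemd-modules-load reads at boot. A sketch, written to a scratch file here so it can be tried without touching the system:

```shell
conf=$(mktemp)          # demo file standing in for /etc/modules-load.d/ipvs.conf
cat > "$conf" <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
wc -l < "$conf"         # 5 module names, one per line
```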

To make it easy to inspect ipvs proxy rules, also install the management tool ipvsadm:

[root@test01v ~]# yum install -y ipset ipvsadm

Output:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ipset.x86_64 0:6.29-1.el7 will be updated
---> Package ipset.x86_64 0:7.1-1.el7 will be an update
--> Processing Dependency: ipset-libs(x86-64) = 7.1-1.el7 for package: ipset-7.1-1.el7.x86_64
--> Processing Dependency: libipset.so.13(LIBIPSET_4.8)(64bit) for package: ipset-7.1-1.el7.x86_64
--> Processing Dependency: libipset.so.13(LIBIPSET_2.0)(64bit) for package: ipset-7.1-1.el7.x86_64
--> Processing Dependency: libipset.so.13()(64bit) for package: ipset-7.1-1.el7.x86_64
---> Package ipvsadm.x86_64 0:1.27-8.el7 will be installed
--> Running transaction check
---> Package ipset-libs.x86_64 0:6.29-1.el7 will be updated
---> Package ipset-libs.x86_64 0:7.1-1.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================
 Package            Arch           Version               Repository    Size
============================================================================
Installing:
 ipvsadm            x86_64         1.27-8.el7            base          45 k
Updating:
 ipset              x86_64         7.1-1.el7             base          39 k
Updating for dependencies:
 ipset-libs         x86_64         7.1-1.el7             base          64 k

Transaction Summary
============================================================================
Install  1 Package
Upgrade  1 Package (+1 Dependent package)

Total download size: 147 k
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/3): ipset-7.1-1.el7.x86_64.rpm                      |  39 kB   00:00     
(2/3): ipset-libs-7.1-1.el7.x86_64.rpm                 |  64 kB   00:00     
(3/3): ipvsadm-1.27-8.el7.x86_64.rpm                   |  45 kB   00:00     
----------------------------------------------------------------------------
Total                                          695 kB/s | 147 kB  00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : ipset-libs-7.1-1.el7.x86_64                                                            1/5 
  Updating   : ipset-7.1-1.el7.x86_64                                                                 2/5 
  Installing : ipvsadm-1.27-8.el7.x86_64                                                              3/5 
  Cleanup    : ipset-6.29-1.el7.x86_64                                                                4/5 
  Cleanup    : ipset-libs-6.29-1.el7.x86_64                                                           5/5 
  Verifying  : ipvsadm-1.27-8.el7.x86_64                                                              1/5 
  Verifying  : ipset-7.1-1.el7.x86_64                                                                 2/5 
  Verifying  : ipset-libs-7.1-1.el7.x86_64                                                            3/5 
  Verifying  : ipset-libs-6.29-1.el7.x86_64                                                           4/5 
  Verifying  : ipset-6.29-1.el7.x86_64                                                                5/5 

Installed:
  ipvsadm.x86_64 0:1.27-8.el7                                                                             

Updated:
  ipset.x86_64 0:7.1-1.el7                                                                                

Dependency Updated:
  ipset-libs.x86_64 0:7.1-1.el7                                                                           

Complete!

Deploying the Container Runtime containerd

Download and install

The cri-containerd-cni-1.6.14-linux-amd64.tar.gz tarball is already laid out in the directory structure recommended for official binary deployments. It contains the systemd unit file, containerd itself, and the CNI deployment files. Download it and extract it into the system root /:

[root@test01v ~]# wget https://github.com/containerd/containerd/releases/download/v1.6.14/cri-containerd-cni-1.6.14-linux-amd64.tar.gz
[root@test01v ~]# tar -zxvf cri-containerd-cni-1.6.14-linux-amd64.tar.gz -C /
Output:
etc/
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/containerd-stress
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/crictl
usr/local/bin/critest
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/ctd-decoder
usr/local/bin/containerd
usr/local/bin/ctr
opt/
opt/cni/
opt/cni/bin/
opt/cni/bin/ptp
opt/cni/bin/bandwidth
opt/cni/bin/static
opt/cni/bin/dhcp
opt/cni/bin/tuning
opt/cni/bin/sbr
opt/cni/bin/macvlan
opt/cni/bin/firewall
opt/cni/bin/host-device
opt/cni/bin/ipvlan
opt/cni/bin/vlan
opt/cni/bin/bridge
opt/cni/bin/portmap
opt/cni/bin/vrf
opt/cni/bin/host-local
opt/cni/bin/loopback
opt/containerd/
opt/containerd/cluster/
opt/containerd/cluster/version
opt/containerd/cluster/gce/
opt/containerd/cluster/gce/cloud-init/
opt/containerd/cluster/gce/cloud-init/master.yaml
opt/containerd/cluster/gce/cloud-init/node.yaml
opt/containerd/cluster/gce/configure.sh
opt/containerd/cluster/gce/cni.template
opt/containerd/cluster/gce/env

Note: in testing, the runc bundled in cri-containerd-cni-1.6.14-linux-amd64.tar.gz has dynamic-linking problems on CentOS 7, so download runc separately from its GitHub releases and replace the copy installed above:

[root@test01v ~]# wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64

[root@test01v ~]# rm -f /usr/local/sbin/runc
[root@test01v ~]# mv runc.amd64 /usr/local/sbin/runc
[root@test01v ~]# chown root:root /usr/local/sbin/runc
[root@test01v ~]# chmod 755 /usr/local/sbin/runc

Symlink the containerd binaries into /bin

[root@test01v ~]# ln -snf /usr/local/sbin/* /bin/
[root@test01v ~]# ln -snf /usr/local/bin/* /bin/

Configure containerd

[root@test01v ~]# mkdir -p /etc/containerd
[root@test01v ~]# containerd config default > /etc/containerd/config.toml

Modify /etc/containerd/config.toml:
1). Change sandbox_image = "registry.k8s.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
2). Change SystemdCgroup = false to SystemdCgroup = true
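These two edits can be scripted with sed. The sketch below applies the same substitutions to a scratch file containing only the two relevant lines; on a real node, point the commands at /etc/containerd/config.toml instead:

```shell
toml=$(mktemp)                                   # stand-in for /etc/containerd/config.toml
cat > "$toml" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.6"
            SystemdCgroup = false
EOF
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.9#' "$toml"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$toml"
grep -E 'sandbox_image|SystemdCgroup' "$toml"
```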

Start

Enable containerd at boot and restart it:
[root@test01v ~]# systemctl enable containerd --now
[root@test01v ~]# systemctl restart containerd

Test

Verify with crictl; it should print version information with no errors:
[root@test01v ~]# crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.6.14
RuntimeApiVersion:  v1

Deploying Kubernetes with kubeadm

Install kubeadm and kubelet

Run this step on all nodes.

[root@test01v ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
>         http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

[root@test01v ~]# yum makecache fast

Output:
Loaded plugins: fastestmirror
ADDOPS-base                                                                                                                          | 2.9 kB  00:00:00     
base                                                                                                                                 | 2.9 kB  00:00:00     
centosplus                                                                                                                           | 2.9 kB  00:00:00     
docker-ce-stable                                                                                                                     | 3.5 kB  00:00:00     
epel                                                                                                                                 | 2.9 kB  00:00:00     
extras                                                                                                                               | 2.9 kB  00:00:00     
kubernetes                                                                                                                           | 1.4 kB  00:00:00     
puppetlabs-deps                                                                                                                      | 2.9 kB  00:00:00     
puppetlabs-products                                                                                                                  | 2.9 kB  00:00:00     
updates                                                                                                                              | 2.9 kB  00:00:00     
kubernetes/primary                                                                                                                   | 129 kB  00:00:00     
Loading mirror speeds from cached hostfile
kubernetes                                                                                                                                          956/956
Metadata Cache Created

[root@test01v ~]# yum install -y kubelet-1.26.0-0 kubeadm-1.26.0-0 kubectl-1.26.0-0

Output:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.26.0-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.26.0-0.x86_64
--> Processing Dependency: cri-tools >= 1.19.0 for package: kubeadm-1.26.0-0.x86_64
---> Package kubectl.x86_64 0:1.26.0-0 will be installed
---> Package kubelet.x86_64 0:1.26.0-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.26.0-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.26.0-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
---> Package cri-tools.x86_64 0:1.26.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:1.2.0-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================================
 Package                                       Arch                          Version                                Repository                         Size
============================================================================================================================================================
Installing:
 kubeadm                                       x86_64                        1.26.0-0                               kubernetes                         10 M
 kubectl                                       x86_64                        1.26.0-0                               kubernetes                         11 M
 kubelet                                       x86_64                        1.26.0-0                               kubernetes                         22 M
Installing for dependencies:
 conntrack-tools                               x86_64                        1.4.4-7.el7                            base                              187 k
 cri-tools                                     x86_64                        1.26.0-0                               kubernetes                        8.6 M
 kubernetes-cni                                x86_64                        1.2.0-0                                kubernetes                         17 M
 libnetfilter_cthelper                         x86_64                        1.0.0-11.el7                           base                               18 k
 libnetfilter_cttimeout                        x86_64                        1.0.0-7.el7                            base                               18 k
 libnetfilter_queue                            x86_64                        1.0.2-2.el7_2                          base                               23 k
 socat                                         x86_64                        1.7.3.2-2.el7                          base                              290 k

Transaction Summary
============================================================================================================================================================
Install  3 Packages (+7 Dependent packages)

Total download size: 69 M
Installed size: 296 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm                                                                                       | 187 kB  00:00:00     
warning: /var/cache/yum/x86_64/7/kubernetes/packages/3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Public key for 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm is not installed
(2/10): 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm                               | 8.6 MB  00:00:31     
(3/10): da58cbf31a0337a968e5a06cfcc00eee420cc2df8930ea817ed2a4227bd81d48-kubeadm-1.26.0-0.x86_64.rpm                                 |  10 MB  00:00:37     
(4/10): 23e112935127da08ffd1c32c392cbf62346305ee97ba6c5d070cda422945e4ff-kubectl-1.26.0-0.x86_64.rpm                                 |  11 MB  00:00:39     
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm                                                                                |  18 kB  00:00:00     
(6/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm                                                                                  |  23 kB  00:00:00     
(7/10): socat-1.7.3.2-2.el7.x86_64.rpm                                                                                               | 290 kB  00:00:00     
(8/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm                                                                                |  18 kB  00:00:00     
(9/10): 9be8590c2de60e249f40726e979a3a7a046320079bc41d330834de74f5399383-kubelet-1.26.0-0.x86_64.rpm                                 |  22 MB  00:01:21     
(10/10): 0f2a2afd740d476ad77c508847bad1f559afc2425816c1f2ce4432a62dfe0b9d-kubernetes-cni-1.2.0-0.x86_64.rpm                          |  17 MB  00:01:03     
------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                       525 kB/s |  69 MB  00:02:14     
Retrieving key from http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x13EDEF05:
 Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
 Fingerprint: a362 b822 f6de dc65 2817 ea46 b53d c80d 13ed ef05
 From       : http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Retrieving key from http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
 Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
 From       : http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                               1/10 
  Installing : socat-1.7.3.2-2.el7.x86_64                                                                                                              2/10 
  Installing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                               3/10 
  Installing : kubectl-1.26.0-0.x86_64                                                                                                                 4/10 
  Installing : cri-tools-1.26.0-0.x86_64                                                                                                               5/10 
  Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                                 6/10 
  Installing : conntrack-tools-1.4.4-7.el7.x86_64                                                                                                      7/10 
  Installing : kubernetes-cni-1.2.0-0.x86_64                                                                                                           8/10 
  Installing : kubelet-1.26.0-0.x86_64                                                                                                                 9/10 
  Installing : kubeadm-1.26.0-0.x86_64                                                                                                                10/10 
  Verifying  : kubeadm-1.26.0-0.x86_64                                                                                                                 1/10 
  Verifying  : kubelet-1.26.0-0.x86_64                                                                                                                 2/10 
  Verifying  : conntrack-tools-1.4.4-7.el7.x86_64                                                                                                      3/10 
  Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                                 4/10 
  Verifying  : cri-tools-1.26.0-0.x86_64                                                                                                               5/10 
  Verifying  : kubernetes-cni-1.2.0-0.x86_64                                                                                                           6/10 
  Verifying  : kubectl-1.26.0-0.x86_64                                                                                                                 7/10 
  Verifying  : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                               8/10 
  Verifying  : socat-1.7.3.2-2.el7.x86_64                                                                                                              9/10 
  Verifying  : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                              10/10 

Installed:
  kubeadm.x86_64 0:1.26.0-0                          kubectl.x86_64 0:1.26.0-0                          kubelet.x86_64 0:1.26.0-0                         

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7                cri-tools.x86_64 0:1.26.0-0                         kubernetes-cni.x86_64 0:1.2.0-0                  
  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7         libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7         libnetfilter_queue.x86_64 0:1.0.2-2.el7_2        
  socat.x86_64 0:1.7.3.2-2.el7                       

Complete!

# set swappiness
[root@test01v ~]# echo "vm.swappiness=0" >> /etc/sysctl.d/99-kubernetes-cri.conf
[root@test01v ~]# sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces = 28633
vm.swappiness = 0

# enable kubelet at boot
[root@test01v ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Initializing the Cluster with kubeadm init

Prepare the init configuration file

Use kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml to print the default configuration used for cluster initialization, then:
1. Change advertiseAddress to the master's IP.
2. Change name: node to name: ${test01v_hostname}.
3. Add the pod subnet: podSubnet: 10.244.0.0/16.
4. Change imageRepository: registry.k8s.io to imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers.

The modified kubeadm.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: ${test01v_ip}
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: ${test01v_hostname}
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

Pull the images

[root@test01v ~]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

Initialize


[root@test01v ~]# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ${test01v_hostname}] and IPs [10.96.0.1 ${test01v_ip}]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [${test01v_hostname} localhost] and IPs [${test01v_ip} 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [${test01v_hostname} localhost] and IPs [${test01v_ip} 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.501857 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ${test01v_hostname} as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ${test01v_hostname} as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join ${test01v_ip}:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:8e764982d2c46b5af0b8c680b1c71cce860e5081621be2d66cdcdb6bea157ac2 

The complete init output is recorded above; from it you can see the key steps required to install a Kubernetes cluster by hand. In particular:

  • [certs] generates the various certificates
  • [kubeconfig] generates the kubeconfig files
  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
  • [control-plane] creates static pods for the apiserver, controller-manager, and scheduler from the yaml files in /etc/kubernetes/manifests
  • [bootstrap-token] generates the token; record it, since it is needed later when joining nodes with kubeadm join (the token expires after its 24h ttl; a fresh join command can be printed on the master with kubeadm token create --print-join-command)
  • [addons] installs the essential addons CoreDNS and kube-proxy

Configure the environment so the current user can run kubectl:

[root@test01v ~]# mkdir -p $HOME/.kube
[root@test01v ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@test01v ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join the worker node to the cluster

[root@test02v ~]# kubeadm join ${test01v_ip}:6443 --token s29e99.ta7dq43dduxvvj2s \
>         --discovery-token-ca-cert-hash sha256:86879d279c43446cdac2d5787a45d123f3e9319725afa643f2f34a356a32ac88 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check cluster status

[root@test01v ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   

[root@test01v ~]# kubectl get nodes
NAME                          STATUS   ROLES           AGE     VERSION
${test01v_hostname}   Ready    control-plane   4m16s   v1.26.0
${test02v_hostname}   Ready    <none>          52s     v1.26.0

Installing the Network Plugin flannel

1. Download kube-flannel.yml

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

2. Modify the subnet in kube-flannel.yml

(Screenshot omitted: in the net-conf.json section of kube-flannel.yml, set "Network" to match the pod subnet configured in kubeadm.yaml, 10.244.0.0/16.)

3. Deploy

[root@test01v ~]# kubectl apply -f kube-flannel.yml

4. Remove the CNI config shipped with containerd
Run the following on all nodes:

[root@test01v ~]# cd /etc/cni/net.d
[root@test01v ~]# rm -f 10-containerd-net.conflist
[root@test01v ~]# ip link delete cni0
[root@test01v ~]# systemctl  restart containerd
[root@test01v ~]# systemctl restart kubelet
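As a final smoke test (pod name and image tag here are illustrative), deploy a throwaway pod and check that its IP comes from the flannel pod subnet:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netcheck
spec:
  containers:
  - name: busybox
    image: busybox:1.36
    command: ["sleep", "3600"]
```

Apply it with kubectl apply -f netcheck.yaml, then run kubectl get pod netcheck -o wide; the pod's IP should fall inside 10.244.0.0/16. Clean up afterwards with kubectl delete pod netcheck.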