Setting Up a k8s 1.27 Cluster on CentOS 7


Prerequisites

Prepare several cloud hosts that can reach each other over the private network, and install the necessary software (Docker, etc.).


Configure Private Network Connectivity

Several ports need to be opened manually. To allow ICMP (ping) between hosts, the console path is:

Lightweight Cloud Server - Server List - View - Security Protection - Modify Rules - Add Rule - ICMP


Port 6443 must also be opened; otherwise worker nodes will fail to reach the API server when they join.

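A quick reachability check from the worker node (a sketch; 172.16.0.4 is the master's private IP used throughout this article, and telnet may need to be installed first with yum -y install telnet):

# "Connection refused" means the rule is open (nothing listens on 6443 before init);
# a hang/timeout usually means the port is still blocked by the security rules
telnet 172.16.0.4 6443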

Environment Configuration

# Add hostname mappings on every node
cat >> /etc/hosts << EOF
172.16.0.4 master
172.16.0.3 node-1
EOF

# Set this node's hostname (use node-1 on the worker)
hostnamectl set-hostname master && bash

# Disable the swap partition (Kubernetes requires swap to be off)
# swapoff takes effect immediately; the fstab edit below keeps it off after reboot
swapoff --all

# Comment out swap entries so it stays disabled on boot
sed -i -r '/swap/ s/^/#/' /etc/fstab
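A quick check that swap is fully off (the Swap line in the output should read all zeros):

free -h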

# Set grub2's default boot entry to 0
grub2-set-default 0

# Regenerate the grub2 boot config
grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-1160.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1160.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-22276d54e9754e518df481c64128cd63
Found initrd image: /boot/initramfs-0-rescue-22276d54e9754e518df481c64128cd63.img
done

# Tune kernel parameters: enable the bridge packet filter and IP forwarding
cat >> /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf

# Load the bridge netfilter module
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
lsmod | grep br_netfilter

Configure IPVS

# Install ipset and ipvsadm
yum -y install ipset ipvsadm

# Script that loads the IPVS kernel modules on boot
# (on CentOS 7's 3.10 kernel the conntrack module is nf_conntrack_ipv4;
# on kernel 4.19+ it is nf_conntrack)
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make the script executable, run it, and check that the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
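Note that loading these modules does not by itself switch kube-proxy to IPVS; kube-proxy defaults to iptables mode. One way to switch once the cluster is up (a sketch; the edit sets mode: "ipvs" in the ConfigMap):

kubectl -n kube-system edit cm kube-proxy
# Recreate the kube-proxy pods so they pick up the change
kubectl -n kube-system delete pod -l k8s-app=kube-proxy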

Install and Configure containerd

# Load the overlay and br_netfilter modules on boot
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# Add the Aliyun Docker CE YUM repo (it also provides the containerd.io package)
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install containerd
yum install -y containerd.io

# Generate containerd's default config file
mkdir /etc/containerd -p 
containerd config default > /etc/containerd/config.toml

# Edit the default config
vim /etc/containerd/config.toml
------------------------------------------------
# Change SystemdCgroup = false to:
SystemdCgroup = true

# Change sandbox_image = "k8s.gcr.io/pause:3.6" to:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
------------------------------------------------
 
mkdir -p /etc/containerd/certs.d/docker.io/

# Note: containerd reads hosts.toml (not hosts.txt) from this directory
cat << EOF > /etc/containerd/certs.d/docker.io/hosts.toml
[host."http://hub-mirror.c.163.com"]
  capabilities = ["pull","resolve"]
[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull","resolve"]
EOF
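As far as I know, containerd only consults the certs.d directory when config_path is set in config.toml, and the generated default leaves it empty. A sketch of the edit using sed (then restart containerd, done below):

# Point the CRI registry config at the certs.d directory
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml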

# Enable containerd on boot and restart it
systemctl enable containerd
systemctl restart containerd

# Test containerd by pulling an image
ctr image pull docker.io/library/nginx:alpine

# Test the CRI path with the crictl binary
crictl pull docker.io/library/hello-world:latest

To keep the cgroup driver used by the container runtime consistent with the one used by the kubelet (systemd in this setup), it is recommended to modify the following file:

cat <<EOF >  /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

Set kubelet to start on boot, and point crictl at the containerd socket.

systemctl enable kubelet

crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
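The crictl config command persists its setting in /etc/crictl.yaml, so writing the file directly is equivalent (the image-endpoint line is an extra assumption here, so image pulls use the same socket):

cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
EOF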

P.S. Since 1.24, dockershim has been removed and containerd became the default container runtime; if containerd is already installed, the Docker installation below is not required.

Install and Configure Docker (optional)

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache

# yum-utils provides the yum-config-manager program
yum install -y yum-utils

# Use yum-config-manager to add the Aliyun Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce-20.10.6 docker-ce-cli-20.10.6

mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
    "registry-mirrors": ["https://aoewjvel.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Start Docker and enable it on boot
systemctl enable docker --now
systemctl status docker
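To confirm Docker picked up the systemd cgroup driver from daemon.json:

docker info | grep -i 'cgroup driver'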

Setting Up the k8s Cluster

Reference: the official Kubernetes cluster installation guide.

Install kubelet, kubeadm, and kubectl on all machines.

# Create the kubernetes.repo file
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum install -y kubelet-1.27.0 kubeadm-1.27.0 kubectl-1.27.0

systemctl enable kubelet.service --now
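Until kubeadm init or join supplies its configuration, kubelet will keep restarting in a crash loop; that is expected at this point. A quick check that every node got matching versions:

kubeadm version -o short
kubelet --version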


Run kubeadm init on the master

Errors encountered during this step are recorded separately in the note 【搭建k8s集群报错记录】.

When running the init command, a few parameters are recommended. flannel is used as the network plugin here, so set the Pod network range with --pod-network-cidr.

# Set the master's hostname
hostnamectl set-hostname master

# Add master to /etc/hosts, like so
127.0.0.1 localhost master

# From this step on, various errors can appear depending on the environment.
# The most troublesome is image pulling, which the --image-repository parameter works around.
kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/containerd/containerd.sock --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --v=5

...
kubeadm join 172.16.0.4:6443 --token tavwyi.zn9f1q6inmxubm05 \
        --discovery-token-ca-cert-hash sha256:6f12b860738c7595c686f3c01c3da2a1dcd103803c34383a36217ab908446a7f --cri-socket=unix:///var/run/containerd/containerd.sock --v=5
...        
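Before kubectl works on the master, set up the kubeconfig as the init output (elided above) instructs; shown here for root:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config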

Alternatively, the init parameters can be captured in a YAML file.

# Print kubeadm's default init config and save it
kubeadm config print init-defaults > kubeadm-config.yaml
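The generated file contains an InitConfiguration, where localAPIEndpoint.advertiseAddress should be changed from the 1.2.3.4 placeholder to the master's IP (172.16.0.4 here), plus a ClusterConfiguration. A sketch of the ClusterConfiguration fields that mirror the flags used above:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16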

kubeadm init --config kubeadm-config.yaml

To view the token generated at init time, or to generate a new one:

# List tokens
kubeadm token list

# Generate a new token
kubeadm token create
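If the original join command has been lost, kubeadm can also print a fresh one in full:

# Generate a new token and print the complete join command
kubeadm token create --print-join-command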

Install the Network Plugin

flannel is used here; flannel notes are in 【k8s网络组件之flannel】.

# If outbound network access is restricted, fetch the file contents first and apply them manually on the host
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

The CoreDNS pods will only start running once a network plugin is installed.
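To watch them come up (kube-flannel is the namespace created by the manifest above; CoreDNS pods carry the k8s-app=kube-dns label):

kubectl get pods -n kube-flannel
kubectl get pods -n kube-system -l k8s-app=kube-dns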

Worker Node Configuration

Running kubectl apply -f for flannel on the worker node fails with:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

This is because KUBECONFIG has not been configured:

mkdir -p /etc/kubernetes

# Paste in the contents of /etc/kubernetes/admin.conf from the master node
vim /etc/kubernetes/admin.conf

cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=/etc/kubernetes/admin.conf
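The export only lasts for the current shell; to persist it across logins (a sketch, assuming bash):

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile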

kubeadm join

[root@node-1 ~]# kubeadm join 172.16.0.4:6443 --token tavwyi.zn9f1q6inmxubm05 \
>         --discovery-token-ca-cert-hash sha256:6f12b860738c7595c686f3c01c3da2a1dcd103803c34383a36217ab908446a7f --cri-socket=unix:///var/run/containerd/containerd.sock --v=5
I0822 17:37:20.746992   13910 join.go:412] [preflight] found NodeName empty; using OS hostname as NodeName
[preflight] Running pre-flight checks
I0822 17:37:20.747193   13910 preflight.go:93] [preflight] Running general checks
I0822 17:37:20.747278   13910 checks.go:280] validating the existence of file /etc/kubernetes/kubelet.conf
I0822 17:37:20.747322   13910 checks.go:280] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0822 17:37:20.747344   13910 checks.go:104] validating the container runtime
I0822 17:37:20.787889   13910 checks.go:639] validating whether swap is enabled or not
I0822 17:37:20.787985   13910 checks.go:370] validating the presence of executable crictl
I0822 17:37:20.788039   13910 checks.go:370] validating the presence of executable conntrack
I0822 17:37:20.788073   13910 checks.go:370] validating the presence of executable ip
I0822 17:37:20.788100   13910 checks.go:370] validating the presence of executable iptables
I0822 17:37:20.788129   13910 checks.go:370] validating the presence of executable mount
I0822 17:37:20.788160   13910 checks.go:370] validating the presence of executable nsenter
I0822 17:37:20.788187   13910 checks.go:370] validating the presence of executable ebtables
I0822 17:37:20.788213   13910 checks.go:370] validating the presence of executable ethtool
I0822 17:37:20.788238   13910 checks.go:370] validating the presence of executable socat
I0822 17:37:20.788263   13910 checks.go:370] validating the presence of executable tc
I0822 17:37:20.788286   13910 checks.go:370] validating the presence of executable touch
I0822 17:37:20.788322   13910 checks.go:516] running all checks
I0822 17:37:20.799030   13910 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0822 17:37:20.799346   13910 checks.go:605] validating kubelet version
I0822 17:37:20.877610   13910 checks.go:130] validating if the "kubelet" service is enabled and active
I0822 17:37:20.890529   13910 checks.go:203] validating availability of port 10250
I0822 17:37:20.890728   13910 checks.go:280] validating the existence of file /etc/kubernetes/pki/ca.crt
I0822 17:37:20.890753   13910 checks.go:430] validating if the connectivity type is via proxy or direct
I0822 17:37:20.890817   13910 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0822 17:37:20.890866   13910 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0822 17:37:20.890897   13910 join.go:529] [preflight] Discovering cluster-info
I0822 17:37:20.890947   13910 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "172.16.0.4:6443"
I0822 17:37:20.912072   13910 token.go:118] [discovery] Requesting info from "172.16.0.4:6443" again to validate TLS against the pinned public key
I0822 17:37:20.930000   13910 token.go:135] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.0.4:6443"
I0822 17:37:20.930053   13910 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0822 17:37:20.930072   13910 join.go:543] [preflight] Fetching init configuration
I0822 17:37:20.930083   13910 join.go:589] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0822 17:37:20.946152   13910 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I0822 17:37:20.951509   13910 interface.go:432] Looking for default routes with IPv4 addresses
I0822 17:37:20.951537   13910 interface.go:437] Default route transits interface "eth0"
I0822 17:37:20.951674   13910 interface.go:209] Interface eth0 is up
I0822 17:37:20.951720   13910 interface.go:257] Interface "eth0" has 2 addresses :[172.16.0.3/16 fe80::f816:3eff:fe5a:89a8/64].
I0822 17:37:20.951759   13910 interface.go:224] Checking addr  172.16.0.3/16.
I0822 17:37:20.951776   13910 interface.go:231] IP found 172.16.0.3
I0822 17:37:20.951795   13910 interface.go:263] Found valid IPv4 address 172.16.0.3 for interface "eth0".
I0822 17:37:20.951803   13910 interface.go:443] Found active IP 172.16.0.3 
I0822 17:37:20.956023   13910 preflight.go:104] [preflight] Running configuration dependant checks
I0822 17:37:20.956050   13910 controlplaneprepare.go:225] [download-certs] Skipping certs download
I0822 17:37:20.956063   13910 kubelet.go:121] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0822 17:37:20.956703   13910 kubelet.go:136] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0822 17:37:20.957785   13910 kubelet.go:157] [kubelet-start] Checking for an existing Node in the cluster with name "node-1" and status "Ready"
I0822 17:37:20.965231   13910 kubelet.go:172] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0822 17:37:24.086480   13910 cert_rotation.go:137] Starting client certificate rotation controller
I0822 17:37:24.087758   13910 kubelet.go:220] [kubelet-start] preserving the crisocket information for the node
I0822 17:37:24.087808   13910 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "node-1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the nodes

# kgn is a shell alias for kubectl get nodes
[root@master ~]# kgn
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   161m    v1.27.0
node-1   Ready    <none>          6m39s   v1.27.0
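The worker's ROLES column shows <none> by default. Optionally it can be labeled so the column reads "worker" (purely cosmetic; the role name here is arbitrary):

kubectl label node node-1 node-role.kubernetes.io/worker=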