k8s 1.28(cluster)


All initialization files are kept under /opt/k8s-init; create the directory first: mkdir /opt/k8s-init

1. Environment

Node name   Node IP       OS            Container runtime   Domain name
master1     10.9.0.200    ubuntu20.04   containerd          -
master2     10.9.0.201    ubuntu20.04   containerd          -
master3     10.9.0.202    ubuntu20.04   containerd          -
vip         10.9.0.210    ubuntu20.04   containerd          k8s.cloudnative.lucs.top
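The node names above are assumed to be each machine's hostname; if they are not set yet, they can be set with hostnamectl (shown for master1, repeat with the matching name on the other nodes):

hostnamectl set-hostname master1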

2. Basic configuration (all nodes, unless otherwise specified)

1. Configure hostname mappings
root@master1:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 master1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.9.0.210 k8s.cloudnative.lucs.top
10.9.0.200 master1
10.9.0.201 master2
10.9.0.202 master3
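master2 and master3 need the same mappings. One simple way to get them there, assuming root SSH access between the nodes, is to copy the file over:

for node in master2 master3; do scp /etc/hosts ${node}:/etc/hosts; done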
2. Disable the firewall
sudo ufw status  --->  "inactive" means the firewall is already disabled

# disable the firewall
sudo ufw disable
3. Disable SELinux
apt install policycoreutils -y
sed -i 's#SELINUX=permissive#SELINUX=disabled#g' /etc/selinux/config
sestatus -v
SELinux status:                 disabled
4. Configure time synchronization

Time synchronization is critical! Really critical! The certificates generated later all depend on accurate time, so make sure it is configured and check it repeatedly!

All nodes:

apt install -y chrony
systemctl enable --now chrony

master1:

root@master1:~# vim /etc/chrony/chrony.conf

# - 2 sources from 2.ubuntu.pool.ntp.org which is ipv6 enabled as well
# - 1 source from [01].ubuntu.pool.ntp.org each (ipv4 only atm)
# This means by default, up to 6 dual-stack and up to 2 additional IPv4-only
# sources will be used.
# At the same time it retains some protection against one of the entries being
# down (compare to just using one of the lines). See (LP: #1754358) for the
# discussion.
#
# About using servers from the NTP Pool Project in general see (LP: #104525).
# Approved by Ubuntu Technical Board on 2011-02-08.
# See http://www.pool.ntp.org/join.html for more information.
#pool ntp.ubuntu.com        iburst maxsources 4  ## comment out these time pools and use Aliyun's time server instead
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2

server ntp1.aliyun.com iburst   ## add Aliyun's NTP server

allow 10.9.0.0/16      ## allow hosts in the 10.9.0.0/16 network to sync time from this node

local stratum 10      ## if external time sources become unavailable, serve the local clock as the time source

# This directive specify the location of the file containing ID/key pairs for
# NTP authentication.
keyfile /etc/chrony/chrony.keys

# This directive specify the file into which chronyd will store the rate
# information.
driftfile /var/lib/chrony/chrony.drift

# Uncomment the following line to turn logging on.
#log tracking measurements statistics

# Log files location.
logdir /var/log/chrony

# Stop bad estimates upsetting machine clock.
maxupdateskew 100.0

# This directive enables kernel synchronisation (every 11 minutes) of the
# real-time clock. Note that it can’t be used along with the 'rtcfile' directive.
rtcsync

# Step the system clock instead of slewing it if the adjustment is larger than
# one second, but only in the first three clock updates.
makestep 1 3

Restart chrony:

root@master1:~# systemctl restart chrony

master2 and master3:

root@master2:~# vim /etc/chrony/chrony.conf

## comment out the time pools and use master1 as the upstream time server
server 10.9.0.200 iburst

root@master3:~# vim /etc/chrony/chrony.conf

## comment out the time pools and use master1 as the upstream time server
server 10.9.0.200 iburst

Restart chrony:

root@master2:~# systemctl restart chrony
root@master3:~# systemctl restart chrony
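To confirm that master2 and master3 are actually syncing from master1, check the sources on either of them (chronyc ships with the chrony package; the ^* marker indicates the currently selected source):

chronyc sources -v
chronyc tracking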

Detailed notes on the chrony configuration directives:

1. #pool ntp.ubuntu.com iburst maxsources 4
These lines originally point at Ubuntu's default NTP pools (ntp.ubuntu.com and ubuntu.pool.ntp.org). Each line names one pool, and maxsources caps how many time sources may be taken from it.
They are commented out because the Aliyun time server is used instead.

2. server ntp1.aliyun.com iburst
This line makes chrony synchronize against Aliyun's ntp1.aliyun.com.
The iburst option speeds up the initial synchronization: if the server cannot be reached, chrony sends a quick burst of requests so the clock converges faster.

3. allow 10.9.0.0/16
Allows devices in the 10.9.0.0/16 network to query this chrony server for time.
This is the usual way to provide time service inside a LAN.

4. local stratum 10
Assigns the local clock a stratum of 10. When no external time source is reachable, chrony serves the local system clock and advertises it as stratum 10. A higher stratum means lower accuracy; the point is that the cluster still has a usable time reference even if all external sources are down.

5. keyfile /etc/chrony/chrony.keys
Specifies the file containing the ID/key pairs used for NTP authentication between servers and clients.

6. driftfile /var/lib/chrony/chrony.drift
Specifies the drift file, which records the frequency offset of the local clock so chrony can calibrate more accurately on the next start.

7. #log tracking measurements statistics
This line is commented out. It enables logging of tracking, measurement, and statistics information; uncomment it to turn that logging on.

8. logdir /var/log/chrony
Specifies the directory where chrony writes its log files.

9. maxupdateskew 100.0
Caps the maximum estimated skew (in ppm) accepted for a clock update, so that bad estimates do not upset the system clock.

10. rtcsync
Enables kernel synchronization of the hardware real-time clock (RTC) every 11 minutes. Note that it cannot be used together with the rtcfile directive.

11. makestep 1 3
If an adjustment is larger than 1 second, chrony steps the system clock instead of slewing it, but only during the first 3 clock updates. This is useful for getting the clock in sync quickly right after boot.
5. Configure kernel parameters

Install the bridge utilities

apt -y install bridge-utils

Load the br_netfilter kernel module, which provides bridge network filtering.

modprobe br_netfilter

Check that the module is loaded

lsmod | grep br_netfilter

Modify the kernel parameters

## open the config file and append the settings below to the end
vim /etc/sysctl.conf

net.ipv4.ip_forward=1  # enable IPv4 forwarding
net.ipv6.conf.all.disable_ipv6=1  # disable IPv6
vm.swappiness=0  # avoid using swap; it is only used when the system is about to OOM
vm.overcommit_memory=1  # do not check whether enough physical memory is available
vm.panic_on_oom=0  # do not panic on OOM
# enable bridge netfilter
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1

Adjust the rp_filter kernel parameters

vim /etc/sysctl.d/10-network-security.conf
# Turn on Source Address Verification in all interfaces to
# prevent some spoofing attacks.
net.ipv4.conf.default.rp_filter=2
net.ipv4.conf.all.rp_filter=1

Load and apply the kernel parameters

sysctl --system
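Note that modprobe br_netfilter by itself does not persist across reboots. A minimal way to load it automatically at boot, assuming systemd's modules-load mechanism (the file name here is just an example):

cat > /etc/modules-load.d/k8s.conf <<'EOF'
br_netfilter
EOF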
6. Configure IPVS

Install the IPVS tools

apt -y install ipvsadm ipset

Create the module-loading script

sudo cat > /opt/k8s-init/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_lc
modprobe -- ip_vs_lblc
modprobe -- ip_vs_lblcr
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- ip_vs_dh
modprobe -- ip_vs_fo
modprobe -- ip_vs_nq
modprobe -- ip_vs_sed
modprobe -- ip_vs_ftp
modprobe -- ip_vs_sh
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
modprobe -- xt_set
modprobe -- br_netfilter
modprobe -- nf_conntrack
EOF

Load the IPVS modules

sudo chmod 755 /opt/k8s-init/ipvs.modules && sudo bash /opt/k8s-init/ipvs.modules

## make sure the modules are loaded again after a reboot
sudo cp /opt/k8s-init/ipvs.modules /etc/profile.d/ipvs.modules.sh

Check that the modules are loaded

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_dh               16384  0
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_lc               16384  0
ip_vs                 180224  22 ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_nat                 49152  4 xt_nat,nft_chain_nat,xt_MASQUERADE,ip_vs_ftp
nf_conntrack_netlink    53248  0
nf_conntrack          180224  6 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
nfnetlink              20480  5 nft_compat,nf_conntrack_netlink,nf_tables,ip_set
libcrc32c              16384  6 nf_conntrack,nf_nat,btrfs,nf_tables,raid456,ip_vs

3. Install containerd (all nodes)

  1. Install containerd (a minimal install sketch follows after this section)

  2. Update the configuration


vim /etc/containerd/config.toml

# change the sandbox image to the Aliyun mirror
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
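The install step itself is not shown above; a minimal sketch, assuming the containerd package from the Ubuntu repositories and the default configuration layout:

apt install -y containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml   # generate a full default config to edit
systemctl enable --now containerd

After editing config.toml (the sandbox_image change above, and typically also SystemdCgroup = true under the runc runtime options so containerd and kubelet agree on the systemd cgroup driver), restart containerd so the changes take effect:

systemctl restart containerd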

4. Install kubeadm, kubectl, and kubelet (all nodes)

  1. Import the signing key for the Kubernetes repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  2. Create the repository source file (using the Tsinghua mirror)
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.tuna.tsinghua.edu.cn/kubernetes/core:/stable:/v1.28/deb/ /" > /etc/apt/sources.list.d/kubernetes.list
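On a fresh Ubuntu 20.04 machine the keyring directory may not exist yet, and the package index has to be refreshed before installing; both are standard apt steps:

mkdir -p /etc/apt/keyrings    # run before the curl above if the directory is missing
apt update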
  3. Install the tools
# install the latest version available in the repo
apt install -y kubelet kubeadm kubectl

# or install a specific version
apt-cache madison kubectl | more
export KUBERNETES_VERSION=1.28.1-00
apt-get install -y kubelet=${KUBERNETES_VERSION} kubeadm=${KUBERNETES_VERSION} kubectl=${KUBERNETES_VERSION}

# hold the packages so they are not upgraded
apt-mark hold kubelet kubeadm kubectl
  4. Enable and start kubelet
systemctl enable --now kubelet

5. Deploy kube-vip

Official documentation | kube-vip

  1. Generate the manifest (master1)
root@master1:/opt/k8s-init# export VIP=10.9.0.210
root@master1:/opt/k8s-init# export INTERFACE=ens160
root@master1:/opt/k8s-init# export KVVERSION=v0.8.0

root@master1:/opt/k8s-init# alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"

# generate the manifest
root@master1:/opt/k8s-init# kube-vip manifest pod \
>     --interface $INTERFACE \
>     --address $VIP \
>     --controlplane \
>     --services \
>     --arp \
>     --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
  2. Review the generated ARP manifest (master1)
root@master1:/opt/k8s-init# cat /etc/kubernetes/manifests/kube-vip.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: ens160
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: svc_enable
      value: "true"
    - name: svc_leasename
      value: plndr-svcs-lock
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 10.9.0.210
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
  3. Copy the manifest to master2 and master3 (master1)
root@master1:/opt/k8s-init# scp /etc/kubernetes/manifests/kube-vip.yaml master2:/etc/kubernetes/manifests/kube-vip.yaml
root@master1:/opt/k8s-init# scp /etc/kubernetes/manifests/kube-vip.yaml master3:/etc/kubernetes/manifests/kube-vip.yaml
  4. Pull the image referenced in the manifest on master2 and master3
root@master2:/opt/k8s-init# nerdctl pull ghcr.io/kube-vip/kube-vip:v0.8.0
root@master3:/opt/k8s-init# nerdctl pull ghcr.io/kube-vip/kube-vip:v0.8.0
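If nerdctl is not installed on those nodes, pulling with ctr into containerd's k8s.io namespace (the namespace kubelet uses) should work just as well:

ctr -n k8s.io images pull ghcr.io/kube-vip/kube-vip:v0.8.0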

6. Initialize the cluster (master1)

1. Generate the configuration file
root@master1:/opt/k8s-init# kubeadm config print init-defaults --component-configs KubeProxyConfiguration > kubeadm.yaml
2. Edit the configuration file
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.9.0.200   # change to this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1 # change to this node's hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:  # issue the apiserver certificate for the domain and IPs below
  - k8s.cloudnative.lucs.top
  - master1
  - master2
  - master3
  - 10.9.0.210
  - 10.9.0.200
  - 10.9.0.201
  - 10.9.0.202
controlPlaneEndpoint: k8s.cloudnative.lucs.top:6443 # shared endpoint so additional control-plane nodes can join
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # change the image repository
kind: ClusterConfiguration
kubernetesVersion: 1.28.1 # change to your version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 # set the pod network CIDR
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: ""
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocal:
  bridgeInterface: ""
  interfaceNamePrefix: ""
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  localhostNodePorts: null
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
metricsBindAddress: ""
mode: "ipvs"  # change the proxy mode to ipvs
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
winkernel:
  enableDSR: false
  forwardHealthCheckVip: false
  networkName: ""
  rootHnsEndpointName: ""
  sourceVip: ""
3. Initialize the node
root@master1:/opt/k8s-init# kubeadm init --upload-certs --config kubeadm.yaml

The output is as follows:

[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.cloudnative.lucs.top kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3] and IPs [10.96.0.1 10.9.0.200 10.9.0.210 10.9.0.201 10.9.0.202]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [10.9.0.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [10.9.0.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.505639 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965 \
	--control-plane --certificate-key c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965

Run:

root@master1:/opt/k8s-init# mkdir -p $HOME/.kube
root@master1:/opt/k8s-init# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master1:/opt/k8s-init# sudo chown $(id -u):$(id -g) $HOME/.kube/config
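The API server is now reachable; the node will report NotReady until the CNI is installed in step 9, but you can already check it:

root@master1:/opt/k8s-init# kubectl get nodes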

Additional control-plane nodes can join the cluster with:

  kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965 \
	--control-plane --certificate-key c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0

Worker nodes can join the cluster with:

kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965
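The bootstrap token and the uploaded certificate key both expire (24 hours and 2 hours respectively, per the configuration and the note above). If they have expired before the other nodes join, fresh values can be generated with the standard kubeadm commands:

kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs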

7. Join the control plane

master2

root@master2:~# kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965 \
> --control-plane --certificate-key c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0

The output is as follows:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.cloudnative.lucs.top kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3] and IPs [10.96.0.1 10.9.0.201 10.9.0.210 10.9.0.200 10.9.0.202]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [10.9.0.201 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [10.9.0.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Run:

root@master2:~# mkdir -p $HOME/.kube
root@master2:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master2:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

master3:

root@master3:~# kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965 \
> --control-plane --certificate-key c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0

The output is as follows:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.cloudnative.lucs.top kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3] and IPs [10.96.0.1 10.9.0.202 10.9.0.210 10.9.0.200 10.9.0.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master3] and IPs [10.9.0.202 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master3] and IPs [10.9.0.202 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
{"level":"warn","ts":"2024-09-05T02:45:15.10704+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:15.216045+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:15.380315+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:15.616802+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:15.958678+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:16.504533+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:17.303892+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:18.530751+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master3 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Run:

root@master3:~# mkdir -p $HOME/.kube
root@master3:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master3:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

8. Remove the control-plane taint

Remove the NoSchedule taint from the control-plane nodes so workloads can be scheduled on them (this cluster has no dedicated worker nodes). Run this on any one node:

root@master1:/opt/k8s-init# kubectl taint nodes --all node-role.kubernetes.io/control-plane-
node/master1 untainted
node/master2 untainted
node/master3 untainted
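A quick check that the taint is really gone on every node:

root@master1:/opt/k8s-init# kubectl describe nodes | grep Taints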

9. Install the CNI (Calico) (master1)

Official documentation | calico

The CNI plugins used by containerd must be installed under /opt/cni/bin, as the Calico docs note:

CNI plug-in enabled

Calico must be installed as a CNI plugin in the container runtime.

This installation must use the Kubernetes default CNI configuration directory (/etc/cni/net.d) and binary directory (/opt/cni/bin).
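The standard CNI plugin binaries are normally placed in /opt/cni/bin by the kubernetes-cni package that comes in as a dependency of kubelet; a quick check:

ls /opt/cni/bin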

1. Download the Calico manifests
# this file contains Calico's CRDs (the tigera-operator manifest)
root@master1:/opt/k8s-init# curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/tigera-operator.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1724k  100 1724k    0     0  48087      0  0:00:36  0:00:36 --:--:-- 23252

# this file contains the custom resources (CRs) built on those CRDs
root@master1:/opt/k8s-init# curl -OL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/custom-resources.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   777  100   777    0     0    219      0  0:00:03  0:00:03 --:--:--   219
2. Apply the CRD manifest (be sure to use kubectl create; the file is too large for kubectl apply's last-applied-configuration annotation)
root@master1:/opt/k8s-init# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

This creates a large number of CRDs, a ServiceAccount plus RBAC objects to grant permissions, and a Deployment that acts as the operator controller.

3. Apply the CRs

Edit the CR file:

# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configure the Calico network
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

    # Configure the IP address autodetection method
    nodeAddressAutodetectionV4:
      interface: ens160  # select the interface by name

---

# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Create the resources:

root@master1:/opt/k8s-init# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
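Before moving on, you can watch the Calico components come up (the calico-apiserver pods are usually the last to appear):

watch kubectl get pods -n calico-system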

10. All services up and running (a perfect finish 😍)

root@master1:/opt/k8s-init# kubectl get pods -A -owide
NAMESPACE          NAME                                       READY   STATUS    RESTARTS            AGE         IP              NODE      NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-647c596749-9l22f          1/1     Running   0                   4m42s       10.244.137.66   master1   <none>           <none>
calico-apiserver   calico-apiserver-647c596749-fnclw          1/1     Running   0                   4m42s       10.244.136.2    master3   <none>           <none>
calico-system      calico-kube-controllers-6bcd4db96d-g2k9p   1/1     Running   0                   5m57s       10.244.180.1    master2   <none>           <none>
calico-system      calico-node-l5lzs                          1/1     Running   0                   5m58s       10.9.0.201      master2   <none>           <none>
calico-system      calico-node-qxbrk                          1/1     Running   0                   5m58s       10.9.0.200      master1   <none>           <none>
calico-system      calico-node-wnc2c                          1/1     Running   0                   5m58s       10.9.0.202      master3   <none>           <none>
calico-system      calico-typha-cbb7d497f-bkwpp               1/1     Running   0                   5m58s       10.9.0.202      master3   <none>           <none>
calico-system      calico-typha-cbb7d497f-hbv9m               1/1     Running   0                   5m54s       10.9.0.200      master1   <none>           <none>
calico-system      csi-node-driver-57fb5                      2/2     Running   0                   5m57s       10.244.137.65   master1   <none>           <none>
calico-system      csi-node-driver-5f9jf                      2/2     Running   0                   5m57s       10.244.180.2    master2   <none>           <none>
calico-system      csi-node-driver-fzfdj                      2/2     Running   0                   5m57s       10.244.136.1    master3   <none>           <none>
kube-system        coredns-66f779496c-d8b2f                   1/1     Running   0                   4h48m       10.4.0.5        master1   <none>           <none>
kube-system        coredns-66f779496c-xrt24                   1/1     Running   0                   4h48m       10.4.0.4        master1   <none>           <none>
kube-system        etcd-master1                               1/1     Running   2                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        etcd-master2                               1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        etcd-master3                               1/1     Running   0                   <invalid>   10.9.0.202      master3   <none>           <none>
kube-system        kube-apiserver-master1                     1/1     Running   0                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        kube-apiserver-master2                     1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-apiserver-master3                     1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
kube-system        kube-controller-manager-master1            1/1     Running   0                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        kube-controller-manager-master2            1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-controller-manager-master3            1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
kube-system        kube-proxy-rz59w                           1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-proxy-t8f2z                           1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
kube-system        kube-proxy-wbpmc                           1/1     Running   0                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        kube-scheduler-master1                     1/1     Running   0                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        kube-scheduler-master2                     1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-scheduler-master3                     1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
kube-system        kube-vip-master1                           1/1     Running   0                   4h37m       10.9.0.200      master1   <none>           <none>
kube-system        kube-vip-master2                           1/1     Running   1 (<invalid> ago)   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-vip-master3                           1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
tigera-operator    tigera-operator-5d56685c77-5lgj4           1/1     Running   0                   6m51s       10.9.0.201      master2   <none>           <none>

Let's test it:

Deploy an nginx service

kubectl apply -f nginx.yaml

The nginx.yaml manifest is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxapp
spec:
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
      - name: nginxapp
        image: registry.cn-hangzhou.aliyuncs.com/lucs/nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxapp
spec:
  selector:
    app: nginxapp
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
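After the Service is created, look up the NodePort that Kubernetes assigned and access it through the VIP (the port differs per cluster; take it from the PORT(S) column):

kubectl get svc nginxapp
curl http://10.9.0.210:<NodePort>   # replace <NodePort> with the port shown above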

Accessing the service through our VIP succeeds.

image

Now test kube-vip's LoadBalancer feature; for the detailed setup see the docs: configuration | lb

Modify nginx.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxapp
spec:
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
      - name: nginxapp
        image: registry.cn-hangzhou.aliyuncs.com/lucs/nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxapp
spec:
  selector:
    app: nginxapp
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  loadBalancerIP: 10.9.0.233

Create it:

kubectl apply -f nginx.yaml

kubectl get po,svc
NAME                            READY   STATUS    RESTARTS   AGE
pod/nginxapp-67d9758fbf-ggv9h   1/1     Running   0          10s

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        14h
service/nginxapp     LoadBalancer   10.104.121.11   10.9.0.233    80:30166/TCP   10s

Access 10.9.0.233:
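For example, from any machine in the 10.9.0.0/16 network:

curl -I http://10.9.0.233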

image