Deploying a Kubernetes cluster (one master, two workers) with kubeadm


0. Preliminaries

Using multipass is a quicker way to create the three Ubuntu VMs:

multipass launch -n k8s-master -c 2 -m 4G -d 16G
multipass launch -n k8s-node01 -c 2 -m 4G -d 16G
multipass launch -n k8s-node02 -c 2 -m 4G -d 16G


1. Create three Ubuntu Server 20.04 VMs in VirtualBox, with the network set to bridged mode

Host         IP
k8s-master   192.168.1.248
k8s-node1    192.168.1.106
k8s-node2    192.168.1.251
  • Set a static IP on Ubuntu 20.04:
## Configure the master; the two nodes are configured similarly
sudo vim /etc/netplan/00-installer-config.yaml

# This is the network config written by 'subiquity'
network:
  ethernets:
    enp0s3:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.1.248/24]  ## change to match your actual network
      gateway4: 192.168.1.1
      nameservers:
          addresses: [8.8.8.8]
  version: 2

sudo netplan apply
  • Add entries to /etc/hosts (on all three machines); they take effect on save
192.168.1.248 k8s-master
192.168.1.106 k8s-node1
192.168.1.251 k8s-node2
  • Disable swap
sudo swapoff -a

# Check swap status; the Swap row should show 0
free -m
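Note that `swapoff -a` only disables swap until the next reboot. To keep it off permanently, comment out the swap entries in /etc/fstab. A sketch, demonstrated on a sample file so nothing on the host is touched:

```shell
# Demonstrated on a sample file; on the real host run the same sed
# against /etc/fstab:  sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
printf 'UUID=abcd / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > /tmp/fstab.sample
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample   # comment out any swap line
cat /tmp/fstab.sample
```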

2. Install Docker

docs.docker.com/engine/inst…
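The linked Docker docs also offer a convenience script; a minimal sketch of that route (run on all three machines), assuming the script's defaults are acceptable:

```shell
# Install Docker via the convenience script from the official docs,
# then start it on boot
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable --now docker
```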

3. Install kubelet / kubeadm / kubectl

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF 
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

sudo apt update
sudo apt install -y kubelet kubeadm kubectl

sudo systemctl enable kubelet && sudo systemctl start kubelet
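Optionally, pin the three packages so a routine `apt upgrade` does not move the cluster to a new version unexpectedly:

```shell
# Hold the versions so apt upgrades don't break the cluster,
# and confirm what got installed
sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version -o short
```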

4. Set up the cluster - master

sudo kubeadm init --pod-network-cidr 172.16.0.0/16 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

Possible problems:

  • detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
sudo vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

sudo systemctl restart docker
  • the number of available CPUs 1 is less than the required 2
In VirtualBox, set the VM's CPU count to 2
  • Error response from daemon: manifest for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 not found:
## Manually pull the coredns image and retag it
sudo docker pull coredns/coredns:1.8.0

sudo docker tag coredns/coredns:1.8.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

Output like the following indicates the master node initialized successfully:

## After init completes, it prints instructions:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
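With the kubeconfig in place, a quick sanity check on the master:

```shell
# The master should be listed (NotReady is expected until a network
# plugin is installed); the control-plane pods should be visible too
kubectl get nodes
kubectl get pods -n kube-system
```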

5. Set up the cluster - nodes

## Join the two nodes to the cluster
sudo kubeadm join 192.168.1.248:6443 --token... (from the output at the end of kubeadm init on the master)
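The bootstrap token printed by kubeadm init expires after 24 hours. If it has expired, generate a fresh join command on the master:

```shell
# Prints a complete `kubeadm join ...` line with a new token
sudo kubeadm token create --print-join-command
```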

At this point the nodes are NotReady, because no network plugin has been installed yet:

## Install Calico
wget https://docs.projectcalico.org/manifests/calico.yaml

vim calico.yaml
## Change this value to the CIDR passed to kubeadm init
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
  
kubectl apply -f calico.yaml
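After applying the manifest, watch the calico and coredns pods come up; the nodes should switch to Ready once they are Running:

```shell
# Watch until the calico-node and coredns pods reach Running, then Ctrl-C
kubectl get pods -n kube-system -w
kubectl get nodes
```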

Possible problems:

  • kubectl get pod showed kube-system coredns-57d4cbf879-22lwd 0/1 ErrImagePull, and kubectl describe pod showed the image could not be pulled. The image only exists locally on the master (manually tagged earlier), and the Deployment's pull policy is imagePullPolicy: IfNotPresent, but the two coredns pods were scheduled onto the worker nodes, which do not have the image. Fix: repeat the manual pull-and-tag steps above on both worker nodes.
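The fix above, spelled out as commands to run on each worker node:

```shell
# Run on k8s-node1 and k8s-node2: pull coredns and retag it to the name
# the Deployment expects, so IfNotPresent finds it locally
sudo docker pull coredns/coredns:1.8.0
sudo docker tag coredns/coredns:1.8.0 \
    registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
```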


6. Component checks after the cluster is deployed

  • dial tcp 127.0.0.1:10251: connect: connection refused


cd /etc/kubernetes/manifests
sudo vim kube-scheduler.yaml            ## comment out --port=0
sudo vim kube-controller-manager.yaml   ## comment out --port=0
sudo systemctl restart kubelet
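Once the static pods restart, the ports should answer again. A quick check (10251 is the scheduler port from the error above; 10252 is, on this era of Kubernetes, the controller-manager's equivalent):

```shell
# Both health endpoints should return "ok" once --port=0 is commented out
curl -s http://127.0.0.1:10251/healthz   # kube-scheduler
curl -s http://127.0.0.1:10252/healthz   # kube-controller-manager
```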

This resolved the issue.

7. A node's VM died: remove the node, then rejoin it to the cluster

ubuntu@k8s-node2:/etc/kubernetes$ sudo kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0911 12:25:01.398018   53191 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ubuntu@k8s-node2:/etc/kubernetes$
ubuntu@k8s-node2:/etc/kubernetes$
ubuntu@k8s-node2:/etc/kubernetes$
ubuntu@k8s-node2:/etc/kubernetes$ sudo kubeadm join 192.168.1.248:6443 --token buz01v.qcrobh663fyjlssy --discovery-token-ca-cert-hash sha256:6190832c8cdc233a9b01708d658cffb1c303006c36dec99bd82bfe90005e4b86
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


8. If kubectl stops working after a reboot

ubuntu@k8s-master:~$ kubectl get pod -A
The connection to the server 192.168.1.248:6443 was refused - did you specify the right host or port?

Check disk usage. In my case the loop devices were all at 100%, and that turned out to be the cause of this error.


Manual cleanup:

sudo apt autoremove --purge snapd
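One caveat: snap's loop devices are read-only squashfs images and always report 100% use, so that alone is not proof of a full disk. To see the filesystems that can actually fill up:

```shell
# Exclude the always-100% snap loop devices to see real usage
df -h | grep -v '/dev/loop'
```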