Installing Kubernetes on Ubuntu 20.04 (with containerd as the container runtime)


Three hosts

node1   172.16.195.136  (control-plane node)
node2   172.16.195.137
node3   172.16.195.138

Install software (on all three hosts)

# Install containerd; this guide uses containerd to manage containers
# Step 1: install prerequisite system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: add the repository's GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the apt source
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update package lists and install containerd
sudo apt-get -y update
sudo apt-get -y install containerd
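On Ubuntu 20.04, lsb_release -cs prints the release codename "focal", so the add-apt-repository line above resolves to a fixed one-line sources entry. A small illustration of that expansion (the codename is hard-coded here, and the output goes to a throwaway file instead of /etc/apt):

```shell
# Illustration only: on Ubuntu 20.04, lsb_release -cs prints "focal",
# so the repository line added above expands to this fixed entry
codename=focal
printf 'deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu %s stable\n' "$codename" \
  | tee /tmp/docker-repo-line.demo
```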

Host initialization (apply on all three hosts)

# Cluster host names
echo "172.16.195.136  node1" >> /etc/hosts
echo "172.16.195.137  node2" >> /etc/hosts
echo "172.16.195.138  node3" >> /etc/hosts
# Disable swap (kubelet requires it off)
sed -i 's/^\(.*swap.*\)$/#\1/g' /etc/fstab
swapoff -a
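The sed expression above comments out every fstab line that mentions swap, so swap stays disabled across reboots. A self-contained sketch of the same rule against a throwaway file (the UUID and paths here are made up):

```shell
# Demo of the swap-commenting sed rule on a throwaway fstab copy
printf '%s\n' \
  'UUID=0000-demo / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' > /tmp/fstab.demo
sed -i 's/^\(.*swap.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line is now prefixed with '#'
```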
# Kernel modules required for container networking
tee /etc/modules-load.d/containerd.conf  << EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

# Enable bridged-traffic filtering and packet forwarding
tee /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
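On the real hosts, the values can be read back after sysctl --system with, for example, sysctl -n net.ipv4.ip_forward. A portable sketch that just parses a copy of the fragment and confirms every key is set to 1 (it runs anywhere, not only on the cluster hosts):

```shell
# Portable sketch: verify every key in the sysctl fragment is set to 1
cat > /tmp/kubernetes.conf.demo <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
awk -F' = ' '$2 != "1" {bad=1} END {exit bad}' /tmp/kubernetes.conf.demo \
  && echo "all three keys enabled" | tee /tmp/sysctl-check.demo
```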
# Kubernetes apt repository (Aliyun mirror)
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
systemctl enable kubelet --now

Generate containerd's default configuration file (on all three hosts)

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause:3.8#registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9#g' /etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# Restart containerd to apply the changes
systemctl restart containerd
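The two sed edits above swap the sandbox (pause) image for an Aliyun mirror reachable from China and switch containerd to the systemd cgroup driver, matching the kubelet's default. A demo of the same substitutions applied to a minimal excerpt of the default config:

```shell
# Demo: apply the same two sed edits to a minimal config.toml excerpt
cat > /tmp/config.toml.demo <<'EOF'
sandbox_image = "registry.k8s.io/pause:3.8"
SystemdCgroup = false
EOF
sed -i 's#registry.k8s.io/pause:3.8#registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9#g' /tmp/config.toml.demo
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /tmp/config.toml.demo
cat /tmp/config.toml.demo
```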

Point crictl at containerd's socket by editing /etc/crictl.yaml; create the file if it does not exist (on all three hosts)

root@node1:~# cat /etc/crictl.yaml 
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
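One way to create the file with exactly the contents shown above (this assumes containerd is using its default socket path):

```shell
# Create /etc/crictl.yaml pointing crictl at containerd's default socket
tee /etc/crictl.yaml <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
```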

Cluster initialization (run on node1, the control plane)

kubeadm init --kubernetes-version=1.26.3 --apiserver-advertise-address=172.16.195.136 --apiserver-bind-port=6443 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///run/containerd/containerd.sock

Note: apiserver-advertise-address must be the IP of a network interface on this host; otherwise etcd fails to bind the address and cannot start, which in turn keeps the apiserver from starting.
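To see which addresses the host actually owns before choosing the value for apiserver-advertise-address, list them first (iproute2 is assumed installed, as it is on stock Ubuntu):

```shell
# List this host's IPv4 addresses; pass one of these to --apiserver-advertise-address
ip -4 -o addr show | awk '{print $2, $4}' | tee /tmp/host-addrs.demo
```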

On success, kubeadm prints output like the following:

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.195.136:6443 --token mv274d.pnyhrlx36y6dle1b \
        --discovery-token-ca-cert-hash sha256:59b2b7da05ffe85d0686595a1a3e388f1bd403e045e85712c5884faf6cdf0ea7 
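The join token above is only valid for 24 hours by default; if it has expired by the time a worker joins, print a fresh join command on the control plane with kubeadm token create --print-join-command. Bootstrap tokens follow a fixed [a-z0-9]{6}.[a-z0-9]{16} format, which is easy to sanity-check:

```shell
# Sanity-check a bootstrap token's format; the token is taken from the
# kubeadm output above. An expired token must be recreated with:
#   kubeadm token create --print-join-command
echo 'mv274d.pnyhrlx36y6dle1b' \
  | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' \
  && echo "token format OK" | tee /tmp/token-check.demo
```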

Following the printed instructions, run (on node1, the control plane):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy flannel (run on node1)

Visit https://kubernetes.io/docs/concepts/cluster-administration/addons/ to find the flannel add-on.

Deploy flannel with kubectl:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

After applying the manifest, check whether flannel started successfully (with recent flannel manifests the pods run in the kube-flannel namespace):

kubectl get pods -n kube-flannel

Once flannel is running, CoreDNS starts running as well:

kubectl get pods -n kube-system

Check the node status; the nodes now show Ready:

kubectl get nodes

Worker node configuration (run on node2 and node3)

Install kubeadm and kubelet as in the earlier steps.
Then run the join command from the kubeadm init output:
kubeadm join 172.16.195.136:6443 --token mv274d.pnyhrlx36y6dle1b \
        --discovery-token-ca-cert-hash sha256:59b2b7da05ffe85d0686595a1a3e388f1bd403e045e85712c5884faf6cdf0ea7 

Troubleshooting

  • During initialization a multipath error may appear; inspect it with: journalctl -xeu kubelet | grep Failed

Fix: edit /etc/multipath.conf and add:

blacklist {
    devnode "^sd([a-z])"
}

Restart the service: systemctl restart multipathd.service
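The devnode pattern above matches any device name beginning with "sd" plus a lowercase letter (sda, sdb1, and so on), which keeps multipathd from claiming the local disks during kubeadm init. A quick demo of what the regex does and does not match (device names here are examples):

```shell
# Demo: which device names the devnode regex '^sd([a-z])' matches
for dev in sda sdb1 nvme0n1; do
  if echo "$dev" | grep -Eq '^sd([a-z])'; then
    echo "$dev: blacklisted"
  else
    echo "$dev: not matched"
  fi
done | tee /tmp/multipath-demo.out
```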