Unless a step says otherwise, run the following on all machines.
1. Configure /etc/hosts
| node_name   | ip_address               | role              |
| ----------- | ------------------------ | ----------------- |
| master-node | <master-node_ip_address> | Kubernetes Master |
| node1       | <node1_ip_address>       | Kubernetes Node   |
| node2       | <node2_ip_address>       | Kubernetes Node   |
| ...         | ...                      | ...               |
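For illustration, the corresponding /etc/hosts entries might look like this (192.168.0.113 matches the master address used in the kubeadm join examples later in this guide; the worker addresses are placeholders):
192.168.0.113 master-node   # master address, as used by kubeadm join below
192.168.0.114 node1         # placeholder worker address
192.168.0.115 node2         # placeholder worker address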
2. Disable the firewall, SELinux enforcement, and swap
systemctl stop firewalld && systemctl disable firewalld && setenforce 0
swapoff -a
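Note that setenforce 0 and swapoff -a only last until the next reboot, and the kubelet refuses to start with swap enabled by default. A sketch of making both changes persistent (not part of the original steps):
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config   # keep SELinux permissive across reboots
sed -i '/ swap / s/^/#/' /etc/fstab                                      # comment out swap mounts so swap stays off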
3. Install Docker
Reference: docs.docker.com/engine/inst…
sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin
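Optionally, start Docker and verify the installation before moving on (the later steps also enable Docker, so this is just a sanity check):
sudo systemctl enable --now docker   # start the daemon and enable it at boot
docker version                       # client and server versions should both print without errors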
4. Install kubeadm, kubelet, and kubectl
(1) Add the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(2) Install
yum install kubeadm-1.23.6-0 kubelet-1.23.6-0 kubectl-1.23.6-0
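To confirm that the pinned 1.23.6 versions were installed:
kubeadm version -o short           # expect v1.23.6
kubectl version --client --short   # expect v1.23.6
rpm -q kubelet                     # expect kubelet-1.23.6-0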
5. Install the Kubernetes master (run only on the master)
(1) Initialize the master
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile source /etc/profile echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json systemctl daemon-reload
systemctl enable --now docker
systemctl restart docker
systemctl enable kubelet
systemctl restart kubelet
kubeadm reset
kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU # if image pulls fail, append --image-repository=registry.aliyuncs.com/google_containers to pull from a mirror reachable from within China
After kubeadm init finishes, you should see output similar to the following:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.113:6443 --token 6tilat.oudmckwxfl561g15 \
--discovery-token-ca-cert-hash sha256:e1cf08a533de82001662b2ef71c16d2b220a84f4cab004981ba5c782b0597314
Run the following on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
Save the kubeadm join command printed above; it is needed later to add nodes:
kubeadm join 192.168.0.113:6443 --token 6tilat.oudmckwxfl561g15 \
--discovery-token-ca-cert-hash sha256:e1cf08a533de82001662b2ef71c16d2b220a84f4cab004981ba5c782b0597314
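The bootstrap token in that command expires after 24 hours by default. If it has expired by the time you add a node, generate a fresh join command on the master:
kubeadm token create --print-join-command   # prints a new kubeadm join line with a valid token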
(2) Check the cluster status
At this point the node shows NotReady and the coredns pods are Pending, while the other kube-system pods are Running, because no network plugin has been installed yet. We use flannel as the network plugin.
kubectl get nodes
kubectl get pods -n kube-system
(3) Install the flannel network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Once the flannel pods are Running, the node switches to Ready and coredns becomes Running. (flannel's manifest defaults to the 10.244.0.0/16 pod network, which is why kubeadm init above was run with --pod-network-cidr=10.244.0.0/16.)
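To watch the transition as it happens, for example:
kubectl get pods -n kube-system -w   # stream pod status changes until flannel and coredns are Running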
6. Add nodes (run only on the worker nodes)
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json systemctl daemon-reload
systemctl restart docker
systemctl enable docker.service
systemctl enable kubelet.service
systemctl restart kubelet
sudo kubeadm reset
kubeadm join 192.168.0.113:6443 --token 6tilat.oudmckwxfl561g15 \
--discovery-token-ca-cert-hash sha256:e1cf08a533de82001662b2ef71c16d2b220a84f4cab004981ba5c782b0597314
Once the command succeeds, the node has joined; with that, the whole cluster is configured.
7. Check the cluster
You should see status similar to the following:
(base) [root@clx010 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
clx001 Ready <none> 60m v1.23.6
clx003 Ready <none> 60m v1.23.6
clx005 Ready <none> 60m v1.23.6
clx007 Ready <none> 60m v1.23.6
clx008 Ready <none> 60m v1.23.6
clx009 Ready <none> 60m v1.23.6
clx010 Ready control-plane,master 84m v1.23.6
(base) [root@clx010 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-64897985d-9srst 1/1 Running 0 94m
coredns-64897985d-b85dj 1/1 Running 0 94m
etcd-clx010 1/1 Running 0 94m
kube-apiserver-clx010 1/1 Running 0 40m
kube-controller-manager-clx010 1/1 Running 1 (41m ago) 94m
kube-flannel-ds-2ntzt 1/1 Running 3 (48s ago) 47m
kube-flannel-ds-527r4 1/1 Running 2 (11m ago) 47m
kube-flannel-ds-wqj2v 1/1 Running 2 (11m ago) 47m
kube-proxy-t64pm 1/1 Running 0 70m
kube-proxy-wzdn4 1/1 Running 0 47m
kube-proxy-zkl25 1/1 Running 0 71m
kube-scheduler-clx010 1/1 Running 1 (41m ago) 94m
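As an optional smoke test (not part of the original steps), schedule a throwaway pod and confirm it runs on one of the workers:
kubectl run test-nginx --image=nginx --restart=Never   # create a single pod; the name and image here are arbitrary choices
kubectl get pod test-nginx -o wide                     # STATUS should reach Running, NODE should be one of the workers
kubectl delete pod test-nginx                          # clean up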