K8s Cluster Setup


1. Prepare three VMs with 2 CPU cores and 4 GB RAM each

k8s-master   172.18.245.85
k8s-work01   172.22.202.116 
k8s-work02   172.22.202.117
All three machines must be able to ping one another's IP.
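A small loop like the following can sanity-check pairwise reachability; run it from each machine (the IP list matches the three machines above):

```shell
#!/bin/bash
# Ping each node once and flag any address that does not answer.
for ip in 172.18.245.85 172.22.202.116 172.22.202.117; do
  if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip UNREACHABLE" >&2
  fi
done
```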

2. Update yum and install the required packages

yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

3. Install Docker

See: Docker Containerized Deployment in Practice, part 1

Check the Docker version:
[root@hostname-B ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:05:12 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun  6 23:03:33 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

4. Set hostnames and the hosts file

Set a short hostname on each machine:
k8s-master:sudo hostnamectl set-hostname m
k8s-work01:sudo hostnamectl set-hostname w1
k8s-work02:sudo hostnamectl set-hostname w2

Then add the entries to /etc/hosts on every machine:
vi /etc/hosts
172.18.245.85 m
172.22.202.116 w1
172.22.202.117 w2

Then verify that each hostname can be pinged from every machine.
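To confirm the /etc/hosts entries took effect, `getent` resolves each name through the system resolver (m, w1, w2 as set above):

```shell
#!/bin/bash
# Resolve each short hostname; /etc/hosts entries are consulted by getent.
for host in m w1 w2; do
  addr=$(getent hosts "$host" | awk '{print $1}')
  if [ -n "$addr" ]; then
    echo "$host -> $addr"
  else
    echo "$host does not resolve" >&2
  fi
done
```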

5. Pre-flight configuration

1. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

2. Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

3. Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

4. Flush iptables and set the FORWARD policy to ACCEPT
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

5. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
If sysctl reports that these keys do not exist, load the bridge netfilter module first with modprobe br_netfilter, then re-run sysctl --system.

6. Install kubeadm, kubelet, and kubectl

Add the Kubernetes yum repo (Aliyun mirror):

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

List the available kubeadm versions:

yum list kubeadm --showduplicates | sort -r

Install (pinning all three to the same version):

yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
systemctl enable kubelet

7. Use the same cgroup driver for Docker and kubelet

# docker
vi /etc/docker/daemon.json
    add: "exec-opts": ["native.cgroupdriver=systemd"],
The final daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://b9pmyelo.mirror.aliyuncs.com"
  ]
}
Note: if the registry-mirrors endpoint stops working, replace it promptly. Look it up in your Aliyun account, or any currently reachable mirror found online will do.

Then restart Docker:
systemctl daemon-reload
systemctl restart docker
    
# kubelet
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
This sed swaps kubelet's cgroup driver from systemd to cgroupfs only if that flag appears in the drop-in file. On newer kubeadm versions the flag (and sometimes the whole file) is absent, so a "No such file or directory" error here is harmless and can be ignored.
	
systemctl enable kubelet && systemctl start kubelet
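After restarting Docker, it is worth confirming the driver actually changed (assumes the Docker daemon is running; `docker info` accepts a Go-template `-f` flag):

```shell
#!/bin/bash
# Print Docker's active cgroup driver; after the daemon.json change it should be "systemd".
if command -v docker >/dev/null 2>&1; then
  docker info -f '{{.CgroupDriver}}' 2>/dev/null || echo "docker daemon not reachable" >&2
else
  echo "docker not installed" >&2
fi
```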

8. Pull the proxy/pause/scheduler images

1. List the images kubeadm needs:
[root@hostname-B system]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.10
k8s.gcr.io/kube-controller-manager:v1.23.10
k8s.gcr.io/kube-scheduler:v1.23.10
k8s.gcr.io/kube-proxy:v1.23.10
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

2. Work around the blocked k8s.gcr.io registry
Create a kubeadm.sh script that pulls each image from the Aliyun mirror, re-tags it as k8s.gcr.io, and deletes the mirror tag. Run it on all three machines.
------------------------------- script start -------------------------
#!/bin/bash

set -e

KUBE_VERSION=v1.23.10
KUBE_PAUSE_VERSION=3.6
ETCD_VERSION=3.5.1-0
CORE_DNS_VERSION=v1.8.6

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
------------------------------- script end -------------------------

After the pull, list the images and confirm there are 7 (the sample output below was captured from a different run, hence the v1.25.0 tags):
[root@hostname-B k8s]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-apiserver            v1.25.0   4d2edfd10d3e   12 days ago    128MB
k8s.gcr.io/kube-scheduler            v1.25.0   bef2cf311509   12 days ago    50.6MB
k8s.gcr.io/kube-controller-manager   v1.25.0   1a54c86c03a6   12 days ago    117MB
k8s.gcr.io/kube-proxy                v1.25.0   58a9a0c6d96f   12 days ago    61.7MB
k8s.gcr.io/pause                     3.8       4873874c08ef   2 months ago   711kB
k8s.gcr.io/etcd                      3.5.4-0   a8a176a5d5d6   3 months ago   300MB
k8s.gcr.io/coredns                   v1.9.3    5185b96f0bec   3 months ago   48.8MB
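A quick count confirms all seven images landed under the k8s.gcr.io prefix (assumes Docker is available; the grep/count pipeline is the point here):

```shell
#!/bin/bash
# Count images whose repository starts with k8s.gcr.io; expect 7 after kubeadm.sh runs.
if command -v docker >/dev/null 2>&1; then
  count=$(docker images --format '{{.Repository}}' | grep -c '^k8s.gcr.io')
  echo "k8s.gcr.io images: $count (expected 7)"
else
  echo "docker not installed" >&2
fi
```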

9. kubeadm init: initialize the master

kubeadm init \
  --apiserver-advertise-address=172.18.245.85 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16  \
  --service-cidr=10.96.0.0/12

Success!

Now configure kubectl access:

On the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config'  >>  $HOME/.bashrc
source ~/.bashrc

10. Install the flannel network plugin on the master (Calico works too)

Download the flannel manifest:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Replace the quay.io registry in kube-flannel.yml with a domestic mirror (this particular mirror may no longer be up; substitute any reachable one):
sed -i 's/quay.io/quay-mirror.qiniu.com/g' kube-flannel.yml

Apply the manifest:
kubectl apply -f kube-flannel.yml
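The cluster is only usable once the flannel and CoreDNS pods reach Running; a check like this shows their state (the `app=flannel` label is what the upstream manifest uses, and the target namespace has changed between flannel releases, so all namespaces are searched):

```shell
#!/bin/bash
# Show flannel pods across all namespaces, plus everything in kube-system.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -A -l app=flannel
  kubectl get pods -n kube-system
else
  echo "kubectl not installed" >&2
fi
```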

11. Run the kubeadm join command printed by kubeadm init on both worker nodes (your token and hash will differ)

kubeadm join 172.18.245.85:6443 --token vb2mfb.egh5dp968p9jummh \
        --discovery-token-ca-cert-hash sha256:98306376db89f3672c885cb58f44eae7e79375fac6d541ba31652fbd2cf266ae

Then on the master, run kubectl get nodes:

[root@m flannel]# kubectl get nodes
NAME   STATUS   ROLES                  AGE     VERSION
m      Ready    control-plane,master   28m     v1.23.0
w1     Ready    <none>                 7m16s   v1.23.0
w2     Ready    <none>                 6m56s   v1.23.0

If w1 or w2 shows NotReady, run the following on that node:
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
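If a node stays NotReady after the restarts, the kubelet log usually names the cause (cgroup driver mismatch, missing CNI config, and so on); a minimal sketch, assuming systemd:

```shell
#!/bin/bash
# Tail recent kubelet log entries on the affected worker node.
if command -v journalctl >/dev/null 2>&1; then
  journalctl -u kubelet --no-pager -n 30 || true
else
  echo "journalctl not available" >&2
fi
```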

The cluster is now up!

12. Test with an nginx deployment

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

Check the pod and service with kubectl get pod,svc:
[root@m flannel]# kubectl get pod,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-85b98978db-lwbhs   0/1     ContainerCreating   0          11s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        37m
service/nginx        NodePort    10.100.222.221   <none>        80:31345/TCP   5s

Access: http://47.103.37.67:31345/ (any node IP plus the NodePort works)
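From any machine that can reach a node, curl can confirm nginx answers on the NodePort (the IP and port come from the example above; --max-time keeps the check from hanging):

```shell
#!/bin/bash
# Request the nginx NodePort and print just the HTTP status code.
curl -s -m 5 -o /dev/null -w '%{http_code}\n' http://47.103.37.67:31345/ \
  || echo "nginx not reachable" >&2
```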
