Kubernetes Quick Start, Part 3


This is day 7 of my participation in the Juejin "December Update Challenge".

3.2. Installing the cluster components

Docker, kubelet, kubectl, and kubeadm need to be installed on every node.

3.2.1. Installing Docker

https://developer.aliyun.com/mirror/docker-ce

1. Configure the Docker yum repository
yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
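A quick sanity check that the `sed` rewrite above actually took effect (a minimal sketch; the repo file path is the one `yum-config-manager` creates):

```shell
# After the rewrite, no URL in the repo file should still point at
# download.docker.com; everything should go through the Aliyun mirror.
if grep -q 'download.docker.com' /etc/yum.repos.d/docker-ce.repo; then
    echo "rewrite incomplete"
else
    echo "repo file OK"
fi
```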


2. Install Docker and configure a registry mirror
yum -y install docker-ce
mkdir -p /etc/docker
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl daemon-reload
systemctl enable docker --now
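The vim step above can also be done non-interactively, which is convenient when repeating it on every node; validating the JSON first catches typos that would stop the Docker daemon from starting (this sketch assumes python3 is available):

```shell
# Write daemon.json without an editor, then validate its JSON syntax.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /etc/docker/daemon.json > /dev/null && echo "daemon.json OK"
```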

3.2.2. Installing the cluster tools

1. Configure the Kubernetes yum repository to use the Aliyun mirror
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install the following packages on every node
#kubeadm: bootstraps the cluster, e.g. joins nodes to and removes nodes from the cluster
#kubelet: the agent that starts Pods and containers on each node in the cluster
#kubectl: the command-line tool for communicating with the cluster
yum -y install kubelet-1.22.3 kubeadm-1.22.3 kubectl-1.22.3

#Check that the expected version was installed
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:40:11Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}

3. Enable kubelet and start it now
systemctl enable kubelet --now

The kubelet status is not healthy at this point; that is fine, because the cluster has not been initialized yet.


3.3. Initializing the cluster

3.3.1. Pulling the images on the master node

1. List the container images that this cluster version requires

[root@k8s-master ~]# kubeadm config images list --kubernetes-version v1.22.3
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

The required images can be pre-pulled before initializing the cluster. Since the k8s.gcr.io registry above is not reachable from mainland China, pull them from a domestic mirror repository instead:

[root@k8s-master ~]# kubeadm config images pull --image-repository oldxu3957 --kubernetes-version v1.22.3
[config/images] Pulled oldxu3957/kube-apiserver:v1.22.3
[config/images] Pulled oldxu3957/kube-controller-manager:v1.22.3
[config/images] Pulled oldxu3957/kube-scheduler:v1.22.3
[config/images] Pulled oldxu3957/kube-proxy:v1.22.3
[config/images] Pulled oldxu3957/pause:3.5
[config/images] Pulled oldxu3957/etcd:3.5.0-0
[config/images] Pulled oldxu3957/coredns:v1.8.4

3.3.2. Initializing the master node

[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=192.168.146.188 \
--image-repository oldxu3957 \
--kubernetes-version v1.22.3 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.19.0.0/16

#--apiserver-advertise-address: the IP address on this node that the API server advertises
#--image-repository: the registry to pull the control-plane images from
#--kubernetes-version: the Kubernetes version to run
#--service-cidr: the Service network range (used for internal load balancing)
#--pod-network-cidr: the Pod network range
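The same flags can also be kept in a config file and passed via `kubeadm init --config`, which is easier to version-control. This is a sketch using the `kubeadm.k8s.io/v1beta3` API that v1.22 understands; the file name `kubeadm-init.yaml` is arbitrary:

```shell
# Capture the init flags above as a kubeadm configuration file.
cat > kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.146.188
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.3
imageRepository: oldxu3957
networking:
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.19.0.0/16
EOF
kubeadm init --config kubeadm-init.yaml
```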


Copy the admin kubeconfig and take ownership of it
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master ~]# cat ./.kube/config


3.3.3. Joining the worker nodes


[root@k8s-node1 ~]# kubeadm join 192.168.146.188:6443 --token cvo7x8.50tzuposuhrtcha6 \
>         --discovery-token-ca-cert-hash sha256:4d0552d4359b8f11c288e8d20c631b9361b7cdcce0aa7b0ffa38ef99c1798ec4

[root@k8s-node2 ~]# kubeadm join 192.168.146.188:6443 --token cvo7x8.50tzuposuhrtcha6 \
>         --discovery-token-ca-cert-hash sha256:4d0552d4359b8f11c288e8d20c631b9361b7cdcce0aa7b0ffa38ef99c1798ec4

[root@k8s-node3 ~]# kubeadm join 192.168.146.188:6443 --token cvo7x8.50tzuposuhrtcha6 \
>         --discovery-token-ca-cert-hash sha256:4d0552d4359b8f11c288e8d20c631b9361b7cdcce0aa7b0ffa38ef99c1798ec4

#If you lose the join command, regenerate it with: kubeadm token create --print-join-command
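The `--discovery-token-ca-cert-hash` value is not secret state either; it is the SHA-256 digest of the cluster CA's public key, and can be recomputed on the master at any time with the pipeline from the kubeadm documentation:

```shell
# Recompute the CA public-key hash used by the join command.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```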


[root@k8s-node3 ~]# mkdir /root/.kube/
[root@k8s-master ~]# scp /etc/kubernetes/admin.conf root@192.168.146.191:/root/.kube/config
[root@k8s-node3 ~]# kubectl get nodes

#Strictly speaking, kubectl access is not needed on the worker nodes; copying the config here is only for convenience


3.3.4. Installing the Flannel network plugin

For Pods in the cluster to communicate with each other, a Pod network add-on must be installed. Kubernetes supports several network plugins; this environment uses Flannel.

#All nodes are still NotReady at this point
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   71m   v1.22.3
k8s-node1    NotReady   <none>                 32m   v1.22.3
k8s-node2    NotReady   <none>                 32m   v1.22.3
k8s-node3    NotReady   <none>                 27m   v1.22.3


1. Download the manifest
[root@k8s-master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

2. Change the address range the plugin assigns, so it matches the Pod CIDR
[root@k8s-master ~]# sed -i 's#10.244.0.0/16#10.19.0.0/16#g' kube-flannel.yml
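Before applying the manifest, it is worth confirming the CIDR actually changed; the `Network` key inside Flannel's `net-conf.json` must match the `--pod-network-cidr` given to `kubeadm init`, otherwise Flannel and the kubelet disagree about Pod addressing:

```shell
# Should print the line containing "Network": "10.19.0.0/16".
grep -n '"Network"' kube-flannel.yml
```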

3. Apply the manifest
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

4. Check the node status again
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   105m   v1.22.3
k8s-node1    Ready    <none>                 66m    v1.22.3
k8s-node2    Ready    <none>                 66m    v1.22.3
k8s-node3    Ready    <none>                 62m    v1.22.3

#Check the Pod status
[root@k8s-master ~]# kubectl get pod -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-k7vlc   1/1     Running   0          37m
kube-flannel-ds-p7qrw   1/1     Running   0          37m
kube-flannel-ds-pl67v   1/1     Running   0          37m
kube-flannel-ds-s49r2   1/1     Running   0          37m
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-589d89566f-fcm2h             1/1     Running   0          107m
coredns-589d89566f-k95p2             1/1     Running   0          107m
etcd-k8s-master                      1/1     Running   0          108m
kube-apiserver-k8s-master            1/1     Running   0          108m
kube-controller-manager-k8s-master   1/1     Running   0          108m
kube-proxy-248z5                     1/1     Running   0          69m
kube-proxy-8qst6                     1/1     Running   0          107m
kube-proxy-pptv4                     1/1     Running   0          64m
kube-proxy-xdfrq                     1/1     Running   0          69m
kube-scheduler-k8s-master            1/1     Running   0          108m


3.3.5. kubectl command completion

[root@k8s-master ~]# yum -y install bash-completion
[root@k8s-master ~]# echo 'source <(kubectl completion bash)' >>~/.bashrc
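Optionally, the official completion guide also shows how to keep completion working for a short `k` alias, using the same append-to-`~/.bashrc` pattern as above:

```shell
# Alias kubectl to k and register the same completion function for it.
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
```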

#Official documentation
#https://kubernetes.io/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-linu


3.4. Checking cluster health

1. Check the Pods
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS      AGE
coredns-589d89566f-fcm2h             1/1     Running   0             2d18h
coredns-589d89566f-k95p2             1/1     Running   0             2d18h
etcd-k8s-master                      1/1     Running   0             2d18h
kube-apiserver-k8s-master            1/1     Running   0             2d18h
kube-controller-manager-k8s-master   1/1     Running   1 (28m ago)   2d18h
kube-proxy-248z5                     1/1     Running   0             2d18h
kube-proxy-8qst6                     1/1     Running   0             2d18h
kube-proxy-pptv4                     1/1     Running   0             2d17h
kube-proxy-xdfrq                     1/1     Running   0             2d18h
kube-scheduler-k8s-master            1/1     Running   1 (28m ago)   2d18h

2. Check the namespaces
[root@k8s-master ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   2d18h
kube-flannel      Active   2d17h
kube-node-lease   Active   2d18h
kube-public       Active   2d18h
kube-system       Active   2d18h

3.5. Resetting the cluster

If you ran into problems during installation, you can reset the environment with the commands below and start over.

kubeadm reset
#Delete the virtual interfaces that Flannel created (cni0/flannel.1), not the host NIC
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /var/lib/cni/