Quickly Deploying a K8S Cluster with kubeadm on CentOS 7


192.168.4.35 k8s-node01
192.168.4.36 k8s-node02
EOF


5. Kernel tuning: pass bridged IPv4 traffic to iptables chains



cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
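To confirm the fragment was written as intended, the check can be scripted. A sketch, using a temp file so it runs anywhere; on the host you would point `conf` at /etc/sysctl.d/k8s.conf instead:

```shell
# conf is a stand-in path for /etc/sysctl.d/k8s.conf so the sketch is self-contained
conf=$(mktemp)
cat > "$conf" << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Verify both bridge-nf keys are present and set to 1
missing=0
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  grep -q "^$key = 1" "$conf" || { echo "$key missing"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "bridge-nf settings ok"
```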


6. Set the system time zone and sync with a time server



yum install -y ntpdate
ntpdate time.windows.com


### 2.2.4 Install Docker



wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version
Docker version 18.06.1-ce, build e68fc7a
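The node-join output in section 2.4 warns that Docker's default "cgroupfs" cgroup driver is not the recommended one ("systemd" is). Switching the driver is optional and not part of the original walkthrough, but a sketch looks like this; `target_dir` is a temp-path stand-in so the snippet is self-contained, and on a real host it would be /etc/docker:

```shell
# target_dir stands in for /etc/docker (temp path so this sketch runs anywhere)
target_dir=$(mktemp -d)
cat > "$target_dir/daemon.json" << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# On a real host, follow with:
#   systemctl daemon-reload && systemctl restart docker
cat "$target_dir/daemon.json"
```

If you change the driver after kubeadm init, the kubelet's cgroup driver must match as well.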


### 2.2.5 Add the Kubernetes YUM repository



$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=mirrors.aliyun.com/kubernetes/…
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=mirrors.aliyun.com/kubernetes/… mirrors.aliyun.com/kubernetes/…
EOF


### 2.2.6 Install kubeadm, kubelet, and kubectl


Step 2.2.6 must be performed on every host. Because releases change frequently, the version number is pinned here.



yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl enable kubelet


### 2.3 Deploy the Kubernetes Master


Run this only on the Master node. Change the apiserver address below to your own master's address.



[root@k8s-master ~]# kubeadm init \
  --apiserver-advertise-address=192.168.73.138 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
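The same flags can also be kept in a config file and passed to `kubeadm init --config`. A sketch, assuming the kubeadm v1beta2 schema that ships with 1.15; the file name kubeadm-config.yaml is an arbitrary choice:

```shell
# Write the config-file equivalent of the init flags above
# (kubeadm v1beta2 API, as shipped with Kubernetes 1.15)
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.73.138
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.244.0.0/16
EOF
# Then, on the master: kubeadm init --config kubeadm-config.yaml
```

A config file is easier to keep in version control and reuse than a long flag list.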


Because the default image registry k8s.gcr.io is unreachable from mainland China, the Aliyun mirror registry is specified here.
Output:



[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.4.34]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
...... (omitted)
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: kubernetes.io/docs/concep…

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.73.138:6443 --token 2nm5l9.jtp4zwnvce4yt4oj \
    --discovery-token-ca-cert-hash sha256:12f628a21e8d4a7262f57d4f21bc85f8802bb717dd6f513bf9d33f254fea3e89


Follow the hints in the output:



[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config


The default token is valid for 24 hours and becomes unusable once it expires.
If more nodes need to join the cluster later, handle it as follows:
Generate a new token



[root@k8s-master ~]# kubeadm token create
0w3a92.ijgba9ia0e3scicg
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
0w3a92.ijgba9ia0e3scicg   23h   2019-09-08T22:02:40+08:00   authentication,signing                                                              system:bootstrappers:kubeadm:default-node-token
t0ehj8.k4ef3gq0icr3etl0   22h   2019-09-08T20:58:34+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[root@k8s-master ~]#


Get the sha256 hash of the CA certificate

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a
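The same pipeline can be tried anywhere by pointing it at a throwaway self-signed certificate instead of the cluster CA; the demo cert and temp paths below are stand-ins:

```shell
# Generate a throwaway self-signed cert (stand-in for /etc/kubernetes/pki/ca.crt)
demo_crt=$(mktemp)
demo_key=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$demo_key" -out "$demo_crt" \
  -subj "/CN=demo-ca" -days 1 2>/dev/null
# Same pipeline as above: public key -> DER -> sha256 hex digest
hash=$(openssl x509 -pubkey -in "$demo_crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

Note that on the master, `kubeadm token create --print-join-command` prints a ready-made join command with the token and CA hash in one step.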


Join a node to the cluster



[root@k8s-node01 ~]# kubeadm join --token aa78f6.8b4cafc8ed26c34f --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 192.168.73.138:6443 --skip-preflight-checks


### 2.4 Join the Kubernetes Nodes


Run on both Node machines.
Use kubeadm join to register the Nodes with the Master.


The kubeadm join command line was already generated by kubeadm init above.



[root@k8s-node01 ~]# kubeadm join 192.168.4.34:6443 --token 2nm5l9.jtp4zwnvce4yt4oj \
    --discovery-token-ca-cert-hash sha256:12f628a21e8d4a7262f57d4f21bc85f8802bb717dd6f513bf9d33f254fea3e89


Output:



[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at kubernetes.io/docs/setup/…
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


### 2.5 Install the network plugin


Run only on the Master node



[root@k8s-master ~]# wget raw.githubusercontent.com/coreos/flan…


Modify the image address. (The default image may fail to pull; make sure the quay.io registry is reachable from your hosts, otherwise change the following.)



[root@k8s-master ~]# vim kube-flannel.yml


In the editor, replace the image on lines 106 and 120 of the file with the image below; after replacing, verify it looks like this:



[root@k8s-master ~]# cat -n kube-flannel.yml | grep lizhenliang/flannel:v0.11.0-amd64
   106        image: lizhenliang/flannel:v0.11.0-amd64
   120        image: lizhenliang/flannel:v0.11.0-amd64
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# ps -ef | grep flannel
root       2032   2013  0 21:00 ?        00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr


### 2.6 Check cluster node status


Check the cluster's node status. After the network plugin is installed, continue with the later steps only once every node shows Ready, as below.



[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   37m     v1.15.0
k8s-node01   Ready    <none>   5m22s   v1.15.0
k8s-node02   Ready    <none>   5m18s   v1.15.0
[root@k8s-node01 ~]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-6pdgv                 1/1     Running   0          80m
coredns-bccdc95cf-f845x                 1/1     Running   0          80m
etcd-k8s-master                         1/1     Running   0          80m
kube-apiserver-k8s-master               1/1     Running   0          79m
kube-controller-manager-k8s-master      1/1     Running   0          80m
kube-flannel-ds-amd64-chpz8             1/1     Running   0          70m
kube-flannel-ds-amd64-jx56v             1/1     Running   0          70m
kube-flannel-ds-amd64-tsgvv             1/1     Running   0          70m
kube-proxy-d5b7l                        1/1     Running   0          80m
kube-proxy-f7v46                        1/1     Running   0          75m
kube-proxy-wqhsj                        1/1     Running   0          78m
kube-scheduler-k8s-master               1/1     Running   0          80m
kubernetes-dashboard-8499f49758-6f6ct   1/1     Running   0          42m


Proceed with the later steps only when every pod shows 1/1. If flannel is failing, check the network, then redo the following:



kubectl delete -f kube-flannel.yml


Then wget the manifest again, modify the image address, and run:



kubectl apply -f kube-flannel.yml
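The "everything must be 1/1" check can be scripted by parsing the READY column. A sketch with a hard-coded sample (one healthy line, one failing line); in practice you would pipe in live `kubectl get pod -n kube-system --no-headers` output:

```shell
# sample stands in for live `kubectl get pod` output; the failing line is invented
# for illustration
sample='coredns-bccdc95cf-6pdgv 1/1 Running 0 80m
kube-flannel-ds-amd64-chpz8 0/1 CrashLoopBackOff 3 5m'
# Count pods whose READY column (field 2, "ready/total") is not fully ready
not_ready=$(echo "$sample" | awk '{ split($2, a, "/"); if (a[1] != a[2]) n++ } END { print n+0 }')
echo "not ready: $not_ready"   # → not ready: 1
```

A zero count means it is safe to continue; a nonzero count means some pod is still starting or crashing.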


### 2.7 Test the Kubernetes cluster


Create a pod in the Kubernetes cluster and expose a port to verify that it is reachable:



[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-wf5lm   1/1     Running   0          24s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        39m
service/nginx        NodePort    10.1.224.251   <none>        80:32039/TCP   9s
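Since the NodePort is assigned randomly, it can be handy to extract it by script. A sketch using the nginx service line from the output above as hard-coded input; in practice you would feed in live `kubectl get svc` output:

```shell
# svc_line copies the nginx service line from the transcript above
svc_line='service/nginx   NodePort   10.1.224.251   <none>   80:32039/TCP   9s'
# Pull the node port out of the PORT(S) column ("port:nodePort/TCP")
node_port=$(echo "$svc_line" | sed -n 's/.*:\([0-9][0-9]*\)\/TCP.*/\1/p')
echo "http://192.168.73.138:$node_port"   # → http://192.168.73.138:32039
```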


Access URL: http://NodeIP:Port, which in this example is http://192.168.73.138:32039
![screenshot](https://p6-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/e42e716fc09745f493289dfe8fd2c897~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MzIxMjA3NDIwNDUy:q75.awebp?rk3s=f64ab15b&x-expires=1772947808&x-signature=xkOiKQjFrl6nMFmg1s%2Bp74lu5yA%3D)


### 2.8 Deploy the Dashboard



[root@k8s-master ~]# wget raw.githubusercontent.com/kubernetes/…

[root@k8s-master ~]# vim kubernetes-dashboard.yaml
Modify:
109     spec:
110       containers:
111       - name: kubernetes-dashboard
112         image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # change this line

......

157   spec:
158     type: NodePort       # add this line
159     ports:
160       - port: 443
161         targetPort: 8443
162         nodePort: 30001  # add this line
163     selector:
164       k8s-app: kubernetes-dashboard

[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml


Visit https://NodeIP:30001 in Firefox (it cannot be opened in Google Chrome because of the untrusted certificate).
![screenshot](https://p6-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/7e78664387d84e47bbfb36f7ddbca887~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MzIxMjA3NDIwNDUy:q75.awebp?rk3s=f64ab15b&x-expires=1772947808&x-signature=Suopx%2B1AwGPJ26OIuu5YU1iJvOM%3D)
Create a service account and bind it to the default cluster-admin cluster role: