k8s-demo Cluster Setup, Detailed Step 11: Deploying a kube-scheduler v1.23.5 High-Availability Cluster



  • kube-scheduler is deployed on the 3 master nodes. After startup, the instances compete in a leader election: one becomes the leader and the others block in standby. If the leader becomes unavailable, the remaining instances hold a new election and produce a new leader, which keeps the service highly available (a quick way to observe this is sketched right after this list).
  • Download page: www.downloadkubernetes.com
  • Startup parameter reference: kubernetes.io/zh/docs/ref…
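
Once the schedulers from section V are running, the election result can be inspected directly: in v1.23 the leader lock is a Lease object in the kube-system namespace, and its holderIdentity field names the current leader. A minimal check, assuming kubectl already talks to this cluster as an admin:

[root@master1 ~]# kubectl -n kube-system get lease kube-scheduler -o yaml | grep -E 'holderIdentity|renewTime'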

I. Download


[root@master1 ~]# cd /opt/install/
[root@master1 install]# wget https://dl.k8s.io/v1.23.5/bin/linux/amd64/kube-scheduler
[root@master1 install]# chmod +x kube-scheduler
[root@master1 install]# mv kube-scheduler /opt/k8s/bin
[root@master1 install]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp /opt/k8s/bin/kube-scheduler root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/kube-scheduler"
  done
>>> 192.168.66.131
kube-scheduler      100%   47MB 229.0MB/s   00:00
>>> 192.168.66.132
kube-scheduler      100%   47MB 153.8MB/s   00:00
>>> 192.168.66.133
kube-scheduler      100%   47MB 139.2MB/s   00:00
[root@master1 install]#
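
Before continuing, it can be worth confirming that the binary on every master really is v1.23.5; a quick optional check reusing the same MASTER_IPS variable:

[root@master1 install]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    # should print: Kubernetes v1.23.5
    ssh root@${node_ip} "/opt/k8s/bin/kube-scheduler --version"
  done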

II. Create and Distribute the kube-scheduler kubeconfig File

1. Create the kube-scheduler kubeconfig file

[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# kubectl config set-cluster k8s-demo \
  --certificate-authority=/opt/k8s/etc/cert/ca.pem \
  --embed-certs=true \
  --server="https://##NODE_IP##:6443" \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master1 kubeconfig]# kubectl config set-credentials k8s-demo-scheduler \
  --client-certificate=/opt/k8s/etc/cert/kube-scheduler.pem \
  --client-key=/opt/k8s/etc/cert/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master1 kubeconfig]# kubectl config set-context system:kube-scheduler \
  --cluster=k8s-demo \
  --user=k8s-demo-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
  
[root@master1 kubeconfig]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
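
Before distribution, the generated file can be sanity-checked; kubectl prints the cluster, user, and context entries with the embedded certificates redacted:

[root@master1 kubeconfig]# kubectl config view --kubeconfig=kube-scheduler.kubeconfig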

2. Distribute the kube-scheduler kubeconfig file

[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kube-scheduler.kubeconfig > kube-scheduler-${node_ip}.kubeconfig
    scp kube-scheduler-${node_ip}.kubeconfig root@${node_ip}:/opt/k8s/etc/kube-scheduler.kubeconfig
  done
>>> 192.168.66.131
kube-scheduler-192.168.66.131.kubeconfig    100% 6468     6.9MB/s   00:00
>>> 192.168.66.132
kube-scheduler-192.168.66.132.kubeconfig    100% 6468     3.2MB/s   00:00
>>> 192.168.66.133
kube-scheduler-192.168.66.133.kubeconfig    100% 6468     2.3MB/s   00:00
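
Each generated kubeconfig points its server entry at that node's own apiserver (the ##NODE_IP## placeholder is filled in by sed). An optional check that the substitution worked on every master:

[root@master1 kubeconfig]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    # expect: server: https://<that node's IP>:6443
    ssh root@${node_ip} "grep 'server:' /opt/k8s/etc/kube-scheduler.kubeconfig"
  done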

3. Create authorization, since the default system username is not used

The default system username is system:kube-scheduler; here it has been replaced with k8s-demo-scheduler, so the matching RBAC bindings must be created manually (a quick verification sketch follows the commands below).

[root@master1 kubeconfig]# kubectl create rolebinding k8s-demo-scheduler-rolebinding --role=system::leader-locking-kube-scheduler --user k8s-demo-scheduler --namespace=kube-system
[root@master1 kubeconfig]# kubectl create rolebinding k8s-demo-scheduler-rolebinding-ext --role=extension-apiserver-authentication-reader --user k8s-demo-scheduler --namespace=kube-system
[root@master1 kubeconfig]# kubectl create clusterrolebinding k8s-demo-scheduler --clusterrole=system:kube-scheduler --user k8s-demo-scheduler
[root@master1 kubeconfig]# kubectl create clusterrolebinding k8s-demo-scheduler-volume --clusterrole=system:volume-scheduler --user k8s-demo-scheduler
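
After the bindings are created, impersonation can be used to confirm that k8s-demo-scheduler actually holds the permissions the scheduler needs; a minimal sketch (the resources checked here are just examples), where each check should answer yes if the bindings above took effect:

[root@master1 kubeconfig]# kubectl auth can-i get nodes --as=k8s-demo-scheduler
[root@master1 kubeconfig]# kubectl auth can-i update leases.coordination.k8s.io/kube-scheduler -n kube-system --as=k8s-demo-scheduler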

III. Create and Distribute the kube-scheduler Configuration File

1. Create the template file kube-scheduler.yaml.template

[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# cat >kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
clientConnection:
  burst: 200
  kubeconfig: "/opt/k8s/etc/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
leaderElection:
  leaderElect: true
EOF

2. Distribute kube-scheduler.yaml

[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-scheduler.yaml.template > kube-scheduler-${MASTER_IPS[i]}.yaml
  done
[root@master1 kubeconfig]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler-${node_ip}.yaml root@${node_ip}:/opt/k8s/etc/kube-scheduler.yaml
  done
>>> 192.168.66.131
kube-scheduler-192.168.66.131.yaml    100%  405   209.5KB/s   00:00
>>> 192.168.66.132
kube-scheduler-192.168.66.132.yaml    100%  405   405.7KB/s   00:00
>>> 192.168.66.133
kube-scheduler-192.168.66.133.yaml    100%  405   172.3KB/s   00:00

IV. Create and Distribute the kube-scheduler systemd Unit File

1. Create the kube-scheduler service template

[root@master1 ~]# cd /opt/install/service
[root@master1 service]# cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=${K8S_DIR}/bin/kube-scheduler \\
  --config=${K8S_DIR}/etc/kube-scheduler.yaml \\
  --kubeconfig=${K8S_DIR}/etc/kube-scheduler.kubeconfig \\
  --bind-address=##NODE_IP## \\
  --secure-port=10259 \\
  --tls-cert-file=${K8S_DIR}/etc/cert/kube-scheduler.pem \\
  --tls-private-key-file=${K8S_DIR}/etc/cert/kube-scheduler-key.pem \\
  --client-ca-file=${K8S_DIR}/etc/cert/ca.pem \\
  --requestheader-allowed-names= \\
  --requestheader-client-ca-file=${K8S_DIR}/etc/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers="X-Remote-Group" \\
  --requestheader-username-headers="X-Remote-User" \\
  --authorization-kubeconfig=${K8S_DIR}/etc/kube-scheduler.kubeconfig \\
  --authentication-kubeconfig=${K8S_DIR}/etc/kube-scheduler.kubeconfig \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
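
Since the heredoc delimiter is unquoted, ${K8S_DIR} is expanded when the template is written, while ##NODE_IP## is left in place for sed to fill in per node. Before distributing, an optional check that the flags used above are accepted by this binary version:

[root@master1 service]# /opt/k8s/bin/kube-scheduler --help 2>&1 | grep -E -- '--(secure-port|bind-address|authentication-kubeconfig|requestheader-client-ca-file)'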

2. Distribute to the 3 master nodes

[root@master1 ~]# cd /opt/install/service
[root@master1 service]# for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-scheduler.service.template > kube-scheduler-${MASTER_IPS[i]}.service 
  done
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
    scp kube-scheduler-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-scheduler.service
  done
>>> 192.168.66.131
kube-scheduler-192.168.66.131.service      100% 1001   647.2KB/s   00:00
>>> 192.168.66.132
kube-scheduler-192.168.66.132.service      100% 1001   512.4KB/s   00:00
>>> 192.168.66.133
kube-scheduler-192.168.66.133.service      100% 1001   539.0KB/s   00:00

V. Start and Verify the kube-scheduler Cluster Service

[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  done
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-scheduler|grep Active"
  done
[root@master1 ~]# ss -lnpt |grep kube-sch
LISTEN   0   4096   192.168.66.131:10259     *:*   users:(("kube-scheduler",pid=15928,fd=7))
[root@master1 ~]# kubectl get leases -n kube-system
NAME                      HOLDER                                         AGE
kube-scheduler            master2_d258fa34-518c-4a9c-88f6-9a0d5c831ac4   64m
[root@master1 ~]# curl -s --cacert /opt/k8s/etc/cert/ca.pem --cert /opt/install/cert/kubectl-admin.pem  --key /opt/k8s/etc/cert/kubectl-admin-key.pem https://192.168.66.131:10259/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

  • From the output above, the current scheduler leader is master2.
  • If the status above is abnormal, check the logs:
[root@master1 ~]# journalctl -u kube-scheduler
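
To exercise the high-availability behaviour described at the top of this article, stop the current leader and watch the lease switch to another master; a minimal sketch, assuming the leader is master2 at 192.168.66.132 as in the output above:

[root@master1 ~]# ssh root@192.168.66.132 "systemctl stop kube-scheduler"
[root@master1 ~]# kubectl -n kube-system get lease kube-scheduler -w    # HOLDER should move to another master within the lease duration
[root@master1 ~]# ssh root@192.168.66.132 "systemctl start kube-scheduler"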

  • Start by using it: get to know Kubernetes (k8s) through hands-on practice, and understanding will come naturally as experience accumulates.
  • Share what you have understood; by sowing your own field of merit, you earn your own blessing.
  • Aim for simplicity, so it is easy to understand; the context of knowledge (such as versions and dates) is part of the knowledge itself.
  • Comments and questions are welcome; I generally reply and improve the documentation on weekends.
  • Jason@vip.qq.com 2022-4-11