k8s-node Deployment

This document is part of a production Kubernetes environment migration. It is number 5 in the series and covers two components: kubelet and kube-proxy.

kubelet

  • Edit the configuration file kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
    
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "ylls",
            "OU": "ops"
        }
    ]
}

  • Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json|cfssl-json -bare kubelet

This produces three files: kubelet.pem, kubelet-key.pem, and kubelet.csr.

  • Preparation
ln -s /server/src/k8s-server-1.15.4 /server/k8s
cp kubelet.pem kubelet-key.pem ca.pem client.pem client-key.pem /server/k8s/certs/
mkdir -p /server/k8s/conf
mkdir -p /server/logs/k8s/kubelet
mkdir -p /server/data/kubelet
  • Generate the kubelet configuration file kubelet.kubeconfig
kubectl config set-cluster myk8s \
             --certificate-authority=/server/k8s/certs/ca.pem \
             --embed-certs=true \
             --server=https://172.27.0.19:7443 \
             --kubeconfig=kubelet.kubeconfig

When replacing the CA certificate, re-run the command above.

kubectl config set-credentials k8s-node \
             --client-certificate=/server/k8s/certs/client.pem \
             --client-key=/server/k8s/certs/client-key.pem \
             --embed-certs=true \
             --kubeconfig=kubelet.kubeconfig

When replacing the client certificate, re-run the command above.

kubectl config set-context myk8s-context \
             --cluster=myk8s \
             --user=k8s-node \
             --kubeconfig=kubelet.kubeconfig

When changing the apiserver address, re-run the command above.

kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig

The four commands above must be run in sequence, since each builds on the result of the previous one.
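The sequence can be wrapped in a small helper so the endpoint and certificate paths live in one place. This is a sketch: the function name and the environment-variable overrides are assumptions, while the cluster/user/context names and the 172.27.0.19:7443 endpoint mirror the commands above.

```shell
#!/bin/bash
# Sketch: generate kubelet.kubeconfig in one shot. CERT_DIR and APISERVER
# default to the values used in this document; adjust as needed.
gen_kubelet_kubeconfig() {
    local cert_dir="${CERT_DIR:-/server/k8s/certs}"
    local apiserver="${APISERVER:-https://172.27.0.19:7443}"
    local kubeconfig="${1:-kubelet.kubeconfig}"

    kubectl config set-cluster myk8s \
        --certificate-authority="${cert_dir}/ca.pem" \
        --embed-certs=true \
        --server="${apiserver}" \
        --kubeconfig="${kubeconfig}" &&
    kubectl config set-credentials k8s-node \
        --client-certificate="${cert_dir}/client.pem" \
        --client-key="${cert_dir}/client-key.pem" \
        --embed-certs=true \
        --kubeconfig="${kubeconfig}" &&
    kubectl config set-context myk8s-context \
        --cluster=myk8s \
        --user=k8s-node \
        --kubeconfig="${kubeconfig}" &&
    kubectl config use-context myk8s-context --kubeconfig="${kubeconfig}"
}
```

Chaining with `&&` stops at the first failure, so a half-written kubeconfig is easier to spot than with four independent commands.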

  • Create the role binding k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding    # defines a ClusterRoleBinding resource named k8s-node
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole     # binds the cluster User k8s-node to the ClusterRole system:node, granting it the permissions of a cluster worker node
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

kubectl create -f k8s-node.yaml

kubectl get clusterrolebinding k8s-node  -o yaml  (verify the result)

The first `create` persists this binding in etcd, so adding more resources of the same type later (i.e., more nodes) does not require repeating this step.

On the other node machines, simply copy the certificates and the kubelet.kubeconfig generated above to /server/k8s/conf/; no further steps are needed.
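For a bulk rollout, that copy step can be sketched as a loop. The node hostnames below are hypothetical placeholders, and with the default DRY_RUN=1 the script only prints the scp commands instead of executing them:

```shell
#!/bin/bash
# Sketch: distribute certs and kubelet.kubeconfig to additional nodes.
# NODES is a hypothetical list; replace with your real node hostnames.
NODES="${NODES:-yw2.an.bj.ylls.com yw3.an.bj.ylls.com}"
DRY_RUN="${DRY_RUN:-1}"

dist_kubelet_conf() {
    local node cmd
    for node in ${NODES}; do
        for cmd in \
            "scp /server/k8s/certs/*.pem ${node}:/server/k8s/certs/" \
            "scp kubelet.kubeconfig ${node}:/server/k8s/conf/"; do
            if [ "${DRY_RUN}" = "1" ]; then
                echo "${cmd}"     # print only, do not copy
            else
                ${cmd}
            fi
        done
    done
}
```

Run with `DRY_RUN=0` once the printed commands look right.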

  • Prepare the pause base image for kubelet
docker pull kubernetes/pause

docker login harbor.ylls.com

docker tag kubernetes/pause:latest  harbor.ylls.com/base/pause:latest

docker push harbor.ylls.com/base/pause:latest

The pause image starts before all business containers and sets up shared resources for the pod, such as its network namespace.

  • Startup script startup-kubelet.sh
#!/bin/sh
./kubelet \
      --anonymous-auth=false \
      --cgroup-driver systemd \
      --cluster-dns 192.168.254.10 \
      --cluster-domain cluster.local \
      --runtime-cgroups=/systemd/system.slice \
      --kubelet-cgroups=/systemd/system.slice \
      --fail-swap-on="false" \
      --client-ca-file /server/k8s/certs/ca.pem \
      --tls-cert-file /server/k8s/certs/kubelet.pem \
      --tls-private-key-file /server/k8s/certs/kubelet-key.pem \
      --hostname-override yw1.an.bj.ylls.com \
      --image-gc-high-threshold 20 \
      --image-gc-low-threshold 10 \
      --kubeconfig /server/k8s/conf/kubelet.kubeconfig \
      --log-dir /server/logs/k8s/kubelet \
      --pod-infra-container-image harbor.ylls.com/base/pause:latest \
      --root-dir /server/data/kubelet

  • supervisor configuration file /etc/supervisord.d/kubelet.ini
[program:kubelet-20.1]
command=/server/k8s/bin/startup-kubelet.sh
numprocs=1
directory=/server/k8s/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/server/logs/k8s/kubelet/kubelet-run.log
stdout_logfile_maxbytes=200MB
stdout_logfile_backups=3
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
  • Start and test
supervisorctl update

kubectl label node yw1.an.bj.ylls.com node-role.kubernetes.io/node=  #add a cluster role label

kubectl get node #verify the node

kube-proxy

  • Edit the configuration file kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "ylls",
            "OU": "ops"
        }
    ]
}

Note that this is a client-type certificate, but it does not reuse the generic client certificate. As the CN in the JSON shows, system:kube-proxy corresponds to a built-in Kubernetes role, which removes the need for a manual rolebinding step. The same approach could be applied to the node component; both approaches are shown here to demonstrate the two options, and also to keep the number of certificate issuances to a minimum.
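As a sanity check (assuming a reachable cluster), the built-in binding that makes this work can be inspected. The wrapper function below is a hypothetical convenience; the underlying fact is that the default system:node-proxier ClusterRoleBinding already lists the user system:kube-proxy as a subject:

```shell
# Sketch: confirm that the user system:kube-proxy (the cert CN) is already
# bound by the built-in system:node-proxier ClusterRoleBinding.
check_proxy_binding() {
    kubectl get clusterrolebinding system:node-proxier -o yaml
}
```

The subjects section of the output should show `name: system:kube-proxy`.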

  • Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json|cfssl-json -bare kube-proxy-client

This produces three files: kube-proxy-client.pem, kube-proxy-client-key.pem, and kube-proxy-client.csr.

  • Preparation
cp ca.pem kube-proxy-client.pem kube-proxy-client-key.pem /server/k8s/certs/

cd /server/k8s/conf

mkdir -p /server/logs/k8s/kube-proxy

mkdir -p /server/data/kube-proxy
  • Generate the kube-proxy configuration file kube-proxy.kubeconfig
kubectl config set-cluster myk8s \
            --certificate-authority=/server/k8s/certs/ca.pem \
            --embed-certs=true \
            --server=https://172.27.0.19:7443 \
            --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
            --client-certificate=/server/k8s/certs/kube-proxy-client.pem \
            --client-key=/server/k8s/certs/kube-proxy-client-key.pem \
            --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context myk8s-context \
             --cluster=myk8s \
             --user=kube-proxy \
             --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
  • Prepare ipvs

Load the ipvs kernel modules so that kube-proxy can use ipvs, which is more efficient than iptables mode. As a side note, the nq algorithm (Never Queue Scheduling) is recommended.

Load the ipvs modules with the following script:

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
    /sbin/modinfo -F filename $i &> /dev/null
    if [ $? -eq 0 ]; then
       /sbin/modprobe $i
    fi
done
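The `grep -o "^[^.]*"` in the script strips the `.ko`/`.ko.xz` suffix from each module file name, so modprobe receives a bare module name. A self-contained check of that logic, using sample file names instead of a real module directory:

```shell
# Sketch: the same suffix-stripping the loader script relies on.
# grep -o "^[^.]*" keeps everything before the first dot on each line.
strip_mod_suffix() {
    printf '%s\n' "$@" | grep -o "^[^.]*"
}
```

For example, `strip_mod_suffix ip_vs_nq.ko.xz` prints `ip_vs_nq`.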
  • kube-proxy startup script startup-kube-proxy.sh
#!/bin/sh
./kube-proxy \
           --cluster-cidr 172.27.0.0/16 \
           --hostname-override yw1.an.bj.ylls.com \
           --proxy-mode=ipvs \
           --ipvs-scheduler=nq \
           --kubeconfig /server/k8s/conf/kube-proxy.kubeconfig
  • supervisor configuration file /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-20.1]
command=/server/k8s/bin/startup-kube-proxy.sh
numprocs=1
directory=/server/k8s/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/server/logs/k8s/kube-proxy/kube-proxy-run.log
stdout_logfile_maxbytes=200MB
stdout_logfile_backups=3
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Verify the Cluster

Edit a manifest nginx-test.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80

---

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:stable
        ports:
        - containerPort: 80

kubectl create -f nginx-test.yaml

kubectl get pods
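To confirm traffic actually flows (assuming the DaemonSet pods are running), the NodePort allocated to the Service can be looked up and probed. The helper name and the curl target host below are illustrative assumptions:

```shell
# Sketch: fetch the NodePort assigned to the nginx-ds Service.
get_nodeport() {
    kubectl get svc nginx-ds -o jsonpath='{.spec.ports[0].nodePort}'
}
# Example against a live cluster (hypothetical node hostname):
#   curl -s "http://yw1.an.bj.ylls.com:$(get_nodeport)" | head -n 5
```

A default nginx welcome page in the curl output confirms that kube-proxy's ipvs rules are forwarding correctly.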