Deployment Manual : Kubernetes Binary Deployment Manual (worth bookmarking)


To start, here are links to all my previous articles; likes, bookmarks and shares are always welcome >>>> 😜😜😜
Article collection : 🎁 juejin.cn/post/694164…
Github : 👉 github.com/black-ant
CASE backup : 👉 gitee.com/antblack/ca…

一 . Preface

This article walks through the main workflow of installing Kubernetes (via kubeadm) and how to resolve the problems that come up along the way.

First, thanks to the pioneers who paved the way and saved a great deal of time. While following the reference docs I used the latest versions, and a fair number of problems came up; they are collected here for reference.

The original article was published on Jianshu; since that post got stuck in review again, here is a reposted copy on Zhihu. You can follow the original, or follow the steps in this article.

二 . Common Configuration

The common configuration must be applied to every machine in the cluster : install Docker on each server and apply the basic Linux settings.

2.1 Installing Docker

// Step 1 : Install the tools Docker depends on
yum install -y yum-utils device-mapper-persistent-data lvm2

// Step 2 : Add the Docker yum repo (I am on Tencent Cloud, so the official repo is used here instead of the Aliyun one)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

// Step 3 : Install docker-ce (append a version suffix here if you need to pin a specific version)
yum install docker-ce docker-ce-cli containerd.io

// Step 4 : Start Docker and enable it on boot
systemctl enable docker && systemctl start docker

// Supplementary commands :
-> Show the version : docker version
-> Show help : docker --help
-> Show running containers : docker ps
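
Optionally, /etc/docker/daemon.json can be prepared up front so that the cgroup-driver and registry-mirror problems described later in 3.3 and 3.4 never appear. A minimal sketch, assuming the mirror URL placeholder is replaced with your own Aliyun accelerator address:

// Optional : pre-configure Docker (create /etc/docker/daemon.json if it does not exist)
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker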

2.2 Kubernetes Basic Configuration

// Step 1 : Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

-------------------

// Step 2 : Disable selinux , either temporarily or permanently
// - Temporarily disable selinux
setenforce 0
// - Permanently disable it by editing /etc/sysconfig/selinux
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

-------------------

// Step 3 : Disable the swap partition
// - Temporarily disable
swapoff -a
// - Permanently disable : comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab

-------------------

// Step 4 : Adjust kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
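
If sysctl --system reports that the bridge keys above do not exist, the br_netfilter kernel module is usually not loaded yet. A small sketch to load it now and keep it loaded across reboots:

// Load br_netfilter so the bridge-nf-call sysctls take effect
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl --system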

三 . Kubernetes Master Configuration

With the common configuration in place, you can start configuring the Master server.

3.1 Master Installation Main Workflow

// Step 1 : Configure the Aliyun yum repo for k8s
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


// Step 2 : Install kubeadm, kubectl and kubelet
yum install -y kubectl-1.21.2-0 kubeadm-1.21.2-0 kubelet-1.21.2-0

// Step 3 : Start the kubelet service
systemctl enable kubelet && systemctl start kubelet

// Step 4 : Initialize the cluster ; from the output, save the admin config commands (Step 5) and the join token (used by Nodes to join the cluster) ->  PS31014
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.21.2 --apiserver-advertise-address 11.22.33.111 --pod-network-cidr=10.244.0.0/16 --token-ttl 0

// Step 5 : Apply the admin kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

// Step 6 : Check the nodes ; you should see one NotReady node
kubectl get nodes

PS31014 : Notes on the init step

During init the 6 docker images used by the control-plane node are downloaded ; you can see them with docker images. If the image source is configured correctly this finishes within roughly 5 minutes, but plenty of problems can surface during this step.

Expect a wait of about two minutes, stuck at [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' (a pre-pull sketch follows the parameter list below).

  • image-repository : the image registry address ; if downloads are slow or time out, switch to another mirror
  • kubernetes-version : the version to install ; check the official site for the latest release
  • apiserver-advertise-address : the address of your apiServer, which Nodes will call (it must be reachable from outside)
  • pod-network-cidr : the IP range of the pod network ; its value depends on which network plugin you pick in the next step
    • 10.244.0.0/16 : Flannel
    • 192.168.0.0/16 : Calico
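
To shorten that wait (and surface image-pull problems before init), the control-plane images can be pre-pulled with the same mirror. A minimal sketch, reusing the repository and version from the init command above:

// Optional : list and pre-pull the control-plane images before running kubeadm init
kubeadm config images list --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.21.2
kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.21.2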

3.2 Result of a Successful init

// After a successful install you will see output like the following :

Your Kubernetes master has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

// plus a join command with a token ; Node machines use it to join the cluster, as follows
kubeadm join 11.22.33.111:6443 --token 2onice.mrw3b6dxcsdm5huv \
	--discovery-token-ca-cert-hash sha256:0aafa06c71a936868sde3e1fbf82d9fbsadf233da24c774ca80asdc0ccd36d09 
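
If this join command is lost later (or the token has expired on a cluster created without --token-ttl 0), a fresh one can be printed on the Master at any time:

// Regenerate the join command on the Master
kubeadm token create --print-join-command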


If you got that output on the first try, congratulations, everything went smoothly. If something went wrong, check the problem records below :

3.3 detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".

Cause : your Docker configuration is the problem. Fix : change the cgroup driver to systemd (adapted from Hellxz's blog).

// Step 1 : Confirm the problem
- Print the cgroup driver in use : docker info|grep "Cgroup Driver"

// Step 2 : Reset the kubeadm state
kubeadm reset
//  or use : echo y|kubeadm reset

// Step 3 : Modify Docker
1. Open /etc/docker/daemon.json
2. Add "exec-opts": ["native.cgroupdriver=systemd"]
// PS : create the file if it does not exist ; the end result looks like this
{
 "exec-opts":["native.cgroupdriver=systemd"]
}
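
If daemon.json already contains other keys (for example the registry mirror from 3.4 below), keep them together in one JSON object rather than writing the file twice ; a sketch using the same mirror placeholder as that section:

{
 "registry-mirrors":["https://......mirror.aliyuncs.com"],
 "exec-opts":["native.cgroupdriver=systemd"]
}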

// Step 4 : Modify the kubelet config
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

// Step 5 : Restart the services
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet

// Step 6 : Verify the result ; the output should now be systemd
docker info|grep "Cgroup Driver"

// Supplementary :
kubelet's kubeadm flags file : /var/lib/kubelet/kubeadm-flags.env

3.4 Error response from daemon: Head registry-1.docker.io/v2/coredns/…: connection reset by peer

Cause : this is mainly a problem with the Docker registry mirror configuration

// Modify the mirror configuration in /etc/docker/daemon.json ; you can request a mirror address of your own from Aliyun
{
 "registry-mirrors":["https://......mirror.aliyuncs.com"]
}
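
The new mirror only takes effect after Docker reloads its configuration ; a quick way to apply and verify it:

// Apply and check the registry-mirrors change
systemctl daemon-reload && systemctl restart docker
docker info | grep -A 1 "Registry Mirrors"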

3.5 failed to pull image ..../coredns:v1.8.0: output: Error response from daemon: manifest for ...../coredns:v1.8.0 not found: manifest unknown

Cause : the key word here is coredns ; the problem is that pulling the coredns image from the mirror fails
Keywords : coredns , coredns:v1.8.0 , manifest unknown , registry.aliyuncs.com/google_containers/coredns


// Step 1 : Pull coredns with docker and re-tag it to the name kubeadm expects
docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0

// --- at the same time, make sure init uses the matching image-repository, for example (see the Master main workflow)
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers ....
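
After the pull and re-tag, the image name kubeadm expects should show up locally ; a quick check:

// Verify the re-tagged coredns image is present
docker images | grep coredns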

3.6 Failed to watch *v1.Service: failed to list *v1.Service: Get "h....../api/v1/services?limit=500&resourceVersion=0": dial tcp .....:6443: connect: connection refused

Cause : the api server is not running. This mostly shows up after init has succeeded, while the cluster is running.


// Step 1 : List the docker containers ; you should see the corresponding K8S containers
docker ps -a | grep kube | grep -v pause

// Step 2 : Read the container log with docker and fix what it reports
 docker logs 70bc13ce697c
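
It also helps to confirm whether anything is listening on the apiserver port at all (6443 by default) ; a quick check:

// Check whether the apiserver port is listening
ss -lntp | grep 6443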

3.7 listen tcp 81.888.888.888:2380: bind: cannot assign requested address

See 6.1 for details.

If your problem is still not solved at this point, see section 6 for the general troubleshooting workflow !

四 . Kubernetes Node Configuration

First of all, do not forget the common configuration from section 2 !!!

4.1 Node Setup Main Workflow

// Step 1 : Configure the Aliyun repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

// Step 2 : Install kubeadm and kubelet (kubectl is optional on worker nodes)
yum install -y kubeadm-1.21.2-0 kubelet-1.21.2-0

// Step 3 : Start the kubelet service
systemctl enable kubelet && systemctl start kubelet

// Step 4 : Join the cluster (note : this is the join command obtained from the Master earlier)
kubeadm join 11.22.33.111:6443 --token 2onice.mrw3b6dxcsdm5huv --discovery-token-ca-cert-hash sha256:0aafa06c71a936868sde3e1fbf82d9fbsadf233da24c774ca80asdc0ccd36d09 

// Step 5 : Check ; if everything is fine, the Master shows the following
[root@VM-0-5-centos ~]# kubectl get nodes
NAME                    STATUS     ROLES                  AGE     VERSION
localhost.localdomain   NotReady   <none>                 5m24s   v1.21.2
vm-0-5-centos           NotReady   control-plane,master   37h     v1.21.2
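
Both nodes showing NotReady here is expected : they only turn Ready once the network plugin from section 5 is installed. A quick way to watch the system pods in the meantime (coredns stays Pending until the CNI plugin is in place):

// Watch the system pods while waiting for the network plugin
kubectl get pods -n kube-system -o wide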


If the install fails, the following problems may appear :

4.2 configmaps "cluster-info" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-public"

Cause : this is an anonymous-access problem. In a test environment there is no need for anything elaborate ; just allow anonymous access

kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous

// Fix for production : TODO


4.3 error execution phase preflight: couldn't validate the identity of the API Server: configmaps "cluster-info" not found

Cause : the Master init did not succeed, so fetching the API Server parameters fails. Reinstall, and pay attention to the substitutions made during init (4.)

This kind of problem calls for troubleshooting on the Master ; see 6.1, the Master troubleshooting workflow

4.5 Failed to load kubelet config file" err="failed to load Kubelet config file /var/lib/kubelet/config.yaml

If you are adding the node to the cluster, this config file is indeed empty before the join command has been run.
**Once the join command has been run, the config is generated automatically**

4.6 failed to pull image k8s.gcr.io/kube-proxy:v1.21.2: output: Error response from daemon

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers

4.7 err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs""

**See 3.3**

五 . Installing Flannel

5.1 What is Flannel ?

Flannel is a network fabric designed by the CoreOS team for Kubernetes. In short, it gives the Docker containers created on different nodes in the cluster virtual IP addresses that are unique across the whole cluster.

// TODO : Flannel details

5.2 Installing Flannel


// Step 1 : Prepare kube-flannel.yml
See the appendix ; the main change is the image download URL

// Step 2 : Apply it with kubectl
kubectl apply -f kube-flannel.yml

// After this is applied, wait a moment and the nodes become Ready
[root@VM-0-5-centos flannel]# kubectl get nodes
NAME                    STATUS   ROLES                  AGE    VERSION
localhost.localdomain   Ready    <none>                 131m   v1.21.2
vm-0-5-centos           Ready    control-plane,master   39h    v1.21.2
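
If the nodes stay NotReady, the flannel DaemonSet pods usually point at the cause ; the manifest in the appendix labels them app=flannel:

// Inspect the flannel pods created by the DaemonSet
kubectl get pods -n kube-system -l app=flannel -o wide
kubectl logs -n kube-system -l app=flannel --tail=20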

5.3 "Unable to update cni config" err="no networks found in /etc/cni/net.d"

Note : there are two scenarios here :

Scenario 1 : the problem appears on the Master ; the flannel version may not match your K8S version. I am on K8S 1.21, and reinstalling flannel 0.13 fixed it.

Scenario 2 : the problem appears on a node ; the node is missing the cni config @ blog.csdn.net/u010264186/…

  1. Copy the master's cni files to the node : scp -r master:/etc/cni /etc/
  2. Restart : systemctl daemon-reload && systemctl restart kubelet

The key point is the creation of the cni config : on success, a file named 10-flannel.conflist appears under /etc/cni/net.d
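
A quick way to confirm this on the node:

// Confirm the CNI config generated by flannel exists
ls /etc/cni/net.d/
cat /etc/cni/net.d/10-flannel.conflist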

六 . Problems and Troubleshooting Workflow

6.1 Troubleshooting kubelet Problems on the Master (taking bind: cannot assign requested address as the example)

If none of the solutions above fix your problem, you may need to track it down and work it out yourself.

The problem usually appears during the init step ; if it happens before that, it is most likely the image repository address, so adjust it yourself.

Problem : init keeps failing
Approach :

  • Check how the Docker containers are running
  • Read the log of the failing pod
  • Fix whatever the log reports

// Step 1 : Check how the Docker containers are running
docker ps -a | grep kube | grep -v pause

// In this step you can spot the failing instances ; below, etcd and the API server have both exited
"etcd --advertise-cl…"   40 seconds ago   Exited (1) 39 seconds ago  
"kube-apiserver --ad…"   39 seconds ago   Exited (1) 18 seconds ago 

---------------------

// Step 2 : Read the log of the corresponding pod and the kubelet log
docker logs ac266e3b8189
journalctl -xeu kubelet

// Here you can see the actual problem
api-server : connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting
// PS : 127.0.0.1:2379 is the etcd port (81.888.888.888 is my server's public IP)
etc-server : listen tcp 81.888.888.888:2380: bind: cannot assign requested address

---------------------

// Step 3 : This is clearly an etcd problem ; fix it (after some research, the issue is etcd's listen address)
Edit the etcd manifest /etc/kubernetes/manifests/etcd.yaml and change the IP to 0.0.0.0, i.e. no restriction at all
- --listen-client-urls=https://0.0.0.0:2379,https://0.0.0.0:2379 (modified line)
- --listen-peer-urls=https://0.0.0.0:2380 (modified line)


---------------------

// Step 4 : Back up the etcd.yaml from the previous step, then reset K8S
kubeadm reset

// PS : the reset deletes everything under manifests, so remember to take the backup out first

---------------------

// Step 5 : Swap the file back
- Re-initialize the cluster
- The moment /etc/kubernetes/manifests/etcd.yaml is created, delete that freshly generated etcd.yaml
- Move the etcd.yaml saved before the reset into /etc/kubernetes/manifests/

// PS : after this, init keeps downloading images and the installation then completes successfully


// Supplementary commands :
- Restart kubelet : systemctl restart kubelet.service
- View the kubelet log : journalctl -xeu kubelet
- View the kubelet status : systemctl status kubelet
- Check how the Docker containers are running : docker ps -a | grep kube | grep -v pause
- View a container log : docker logs ac266e3b8189
- List all nodes : kubectl get nodes



七 . Additional Commands

7.1 Completely Uninstalling Kubernetes

# Tear down the cluster
kubeadm reset

# Remove the rpm packages
rpm -qa|grep kube*|xargs rpm --nodeps -e

# Remove all docker images
docker images -qa|xargs docker rmi -f
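
Depending on how clean the machine needs to be, leftover config and data directories can be removed as well ; a sketch (double-check the paths before deleting):

# Optional : remove leftover Kubernetes config and data directories
rm -rf /etc/kubernetes ~/.kube /var/lib/kubelet /var/lib/etcd /etc/cni/net.d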

7.2 API Server

https://yourKubernetesHost:6443/


// Common API endpoints
- List Node objects : /api/v1/nodes
- List Pod objects : /api/v1/pods
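
Calling these endpoints directly on port 6443 requires client certificates ; for a quick look, kubectl proxy exposes the same API on localhost using your admin kubeconfig. A sketch:

// Proxy the API server to localhost and query it
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/nodes
curl http://127.0.0.1:8001/api/v1/pods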

7.3 Common Master Commands

List the tokens : kubeadm token list

Summary

Even though deploying Kubernetes follows the same procedure every time, something different goes wrong every single time...

These scattered notes will keep being extended later TODO

Appendix

The kube-flannel.yml file

The URLs in the original manifest still cause problems ; the main change is the image URL, e.g. quay-mirror.qiniu.com/coreos/flannel:v0.13.0-ppc64le

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

References and Acknowledgements

blog.csdn.net/haishan8899…

www.cnblogs.com/hellxz/p/ku…

my.oschina.net/u/4479011/b…

kubernetes.io/docs/tasks/…

docs.docker.com/engine/inst…

www.jianshu.com/p/25c01cae9…