Notes on Pitfalls When Building a Kubernetes Cluster
1. Cluster Types
Kubernetes clusters broadly fall into two categories: one master with multiple nodes, and multiple masters with multiple nodes.
One master, multiple nodes: a single master node plus several worker nodes. Simple to set up, but the master is a single point of failure; suitable for test environments.
Multiple masters, multiple nodes: several master nodes plus several worker nodes. More work to set up, but highly available; suitable for production environments.
2. Installation Preparation
There are three common installation approaches: kubeadm, minikube, and binary packages.
- minikube: a tool for quickly spinning up a single-node Kubernetes (see the sketch after this list);
- kubeadm: a tool for quickly bootstrapping a Kubernetes cluster;
- binary packages: download each component's binary from the official site and install them one by one; this approach is the most instructive for understanding the Kubernetes components.
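As an aside, if all you want is a local playground rather than the three-node setup below, a minikube session looks roughly like this minimal sketch (assuming minikube and a Docker engine are already installed; the driver choice is illustrative):
# Start a single-node cluster using the Docker driver
minikube start --driver=docker
# Use the bundled kubectl to inspect it
minikube kubectl -- get pods -A
# Tear it down when done
minikube stop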
2.1 Host Planning Reference
| Node | IP Address | OS | Specs |
|---|---|---|---|
| Master | 192.168.3.141 | CentOS 7.5 or later | 2 CPU cores, 2 GB RAM, 50 GB disk |
| Node1 | 192.168.3.142 | CentOS 7.5 or later | 2 CPU cores, 2 GB RAM, 50 GB disk |
| Node2 | 192.168.3.143 | CentOS 7.5 or later | 2 CPU cores, 2 GB RAM, 50 GB disk |
Notes
- All of the installation steps below must be performed on all three servers at the same time, to avoid unnecessary errors.
This walkthrough uses Xshell; connect to all three servers at once and then proceed as follows.
2.2 Required Software
This setup uses three CentOS servers in a one-master, two-node cluster. On each server, install Docker (18.06.3.ce), kubeadm (1.17.4), kubelet (1.17.4), and kubectl (1.17.4).
3. Server Setup
3.1 Environment Initialization
- Check the operating system version
# cat /etc/redhat-release
- Hostname resolution
To make it easy for the cluster nodes to address one another later, configure hostname resolution here. In production, an internal DNS server is recommended.
# Hostname resolution: edit /etc/hosts on all three servers and append the following
192.168.3.141 master
192.168.3.142 node1
192.168.3.143 node2
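As a quick optional check that resolution works, ping each name once from any of the hosts:
# ping -c 1 node1
# ping -c 1 node2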
- Time synchronization
Kubernetes requires the clocks of all cluster nodes to be precisely in sync. Here we simply sync time from the network with the chronyd service; in production, an internal time server is recommended.
# Start the chronyd service
# systemctl start chronyd
# Enable chronyd at boot
# systemctl enable chronyd
# Give chronyd a few minutes after starting, then verify the time with date
# date
- Disable iptables and firewalld
Kubernetes and Docker generate large numbers of iptables rules at runtime. To keep the system's own rules from getting tangled up with them, turn the system firewall off entirely.
# 1. Stop the firewalld service
# systemctl stop firewalld
# 2. Disable firewalld at boot
# systemctl disable firewalld
# 3. Stop and disable the iptables service
# systemctl stop iptables
# systemctl disable iptables
- Disable SELinux
SELinux is a Linux security service; if it is left enabled, all sorts of odd problems crop up during cluster installation.
# Edit /etc/selinux/config and set the SELINUX value to disabled
# Note: reboot Linux after making this change
SELINUX=disabled
- Disable the swap partition
Swap is the virtual-memory partition: once physical memory is exhausted, disk space is used as if it were RAM. Having swap enabled can hurt system performance badly, so Kubernetes requires swap to be disabled on every node. If for some reason swap really cannot be turned off, this must be declared explicitly via configuration parameters during cluster installation.
# Edit the partition config file /etc/fstab and comment out the swap line
# Note: reboot Linux after making this change
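The same change can be made non-interactively; a sketch, assuming a typical fstab layout (the sed pattern is a blunt match on the word swap, so double-check the file afterwards):
# Turn swap off immediately for the current boot
swapoff -a
# Comment out any /etc/fstab line that mentions swap
sed -ri 's/.*swap.*/#&/' /etc/fstab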
- Tune the Linux kernel parameters
# Adjust kernel parameters to add bridge filtering and IP forwarding
# Edit /etc/sysctl.d/kubernetes.conf and add the following
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# Load the bridge-filtering module first; the bridge sysctls above do not exist until it is loaded
# modprobe br_netfilter
# Reload the configuration (plain sysctl -p only reads /etc/sysctl.conf, so name the file explicitly)
# sysctl -p /etc/sysctl.d/kubernetes.conf
# Check that the bridge-filtering module is loaded
# lsmod | grep br_netfilter
- Enable IPVS
Kubernetes Services support two proxy modes, one based on iptables and one based on IPVS. Of the two, IPVS performs noticeably better, but using it requires loading the IPVS kernel modules by hand.
# 1. Install ipset and ipvsadm
# yum install ipset ipvsadm -y
# 2. Write the modules to be loaded into a script file
# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# 3. Make the script executable
# chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. Run the script
# /bin/bash /etc/sysconfig/modules/ipvs.modules
# 5. Check whether the modules loaded successfully
# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
- Reboot the servers
# reboot
- Because the steps below often run into all kinds of problems, I recommend taking a snapshot of the servers at this point.
3.2 Installing Docker
There are many ways to install Docker. The commands below are copied from an install script I wrote earlier; run them one at a time and there are generally no problems.
# docker
yum -y update
yum remove docker docker-common docker-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
# Either repo definition works; the Aliyun mirror below is faster from domestic networks and replaces the first (both save as docker-ce.repo)
yum-config-manager --add-repo http://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum -y install --setopt=obsoletes=0 docker-ce-18.06.3.ce
systemctl enable docker
# Registry mirror and cgroup driver
# Docker defaults to the cgroupfs cgroup driver, while Kubernetes recommends systemd instead
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts":["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://ucbw7vfc.mirror.aliyuncs.com"]
}
EOF
# Verify
# more /etc/docker/daemon.json
# Start/stop commands
systemctl start docker.service
systemctl stop docker.service
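Because daemon.json changes only take effect when Docker (re)starts, restart the service if it was already running, then confirm the systemd driver took effect; docker info should report "Cgroup Driver: systemd":
systemctl restart docker.service
docker info | grep -i "cgroup driver"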
Friendly reminder: once again, take a snapshot here.
3.3 Installing the Kubernetes Components
# The Kubernetes package repos are hosted abroad and slow, so switch to a domestic mirror
# Edit /etc/yum.repos.d/kubernetes.repo and add the configuration below
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# Install kubeadm, kubelet, and kubectl
# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
# Configure kubelet's cgroup driver
# Edit /etc/sysconfig/kubelet and add the configuration below
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
# Enable kubelet at boot; there is no need to start it by hand, kubeadm starts it during cluster init
# systemctl enable kubelet
# Related service commands, for reference:
# systemctl disable kubelet
# systemctl start kubelet
# systemctl restart kubelet
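To confirm the packages landed at the expected versions:
# kubeadm version
# kubelet --version
# kubectl version --client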
3.4 Preparing the Cluster Images
The Kubernetes images are hosted abroad and most domestic networks cannot reach them directly. Fortunately, matching images can be pulled from Alibaba Cloud; they just come with Alibaba tags, so after pulling I re-tag them with the default k8s.gcr.io names. That said, plenty of people online skip the re-tagging and it works anyway. If you want to be thorough, try both; just take a snapshot first.
# List the required images with
# kubeadm config images list
# Pull
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.aliyuncs.com/google_containers/coredns:1.6.5
# Re-tag
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4 \
k8s.gcr.io/kube-apiserver:v1.17.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4 \
k8s.gcr.io/kube-controller-manager:v1.17.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4 \
k8s.gcr.io/kube-scheduler:v1.17.4
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.17.4 \
k8s.gcr.io/kube-proxy:v1.17.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 \
k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 \
k8s.gcr.io/etcd:3.4.3-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.6.5 \
k8s.gcr.io/coredns:1.6.5
# Remove the original tags
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.17.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker rmi registry.aliyuncs.com/google_containers/coredns:1.6.5
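The pull/tag/rmi dance above can also be scripted; a minimal sketch, assuming every image is available under the registry.aliyuncs.com/google_containers namespace (swap in the cn-hangzhou registry used above where needed):
images="kube-apiserver:v1.17.4 kube-controller-manager:v1.17.4 kube-scheduler:v1.17.4 kube-proxy:v1.17.4 pause:3.1 etcd:3.4.3-0 coredns:1.6.5"
for img in $images; do
  docker pull registry.aliyuncs.com/google_containers/$img
  docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
  docker rmi registry.aliyuncs.com/google_containers/$img
done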
Friendly reminder: take a snapshot.
3.5 Cluster Initialization
Now initialize the cluster and join the node machines to it. Note: the commands here run on the master node only, not on node1 or node2.
On the master node:
# Create the cluster
# kubeadm init \
--kubernetes-version=v1.17.4 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.3.141
Output like the following means initialization succeeded:
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.3.141:6443 --token 6k1unc.55mcieza3zjdd2o8 \
--discovery-token-ca-cert-hash sha256:e201238cb0682e5869342ac66e1c691aec489d485d864a621f3dacc4fb217560
Don't ignore what the output tells you to do:
# Create the corresponding directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
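With the kubeconfig in place, kubectl on the master should now respond; a quick check (the master will report NotReady until a pod network is installed in step 3.7):
# kubectl cluster-info
# kubectl get nodes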
# Run the join command on each node that should join the cluster; see step 3.6
kubeadm join 192.168.3.141:6443 --token 6k1unc.55mcieza3zjdd2o8 \
--discovery-token-ca-cert-hash sha256:e201238cb0682e5869342ac66e1c691aec489d485d864a621f3dacc4fb217560
# Deploy the Kubernetes pod network; see step 3.7
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
3.6 Joining the Worker Nodes to the Cluster
On each node that should join the cluster, e.g. node1 or node2, run:
kubeadm join 192.168.3.141:6443 --token phi4bc.cpwbot0919ic356s \
--discovery-token-ca-cert-hash sha256:4aabc6e4a4ff9629cf1bb80335dffaaf8a7d9a6ac7d76795064f9636e923c7ea
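A pitfall worth knowing: the bootstrap token expires after 24 hours by default. If yours has expired, generate a fresh join command on the master:
# kubeadm token create --print-join-command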
Output like the following means the node joined successfully:
[root@node1 ~]# kubeadm join 192.168.3.141:6443 --token 6k1unc.55mcieza3zjdd2o8 \
> --discovery-token-ca-cert-hash sha256:e201238cb0682e5869342ac66e1c691aec489d485d864a621f3dacc4fb217560
W0828 22:27:41.378893 115909 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
3.7 Installing a Network Plugin
Kubernetes supports multiple network plugins, such as flannel, calico, and canal. Any one of them will do; here we use flannel.
Run this on the master node only:
# Fetch the flannel manifest; this link is often not directly reachable
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If it is unreachable, use the copy of kube-flannel.yml attached at the end of this post
# If pulls from quay.io fail, replace the quay.io registry in the file with quay-mirror.qiniu.com (note: the attached copy still references quay.io)
# Start flannel from the manifest
# kubectl apply -f kube-flannel.yml
# To uninstall flannel, see: https://blog.csdn.net/footrip/article/details/104565568
# Wait a moment, then check the cluster node status again
# kubectl get nodes
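Once the flannel pods are running, the nodes should flip to Ready. The output looks roughly like this; names, ages, and versions will of course differ:
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   10m   v1.17.4
node1    Ready    <none>   5m    v1.17.4
node2    Ready    <none>   5m    v1.17.4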
3.8 Errors You May Encounter
- NotReady
When checking cluster status on the master with kubectl get nodes, a node may show NotReady, meaning that node is not ready yet. In that case, run the following command on node1 or node2 to inspect the kubelet logs:
# journalctl -f -u kubelet
If the log contains
"Unable to update cni config" err="no networks found in /etc/cni/net.d"
then copy the files under /etc/cni/net.d/ on the master to the nodes:
# scp /etc/cni/net.d/* 192.168.3.142:/etc/cni/net.d/
# scp /etc/cni/net.d/* 192.168.3.143:/etc/cni/net.d/
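After copying the files, I would also restart kubelet on the affected nodes so they pick the config up promptly (they may recover on their own, but this is faster), then re-check from the master:
# On each affected node
# systemctl restart kubelet
# Then back on the master
# kubectl get nodes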
3.9 kube-flannel.yml
Reference blog: blog.csdn.net/qq_22409661…
Copy the content below and run it directly:
cat <<'EOF' > kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- arm64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- arm
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
EOF