k8s-demo Cluster Setup, Step 15: Deploy the Calico v3.19.4 Network for Cross-Host Pod Communication


  • The main options for cross-host Pod networking are Flannel, Calico, Weave, and Cilium.
  • The k8s-demo cluster uses Calico as its network component; Calico offers two implementation modes, IPIP and BGP.
  • FIB (Forwarding Information Base): built from the IP routing table, it holds the next-hop information for each destination network.
  • BIRD is the BGP client; it distributes routing information across the cluster (a quick route check follows this list).
  • Routers in older machine rooms may not support BGP, which adds risk or extra budget to a containerization project.
  • Calico supports two datastores: a direct connection to etcd, or the Kubernetes API server. This guide uses the API server.
  • Download pages: github.com/containerne… github.com/projectcali…
  • Installation reference: projectcalico.docs.tigera.io/getting-sta…
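
Once calico-node is running (sections V and VI below), the routes BIRD programs into the kernel FIB can be inspected directly. A minimal check, assuming a standard Linux host with iproute2, run on any cluster node:

# Routes installed by Calico are tagged "proto bird".
ip route show proto bird

# In IPIP mode, traffic to Pods on other hosts leaves via the tunl0 interface.
ip -d link show tunl0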

I. Download the Binaries and Container Images

1. Download List

  • Binaries
    • calico: the CNI plugin
    • calico-ipam: IP address management
    • calicoctl: command-line tool for interacting with Calico
  • Container images
    • calico/typha:v3.19.4: handles communication between Calico and the k8s-demo cluster
    • calico/node:v3.19.4: Calico's per-node daemon, which runs Felix and confd
    • rancher/pause:3.6: the sandbox (pause) container image (already pulled into the private registry harbor.demo during the kubelet installation step)
    • busybox:latest: used for testing

2. Detailed Command Transcript

[root@master1 ~]# cd /opt/k8s/bin
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
[root@master1 bin]# tar -xvf cni-plugins-linux-amd64-v1.1.1.tgz
[root@master1 bin]# mkdir -p /opt/k8s/calico/net.d/
[root@master1 bin]# mkdir -p /opt/install/soft/calico
[root@master1 bin]# cd /opt/install/soft/calico
[root@master1 calico]# wget https://github.com/projectcalico/cni-plugin/releases/download/v3.19.4/calico-amd64
[root@master1 calico]# chmod 755 /opt/install/soft/calico/calico-amd64
[root@master1 calico]# yes | mv -f calico-amd64 /opt/k8s/bin/calico
[root@master1 calico]# wget https://github.com/projectcalico/cni-plugin/releases/download/v3.19.4/calico-ipam-amd64
[root@master1 calico]# chmod 755 /opt/install/soft/calico/calico-ipam-amd64
[root@master1 calico]#  yes | mv -f calico-ipam-amd64 /opt/k8s/bin/calico-ipam
[root@master1 calico]# wget https://github.com/projectcalico/calicoctl/releases/download/v3.19.4/calicoctl
[root@master1 calico]# chmod +x calicoctl
[root@master1 calico]#  yes | mv -f calicoctl /opt/k8s/bin/
[root@master1 calico]# cd
[root@master1 ~]# curl -u "admin:Harbor12345678" -X POST -H "Content-Type: application/json" --cert "/etc/docker/certs.d/harbor.demo/docker.client.cert" --key "/etc/docker/certs.d/harbor.demo/docker.client.key" --insecure "https://harbor.demo/api/v2.0/projects" -d '{"project_name": "'os'", "metadata": { "public": "false"}}'
[root@master1 ~]# curl -u "admin:Harbor12345678" -X POST -H "Content-Type: application/json" --cert "/etc/docker/certs.d/harbor.demo/docker.client.cert" --key "/etc/docker/certs.d/harbor.demo/docker.client.key" --insecure "https://harbor.demo/api/v2.0/projects" -d '{"project_name": "'k8s'", "metadata": { "public": "true"}}'
[root@master1 ~]# curl -u "admin:Harbor12345678" -X POST -H "Content-Type: application/json" --cert "/etc/docker/certs.d/harbor.demo/docker.client.cert" --key "/etc/docker/certs.d/harbor.demo/docker.client.key" --insecure "https://harbor.demo/api/v2.0/projects" -d '{"project_name": "'calico'", "metadata": { "public": "false"}}'
[root@master1 ~]# docker pull calico/typha:v3.19.4
v3.19.4: Pulling from calico/typha
52fda7a1d697: Pull complete
cadac74d9172: Pull complete
15e34361ee22: Pull complete
7204adc02596: Pull complete
5093053d5e59: Pull complete
Digest: sha256:3ed7b5ac31725d48d2cf5c295a968da65290cba2973c6c635d48bd2f9c39d824
Status: Downloaded newer image for calico/typha:v3.19.4
docker.io/calico/typha:v3.19.4
[root@master1 ~]# docker tag calico/typha:v3.19.4 harbor.demo/calico/typha:v3.19.4
[root@master1 ~]# docker push harbor.demo/calico/typha:v3.19.4
The push refers to repository [harbor.demo/calico/typha]
2fe06b277113: Pushed
ef06deb9e3b6: Pushed
3d13464fd0f8: Pushed
03aefd567edc: Pushed
de764afcb779: Pushed
v3.19.4: digest: sha256:f4a7202496d8d5258c87e415ed21aacb3d9092b4216256434049ea4541d17576 size: 1362
[root@master1 calico]# docker pull calico/node:v3.19.4
v3.19.4: Pulling from calico/node
7563b432e373: Pull complete
f1ad2d4094a4: Pull complete
Digest: sha256:df027832d91944516046f6baf3f6e74c5130046d2c56f88dc96296681771bc6a
Status: Downloaded newer image for calico/node:v3.19.4
docker.io/calico/node:v3.19.4
[root@master1 calico]# docker tag calico/node:v3.19.4 harbor.demo/calico/node:v3.19.4
[root@master1 calico]# docker push harbor.demo/calico/node:v3.19.4
The push refers to repository [harbor.demo/calico/node]
f03078b73155: Pushed
14ec913b26f5: Pushed
v3.19.4: digest: sha256:393ff601623e04e685add605920e6c984a1ac74e23cc4232cec7f5013ba8caad size: 737
[root@master1 ~]# docker pull rancher/pause:3.6
3.6: Pulling from rancher/pause
Digest: sha256:036d575e82945c112ef84e4585caff3648322a2f9ed4c3a6ce409dd10abc4f34
Status: Downloaded newer image for rancher/pause:3.6
docker.io/rancher/pause:3.6
[root@master1 ~]# docker tag rancher/pause:3.6 harbor.demo/k8s/pause:3.6
[root@master1 ~]# docker push harbor.demo/k8s/pause:3.6
The push refers to repository [harbor.demo/k8s/pause]
1021ef88c797: Layer already exists
3.6: digest: sha256:74bf6fc6be13c4ec53a86a5acf9fdbc6787b176db0693659ad6ac89f115e182c size: 526
[root@master1 ~]# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
50e8d59317eb: Pull complete
Digest: sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
Status: Downloaded newer image for busybox:latest
docker.io/library/busybox:latest
[root@master1 ~]# docker tag busybox harbor.demo/os/busybox
[root@master1 ~]# docker push harbor.demo/os/busybox
Using default tag: latest
The push refers to repository [harbor.demo/os/busybox]
eb6b01329ebe: Pushed
latest: digest: sha256:52f431d980baa76878329b68ddb69cb124c25efa6e206d8b0bd797a828f0528e size: 527
  • When creating Harbor projects from the command line, reproduce the single and double quotes exactly as in the script above; otherwise the API returns an error like:

{"errors":[{"code":"UNPROCESSABLE_ENTITY","message":"validation failure list:\nparsing project body from "" failed, because invalid character '\'' looking for beginning of object key string"}]}

  • If either of the following errors is returned, add the https scheme to the URL and pass the certificate options on the command line:

308 Permanent Redirect nginx/1.19.3

or

curl: (60) Peer's Certificate issuer is not recognized.
More details here: curl.haxx.se/docs/sslcer…
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.

II. Prepare the Certificates

1. Certificate List and Purposes

  • calico-cni.key: CNI plugin private key
  • calico-cni.csr: CNI plugin certificate signing request
  • calico-cni.crt: CNI plugin certificate, signed with the k8s-demo cluster CA
  • calico-cni.kubeconfig: kubeconfig built from the certificates above; the CNI plugin uses it to talk to the k8s-demo cluster, and it must be copied to every cluster node
  • typha-ca.key: Typha CA private key. Typha is the communication bridge between the Calico network and the Kubernetes cluster; it relays cluster changes to the Felix and confd daemons (which run inside calico-node)
  • typha-ca.crt: Typha CA certificate, stored in a k8s-demo ConfigMap so that calico-node can read it
  • typha-ca.srl: Typha CA certificate serial-number file
  • typha.key: Typha private key, stored in a k8s-demo Secret
  • typha.csr: Typha certificate signing request
  • typha.crt: Typha certificate, stored in a k8s-demo Secret
  • calico-node.key: calico-node private key
  • calico-node.csr: calico-node certificate signing request
  • calico-node.crt: calico-node certificate
  • k8s-demo-calico.conflist: the CNI plugin configuration file, generated below

2. Certificate Creation Commands

[root@master1 ~]# mkdir -p /opt/install/soft/calico/certs
[root@master1 ~]# cd /opt/install/soft/calico/certs
[root@master1 certs]# openssl req -newkey rsa:4096 -nodes -subj "/CN=k8s-demo-calico-cni-user" -keyout calico-cni.key -out calico-cni.csr 
[root@master1 certs]# openssl x509 -req -in calico-cni.csr -days 365 \
             -CA /opt/k8s/etc/cert/ca.pem \
             -CAkey /opt/k8s/etc/cert/ca-key.pem \
             -CAcreateserial -out calico-cni.crt
[root@master1 certs]# yes | cp -f calico-cni.key /opt/k8s/etc/cert/
[root@master1 certs]# yes | cp -f calico-cni.csr /opt/k8s/etc/cert/
[root@master1 certs]# yes | cp -f calico-cni.crt /opt/k8s/etc/cert/
[root@master1 certs]# cd /opt/install/soft/calico
[root@master1 calico]# kubectl config set-cluster k8s-demo \
    --certificate-authority=/opt/k8s/etc/cert/ca.pem \
    --embed-certs=true \
    --server=$KUBE_APISERVER \
    --kubeconfig=calico-cni.kubeconfig
[root@master1 calico]# kubectl config set-credentials k8s-demo-calico-cni-user \
    --client-certificate=/opt/k8s/etc/cert/calico-cni.crt \
    --client-key=/opt/k8s/etc/cert/calico-cni.key \
    --embed-certs=true \
    --kubeconfig=calico-cni.kubeconfig
[root@master1 calico]# kubectl config set-context calico-cni-ctx \
    --cluster=k8s-demo \
    --user=k8s-demo-calico-cni-user \
    --kubeconfig=calico-cni.kubeconfig
[root@master1 calico]# kubectl config use-context calico-cni-ctx --kubeconfig=calico-cni.kubeconfig
[root@master1 calico]# cat > ${K8S_INST_DIR}/calico/k8s-demo-calico.conflist <<EOF
{
  "name": "k8s-demo-pod-network",
  "cniVersion": "0.4.0",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "mtu": 1500,
      "ipam": {
          "type": "calico-ipam"
      },
      "policy": {
          "type": "k8s"
      },
      "kubernetes": {
          "kubeconfig": "${K8S_DIR}/cni/net.d/calico-cni.kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    }
  ]
}
EOF
[root@master1 calico]# cd /opt/install/soft/calico/certs
[root@master1 certs]# openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -subj "/CN=Calico Typha CA" \
      -keyout typha-ca.key -out typha-ca.crt
[root@master1 certs]# openssl req -newkey rsa:4096 -nodes -subj "/CN=k8s-demo-calico-typha-sa" \
      -keyout typha.key -out typha.csr
[root@master1 certs]# openssl x509 -req -in typha.csr -days 365 \
      -CA typha-ca.crt -CAkey typha-ca.key \
      -CAcreateserial -out typha.crt
[root@master1 certs]# openssl req -newkey rsa:4096 -nodes -subj "/CN=k8s-demo-calico-node-sa" \
      -keyout calico-node.key -out calico-node.csr
[root@master1 certs]# openssl x509 -req -in calico-node.csr -days 365 \
      -CA typha-ca.crt -CAkey typha-ca.key -CAcreateserial \
      -out calico-node.crt
[root@master1 certs]# 
  • Note the user names embedded in the certificates above; they are used again later:

    • k8s-demo-calico-cni-user
    • k8s-demo-calico-typha-sa
    • k8s-demo-calico-node-sa
  • The directory holding k8s-demo-calico.conflist, ${K8S_DIR}/cni/net.d/, must match the kubelet startup flag --cni-conf-dir. kubelet searches that directory for *.conflist configuration files; if no CNI plugin configuration is found there, it falls back to *.conf and then *.json files.
  • The environment variables used above (KUBE_APISERVER, K8S_INST_DIR, K8S_DIR) are assumed to have been exported during earlier steps of this series.
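
Before storing the certificates in the cluster, it may be worth a quick sanity check that each leaf certificate verifies against its CA and that the subject CNs match the names above; a sketch, run from /opt/install/soft/calico/certs:

# Each leaf certificate should chain to the CA that signed it.
openssl verify -CAfile /opt/k8s/etc/cert/ca.pem calico-cni.crt
openssl verify -CAfile typha-ca.crt typha.crt
openssl verify -CAfile typha-ca.crt calico-node.crt

# The subject CNs should be the user/SA names listed above.
openssl x509 -noout -subject -in typha.crt
openssl x509 -noout -subject -in calico-node.crt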

3. Store the Certificates in the k8s-demo Cluster

[root@master1 certs]# kubectl create configmap -n kube-system calico-typha-ca --from-file=typha-ca.crt
[root@master1 certs]# kubectl create secret generic -n kube-system calico-typha-certs --from-file=typha.key --from-file=typha.crt
[root@master1 certs]# kubectl create secret generic -n kube-system calico-node-certs --from-file=calico-node.key --from-file=calico-node.crt
[root@master1 certs]#
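
A quick check that the ConfigMap and Secrets landed where Typha and calico-node will look for them:

kubectl get configmap calico-typha-ca -n kube-system
kubectl get secret calico-typha-certs calico-node-certs -n kube-system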

4. Distribute calico-cni.kubeconfig to All Nodes

[root@master1 ~]# cd /opt/install/soft/calico
[root@master1 calico]# for node_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/calico/net.d/"
    scp /opt/install/soft/calico/calico-cni.kubeconfig root@${node_ip}:/opt/k8s/calico/net.d/calico-cni.kubeconfig
  done
[root@master1 calico]#

III. Import the Calico Custom Resources and IP Pool into the k8s-demo Cluster (Calico Uses the Cluster as Its Datastore)

1. IP Pool Planning

  • The k8s-demo cluster's Pods are planned across four subnets. Only one IPPool is created for now; the other three will be created later as testing requires (a sketch of their manifests follows this list). Each /18 provides 2^14 = 16,384 addresses, so the four pools together cover 172.66.0.0/16.
    • 172.66.0.0/18
    • 172.66.64.0/18
    • 172.66.128.0/18
    • 172.66.192.0/18
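
When the remaining pools are eventually needed, their manifests follow the same shape as pool 1 in step 3, differing only in name and cidr. A sketch, to be created with calicoctl only after the CRDs from step 2 are applied, and assuming the same ipipMode/natOutgoing settings:

cat > k8s-demo-calico-ip-pool-2.yaml <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: k8s-demo-calico-ip-pool-2
spec:
  cidr: 172.66.64.0/18
  ipipMode: Never
  natOutgoing: true
  disabled: false
  nodeSelector: all()
EOF
# Pools 3 and 4 are identical apart from the name and the cidr values
# 172.66.128.0/18 and 172.66.192.0/18.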

2. Import the Calico Custom Resources

[root@master1 ~]# cd /opt/install/soft/calico/
[root@master1 calico]# wget https://projectcalico.docs.tigera.io/manifests/crds.yaml
[root@master1 calico]# kubectl apply -f crds.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
[root@master1 calico]#

3. Create the IP Pool

[root@master1 ~]# cd /opt/install/soft/calico/
[root@master1 calico]# cat > k8s-demo-calico-ip-pool-1.yaml <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: k8s-demo-calico-ip-pool-1
spec:
  cidr: 172.66.0.0/18
  ipipMode: Never
  natOutgoing: true
  disabled: false
  nodeSelector: all()
EOF
[root@master1 calico]# export KUBECONFIG=/root/.kube/config
[root@master1 calico]# export DATASTORE_TYPE=kubernetes
[root@master1 calico]# calicoctl create -f k8s-demo-calico-ip-pool-1.yaml
Successfully created 1 'IPPool' resource(s)
[root@master1 calico]# calicoctl get ippools
NAME                        CIDR            SELECTOR
k8s-demo-calico-ip-pool-1   172.66.0.0/18   all()
[root@master1 calico]# 

ipipMode: Always, CrossSubnet, Never
vxlanMode: Always, CrossSubnet, Never
Enable at most one of ipipMode and vxlanMode on a pool; they cannot both be configured at the same time.
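
If a pool's mode needs to change later (for example, enabling IPIP only across subnet boundaries), calicoctl can patch it in place; a sketch using the pool created above:

# Switch pool 1 to IPIP for cross-subnet traffic only.
calicoctl patch ippool k8s-demo-calico-ip-pool-1 -p '{"spec":{"ipipMode":"CrossSubnet"}}'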

IV. Create the Roles and Accounts Calico Needs (ClusterRole, ServiceAccount, etc.)

1. Role List

  • k8s-demo-calico-cni-cluster-role
  • k8s-demo-calico-typha-cluster-role
  • k8s-demo-calico-node-cluster-role

2. Account List

  • k8s-demo-calico-cni-user
  • k8s-demo-calico-typha-sa
  • k8s-demo-calico-node-sa

3. Creation Commands

[root@master1 ~]# cd /opt/install/soft/calico/
[root@master1 calico]# kubectl apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-demo-calico-cni-cluster-role
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
      - clusterinformations
      - ippools
    verbs:
      - get
      - list
EOF
[root@master1 calico]# kubectl apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-demo-calico-typha-cluster-role
rules:
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
      - endpoints
      - services
      - nodes
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      - get
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
      - blockaffinities
      - networksets
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      #- ippools
      #- felixconfigurations
      - clusterinformations
    verbs:
      - get
      - create
      - update
EOF
[root@master1 calico]# kubectl apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-demo-calico-node-cluster-role
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  # EndpointSlices are used for Service-based network policy rule
  # enforcement.
  - apiGroups: ["discovery.k8s.io"]
    resources:
      - endpointslices
    verbs:
      - watch
      - list
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
EOF
[root@master1 calico]# kubectl create serviceaccount -n kube-system k8s-demo-calico-typha-sa
[root@master1 calico]# kubectl create serviceaccount -n kube-system k8s-demo-calico-node-sa

4. Bind the Roles to the Accounts

[root@master1 ~]# cd /opt/install/soft/calico/
[root@master1 calico]# kubectl create clusterrolebinding crb-calico-cni --clusterrole=k8s-demo-calico-cni-cluster-role --user=k8s-demo-calico-cni-user
[root@master1 calico]# kubectl create clusterrolebinding crb-k8s-demo-calico-typha --clusterrole=k8s-demo-calico-typha-cluster-role --serviceaccount=kube-system:k8s-demo-calico-typha-sa
[root@master1 calico]# kubectl create clusterrolebinding crb-k8s-demo-calico-node --clusterrole=k8s-demo-calico-node-cluster-role --serviceaccount=kube-system:k8s-demo-calico-node-sa
[root@master1 calico]# 
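
With the bindings in place, kubectl's impersonation support offers a quick sanity check that the grants behave as intended; a sketch:

# The CNI user should be able to read pods...
kubectl auth can-i get pods --as=k8s-demo-calico-cni-user
# ...and the node service account should be able to patch node status.
kubectl auth can-i patch nodes/status --as=system:serviceaccount:kube-system:k8s-demo-calico-node-sa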

V. Deploy Typha (k8s-demo-calico-typha-app)

1. Deploy k8s-demo-calico-typha-app

[root@master1 ~]# cd /opt/install/soft/calico/
[root@master1 calico]# kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-calico-typha-app
  namespace: kube-system
  labels:
    k8s-app: calico-typha-app
spec:
  replicas: 3
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      k8s-app: calico-typha-app
  template:
    metadata:
      labels:
        k8s-app: calico-typha-app
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'true'
    spec:
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
      serviceAccountName: k8s-demo-calico-typha-sa
      priorityClassName: system-cluster-critical
      imagePullSecrets:
      - name: harbor-demo-secret
      containers:
      - image: harbor.demo/calico/typha:v3.19.4
        name: calico-typha
        ports:
        - containerPort: 5473
          name: calicotyphaport
          protocol: TCP
        env:
          # Disable logging to file and syslog since those don't make sense in Kubernetes.
          - name: TYPHA_LOGFILEPATH
            value: "none"
          - name: TYPHA_LOGSEVERITYSYS
            value: "none"
          # Monitor the Kubernetes API to find the number of running instances and rebalance
          # connections.
          - name: TYPHA_CONNECTIONREBALANCINGMODE
            value: "kubernetes"
          - name: TYPHA_DATASTORETYPE
            value: "kubernetes"
          - name: TYPHA_HEALTHENABLED
            value: "true"
          # Location of the CA bundle Typha uses to authenticate calico/node; volume mount
          - name: TYPHA_CAFILE
            value: /calico-typha-ca/typha-ca.crt
          # Common name on the calico/node certificate
          - name: TYPHA_CLIENTCN
            value: k8s-demo-calico-node-sa
          # Location of the server certificate for Typha; volume mount
          - name: TYPHA_SERVERCERTFILE
            value: /calico-typha-certs/typha.crt
          # Location of the server certificate key for Typha; volume mount
          - name: TYPHA_SERVERKEYFILE
            value: /calico-typha-certs/typha.key
        livenessProbe:
          httpGet:
            path: /liveness
            port: 9098
            host: localhost
          periodSeconds: 30
          initialDelaySeconds: 30
        readinessProbe:
          httpGet:
            path: /readiness
            port: 9098
            host: localhost
          periodSeconds: 10
        volumeMounts:
        - name: calico-typha-ca
          mountPath: "/calico-typha-ca"
          readOnly: true
        - name: calico-typha-certs
          mountPath: "/calico-typha-certs"
          readOnly: true
      volumes:
      - name: calico-typha-ca
        configMap:
          name: calico-typha-ca
      - name: calico-typha-certs
        secret:
          secretName: calico-typha-certs
EOF

2. Deploy the Service: calico-typha

[root@master1 calico]# kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: calico-typha
  namespace: kube-system
  labels:
    k8s-app: calico-typha-svc
spec:
  ports:
    - port: 5473
      protocol: TCP
      targetPort: calicotyphaport
      name: calico-typha
  selector:
    k8s-app: calico-typha-app
EOF
[root@master1 calico]# 
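
Before moving on to calico-node, confirm that the Typha deployment has rolled out and that the Service has endpoints; a quick check:

kubectl rollout status deployment/k8s-demo-calico-typha-app -n kube-system
kubectl get endpoints calico-typha -n kube-system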

VI. Deploy calico-node

[root@master1 ~]# cd /opt/install/soft/calico/
[root@master1 calico]# kubectl apply -f - <<EOF
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: k8s-demo-calico-node-app
  namespace: kube-system
  labels:
    k8s-app: calico-node-app
spec:
  selector:
    matchLabels:
      k8s-app: calico-node-app
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node-app
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: k8s-demo-calico-node-sa
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      imagePullSecrets:
      - name: harbor-demo-secret
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: harbor.demo/calico/node:v3.19.4
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            - name: FELIX_TYPHAK8SSERVICENAME
              value: calico-typha
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              value: bird
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=${IFACE}"
            - name: CALICO_IPV4POOL_IPIP
              value: Always
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Disable file logging so kubectl logs works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
            # Location of the CA bundle Felix uses to authenticate Typha; volume mount
            - name: FELIX_TYPHACAFILE
              value: /calico-typha-ca/typha-ca.crt
            # Common name on the Typha certificate; used to verify we are talking to an authentic typha
            - name: FELIX_TYPHACN
              value: k8s-demo-calico-typha-sa
            # Location of the client certificate for connecting to Typha; volume mount
            - name: FELIX_TYPHACERTFILE
              value: /calico-node-certs/calico-node.crt
            # Location of the client certificate key for connecting to Typha; volume mount
            - name: FELIX_TYPHAKEYFILE
              value: /calico-node-certs/calico-node.key
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/calico-node
                - -shutdown
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /var/run/nodeagent
              name: policysync
            - mountPath: "/calico-typha-ca"
              name: calico-typha-ca
              readOnly: true
            - mountPath: /calico-node-certs
              name: calico-node-certs
              readOnly: true
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        - name: calico-typha-ca
          configMap:
            name: calico-typha-ca
        - name: calico-node-certs
          secret:
            secretName: calico-node-certs
EOF
[root@master1 calico]# 
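
Note that the heredoc above uses an unquoted EOF marker, so ${IFACE} is expanded by the current shell at apply time; make sure IFACE is set to the node uplink interface name beforehand (for example export IFACE=ens33, where ens33 is only a placeholder). After applying, a quick rollout check:

# Wait for the DaemonSet to finish rolling out, then list the pods per node.
kubectl rollout status daemonset/k8s-demo-calico-node-app -n kube-system
kubectl get pods -n kube-system -l k8s-app=calico-node-app -o wide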

VII. Test Calico

1. Check Service Status

[root@master1 ~]# kubectl get secret -A | grep "k8s-demo"
kube-system       k8s-demo-calico-node-sa-token-878ct              kubernetes.io/service-account-token   3      4h20m
kube-system       k8s-demo-calico-typha-sa-token-7qdvd             kubernetes.io/service-account-token   3      4h20m
[root@master1 ~]# kubectl get cm -A
NAMESPACE         NAME                                 DATA   AGE
default           kube-root-ca.crt                     1      5d21h
kube-node-lease   kube-root-ca.crt                     1      5d21h
kube-public       kube-root-ca.crt                     1      5d21h
kube-system       calico-typha-ca                      1      3h44m
kube-system       extension-apiserver-authentication   6      6d1h
kube-system       kube-root-ca.crt                     1      5d21h
[root@master1 ~]# kubectl get pods -l k8s-app=calico-typha-app -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
k8s-demo-calico-typha-app-68cb887498-bjdq2   1/1     Running   0          3m37s
k8s-demo-calico-typha-app-68cb887498-pwcks   1/1     Running   0          3m37s
k8s-demo-calico-typha-app-68cb887498-rz5ms   1/1     Running   0          3m37s
[root@master1 install]# kubectl get pods -l k8s-app=calico-node-app -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
k8s-demo-calico-node-app-ck4pd   1/1     Running   0          67s
k8s-demo-calico-node-app-dfv59   1/1     Running   0          67s
k8s-demo-calico-node-app-k672n   1/1     Running   0          67s
k8s-demo-calico-node-app-l886z   1/1     Running   0          67s
k8s-demo-calico-node-app-nhh7j   1/1     Running   0          67s
k8s-demo-calico-node-app-nnnxp   1/1     Running   0          67s
[root@master1 ~]# kubectl get pod -A
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   k8s-demo-calico-node-app-ck4pd               1/1     Running   0             42m
kube-system   k8s-demo-calico-node-app-dfv59               1/1     Running   0             42m
kube-system   k8s-demo-calico-node-app-k672n               1/1     Running   0             19m
kube-system   k8s-demo-calico-node-app-l886z               1/1     Running   0             42m
kube-system   k8s-demo-calico-node-app-nhh7j               1/1     Running   0             29m
kube-system   k8s-demo-calico-node-app-nnnxp               1/1     Running   0             42m
kube-system   k8s-demo-calico-typha-app-68cb887498-bjdq2   1/1     Running   0             3h6m
kube-system   k8s-demo-calico-typha-app-68cb887498-pwcks   1/1     Running   1 (30m ago)   3h6m
kube-system   k8s-demo-calico-typha-app-68cb887498-rz5ms   1/1     Running   0             3h6m
[root@master1 ~]# kubectl get all -A
NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE
kube-system   pod/k8s-demo-calico-typha-app-68cb887498-bjdq2   1/1     Running   0          4m15s
kube-system   pod/k8s-demo-calico-typha-app-68cb887498-pwcks   1/1     Running   0          4m15s
kube-system   pod/k8s-demo-calico-typha-app-68cb887498-rz5ms   1/1     Running   0          4m15s

NAMESPACE     NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
default       service/kubernetes                  ClusterIP   10.66.0.1      <none>        443/TCP    3d23h
kube-system   service/calico-typha                ClusterIP   10.66.187.51   <none>        5473/TCP   4m15s

NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/k8s-demo-calico-typha-app   3/3     3            3           4m15s

NAMESPACE     NAME                                                   DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/k8s-demo-calico-typha-app-68cb887498   3         3         3       4m15s
[root@master1 ~]# export TYPHA_CLUSTERIP=$(kubectl get svc -n kube-system calico-typha -o jsonpath='{.spec.clusterIP}')
[root@master1 ~]# curl https://$TYPHA_CLUSTERIP:5473 -v --cacert /opt/install/soft/calico/certs/typha-ca.crt
[root@master1 install]# calicoctl  get node
NAME
master1
master2
master3
node1
node2
node3
[root@master1 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+--------------------------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |              INFO              |
+----------------+-------------------+-------+----------+--------------------------------+
| 192.168.66.132 | node-to-node mesh | up    | 17:34:01 | Established                    |
| 192.168.66.133 | node-to-node mesh | up    | 17:34:01 | Established                    |
| 192.168.66.134 | node-to-node mesh | up    | 17:43:11 | Established                    |
| 192.168.66.135 | node-to-node mesh | up    | 17:34:01 | Established                    |
| 192.168.66.136 | node-to-node mesh | up    | 17:48:45 | Established                    |
+----------------+-------------------+-------+----------+--------------------------------+

IPv6 BGP status
No IPv6 peers found.

[root@master1 ~]# 
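
With the BGP mesh established, calicoctl can also confirm the pool settings and show IPAM usage as Pods come up in the next step; a sketch:

# Show pool modes (NAT, IPIP, VXLAN) at a glance.
calicoctl get ippool -o wide
# Show address usage; per-node blocks appear once workloads receive IPs.
calicoctl ipam show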

2. Create Three busybox Pods and Test Cross-Host Pod Communication

[root@master1 ~]# kubectl create deployment pingtest --image=busybox --replicas=3 -- sleep infinity
[root@master1 ~]# kubectl get pods --selector=app=pingtest --output=wide
NAME                        READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
pingtest-585b76c894-8ct4j   1/1     Running   0          2m44s   172.66.7.0      node3   <none>           <none>
pingtest-585b76c894-f86tp   1/1     Running   0          2m44s   172.66.11.0     node2   <none>           <none>
pingtest-585b76c894-fmhlt   1/1     Running   0          2m44s   172.66.38.128   node1   <none>           <none>
[root:1275@master1 k8s-install-shell]# kubectl exec -ti pingtest-585b76c894-8ct4j -- sh
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether fa:4a:a8:24:6d:38 brd ff:ff:ff:ff:ff:ff
    inet 172.66.7.0/32 brd 172.66.7.0 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit
[root@master1 ~]# kubectl exec -ti pingtest-585b76c894-f86tp -- sh
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 6a:ac:e9:e1:25:b8 brd ff:ff:ff:ff:ff:ff
    inet 172.66.11.0/32 brd 172.66.11.0 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.66.7.0 -c 4
PING 172.66.7.0 (172.66.7.0): 56 data bytes
64 bytes from 172.66.7.0: seq=0 ttl=62 time=0.579 ms
64 bytes from 172.66.7.0: seq=1 ttl=62 time=0.401 ms
64 bytes from 172.66.7.0: seq=2 ttl=62 time=0.434 ms
64 bytes from 172.66.7.0: seq=3 ttl=62 time=0.507 ms

--- 172.66.7.0 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.401/0.480/0.579 ms
/ # exit
[root@master1 ~]#
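
Once the cross-host ping succeeds, the test deployment can be cleaned up:

kubectl delete deployment pingtest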

VIII. Troubleshooting Notes

1. requested domain name does not match the server's certificate.

[root@master1 ~]# export TYPHA_CLUSTERIP=$(kubectl get svc -n kube-system calico-typha -o jsonpath='{.spec.clusterIP}')
[root@master1 ~]# curl https://$TYPHA_CLUSTERIP:5473 -v --cacert /opt/install/soft/calico/certs/typha-ca.crt
* About to connect() to 10.66.182.107 port 5473 (#0)
*   Trying 10.66.182.107...
* Connected to 10.66.182.107 (10.66.182.107) port 5473 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /opt/install/soft/calico/certs/typha-ca.crt
  CApath: none
* Server certificate:
*       subject: CN=k8s-demo-calico-typha-sa
*       start date: 4月 16 07:54:49 2022 GMT
*       expire date: 4月 16 07:54:49 2023 GMT
*       common name: k8s-demo-calico-typha-sa
*       issuer: CN=Calico Typha CA
* NSS error -12276 (SSL_ERROR_BAD_CERT_DOMAIN)
* Unable to communicate securely with peer: requested domain name does not match the server's certificate.
* Closing connection 0
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
[root@master1 ~]#
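
This error is expected from curl: the certificate Typha presents has CN=k8s-demo-calico-typha-sa (visible in the output above), which naturally does not match the Service IP curl dialed, so curl's hostname verification fails. Felix does not verify the hostname; it checks the CN against FELIX_TYPHACN (set to k8s-demo-calico-typha-sa in the calico-node manifest), so calico-node connects without problems. Getting this far actually confirms that Typha is serving a certificate issued by the expected CA. To inspect the certificate without hostname checking, a sketch using openssl:

# s_client does not enforce name matching, so the handshake completes and
# the presented certificate can be examined directly.
openssl s_client -connect $TYPHA_CLUSTERIP:5473 \
    -CAfile /opt/install/soft/calico/certs/typha-ca.crt </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer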

www.cni.dev (project repository: github.com/containerne…)

Closing Notes


  • Start by using it: hands-on practice is the way to get to know Kubernetes (k8s); with enough accumulated experience, understanding follows naturally.
  • Sharing what you understand benefits others and yourself alike.
  • Aim for simplicity so that things are easy to understand; the context of knowledge, such as versions and dates, is part of the knowledge itself.
  • Comments and questions are welcome; I generally reply and update the article on weekends.
  • Jason@vip.qq.com 2022-4-15