Deploying Kubernetes from Binaries


This article walks through deploying a Kubernetes cluster from binaries in detail: planning the cluster nodes, choosing software versions, allocating networks, preparing the host environment and the load balancer, configuring passwordless SSH, preparing certificates, deploying each component (etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, Calico, CoreDNS), and finally installing an application (nginx) to verify that the cluster works.

1.1 Cluster Host Plan

| Host IP        | Hostname    | Spec | Role             | Installed software |
| -------------- | ----------- | ---- | ---------------- | ------------------ |
| 192.168.10.12  | k8s-master1 | 2C4G | master           | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker-ce |
| 192.168.10.13  | k8s-master2 | 2C4G | master           | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker-ce |
| 192.168.10.14  | k8s-master3 | 2C4G | master           | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker-ce |
| 192.168.10.15  | k8s-worker1 | 2C4G | worker           | kubelet, kube-proxy, docker-ce |
| 192.168.10.10  | ha1         | 1C2G | LB               | haproxy, keepalived |
| 192.168.10.11  | ha2         | 1C2G | LB               | haproxy, keepalived |
| 192.168.10.100 | /           | /    | virtual IP (VIP) | / |

1.2 Software Versions

| Software   | Version  | Notes                |
| ---------- | -------- | -------------------- |
| CentOS     | 7        | kernel version: 5.16 |
| kubernetes | v1.21.10 |                      |
| etcd       | v3.5.2   | latest release       |
| calico     | v3.19.4  |                      |
| coredns    | v1.8.4   |                      |
| docker-ce  | 20.10.13 | default from YUM repo |
| haproxy    | 5.18     | default from YUM repo |
| keepalived | 3.5      | default from YUM repo |

1.3 Network Allocation

| Network         | CIDR            | Notes |
| --------------- | --------------- | ----- |
| Node network    | 192.168.10.0/24 |       |
| Service network | 10.96.0.0/16    |       |
| Pod network     | 10.244.0.0/16   |       |

1.4 Host Environment Preparation

This chapter follows the earlier content: host name/IP resolution, firewall, SELinux, mutual SSH trust between hosts, disabling swap, time synchronization, kernel upgrade, ipvs and iptables modules, enabling IP forwarding, and bridge netfilter filtering. The details are not repeated here; a brief sketch follows.
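A minimal sketch of the host preparation, run on every node (the package names and sysctl keys below are the usual CentOS 7 ones; adjust to your environment and see the earlier chapters for the full steps):

Disable the firewall and SELinux
# systemctl disable --now firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Disable swap
# swapoff -a
# sed -ri 's/.*swap.*/#&/' /etc/fstab

Install ipvs tooling, enable IP forwarding and bridge filtering
# yum -y install ipset ipvsadm
# modprobe br_netfilter
# cat > /etc/sysctl.d/k8s.conf << "EOF"
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system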

1.5 Load Balancer Preparation

Likewise, install haproxy and keepalived here; the earlier chapters cover the detailed steps. A sketch of the apiserver frontend follows.
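As an illustration only (the full configuration lives in the earlier chapter), haproxy on ha1/ha2 listens on port 6443 and forwards TCP to the three masters, while keepalived floats the VIP 192.168.10.100 between ha1 and ha2. A fragment of /etc/haproxy/haproxy.cfg might look like this (it assumes the global and defaults sections are already in place):

frontend k8s-apiserver
    bind *:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server k8s-master1 192.168.10.12:6443 check
    server k8s-master2 192.168.10.13:6443 check
    server k8s-master3 192.168.10.14:6443 check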

1.6 Configure Passwordless SSH

Run on k8s-master1:

# ssh-keygen  
# ssh-copy-id root@k8s-master1  
# ssh-copy-id root@k8s-master2  
# ssh-copy-id root@k8s-master3  
# ssh-copy-id root@k8s-worker1

2. Certificate Preparation

Run on k8s-master1.

2.1 Certificate Tooling

A binary deployment needs a working directory and the cfssl toolset, which is used to create the CA certificate and all component certificates.

2.1.1 Create the working directory

# mkdir -p /data/k8s-work

2.1.2 Install cfssl

# cd /data/k8s-work  
# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64  
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64  
# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64  
  
Notes:  
cfssl is a PKI/TLS toolkit written in Go and open-sourced by CloudFlare. The main programs are:  
  
- cfssl, the CFSSL command-line tool  
- cfssljson, which takes the JSON output of cfssl and writes the certificates, keys, CSRs, and bundles to files  
  
Add execute permission  
chmod +x cfssl*  
  
Move the binaries into place  
mv cfssl_linux-amd64 /usr/local/bin/cfssl  
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson  
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo  
  
Verify the installation  
# cfssl version  
Version: 1.2.0  
Revision: dev  
Runtime: go1.6  
This output means the installation succeeded.

2.2 Configure the CA Certificate

2.2.1 Create the CA certificate signing request

# cat > ca-csr.json <<"EOF"
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}
EOF

2.2.2 Generate the CA certificate

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
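This produces ca.pem, ca-key.pem, and ca.csr next to ca-csr.json. The cfssl-certinfo tool installed earlier can be used to inspect the new CA certificate, for example:

# cfssl-certinfo -cert ca.pem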

2.2.3 Create the CA signing policy

# cat > ca-config.json <<"EOF"  
{  
  "signing": {  
      "default": {  
          "expiry""87600h"  
        },  
      "profiles": {  
          "kubernetes": {  
              "usages": [  
                  "signing",  
                  "key encipherment",  
                  "server auth",  
                  "client auth"  
              ],  
              "expiry""87600h"  
          }  
      }  
  }  
}  
EOF
server auth means a client can use this CA to verify the certificate presented by a server.  
  
client auth means a server can use this CA to verify the certificate presented by a client.

3. Deploying the Software from Binaries

3.1 Deploy etcd

3.1.1 Create the etcd certificate signing request

# cat > etcd-csr.json <<"EOF"
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.10.12",
    "192.168.10.13",
    "192.168.10.14"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Beijing",
    "L": "Beijing",
    "O": "kubemsb",
    "OU": "CN"
  }]
}
EOF

The hosts list contains the IPs of the three master nodes.

3.1.2 Generate the etcd certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd

Check the result:

# ls  
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

The etcd certificate files etcd.csr, etcd-key.pem, and etcd.pem have been generated alongside the etcd-csr.json request file.

3.1.3 Download etcd

From the etcd GitHub releases page, copy the download link for the v3.5.2 Linux amd64 tarball (screenshots omitted), then download it:

# wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz

3.1.4 Install etcd

# tar -xvf etcd-v3.5.2-linux-amd64.tar.gz  
# cp -p etcd-v3.5.2-linux-amd64/etcd* /usr/local/bin/

3.1.5 Distribute the binaries

Because k8s-master2 and k8s-master3 also need etcd, distribute the binaries to the other master nodes with scp:

# scp etcd-v3.5.2-linux-amd64/etcd* k8s-master2:/usr/local/bin/  
  
# scp etcd-v3.5.2-linux-amd64/etcd* k8s-master3:/usr/local/bin/

3.1.6 Create the configuration file

First create an etcd directory on every master node. Note that the IP addresses in this file must match the node it lives on: apart from ETCD_INITIAL_CLUSTER, whose value lists the whole cluster, every other address must be changed to that node's IP (see the k8s-master2 example after the notes below).

# mkdir /etc/etcd  
  
# cat >  /etc/etcd/etcd.conf <<"EOF"  
#[Member]  
ETCD_NAME="etcd1"  
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"  
ETCD_LISTEN_PEER_URLS="https://192.168.10.12:2380"  
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.12:2379,http://127.0.0.1:2379"  
  
#[Clustering]  
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.12:2380"  
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.12:2379"  
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.10.12:2380,etcd2=https://192.168.10.13:2380,etcd3=https://192.168.10.14:2380"  
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"  
ETCD_INITIAL_CLUSTER_STATE="new"  
EOF  
  
Notes:  
ETCD_NAME: node name, unique within the cluster  
ETCD_DATA_DIR: data directory  
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address  
ETCD_LISTEN_CLIENT_URLS: client listen address  
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address  
ETCD_ADVERTISE_CLIENT_URLS: advertised client address  
ETCD_INITIAL_CLUSTER: addresses of all cluster members  
ETCD_INITIAL_CLUSTER_TOKEN: cluster token  
ETCD_INITIAL_CLUSTER_STATE: join state; new for a new cluster, existing to join an existing cluster
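For example, on k8s-master2 the same file would look like this (only the name and the local addresses change, as inferred from the cluster plan above):

# cat >  /etc/etcd/etcd.conf <<"EOF"
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.10.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.13:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.13:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.10.12:2380,etcd2=https://192.168.10.13:2380,etcd3=https://192.168.10.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF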

3.1.7 Prepare certificate and data directories

# mkdir -p /etc/etcd/ssl  
# mkdir -p /var/lib/etcd/default.etcd  
  
Copy the certificates  
cd /data/k8s-work  
cp ca*.pem /etc/etcd/ssl  
cp etcd*.pem /etc/etcd/ssl

3.1.8 Create the etcd systemd unit file

# cat > /etc/systemd/system/etcd.service <<"EOF"  
[Unit]  
Description=Etcd Server  
After=network.target  
  
After=network-online.target  
Wants=network-online.target  
  
[Service]  
Type=notify  
EnvironmentFile=-/etc/etcd/etcd.conf  
WorkingDirectory=/var/lib/etcd/  
ExecStart=/usr/local/bin/etcd \  
  --cert-file=/etc/etcd/ssl/etcd.pem \  
  --key-file=/etc/etcd/ssl/etcd-key.pem \  
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \  
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \  
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \  
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \  
  --peer-client-cert-auth \  
  --client-cert-auth  
Restart=on-failure  
RestartSec=5  
LimitNOFILE=65536  
  
[Install]  
WantedBy=multi-user.target  
EOF

3.1.9 Sync the etcd configuration to the other master nodes

On k8s-master2 and k8s-master3, create the directories first:
# mkdir -p /etc/etcd
# mkdir -p /etc/etcd/ssl
# mkdir -p /var/lib/etcd/default.etcd

Configuration file (the etcd node name and IP addresses must be adjusted on each node):
for i in k8s-master2 k8s-master3
do
scp /etc/etcd/etcd.conf $i:/etc/etcd/
done

Certificate files:
for i in k8s-master2 k8s-master3
do
scp /etc/etcd/ssl/* $i:/etc/etcd/ssl
done

systemd unit file:
for i in k8s-master2 k8s-master3
do
scp /etc/systemd/system/etcd.service $i:/etc/systemd/system/
done

3.1.10 Start etcd

# systemctl daemon-reload  
# systemctl enable --now etcd.service  
# systemctl status etcd

3.1.11 Verify the etcd cluster

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.10.12:2379,https://192.168.10.13:2379,https://192.168.10.14:2379 endpoint health
+----------------------------+--------+-------------+-------+  
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |  
+----------------------------+--------+-------------+-------+  
| https://192.168.10.14:2379 |   true | 10.393062ms |       |  
| https://192.168.10.12:2379 |   true |  15.70437ms |       |  
| https://192.168.10.13:2379 |   true | 15.871684ms |       |  
+----------------------------+--------+-------------+-------+
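Optionally, list the cluster members with the same TLS flags (output omitted):

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.10.12:2379,https://192.168.10.13:2379,https://192.168.10.14:2379 member list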

3.2 Deploy the Kubernetes Master Components

3.2.1 Download the Kubernetes binaries

wget https://dl.k8s.io/v1.21.10/kubernetes-server-linux-amd64.tar.gz

3.2.2 Install the binaries

# tar -xvf kubernetes-server-linux-amd64.tar.gz  
  
# cd kubernetes/server/bin/  
  
# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

3.2.3 Distribute the binaries

The master nodes need kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl:

# scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master2:/usr/local/bin/  
# scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master3:/usr/local/bin/

All nodes need kubelet and kube-proxy:

# scp kubelet kube-proxy k8s-master1:/usr/local/bin  
# scp kubelet kube-proxy k8s-master2:/usr/local/bin  
# scp kubelet kube-proxy k8s-master3:/usr/local/bin  
# scp kubelet kube-proxy k8s-worker1:/usr/local/bin

3.2.4 Create directories on the cluster nodes

Run on all cluster nodes  
# mkdir -p /etc/kubernetes/          
# mkdir -p /etc/kubernetes/ssl       
# mkdir -p /var/log/kubernetes

3.2.5 Deploy kube-apiserver

3.2.5.1 Create the apiserver certificate signing request
# cat > kube-apiserver-csr.json << "EOF"
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.10.12",
    "192.168.10.13",
    "192.168.10.14",
    "192.168.10.15",
    "192.168.10.16",
    "192.168.10.17",
    "192.168.10.10",
    "192.168.10.11",
    "192.168.10.100",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF

Notes:
192.168.10.12-15 are nodes that already exist in the cluster; 192.168.10.16 and 192.168.10.17 are reserved for nodes that may be added later, listed now so the certificate does not have to be regenerated when the cluster is expanded.
If the hosts field is not empty it must list every IP (including the VIP) and domain name authorized to use this certificate. Because this certificate is used by the whole cluster, all node IPs should be included, and listing a few spare IPs makes later expansion easier.
The first IP of the Service network must also be included (the first address of the service-cluster-ip-range passed to kube-apiserver, here 10.96.0.1).
3.2.5.2 Generate the apiserver certificate and token file
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
# cat > token.csv << EOF  
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"  
EOF  
  
Notes:  
This creates the token required by the TLS bootstrapping mechanism.  
TLS bootstrapping: once the apiserver has TLS authentication enabled, the kubelet and kube-proxy on each node must present valid CA-signed certificates to talk to kube-apiserver. Issuing those client certificates by hand becomes a lot of work when there are many nodes and complicates cluster expansion.  
To simplify this, Kubernetes provides TLS bootstrapping to issue client certificates automatically: the kubelet connects as a low-privileged user and requests a certificate, which the apiserver signs dynamically.  
This approach is strongly recommended for nodes; it is currently used for the kubelet, while kube-proxy still uses a certificate that we issue centrally.
3.2.5.3 Create the apiserver configuration file
# cat > /etc/kubernetes/kube-apiserver.conf << "EOF"  
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \  
  --anonymous-auth=false \  
  --bind-address=192.168.10.12 \  
  --secure-port=6443 \  
  --advertise-address=192.168.10.12 \  
  --insecure-port=0 \  
  --authorization-mode=Node,RBAC \  
  --runtime-config=api/all=true \  
  --enable-bootstrap-token-auth \  
  --service-cluster-ip-range=10.96.0.0/16 \  
  --token-auth-file=/etc/kubernetes/token.csv \  
  --service-node-port-range=30000-32767 \  
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \  
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \  
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \  
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \  
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \  
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \  
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \  
  --service-account-issuer=api \  
  --etcd-cafile=/etc/etcd/ssl/ca.pem \  
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \  
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \  
  --etcd-servers=https://192.168.10.12:2379,https://192.168.10.13:2379,https://192.168.10.14:2379 \  
  --enable-swagger-ui=true \  
  --allow-privileged=true \  
  --apiserver-count=3 \  
  --audit-log-maxage=30 \  
  --audit-log-maxbackup=3 \  
  --audit-log-maxsize=100 \  
  --audit-log-path=/var/log/kube-apiserver-audit.log \  
  --event-ttl=1h \  
  --alsologtostderr=true \  
  --logtostderr=false \  
  --log-dir=/var/log/kubernetes \  
  --v=4"  
EOF
3.2.5.4 Create the apiserver systemd unit file
# cat > /etc/systemd/system/kube-apiserver.service << "EOF"  
[Unit]  
Description=Kubernetes API Server  
Documentation=https://github.com/kubernetes/kubernetes  
After=etcd.service  
Wants=etcd.service  
  
[Service]  
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf  
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS  
Restart=on-failure  
RestartSec=5  
Type=notify  
LimitNOFILE=65536  
  
[Install]  
WantedBy=multi-user.target  
EOF
3.2.5.5 Sync certificates and configuration to the other master nodes
Copy the CA certificates into place on k8s-master1:  
# cp ca*.pem /etc/kubernetes/ssl/  
  
Copy the apiserver certificates into place on k8s-master1:  
# cp kube-apiserver*.pem /etc/kubernetes/ssl/  
  
Copy the token file into place on k8s-master1:  
# cp token.csv /etc/kubernetes/  
  
Then sync these files to the other master nodes:  
# scp /etc/kubernetes/token.csv k8s-master2:/etc/kubernetes  
# scp /etc/kubernetes/token.csv k8s-master3:/etc/kubernetes  
  
# scp /etc/kubernetes/ssl/kube-apiserver*.pem k8s-master2:/etc/kubernetes/ssl  
# scp /etc/kubernetes/ssl/kube-apiserver*.pem k8s-master3:/etc/kubernetes/ssl  
  
# scp /etc/kubernetes/ssl/ca*.pem k8s-master2:/etc/kubernetes/ssl  
# scp /etc/kubernetes/ssl/ca*.pem k8s-master3:/etc/kubernetes/ssl  
  
Sync the configuration file:  
# scp /etc/kubernetes/kube-apiserver.conf k8s-master2:/etc/kubernetes/kube-apiserver.conf  
# scp /etc/kubernetes/kube-apiserver.conf k8s-master3:/etc/kubernetes/kube-apiserver.conf  
Note: --bind-address and --advertise-address in the copied file must be changed to the IP of the target master (see the sketch after this block).  
  
Sync the systemd unit file:  
# scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/kube-apiserver.service  
# scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/kube-apiserver.service
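A quick way to make that change is a targeted sed on each node, a sketch assuming the flag values are exactly as written above (only --bind-address and --advertise-address are touched, so the --etcd-servers list is left alone):

On k8s-master2:
# sed -i 's/--bind-address=192.168.10.12/--bind-address=192.168.10.13/; s/--advertise-address=192.168.10.12/--advertise-address=192.168.10.13/' /etc/kubernetes/kube-apiserver.conf

On k8s-master3:
# sed -i 's/--bind-address=192.168.10.12/--bind-address=192.168.10.14/; s/--advertise-address=192.168.10.12/--advertise-address=192.168.10.14/' /etc/kubernetes/kube-apiserver.conf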
3.2.5.6 Start and verify
# systemctl daemon-reload  
# systemctl enable --now kube-apiserver  
  
# systemctl status kube-apiserver  
  
# Test  
curl --insecure https://192.168.10.12:6443/  
curl --insecure https://192.168.10.13:6443/  
curl --insecure https://192.168.10.14:6443/  
curl --insecure https://192.168.10.100:6443/

3.2.6 Deploy kubectl

3.2.6.1 Create the kubectl (admin) certificate signing request
# cat > admin-csr.json << "EOF"  
{  
  "CN": "admin",  
  "hosts": [],  
  "key": {  
    "algo": "rsa",  
    "size": 2048  
      
  },  
  "names": [  
    {  
      "C": "CN",  
      "ST": "Beijing",  
      "L": "Beijing",  
      "O": "system:masters",               
      "OU": "system"  
    }  
  ]  
}  
EOF  
  
  
Notes:  
  
kube-apiserver later uses RBAC to authorize requests from clients such as kubelet, kube-proxy, and Pods.  
kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API.  
O sets the certificate's Group to system:masters. When this certificate is used to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because its group is the pre-authorized system:masters it is granted access to all APIs.  
Note:  
This admin certificate is later used to generate the administrator kubeconfig. RBAC is the recommended way to control roles and permissions in Kubernetes; Kubernetes treats the certificate's CN field as the User and the O field as the Group.  
"O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding step fails.
3.2.6.2 Generate the certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
3.2.6.3 Copy the certificates into place
# cp admin*.pem /etc/kubernetes/ssl/
3.2.6.4 Generate the kubeconfig for kubectl
# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.10.100:6443 --kubeconfig=kube.config  
  
# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config  
  
# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config  
  
# kubectl config use-context kubernetes --kubeconfig=kube.config
3.2.6.5 Prepare the kubectl config file and create the role binding
mkdir ~/.kube  
cp kube.config ~/.kube/config  
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config
3.2.6.6 Check the cluster status
# export KUBECONFIG=$HOME/.kube/config  
  
# kubectl cluster-info  
# kubectl get componentstatuses  
# kubectl get all --all-namespaces
3.2.6.7 Sync the kubectl config file to the other nodes
On k8s-master2:  
# mkdir /root/.kube  
  
On k8s-master3:  
# mkdir /root/.kube
# scp /root/.kube/config k8s-master2:/root/.kube/config  
# scp /root/.kube/config k8s-master3:/root/.kube/config
3.2.6.8 Configure kubectl command completion
# yum install -y bash-completion  
# source /usr/share/bash-completion/bash_completion  
# source <(kubectl completion bash)  
# kubectl completion bash > ~/.kube/completion.bash.inc  
# source '/root/.kube/completion.bash.inc'    
# source $HOME/.bash_profile
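To have completion load on every login, it can be appended to the shell profile (a sketch assuming the root user's bash profile):

# cat >> ~/.bash_profile << "EOF"
source /usr/share/bash-completion/bash_completion
source ~/.kube/completion.bash.inc
EOF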

3.2.7 Deploy kube-controller-manager

3.2.7.1 Create the kube-controller-manager certificate signing request
# cat > kube-controller-manager-csr.json << "EOF"
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.10.12",
      "192.168.10.13",
      "192.168.10.14"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}
EOF

Notes:

The hosts list contains the IPs of all kube-controller-manager nodes.
CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to do its work.
3.2.7.2 Generate the kube-controller-manager certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
# ls  
  
kube-controller-manager.csr       
kube-controller-manager-csr.json  
kube-controller-manager-key.pem  
kube-controller-manager.pem
3.2.7.3 Create kube-controller-manager.kubeconfig
# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.10.100:6443 --kubeconfig=kube-controller-manager.kubeconfig  
  
# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig  
  
# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig  
  
# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
3.2.7.4 Create the kube-controller-manager configuration file
# cat > kube-controller-manager.conf << "EOF"  
KUBE_CONTROLLER_MANAGER_OPTS="--port=10252 \  
  --secure-port=10257 \  
  --bind-address=127.0.0.1 \  
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \  
  --service-cluster-ip-range=10.96.0.0/16 \  
  --cluster-name=kubernetes \  
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \  
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \  
  --allocate-node-cidrs=true \  
  --cluster-cidr=10.244.0.0/16 \  
  --experimental-cluster-signing-duration=87600h \  
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \  
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \  
  --leader-elect=true \  
  --feature-gates=RotateKubeletServerCertificate=true \  
  --controllers=*,bootstrapsigner,tokencleaner \  
  --horizontal-pod-autoscaler-use-rest-clients=true \  
  --horizontal-pod-autoscaler-sync-period=10s \  
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \  
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \  
  --use-service-account-credentials=true \  
  --alsologtostderr=true \  
  --logtostderr=false \  
  --log-dir=/var/log/kubernetes \  
  --v=2"  
EOF
3.2.7.5 Create the kube-controller-manager systemd unit file
# cat > kube-controller-manager.service << "EOF"  
[Unit]  
Description=Kubernetes Controller Manager  
Documentation=https://github.com/kubernetes/kubernetes  
  
[Service]  
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf  
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS  
Restart=on-failure  
RestartSec=5  
  
[Install]  
WantedBy=multi-user.target  
EOF
3.2.7.6 Sync files
Copy into place on this node:  
# cp kube-controller-manager*.pem /etc/kubernetes/ssl/  
# cp kube-controller-manager.kubeconfig /etc/kubernetes/  
# cp kube-controller-manager.conf /etc/kubernetes/  
# cp kube-controller-manager.service /usr/lib/systemd/system/
Sync to the other nodes:  
# scp  kube-controller-manager*.pem k8s-master2:/etc/kubernetes/ssl/  
# scp  kube-controller-manager*.pem k8s-master3:/etc/kubernetes/ssl/  
# scp  kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master2:/etc/kubernetes/  
# scp  kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master3:/etc/kubernetes/  
# scp  kube-controller-manager.service k8s-master2:/usr/lib/systemd/system/  
# scp  kube-controller-manager.service k8s-master3:/usr/lib/systemd/system/
# Inspect the certificate  
openssl x509 -in /etc/kubernetes/ssl/kube-controller-manager.pem -noout -text
3.2.7.7 Start the service
# systemctl daemon-reload   
# systemctl enable --now kube-controller-manager  
# systemctl status kube-controller-manager

3.2.8 Deploy kube-scheduler

3.2.8.1 Create the kube-scheduler certificate signing request
# cat > kube-scheduler-csr.json << "EOF"
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.10.12",
      "192.168.10.13",
      "192.168.10.14"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}
EOF
3.2.8.2 Generate the kube-scheduler certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
# ls  
kube-scheduler.csr  
kube-scheduler-csr.json  
kube-scheduler-key.pem  
kube-scheduler.pem
3.2.8.3 Generate the kube-scheduler kubeconfig
# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.10.100:6443 --kubeconfig=kube-scheduler.kubeconfig  
  
# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig  
  
# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig  
  
# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
3.2.8.4 Create the kube-scheduler configuration file
# cat > kube-scheduler.conf << "EOF"  
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \  
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \  
--leader-elect=true \  
--alsologtostderr=true \  
--logtostderr=false \  
--log-dir=/var/log/kubernetes \  
--v=2"  
EOF
3.2.8.5 Create the kube-scheduler systemd unit file
# cat > kube-scheduler.service << "EOF"  
[Unit]  
Description=Kubernetes Scheduler  
Documentation=https://github.com/kubernetes/kubernetes  
  
[Service]  
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf  
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS  
Restart=on-failure  
RestartSec=5  
  
[Install]  
WantedBy=multi-user.target  
EOF
3.2.8.6 Sync files
Copy into place on this node:  
# cp kube-scheduler*.pem /etc/kubernetes/ssl/  
# cp kube-scheduler.kubeconfig /etc/kubernetes/  
# cp kube-scheduler.conf /etc/kubernetes/  
# cp kube-scheduler.service /usr/lib/systemd/system/
Sync to the other nodes:  
# scp  kube-scheduler*.pem k8s-master2:/etc/kubernetes/ssl/  
# scp  kube-scheduler*.pem k8s-master3:/etc/kubernetes/ssl/  
# scp  kube-scheduler.kubeconfig kube-scheduler.conf k8s-master2:/etc/kubernetes/  
# scp  kube-scheduler.kubeconfig kube-scheduler.conf k8s-master3:/etc/kubernetes/  
# scp  kube-scheduler.service k8s-master2:/usr/lib/systemd/system/  
# scp  kube-scheduler.service k8s-master3:/usr/lib/systemd/system/
3.2.8.7 Start the service
# systemctl daemon-reload  
# systemctl enable --now kube-scheduler  
# systemctl status kube-scheduler

At this point the main master-node components are installed. To sum up, the binary deployment of each component follows the same steps:

  • Create the certificate signing request file
  • Generate the certificate
  • Generate the component's kubeconfig file
  • Create the component's configuration file
  • Create the component's systemd unit file
  • Sync the files to the other master nodes
  • Start the service

3.3 Install docker-ce on All Nodes

3.3.1 Configure the docker YUM repository

# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3.3.2 Install and start docker

# yum -y install docker-ce-20.10.17 docker-ce-cli-20.10.17  
  
# systemctl enable docker  
# systemctl start docker

3.3.3 Modify the docker configuration and restart

# cat > /etc/docker/daemon.json << "EOF"
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Restart docker
# systemctl restart docker
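To confirm the change took effect, docker info should now report "Cgroup Driver: systemd":

# docker info | grep -i "cgroup driver"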

3.4 Deploy the Kubernetes Workers

The worker nodes mainly need kubelet and kube-proxy.

3.4.1 Deploy kubelet and kube-proxy

3.4.1.1 Create the kubelet bootstrap kubeconfig

Run on k8s-master1:

# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)  
  
# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.10.100:6443 --kubeconfig=kubelet-bootstrap.kubeconfig  
  
# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig  
  
# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig  
  
# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap  
  
# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
# kubectl describe clusterrolebinding cluster-system-anonymous  
  
# kubectl describe clusterrolebinding kubelet-bootstrap
3.4.1.2 Create the kubelet configuration file
# cat > kubelet.json << "EOF"
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.10.12",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.2"]
}
EOF
3.4.1.3 Create the kubelet systemd unit file
# cat > kubelet.service << "EOF"  
[Unit]  
Description=Kubernetes Kubelet  
Documentation=https://github.com/kubernetes/kubernetes  
After=docker.service  
Requires=docker.service  
  
[Service]  
WorkingDirectory=/var/lib/kubelet  
ExecStart=/usr/local/bin/kubelet \  
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \  
  --cert-dir=/etc/kubernetes/ssl \  
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \  
  --config=/etc/kubernetes/kubelet.json \  
  --network-plugin=cni \  
  --rotate-certificates \  
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \  
  --alsologtostderr=true \  
  --logtostderr=false \  
  --log-dir=/var/log/kubernetes \  
  --v=2  
Restart=on-failure  
RestartSec=5  
  
[Install]  
WantedBy=multi-user.target  
EOF
3.4.1.4 Sync files
Copy into place on this node:  
# cp kubelet-bootstrap.kubeconfig /etc/kubernetes/  
# cp kubelet.json /etc/kubernetes/  
# cp kubelet.service /usr/lib/systemd/system/
Distribute to the other nodes:  
for i in  k8s-master2 k8s-master3 k8s-worker1;do scp kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done  
  
for i in  k8s-master2 k8s-master3 k8s-worker1;do scp ca.pem $i:/etc/kubernetes/ssl/;done  
  
for i in k8s-master2 k8s-master3 k8s-worker1;do scp kubelet.service $i:/usr/lib/systemd/system/;done  
  
Note:  
The address field in kubelet.json must be changed to the IP address of the host it is on.
3.4.1.5 Create directories
# mkdir -p /var/lib/kubelet  
# mkdir -p /var/log/kubernetes
3.4.1.6 Start kubelet
# systemctl daemon-reload  
# systemctl enable --now kubelet  
  
# systemctl status kubelet
3.4.1.7 Verify
# kubectl get nodes  
NAME          STATUS     ROLES    AGE     VERSION  
k8s-master1   NotReady   <none>   2m55s   v1.21.10  
k8s-master2   NotReady   <none>   45s     v1.21.10  
k8s-master3   NotReady   <none>   39s     v1.21.10  
k8s-worker1   NotReady   <none>   5m1s    v1.21.10
# kubectl get csr  
NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION  
csr-b949p   7m55s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued  
csr-c9hs4   3m34s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued  
csr-r8vhp   5m50s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued  
csr-zb4sr   3m40s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
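The Approved,Issued condition means the bootstrap certificates were signed. If a CSR ever remains Pending, it can be approved by hand using the name from the list above, for example:

# kubectl certificate approve csr-b949p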

3.4.2 Deploy kube-proxy

3.4.2.1 Create the kube-proxy certificate signing request
# cat > kube-proxy-csr.json << "EOF"
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF
3.4.2.2 Generate the certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy  
  
Check the generated files:  
# ls kube-proxy*  
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
3.4.2.3 Create the kubeconfig
# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.10.100:6443 --kubeconfig=kube-proxy.kubeconfig  
  
# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig  
  
# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig  
  
# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
3.4.2.4 Create the service configuration file
# cat > kube-proxy.yaml << "EOF"  
apiVersion: kubeproxy.config.k8s.io/v1alpha1  
bindAddress: 192.168.10.12  
clientConnection:  
  
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig  
clusterCIDR: 10.244.0.0/16  
healthzBindAddress: 192.168.10.12:10256  
kind: KubeProxyConfiguration  
metricsBindAddress: 192.168.10.12:10249  
mode: "ipvs"  
EOF
3.4.2.5 Create the systemd unit file
# cat >  kube-proxy.service << "EOF"  
[Unit]  
Description=Kubernetes Kube-Proxy Server  
Documentation=https://github.com/kubernetes/kubernetes  
After=network.target  
  
[Service]  
WorkingDirectory=/var/lib/kube-proxy  
ExecStart=/usr/local/bin/kube-proxy \  
  --config=/etc/kubernetes/kube-proxy.yaml \  
  --alsologtostderr=true \  
  --logtostderr=false \  
  --log-dir=/var/log/kubernetes \  
  --v=2  
Restart=on-failure  
RestartSec=5  
LimitNOFILE=65536  
  
[Install]  
WantedBy=multi-user.target  
EOF
3.4.2.6 Sync files
Copy into place on this node:  
# cp kube-proxy*.pem /etc/kubernetes/ssl/  
# cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/  
# cp kube-proxy.service /usr/lib/systemd/system/
Sync to the other nodes:  
# for i in k8s-master2 k8s-master3 k8s-worker1;do scp kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done  
# for i in k8s-master2 k8s-master3 k8s-worker1;do scp  kube-proxy.service $i:/usr/lib/systemd/system/;done  
  
Note:  
After syncing, change the IP addresses in kube-proxy.yaml to the IP of the host it is on.
3.4.2.7 Start the service
# mkdir -p /var/lib/kube-proxy  
# systemctl daemon-reload  
# systemctl enable --now kube-proxy  
  
# systemctl status kube-proxy
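Because mode is set to ipvs, the proxy rules can be inspected with ipvsadm once Services exist (assuming ipvsadm was installed during host preparation):

# ipvsadm -Ln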

3.5 Install the Calico Network Plugin

3.5.1 Download Calico

# wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml

3.5.2 Modify the manifest

# vim calico.yaml  
Around lines 3683-3684, set CALICO_IPV4POOL_CIDR to the Pod network:  
3683             - name: CALICO_IPV4POOL_CIDR  
3684               value: "10.244.0.0/16"

3.5.3 Apply the manifest

# kubectl create -f calico.yaml

3.5.4 Verify

# kubectl get pods -A  
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE  
kube-system   calico-kube-controllers-7cc8dd57d9-tf2m5   1/1     Running   0          72s  
kube-system   calico-node-llw5w                          1/1     Running   0          72s  
kube-system   calico-node-mhh6g                          1/1     Running   0          72s  
kube-system   calico-node-twj99                          1/1     Running   0          72s  
kube-system   calico-node-zh6xl                          1/1     Running   0          72s
# kubectl get nodes  
NAME          STATUS   ROLES    AGE   VERSION  
k8s-master1   Ready    <none>   55m   v1.21.10  
k8s-master2   Ready    <none>   53m   v1.21.10  
k8s-master3   Ready    <none>   53m   v1.21.10  
k8s-worker1   Ready    <none>   57m   v1.21.10

3.6 Deploy CoreDNS

3.6.1 Create the CoreDNS manifest

# cat >  coredns.yaml << "EOF"  
apiVersion: v1  
kind: ServiceAccount  
metadata:  
  name: coredns  
  namespace: kube-system  
---  
apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRole  
metadata:  
  labels:  
    kubernetes.io/bootstrapping: rbac-defaults  
  name: system:coredns  
rules:  
  - apiGroups:  
    - ""  
    resources:  
    - endpoints  
    - services  
    - pods  
    - namespaces  
    verbs:  
    - list  
    - watch  
  - apiGroups:  
    - discovery.k8s.io  
    resources:  
    - endpointslices  
    verbs:  
    - list  
    - watch  
---  
apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRoleBinding  
metadata:  
  annotations:  
    rbac.authorization.kubernetes.io/autoupdate: "true"  
  labels:  
    kubernetes.io/bootstrapping: rbac-defaults  
  name: system:coredns  
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: ClusterRole  
  name: system:coredns  
subjects:  
- kind: ServiceAccount  
  name: coredns  
  namespace: kube-system  
---  
apiVersion: v1  
kind: ConfigMap  
metadata:  
  name: coredns  
  namespace: kube-system  
data:  
  Corefile: |  
    .:53 {  
        errors  
        health {  
          lameduck 5s  
        }  
        ready  
        kubernetes cluster.local  in-addr.arpa ip6.arpa {  
          fallthrough in-addr.arpa ip6.arpa  
        }  
        prometheus :9153  
        forward . /etc/resolv.conf {  
          max_concurrent 1000  
        }  
        cache 30  
        loop  
        reload  
        loadbalance  
    }  
---  
apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: coredns  
  namespace: kube-system  
  labels:  
    k8s-app: kube-dns  
    kubernetes.io/name: "CoreDNS"  
spec:  
  # replicas: not specified here:  
  # 1. Default is 1.  
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.  
  strategy:  
    type: RollingUpdate  
    rollingUpdate:  
      maxUnavailable: 1  
  selector:  
    matchLabels:  
      k8s-app: kube-dns  
  template:  
    metadata:  
      labels:  
        k8s-app: kube-dns  
    spec:  
      priorityClassName: system-cluster-critical  
      serviceAccountName: coredns  
      tolerations:  
        - key: "CriticalAddonsOnly"  
          operator: "Exists"  
      nodeSelector:  
        kubernetes.io/os: linux  
      affinity:  
         podAntiAffinity:  
           preferredDuringSchedulingIgnoredDuringExecution:  
           - weight: 100  
             podAffinityTerm:  
               labelSelector:  
                 matchExpressions:  
                   - key: k8s-app  
                     operator: In  
                     values: ["kube-dns"]  
               topologyKey: kubernetes.io/hostname  
      containers:  
      - name: coredns  
        image: coredns/coredns:1.8.4  
        imagePullPolicy: IfNotPresent  
        resources:  
          limits:  
            memory: 170Mi  
          requests:  
            cpu: 100m  
            memory: 70Mi  
        args: [ "-conf", "/etc/coredns/Corefile" ]  
        volumeMounts:  
        - name: config-volume  
          mountPath: /etc/coredns  
          readOnly: true  
        ports:  
        - containerPort: 53  
          name: dns  
          protocol: UDP  
        - containerPort: 53  
          name: dns-tcp  
          protocol: TCP  
        - containerPort: 9153  
          name: metrics  
          protocol: TCP  
        securityContext:  
          allowPrivilegeEscalation: false  
          capabilities:  
            add:  
            - NET_BIND_SERVICE  
            drop:  
            - all  
          readOnlyRootFilesystem: true  
        livenessProbe:  
          httpGet:  
            path: /health  
            port: 8080  
            scheme: HTTP  
          initialDelaySeconds: 60  
          timeoutSeconds: 5  
          successThreshold: 1  
          failureThreshold: 5  
        readinessProbe:  
          httpGet:  
            path: /ready  
            port: 8181  
            scheme: HTTP  
      dnsPolicy: Default  
      volumes:  
        - name: config-volume  
          configMap:  
            name: coredns  
            items:  
            - key: Corefile  
              path: Corefile  
---  
apiVersion: v1  
kind: Service  
metadata:  
  name: kube-dns  
  namespace: kube-system  
  annotations:  
    prometheus.io/port: "9153"  
    prometheus.io/scrape: "true"  
  labels:  
    k8s-app: kube-dns  
    kubernetes.io/cluster-service: "true"  
    kubernetes.io/name: "CoreDNS"  
spec:  
  selector:  
    k8s-app: kube-dns  
  clusterIP: 10.96.0.2  
  ports:  
  - name: dns  
    port: 53  
    protocol: UDP  
  - name: dns-tcp  
    port: 53  
    protocol: TCP  
  - name: metrics  
    port: 9153  
    protocol: TCP  
   
EOF

3.6.2 Apply the manifest

# kubectl create -f coredns.yaml

3.6.3 Verify

# kubectl get pods -A  
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE  
kube-system   calico-kube-controllers-7cc8dd57d9-tf2m5   1/1     Running   0          4m7s  
kube-system   calico-node-llw5w                          1/1     Running   0          4m7s  
kube-system   calico-node-mhh6g                          1/1     Running   0          4m7s  
kube-system   calico-node-twj99                          1/1     Running   0          4m7s  
kube-system   calico-node-zh6xl                          1/1     Running   0          4m7s  
kube-system   coredns-675db8b7cc-ncnf6                   1/1     Running   0          26s
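As an extra check (a sketch; it assumes the nodes can pull the busybox image), DNS resolution through the kube-dns Service at 10.96.0.2 can be tested from a temporary Pod:

# kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local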

At this point, the binary deployment of the Kubernetes cluster is complete.


4.1 Install an Application to Verify the Cluster

Here we deploy nginx to verify that the cluster is usable.

4.1.1 Create the nginx manifest

# cat >  nginx.yaml  << "EOF"  
---  
apiVersion: v1  
kind: ReplicationController  
metadata:  
  name: nginx-web  
spec:  
  replicas: 2  
  selector:  
    name: nginx  
  template:  
    metadata:  
      labels:  
        name: nginx  
    spec:  
      containers:  
        - name: nginx  
          image: nginx:1.19.6  
          ports:  
            - containerPort: 80  
---  
apiVersion: v1  
kind: Service  
metadata:  
  name: nginx-service-nodeport  
spec:  
  ports:  
    - port: 80  
      targetPort: 80  
      nodePort: 30001  
      protocol: TCP  
  type: NodePort  
  selector:  
    name: nginx  
EOF

4.1.2 Apply the manifest

# kubectl create -f nginx.yaml

4.1.3 Verify that nginx works

# kubectl get pods -o wide  
NAME              READY   STATUS    RESTARTS   AGE   IP              NODE          NOMINATED NODE   READINESS GATES  
nginx-web-qzvw4   1/1     Running   0          58s   10.244.194.65   k8s-worker1   <none>           <none>  
nginx-web-spw5t   1/1     Running   0          58s   10.244.224.1    k8s-master2   <none>           <none>
# kubectl get all  
NAME                  READY   STATUS    RESTARTS   AGE  
pod/nginx-web-qzvw4   1/1     Running   0          2m2s  
pod/nginx-web-spw5t   1/1     Running   0          2m2s  
  
NAME                              DESIRED   CURRENT   READY   AGE  
replicationcontroller/nginx-web   2         2         2       2m2s  
  
NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE  
service/kubernetes               ClusterIP   10.96.0.1       <none>        443/TCP        3h37m  
service/nginx-service-nodeport   NodePort    10.96.165.114   <none>        80:30001/TCP   2m2s
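The NodePort Service should now answer on port 30001 of any node, for example on k8s-worker1 (this should return the nginx welcome page):

# curl http://192.168.10.15:30001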
