Pre-installation preparation
Servers
| Name | Address |
|---|---|
| Server kmaster1 | 192.168.20.7 |
| Server kmaster2 | 192.168.20.8 |
| Server kmaster3 | 192.168.20.9 |
| Server knode1 | 192.168.20.12 |
| Load balancer lb | 192.168.20.10 |
Disable the firewall
systemctl disable firewalld --now
Disable SELinux
getenforce
setenforce 0
sestatus
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Time synchronization
timedatectl set-timezone Asia/Shanghai
systemctl enable chronyd --now
Set hostnames
Run the matching command on the corresponding node:
hostnamectl set-hostname kmaster1
hostnamectl set-hostname kmaster2
hostnamectl set-hostname kmaster3
hostnamectl set-hostname knode1
Configure DNS resolution
Append the following entries to /etc/hosts on every node. lb.k8s.cn is the address nginx uses to load-balance kube-apiserver; a distribution sketch follows the entries.
192.168.20.7 kmaster1.cluster.local kmaster1
192.168.20.8 kmaster2.cluster.local kmaster2
192.168.20.9 kmaster3.cluster.local kmaster3
192.168.20.10 lb.k8s.cn
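To push these entries to every node in one shot, a loop like the following can help (a sketch, assuming root SSH access to all nodes):
for ip in 192.168.20.7 192.168.20.8 192.168.20.9 192.168.20.12; do
  ssh root@$ip 'cat >> /etc/hosts' <<'HOSTS'
192.168.20.7 kmaster1.cluster.local kmaster1
192.168.20.8 kmaster2.cluster.local kmaster2
192.168.20.9 kmaster3.cluster.local kmaster3
192.168.20.10 lb.k8s.cn
HOSTS
done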
Install the IPVS management tools and load the kernel modules
yum install -y ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack
Enable kernel IP forwarding and bridge filtering
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
Add the br_netfilter module to the boot-time module load list
cat > /etc/modules-load.d/containerd.conf <<EOF
br_netfilter
EOF
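The bridge sysctl values above only take effect once br_netfilter is loaded, so load it and apply the settings immediately rather than waiting for a reboot:
modprobe br_netfilter
sysctl --system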
Disable the swap partition
sed -i 's/^\(.*swap.*\)$/#\1/g' /etc/fstab
swapoff -a
Asymmetric encryption
Plaintext protocols such as HTTP are insecure, which is why HTTPS was introduced, and HTTPS is the classic application of certificates. When configuring HTTPS for nginx you typically buy a certificate, usually delivered as cert.pem and cert-key.pem, where cert.pem is the certificate and cert-key.pem is the private key. The CA that issues certificates holds the CA private key, while browsers ship with the CA's public key. Data encrypted with a public key can only be decrypted with the matching private key, and vice versa.
How certificates secure communication
When a browser visits www.aaa.com, it first validates the site's certificate: using the pre-installed CA public key, it verifies the CA's signature on the certificate. If the signature checks out, the certificate is trustworthy, and the browser then inspects the information it carries; if the hosts list includes www.aaa.com and the certificate has not expired, the browser treats the site as safe and proceeds with the connection.
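To see this information on a real certificate, openssl can print the subject, validity window, and hosts (SAN) list (assuming a certificate file named cert.pem):
openssl x509 -in cert.pem -noout -subject -issuer -dates
openssl x509 -in cert.pem -noout -ext subjectAltName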
Install the CFSSL tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
cfssl version
Create the root certificate directory
Run on kmaster1.
mkdir -p /etc/kubernetes/pki
cd /etc/kubernetes/pki
Create the root CA
This produces ca-key.pem (the private key) and ca.pem (the certificate). Note that a certificate contains the public key, along with other information such as the hosts list and validity period.
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "Kubernetes",
      "OU": "CA"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
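To sanity-check the generated root CA, print its fields (cfssl certinfo and openssl show the same data):
cfssl certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -subject -dates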
Install the etcd cluster
About etcd
etcd is a high-performance, distributed key-value store. It uses the Raft consensus algorithm to provide strong consistency.
Generate the etcd certificates
Generate the certificates on kmaster1, then distribute them to the other etcd nodes; the certificate directory is /etc/kubernetes/pki (a copy sketch follows the signing command below).
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.20.7",
    "192.168.20.8",
    "192.168.20.9"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "Kubernetes",
      "OU": "Etcd"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=etcd \
etcd-csr.json | cfssljson -bare etcd-server
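A sketch of the distribution step mentioned above, assuming root SSH access from kmaster1:
for ip in 192.168.20.8 192.168.20.9; do
  ssh root@$ip mkdir -p /etc/kubernetes/pki
  scp ca.pem etcd-server.pem etcd-server-key.pem root@$ip:/etc/kubernetes/pki/
done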
Create the etcd data directory
mkdir /var/lib/etcd
Download and install etcd
wget https://mirrors.huaweicloud.com/etcd/v3.6.0/etcd-v3.6.0-linux-amd64.tar.gz
tar xzvf etcd-v3.6.0-linux-amd64.tar.gz
mv etcd-v3.6.0-linux-amd64/etcd* /usr/local/bin/
Configuration file
Create this on every etcd node, adjusting the per-node values noted in the comments.
mkdir -p /etc/etcd
vim /etc/etcd/etcd.conf.yml
name: 'etcd1'  # replace with this node's name, e.g. etcd1, etcd2
data-dir: /var/lib/etcd
initial-advertise-peer-urls: 'https://192.168.20.7:2380'  # this node's IP
advertise-client-urls: 'https://192.168.20.7:2379'  # this node's IP
listen-peer-urls: 'https://0.0.0.0:2380'
listen-client-urls: 'https://0.0.0.0:2379'
initial-cluster: 'etcd1=https://192.168.20.7:2380,etcd2=https://192.168.20.8:2380,etcd3=https://192.168.20.9:2380'  # all nodes
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster-1'
peer-transport-security:
  cert-file: /etc/kubernetes/pki/etcd-server.pem
  key-file: /etc/kubernetes/pki/etcd-server-key.pem
  trusted-ca-file: /etc/kubernetes/pki/ca.pem
  client-cert-auth: true
client-transport-security:
  cert-file: /etc/kubernetes/pki/etcd-server.pem
  key-file: /etc/kubernetes/pki/etcd-server-key.pem
  trusted-ca-file: /etc/kubernetes/pki/ca.pem
  client-cert-auth: true
Manage with systemd
cat > /etc/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Service
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.conf.yml
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd --now
systemctl status etcd
Check the etcd status
ETCDCTL_API=3 etcdctl \
--cacert=/etc/kubernetes/pki/ca.pem \
--cert=/etc/kubernetes/pki/etcd-server.pem \
--key=/etc/kubernetes/pki/etcd-server-key.pem \
--endpoints=https://192.168.20.7:2379 \
member list -w table
ETCDCTL_API=3 etcdctl \
--cacert=/etc/kubernetes/pki/ca.pem \
--cert=/etc/kubernetes/pki/etcd-server.pem \
--key=/etc/kubernetes/pki/etcd-server-key.pem \
--endpoints=https://192.168.20.7:2379 \
endpoint status -w table
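endpoint health checks every member in one command and is a quick way to confirm the whole cluster is serving:
ETCDCTL_API=3 etcdctl \
--cacert=/etc/kubernetes/pki/ca.pem \
--cert=/etc/kubernetes/pki/etcd-server.pem \
--key=/etc/kubernetes/pki/etcd-server-key.pem \
--endpoints=https://192.168.20.7:2379 \
endpoint health --cluster -w table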
Deploy kube-apiserver
Create ca-config.json
Note this overwrites the ca-config.json written earlier, replacing the etcd profile with a kubernetes profile (the etcd certificates are already issued, so nothing is lost).
cd /etc/kubernetes/pki
cat > ca-config.json <<'EOF'
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
Create the certificate request apiserver-csr.json
cat > apiserver-csr.json <<'EOF'
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.20.7",
    "192.168.20.8",
    "192.168.20.9",
    "192.168.20.10",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local",
    "lb.k8s.cn"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Sign the apiserver certificate
This produces apiserver.pem (the certificate) and apiserver-key.pem (the private key).
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
apiserver-csr.json | cfssljson -bare apiserver
Generate the front-proxy-client certificate
front-proxy-client is a client-authentication certificate, used when the apiserver itself acts as a client calling backend services. Because the apiserver is a core component, the Kubernetes team keeps core functionality focused and extensible rather than letting it bloat, so common but non-core capabilities such as monitoring, logging, and networking register themselves under the /apis/ route. The classic example is metrics-server, an extension the apiserver calls into; that call requires certificate authentication, which is what this certificate provides.
cat > front-proxy-client-csr.json <<'EOF'
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
front-proxy-client-csr.json | cfssljson -bare front-proxy-client
Generate the Service Account key pair
What the service account key pair is for: the smallest unit Kubernetes manages is the pod, and pods prove their identity with service account tokens. The apiserver holds the public key sa.pub to verify those tokens, while the private key sa.key is used to sign them (the apiserver signs projected tokens; controller-manager uses it for legacy token generation).
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
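A quick way to confirm the two files really are a pair is to compare the key moduli, which must be identical:
openssl rsa -in sa.key -noout -modulus | md5sum
openssl rsa -pubin -in sa.pub -noout -modulus | md5sum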
Distribute the certificates to all master nodes
ca-key.pem is included because kube-controller-manager on every master needs it for certificate signing.
for ip in 192.168.20.7 192.168.20.8 192.168.20.9; do
  scp ca.pem ca-key.pem apiserver*.pem front-proxy-client*.pem sa.* root@$ip:/etc/kubernetes/pki
done
Download the Kubernetes components
To download a specific version, change v1.32.0 to the version you want.
wget https://dl.k8s.io/v1.32.0/kubernetes-server-linux-amd64.tar.gz
Extract and install the Kubernetes components
Here we place kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl together under /usr/local/bin.
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
Create the configuration directories
/etc/kubernetes/manifests holds static pods managed by the kubelet. Static pods are managed only by the kubelet, not by kube-apiserver, although the apiserver can still see them (as mirror pods).
mkdir -p /etc/kubernetes/{pki,manifests}
mkdir -p /var/lib/kubernetes
Create the kube-apiserver systemd service
--advertise-address=192.168.20.7 is this machine's IP address; change it on each master. The unit writes audit logs under /var/log/kubernetes, so create that directory first.
mkdir -p /var/log/kubernetes
cat > /etc/systemd/system/kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--advertise-address=192.168.20.7 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--etcd-servers=https://192.168.20.7:2379,https://192.168.20.8:2379,https://192.168.20.9:2379 \
--etcd-cafile=/etc/kubernetes/pki/ca.pem \
--etcd-certfile=/etc/kubernetes/pki/etcd-server.pem \
--etcd-keyfile=/etc/kubernetes/pki/etcd-server-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--authorization-mode=Node,RBAC \
--enable-admission-plugins=NodeRestriction,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--requestheader-client-ca-file=/etc/kubernetes/pki/ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--enable-bootstrap-token-auth=true \
--runtime-config=rbac.authorization.k8s.io/v1 \
--audit-log-path=/var/log/kubernetes/audit.log \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--v=2
Restart=always
RestartSec=5
LimitNOFILE=65536
User=root
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver --now
systemctl status kube-apiserver
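/healthz and /version are readable without authentication by default (via the system:public-info-viewer binding), so a plain curl should answer ok once the apiserver is up:
curl -k https://127.0.0.1:6443/healthz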
Deploy nginx for load balancing
nginx configuration
The nginx installation itself is omitted here, since this article focuses on deploying k8s from binaries. This configuration makes nginx a layer-4 (TCP) load balancer in front of the apiservers, which requires nginx to be built with the stream module.
user snt;
worker_processes auto;
error_log /snt/soft/nginx/nginx-install/logs/error.log;
pid /snt/soft/nginx/nginx-install/run/nginx.pid;
events {
    worker_connections 1024;
}
stream {
    upstream klb {
        server 192.168.20.7:6443 max_fails=3 fail_timeout=30s;
        server 192.168.20.8:6443 max_fails=3 fail_timeout=30s;
        server 192.168.20.9:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 6443;
        proxy_pass klb;
    }
}
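Once nginx is running, the same health endpoint should answer through the load balancer; the apiserver certificate's hosts list includes lb.k8s.cn, so the CA can verify it:
curl --cacert /etc/kubernetes/pki/ca.pem https://lb.k8s.cn:6443/healthz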
Deploy kube-controller-manager
Create controller-manager-csr.json and generate the certificate
All certificates are generated on kmaster1.
cd /etc/kubernetes/pki
cat > controller-manager-csr.json <<'EOF'
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
controller-manager-csr.json | cfssljson -bare controller-manager
Create scheduler-csr.json and generate the certificate
cat > scheduler-csr.json <<'EOF'
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare scheduler
Distribute the certificates to all master nodes
for ip in 192.168.20.7 192.168.20.8 192.168.20.9; do
  scp controller-manager*.pem scheduler*.pem root@$ip:/etc/kubernetes/pki/
done
Manage controller-manager with systemd
Note: the TTLAfterFinished feature is GA since v1.23 and its feature gate has since been removed; passing a removed gate on v1.32 would prevent the component from starting, so it is not set here.
cat > /etc/systemd/system/kube-controller-manager.service <<'EOF'
[Unit]
Description=Kubernetes Controller Manager
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--master=https://lb.k8s.cn:6443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--requestheader-client-ca-file=/etc/kubernetes/pki/ca.pem \
--bind-address=127.0.0.1 \
--controllers=*,bootstrapsigner,tokencleaner \
--v=2
Restart=always
RestartSec=5
User=root
[Install]
WantedBy=multi-user.target
EOF
Create the kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://lb.k8s.cn:6443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Start the service
Copy controller-manager.kubeconfig from kmaster1 to /etc/kubernetes/ on the other masters before starting the service there.
systemctl daemon-reload
systemctl enable kube-controller-manager --now
systemctl status kube-controller-manager
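kube-controller-manager serves /healthz on its secure port 10257, and that path is on its default authorization allow-list, so no client certificate is needed:
curl -k https://127.0.0.1:10257/healthz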
Deploy kube-scheduler
Manage kube-scheduler with systemd
cat > /etc/systemd/system/kube-scheduler.service <<'EOF'
[Unit]
Description=Kubernetes Scheduler
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--master=https://lb.k8s.cn:6443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
--leader-elect=true \
--bind-address=127.0.0.1 \
--v=2
Restart=always
RestartSec=5
User=root
[Install]
WantedBy=multi-user.target
EOF
Create the kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://lb.k8s.cn:6443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Start the service
As with controller-manager, copy scheduler.kubeconfig to the other masters first.
systemctl daemon-reload
systemctl enable kube-scheduler --now
systemctl status kube-scheduler
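The same check works for kube-scheduler, whose secure port is 10259:
curl -k https://127.0.0.1:10259/healthz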
Deploy kubectl
Generate the admin user certificate
cd /etc/kubernetes/pki
cat > admin-csr.json <<'EOF'
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
Distribute the admin certificate to each master node
You can also do this with any file upload/download tool.
scp admin*.pem root@<master-ip>:/etc/kubernetes/pki/
创建admin kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://lb.k8s.cn:6443 \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-context admin@kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config use-context admin@kubernetes \
--kubeconfig=/etc/kubernetes/admin.conf
# Grant admin the highest privileges. (Strictly speaking this is redundant: the cert's O=system:masters group is already bound to cluster-admin by default.)
kubectl create clusterrolebinding admin-cluster-binding \
--clusterrole=cluster-admin \
--user=admin
Set kubectl's default configuration
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
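With the kubeconfig in place, kubectl should now reach the cluster through the load balancer:
kubectl cluster-info
kubectl get ns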
Register machines as nodes
kubectl get nodes
# No resources found
# Master nodes do not register themselves as nodes by default.
# Deploy containerd.
# Configure the yum repos for your own distro; mine is AlmaLinux 9.3.
cd /etc/yum.repos.d
sudo tee /etc/yum.repos.d/almalinux-vault.repo <<'EOF'
[baseos]
name=AlmaLinux 9.3 - BaseOS - Vault
baseurl=https://repo.almalinux.org/vault/9.3/BaseOS/$basearch/os/
enabled=1
gpgcheck=1
gpgkey=https://repo.almalinux.org/vault/9.3/RPM-GPG-KEY-AlmaLinux-9
[appstream]
name=AlmaLinux 9.3 - AppStream - Vault
baseurl=https://repo.almalinux.org/vault/9.3/AppStream/$basearch/os/
enabled=1
gpgcheck=1
gpgkey=https://repo.almalinux.org/vault/9.3/RPM-GPG-KEY-AlmaLinux-9
[extras]
name=AlmaLinux 9.3 - Extras - Vault
baseurl=https://repo.almalinux.org/vault/9.3/extras/$basearch/os/
enabled=1
gpgcheck=1
gpgkey=https://repo.almalinux.org/vault/9.3/RPM-GPG-KEY-AlmaLinux-9
EOF
dnf clean all
dnf makecache
dnf update -y
sudo tee /etc/yum.repos.d/docker-ce.repo <<'EOF'
[docker-ce-stable]
name=Docker CE Stable - Alibaba Cloud
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/9/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
dnf install -y containerd.io
systemctl enable containerd --now
systemctl status containerd
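Note that the containerd.io package ships a config.toml with the CRI plugin disabled, and the kubelet configured below expects the systemd cgroup driver; regenerating the default config and flipping SystemdCgroup avoids both problems (a sketch, adjust to your environment):
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd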
# Deploy kubelet.
cd
cp kubernetes/server/bin/kubelet /usr/local/bin
# Generate a bootstrap token.
cd
export TOKEN_ID=$(openssl rand -hex 3)
export TOKEN_SECRET=$(openssl rand -hex 8)
export BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
echo ${BOOTSTRAP_TOKEN} > token.txt
# Create the Bootstrap Token Secret so a new node's kubelet can authenticate with the token on first join
# and complete TLS bootstrapping (automatically requesting a certificate and registering as a Node).
cat > bootstrap-token-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "${TOKEN_ID}"
  token-secret: "${TOKEN_SECRET}"
  auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  expiration: $(date -d '+24 hours' -u +%Y-%m-%dT%H:%M:%SZ)
  description: "Bootstrap token for new nodes"
EOF
kubectl apply -f bootstrap-token-secret.yaml
kubectl get secret -n kube-system | grep bootstrap-token
# Auto-approve the initial bootstrap CSRs.
cat > auto-approve-node-client-csr.yaml <<'EOF'
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-node-client-csr
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f auto-approve-node-client-csr.yaml
# Auto-approve subsequent certificate-renewal CSRs.
cat > auto-approve-selfnode-client-csr.yaml <<'EOF'
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-selfnode-client-csr
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f auto-approve-selfnode-client-csr.yaml
# Create the bootstrap kubeconfig.
# Use the token from the token.txt generated earlier.
export BOOTSTRAP_TOKEN="bad6fd.aefc8929b8e16f4c"
# The apiserver load-balancer address.
export APISERVER="https://lb.k8s.cn:6443"
# Path to ca.pem.
export CA_CERT="/etc/kubernetes/pki/ca.pem"
kubectl config set-cluster kubernetes \
--certificate-authority=${CA_CERT} \
--embed-certs=true \
--server=${APISERVER} \
--kubeconfig=/etc/kubernetes/bootstrap-kubeconfig
kubectl config set-credentials system:bootstrap:$(echo ${BOOTSTRAP_TOKEN} | cut -d. -f1) \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/etc/kubernetes/bootstrap-kubeconfig
kubectl config set-context system:bootstrap:$(echo ${BOOTSTRAP_TOKEN} | cut -d. -f1)@kubernetes \
--cluster=kubernetes \
--user=system:bootstrap:$(echo ${BOOTSTRAP_TOKEN} | cut -d. -f1) \
--kubeconfig=/etc/kubernetes/bootstrap-kubeconfig
kubectl config use-context system:bootstrap:$(echo ${BOOTSTRAP_TOKEN} | cut -d. -f1)@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubeconfig
# Check that /etc/kubernetes/bootstrap-kubeconfig was generated.
ls -l /etc/kubernetes/bootstrap-kubeconfig
# Once bootstrap-kubeconfig exists, adding another node only requires copying it to /etc/kubernetes
# and skipping the bootstrap-kubeconfig generation steps above.
# kubelet configuration file.
cat > /etc/kubernetes/kubelet.yml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
cgroupDriver: systemd
clusterDNS: ["10.96.0.10"]
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
EOF
# Manage kubelet with systemd.
cat > /etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--hostname-override=$(hostname -s) \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubeconfig \
--cert-dir=/etc/kubernetes/pki \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet.yml \
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now kubelet
kubectl get node
# NotReady here is expected: no network plugin has been deployed yet.
# NAME       STATUS     ROLES    AGE     VERSION
# kmaster1   NotReady   <none>   2m50s   v1.32.0
Deploy kube-proxy
# On the worker node.
cd
cp kubernetes/server/bin/kube-proxy /usr/local/bin/
# Create the certificate.
# Run on kmaster1, since it already has ca.pem and the signing configuration.
cd /etc/kubernetes/pki
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
# After the certificates are created, copy them to the worker and other master nodes under /etc/kubernetes/pki.
# Create the kubeconfig file; the target directory must exist first.
mkdir -p /var/lib/kube-proxy
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://lb.k8s.cn:6443 \
--kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
kubectl config set-credentials system:kube-proxy \
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
kubectl config set-context kube-proxy \
--cluster=kubernetes \
--user=system:kube-proxy \
--kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
kubectl config use-context kube-proxy --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
# Create the kube-proxy configuration file /etc/kubernetes/kube-proxy-config.yaml.
cat > /etc/kubernetes/kube-proxy-config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig.conf"
mode: "ipvs"
EOF
# Manage with systemd.
cat > /etc/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy --now
sudo systemctl status kube-proxy
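Since kube-proxy runs in ipvs mode, the virtual servers it programs can be inspected directly; once Services exist, ipvsadm should list them:
ipvsadm -Ln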
Deploy the flannel network
wget https://gitee.com/longjiangkai/k8s/raw/master/kube-flannel.yml
kubectl apply -f kube-flannel.yml
# Check the status; the pods may restart a few times while the network comes up.
kubectl get pod -n kube-flannel
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-497cq 0/1 Completed 5 24m
kube-flannel-ds-6qxg8 0/1 Completed 6 24m
kube-flannel-ds-7sk5h 0/1 Completed 5 24m
kube-flannel-ds-g9jjk 0/1 Completed 5 23m
Deploy CoreDNS
# Download the official YAML (images already swapped to mirrors reachable from China).
wget https://gitee.com/longjiangkai/k8s/raw/master/coredns.yml
kubectl apply -f coredns.yml
#查看状态
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66d689954-kwl25 1/1 Running 12 (3m8s ago) 67m
coredns-66d689954-lrtbk 1/1 Running 10 (10m ago) 67m
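A quick end-to-end DNS check is to resolve the kubernetes Service from a throwaway pod (assumes the busybox image is pullable from your registry or a mirror):
kubectl run dnstest --image=busybox:1.36 --rm -it --restart=Never -- nslookup kubernetes.default.svc.cluster.local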
Summary
Deploying k8s from binaries is admittedly complex, but it gives you a much deeper understanding of what each component does and how they cooperate. Large projects increasingly adopt distributed microservices, and microservices inevitably need orchestration, so Kubernetes is well worth learning thoroughly.