Deploying a K8S Cluster with kubeadm
I. Cluster Architecture
| Hostname | IP Address | Installed Components |
|---|---|---|
| master (2C/4G; more than 2 CPU cores required) | 192.168.237.70 | docker, kubeadm, kubelet, kubectl, flannel |
| node01 (2C/2G) | 192.168.237.80 | docker, kubeadm, kubelet, kubectl, flannel |
| node02 (2C/2G) | 192.168.237.90 | docker, kubeadm, kubelet, kubectl, flannel |
| Harbor node (hub.kgc.com) | 192.168.237.66 | docker, docker-compose, harbor-offline-v1.2.2 |
The overall deployment steps:
1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes Master
3. Deploy a container network plugin
4. Deploy the Kubernetes Nodes and join them to the cluster
5. Deploy the Dashboard web UI for a visual view of Kubernetes resources
6. Deploy a Harbor private registry to store image resources
1. Prepare the environment
# On all nodes: disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a    # swap must be disabled
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanently disable swap; in sed, & stands for the previous match
# Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
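To confirm the modules actually loaded, a quick check (lsmod is standard; the ip_vs and nf_conntrack module names are the usual ones on these kernels):
lsmod | grep -E "ip_vs|nf_conntrack"    # should list ip_vs, ip_vs_rr, ip_vs_wrr, etc.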
# Set the hostname (run the matching command on each node)
hostnamectl set-hostname master
hostnamectl set-hostname node01
hostnamectl set-hostname node02
# On all nodes, add the hostname mappings
cat >> /etc/hosts<<-EOF
192.168.237.70 master
192.168.237.80 node01
192.168.237.90 node02
EOF
# Tune kernel parameters
cat > /etc/sysctl.d/kubernetes.conf <<-EOF
# Enable bridge mode so bridge traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
# Disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF
# Apply the parameters
/usr/sbin/sysctl --system
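To verify the settings took effect, a minimal check (sysctl accepts multiple variable names):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward    # each should print ... = 1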
2. Install Docker on all nodes
rm -f /var/run/yum.pid    # clear a stale yum lock, if any
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
mkdir /etc/docker
cat > /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
# Use the systemd-managed cgroup for resource control; compared with cgroupfs, systemd's CPU and memory limits are simpler, more mature, and more stable.
# Logs are stored in json-file format, capped at 100M, under /var/log/containers, which makes collection by ELK and other log systems easier.
systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service
docker info | grep "Cgroup Driver"
#Cgroup Driver: systemd
# Alternatively, wrap the Docker install in a shell script and copy it to the other hosts to run; a sketch follows
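A minimal sketch of such a script (install-docker.sh is a hypothetical name; it simply repeats the repo and daemon.json steps used above):
#!/bin/bash
# install-docker.sh -- install and configure docker-ce on one host
set -e
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
systemctl daemon-reload
systemctl enable --now docker
Distribute and run it, for example: scp install-docker.sh root@node01:/opt/ && ssh root@node01 bash /opt/install-docker.sh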
3. Install kubeadm, kubelet, and kubectl on all nodes
# Define the Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.20.11 kubeadm-1.20.11 kubectl-1.20.11
# Enable kubelet at boot
systemctl enable kubelet.service
# After a kubeadm install, the K8S components all run as Pods, i.e. as containers underneath, so kubelet must be set to start at boot
4. Deploy the K8S cluster
Run on all nodes
# List the images required for initialization
kubeadm config images list
mkdir /opt/k8s
cd /opt/k8s
# On the master node, upload the v1.20.11.zip archive to the /opt directory
unzip v1.20.11.zip -d /opt/k8s
# Copy the images and scripts to the node nodes, and load the image files there
scp -r /opt/k8s root@node01:/opt
scp -r /opt/k8s root@node02:/opt
cd /opt/k8s/v1.20.11
# Loop over the tarballs and load each image
for i in $(ls *.tar); do docker load -i $i; done
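A quick check that the images loaded (the exact names depend on what the v1.20.11 archive ships; the grep pattern below is an assumption):
docker images | grep -E "kube-|etcd|coredns|pause"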
On the master node
# Initialize kubeadm
# Method 1:
kubeadm config print init-defaults > /opt/kubeadm-config.yaml
cd /opt/
cat > kubeadm-config.yaml<<-EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.237.70    # the master node's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.11    # the kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16    # the pod subnet; 10.244.0.0/16 matches flannel's default and must agree with your CNI subnet
  serviceSubnet: 10.96.0.0/12    # the service subnet
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs    # change the default service scheduling mode to ipvs
EOF
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
# The --experimental-upload-certs flag lets later node joins receive certificates automatically; from K8S v1.16 it was renamed --upload-certs
# tee kubeadm-init.log captures the output as a log
# Review the kubeadm-init log
less kubeadm-init.log
# Kubernetes configuration directory
ls /etc/kubernetes/
# Directory holding the CA and other certificates and keys
ls /etc/kubernetes/pki
Method 2:
# Instead of a config file, pass the options as flags
kubeadm init \
--apiserver-advertise-address=192.168.237.70 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.20.11 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--token-ttl=0
--------------------------------------------------------------------------------------------
A cluster is initialized with kubeadm init, either with explicit flags or with a config file.
Common options:
--apiserver-advertise-address: the IP address the apiserver advertises to the other components; normally the Master node's cluster-internal IP. 0.0.0.0 means every available address on the node
--apiserver-bind-port: the apiserver listen port, 6443 by default
--cert-dir: directory for the SSL certificates, /etc/kubernetes/pki by default
--control-plane-endpoint: shared endpoint of the control plane, a load-balancer IP or DNS name; needed for a highly available cluster
--image-repository: the registry to pull images from, k8s.gcr.io by default
--kubernetes-version: the kubernetes version to deploy
--pod-network-cidr: the pod subnet; must match the network plugin's setting. Flannel defaults to 10.244.0.0/16, Calico to 192.168.0.0/16
--service-cidr: the service subnet
--service-dns-domain: the suffix of service FQDNs, cluster.local by default
--token-ttl: the token is valid for 24 hours by default; pass --token-ttl=0 for a token that never expires
---------------------------------------------------------------------------------------------
With Method 2 you must then edit the kube-proxy configmap to enable ipvs
# The configuration files live here
cd /etc/kubernetes/
# The certificate files live here
cd pki/
# Set the traffic scheduling mode to ipvs
kubectl edit configmap kube-proxy -n kube-system
# In the editor, set mode to ipvs
mode: "ipvs"
Expected output:
......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.80.10:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
--discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2
# Configure kubectl
# kubectl must be authenticated and authorized by the API server before it can perform management operations. A kubeadm-deployed cluster generates an admin-privileged kubeconfig, /etc/kubernetes/admin.conf, which kubectl loads from the default path "$HOME/.kube/config".
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# On each node, run the join command printed by the init above; the token and hash differ per cluster, so use your own
kubeadm join 192.168.237.70:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:ff32a4aab46e740d85675616be81204f41907f728ed4ce9ce7893fb241ad1ec6
kubectl get node
# The nodes show NotReady until a network plugin is deployed
kubectl get cs
# The health check reports components as unhealthy at this point
kubectl get pods -A
# The system pods were installed automatically
Restore master node health
kubectl get cs
# If kubectl get cs reports the cluster unhealthy, change the following two files
cd /etc/kubernetes/manifests/
# Make the changes below
sed -i "s/--bind-address=127.0.0.1/--bind-address=192.168.237.70/g" kube-scheduler.yaml
sed -i "s/--bind-address=127.0.0.1/--bind-address=192.168.237.70/g" kube-controller-manager.yaml
# Change 127.0.0.1 to the IP of the k8s control node (master)
sed -i "s/host: 127.0.0.1/host: 192.168.237.70/g" kube-scheduler.yaml
sed -i "s/host: 127.0.0.1/host: 192.168.237.70/g" kube-controller-manager.yaml
# This fixes the two probes under the httpGet: field (two places per file)
sed -i "s/- --port=0/#- --port=0/g" kube-scheduler.yaml
sed -i "s/- --port=0/#- --port=0/g" kube-controller-manager.yaml
# This comments out the --port=0 line
systemctl restart kubelet
kubectl get cs
# Check whether health has been restored
Deploy the flannel network plugin on all nodes
# Run on all nodes; the same steps on all 3 machines
# Deploy the flannel network plugin
Method 1:
# Upload the flannel image flannel.tar to /opt on all nodes, and the kube-flannel.yml file to the master node
cd /opt
Copy the flannel.tar archive in, then distribute it:
scp flannel.tar node01:/opt
scp flannel.tar node02:/opt
# Make sure it is the 1.14 version
docker load -i flannel.tar
# Then apply the manifest
Upload kube-flannel.yml to /opt
# Create the flannel resources on the master node
kubectl apply -f kube-flannel.yml
Method 2:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
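A minimal way to watch the rollout from the master (the app=flannel label matches the upstream manifest; if your copy labels the pods differently, adjust it):
kubectl get pods -n kube-system -l app=flannel -o wide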
Verify the deployment
# Check node status on the master node (allow a few minutes)
[root@master opt]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master   Ready    master   44m   v1.20.11
node1    Ready    <none>   17m   v1.20.11
node2    Ready    <none>   15m   v1.20.11
[root@master opt]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-cqm68 1/1 Running 0 5m13s
coredns-5c98db65d4-lm9z5 1/1 Running 0 5m13s
etcd-master 1/1 Running 0 4m28s
kube-apiserver-master 1/1 Running 0 4m13s
kube-controller-manager-master 1/1 Running 0 4m6s
kube-flannel-ds-amd64-7vhjw 1/1 Running 0 23s
kube-flannel-ds-amd64-nhpr4 1/1 Running 0 23s
kube-flannel-ds-amd64-tjnrn 1/1 Running 0 23s
kube-proxy-5pcdb 1/1 Running 0 2m28s
kube-proxy-cxlt2 1/1 Running 0 2m34s
kube-proxy-hbfxc 1/1 Running 0 5m13s
kube-scheduler-master 1/1 Running 0 4m21s
Test
Run on the master node
# Test pod creation
kubectl create deployment myapp-ky18 --image=soscscs/myapp:v1
# Query the created pod
kubectl get all
# Expose a port so the service can be reached from outside the cluster
kubectl expose deployment myapp-ky18 --port=80 --type=NodePort
# Check which NodePort was assigned
kubectl get svc
# Test access; the NodePort differs per cluster, see the port exposed above
curl http://192.168.237.70:31414
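To avoid reading the port off the table by eye, a small helper (jsonpath is standard kubectl; the service name comes from the expose command above):
NODE_PORT=$(kubectl get svc myapp-ky18 -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.237.70:${NODE_PORT}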
# Scale out to 3 replicas; if one pod is not enough, you can scale up
kubectl scale deployment myapp-ky18 --replicas=3
# Check the scaled-out pods
kubectl get pods -o wide
# Scale back down to one to test
kubectl scale deployment myapp-ky18 --replicas=1
# Check the pods after scaling down
kubectl get pods -o wide
# Pods have a lifecycle; you can create and destroy them at will
II. Deploy the Dashboard
1. Install the dashboard
Run on all nodes
# Install the dashboard on all nodes
Method 1:
# Upload the dashboard image dashboard.tar to /opt on all nodes, and the recommended.yaml file to the master node
cd /opt
Upload dashboard.tar
Upload metrics-scraper.tar
Upload recommended.yaml
docker load -i dashboard.tar
docker load -i metrics-scraper.tar
kubectl apply -f recommended.yaml
Method 2:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# Check
kubectl get pods -A
===================
kubernetes-dashboard dashboard-metrics-scraper-7b59f7d4df-2b9dm 1/1 Running 0 52s
kubernetes-dashboard kubernetes-dashboard-74d688b6bc-f56xs 1/1 Running 0 52s
====================
# Check the running state of all containers
kubectl get pods,svc -n kube-system -o wide
[root@master opt]# kubectl get pods,svc -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/coredns-5c98db65d4-cqm68 1/1 Running 0 25m 10.244.2.3 node2 <none> <none>
pod/coredns-5c98db65d4-lm9z5 1/1 Running 0 25m 10.244.2.2 node2 <none> <none>
pod/etcd-master 1/1 Running 0 24m 192.168.237.70 master <none> <none>
pod/kube-apiserver-master 1/1 Running 0 24m 192.168.237.70 master <none> <none>
pod/kube-controller-manager-master 1/1 Running 0 24m 192.168.237.70 master <none> <none>
pod/kube-flannel-ds-amd64-7vhjw 1/1 Running 0 20m 192.168.237.70 master <none> <none>
pod/kube-flannel-ds-amd64-nhpr4 1/1 Running 0 20m 192.168.237.90 node2 <none> <none>
pod/kube-flannel-ds-amd64-tjnrn 1/1 Running 0 20m 192.168.237.80 node1 <none> <none>
pod/kube-proxy-5pcdb 1/1 Running 0 22m 192.168.237.90 node2 <none> <none>
pod/kube-proxy-cxlt2 1/1 Running 0 22m 192.168.237.80 node1 <none> <none>
pod/kube-proxy-hbfxc 1/1 Running 0 25m 192.168.237.70 master <none> <none>
pod/kube-scheduler-master 1/1 Running 0 24m 192.168.237.70 master <none> <none>
pod/kubernetes-dashboard-859b87d4f7-n6d4w 1/1 Running 0 2m5s 10.244.2.5 node2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 25m k8s-app=kube-dns
service/kubernetes-dashboard NodePort 10.98.0.246 <none> 443:30001/TCP 2m5s k8s-app=kubernetes-dashboard
[root@master opt]#
2. Access with Firefox or the 360 browser
https://node01:30001/
https://192.168.237.70:30001/
# Create a service account and bind it to the default cluster-admin cluster role
kubectl create sa dashboard-admin -n kube-system
# Grant the account permissions
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Find the account's secret
kubectl get secret -n kube-system | grep dashboard
# Retrieve the token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
[root@master opt]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-sh894
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 9432d230-d619-4c20-ad2e-ce976184a902
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc2g4OTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTQzMmQyMzAtZDYxOS00YzIwLWFkMmUtY2U5NzYxODRhOTAyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.SPnRfLYyRDyLKni5TJuiwLNds-kA30rr40eeW09oPRAUOKCek1ot0UHL-vc-f4eyVozw7YIuivJinngIMQ9WvuDVcKbfT2Qoh87Tjff01XN0nRlWlwj5ICqpQPHj6141rZgkWBNlmxYbpnu0TtKB71zNSwOJiCfOqSIr8dqBQFvjVTLmfHC_ghowyiyTN10FXFlaf_I-q9P9zyKwM47NCklURnI6wnDzCa77lURRVGJCT27ScBo3KfKt9yGuRs4FjgQbJyVtRU-CITdJrbTTXYB1lca-8PX5B_4SrjbPMJfMUdn9QCNvGgXOJPOQuVV1u8wmbw5qU8izc3b5LjgeIw
# Copy the token and log in to the site with it directly
III. Install the Harbor private registry
# Set the hostname
hostnamectl set-hostname hub.kgc.com
# Add the hostname mapping on all nodes
echo '192.168.237.66 hub.kgc.com' >> /etc/hosts
# Install docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
mkdir /etc/docker
cat > /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.kgc.com"]
}
EOF
systemctl start docker
systemctl enable docker
# On all node nodes, add the private registry to the docker configuration
cat > /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.kgc.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
Install Harbor
# Upload harbor-offline-installer-v1.2.2.tgz and the docker-compose binary to the /opt directory
cd /opt
# Install docker-compose
curl -L https://get.daocloud.io/docker/compose/releases/download/1.29.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
# Make it executable
chmod +x /usr/local/bin/docker-compose
# Check the version
docker-compose --version
tar zxvf harbor-offline-installer-v1.2.2.tgz
cd harbor/
sed -i "5c hostname = hub.kgc.com" harbor.cfg
sed -i "9c ui_url_protocol = https" harbor.cfg
sed -i "24c ssl_cert = /data/cert/server.crt" harbor.cfg
sed -i "25c ssl_cert_key = /data/cert/server.key" harbor.cfg
sed -i "59c harbor_admin_password = Harbor12345" harbor.cfg
# Generate certificates
mkdir -p /data/cert
cd /data/cert
# Generate the private key
openssl genrsa -des3 -out server.key 2048
Enter a passphrase twice: 123456
# Generate the certificate signing request
openssl req -new -key server.key -out server.csr
Enter the private key passphrase: 123456
Country Name: CN
State or Province Name: BJ
Locality Name: BJ
Organization Name: KGC
Organizational Unit Name: KGC
Common Name: hub.kgc.com
Email Address: admin@kgc.com
Press Enter for the remaining two prompts (challenge password and optional company name)
# Back up the private key
cp server.key server.key.org
# Strip the passphrase from the private key
openssl rsa -in server.key.org -out server.key
Enter the private key passphrase: 123456
# Sign the certificate (the key no longer has a passphrase, so no prompt appears)
openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt
chmod +x /data/cert/*
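A minimal check of the freshly signed certificate (standard openssl flags):
openssl x509 -in server.crt -noout -subject -dates    # should show CN=hub.kgc.com and a ~1000-day validity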
cd /opt/harbor/
./install.sh
Browse to: https://hub.kgc.com
Username: admin
Password: Harbor12345
# Log in to harbor on one of the node nodes
docker login -u admin -p Harbor12345 https://hub.kgc.com
# Tag the image
docker tag nginx:latest hub.kgc.com/library/nginx:v1
# Push the image
docker push hub.kgc.com/library/nginx:v1
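To confirm the push, a small check from the other node (assumes that node also resolves hub.kgc.com and lists it under insecure-registries):
docker pull hub.kgc.com/library/nginx:v1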
# On the master node, delete the test deployment created earlier
kubectl delete deployment myapp-ky18
# Create a deployment from the Harbor-hosted image (kubectl run dropped --replicas in 1.18, so use create deployment)
kubectl create deployment nginx-deployment --image=hub.kgc.com/library/nginx:v1 --port=80 --replicas=3
kubectl expose deployment nginx-deployment --port=30000 --target-port=80
kubectl get svc,pods
yum install ipvsadm -y
ipvsadm -Ln    # inspect the ipvs forwarding rules
curl 10.96.222.161:30000    # the ClusterIP from kubectl get svc above
kubectl edit svc nginx-deployment
25   type: NodePort    # change the service type to NodePort
kubectl get svc
# Browse to the NodePort shown by kubectl get svc:
192.168.237.70:31118
192.168.237.80:31118
192.168.237.90:31118
IV. Kernel parameter optimization
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0    # forbid swap; only allow it when the system is out of memory (OOM)
vm.overcommit_memory=1    # do not check whether physical memory is sufficient
vm.panic_on_oom=0    # handle OOM instead of panicking
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963    # maximum number of file handles
fs.nr_open=52706963    # supported only on kernel 4.4 and later
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
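Writing the file does not load it; as in the environment-preparation step, one final command applies the values (sysctl --system reads every /etc/sysctl.d/*.conf):
sysctl --system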