Deploying k8s and High Availability with kubeadm

1. Deploying k8s with kubeadm

Lab environment:

master: 192.168.223.100

node01: 192.168.223.101

node02: 192.168.223.102

Steps:

Environment preparation:

// On all nodes: stop the firewall, disable SELinux, and turn off swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a

sed -ri 's/.*swap.*/#&/' /etc/fstab

# Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
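
A note on the one-liner: the `grep -o "^[^.]*"` keeps everything before the first dot in each module filename, which turns a file such as ip_vs_rr.ko.xz into the bare module name that modprobe expects. A minimal illustration:

```shell
# "^[^.]*" matches the longest run of non-dot characters at the start of the
# line, i.e. the module name before the .ko/.xz extensions.
echo "ip_vs_rr.ko.xz" | grep -o "^[^.]*"    # prints: ip_vs_rr
```

After the loop has run, `lsmod | grep ip_vs` should list the loaded IPVS modules.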

// Set the hostnames (run the matching command on each node)
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02

// On all nodes, edit the hosts file
vim /etc/hosts

192.168.223.100 master01
192.168.223.101 node01
192.168.223.102 node02

// Tune kernel parameters
cat > /etc/sysctl.d/kubernetes.conf << EOF
#Enable bridge mode so bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
#Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

// Apply the parameters
sysctl --system

Deployment steps:

1. Install docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

2. Edit the docker configuration file

mkdir /etc/docker

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://r5uulkvq.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  }
}

Use the systemd-managed cgroup driver for resource control: compared with cgroupfs, systemd's handling of CPU, memory, and other limits is simpler and more mature and stable.
#Logs are stored in json-file format, capped at 500 MB per file with at most 3 files; container logs appear under /var/log/containers, which makes them easy for log systems such as ELK to collect and manage.
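
Before restarting docker it is worth checking daemon.json for syntax errors, since a malformed file prevents the daemon from starting. A quick check, assuming python3 is available (shown here against a temporary copy of the config; on the real host point it at /etc/docker/daemon.json instead):

```shell
# Write the config to a temp file and validate it; json.tool exits non-zero
# and prints the parse error if the JSON is malformed.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["https://r5uulkvq.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  }
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json: OK"
```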

3. Start docker

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 

4. Install kubeadm, kubelet, and kubectl on all nodes, and define the kubernetes repo

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15

5. Enable kubelet to start at boot

systemctl enable --now kubelet.service
#In a kubeadm-installed cluster the k8s components themselves run as Pods, i.e. as containers, so kubelet must be enabled at boot

6. On master01, generate the cluster init configuration file and edit it

mkdir /opt/k8s
cd /opt/k8s
kubeadm config print init-defaults > kubeadm-config.yaml

vim kubeadm-config.yaml

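
The screenshots showed the edits to kubeadm-config.yaml. As a sketch, the usual changes for this setup are the advertise address, image repository, version, pod subnet, and an appended KubeProxyConfiguration for IPVS; the exact values in the screenshots may differ:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.223.100        # master01's address
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.15
imageRepository: registry.aliyuncs.com/google_containers   # pull from Aliyun
networking:
  podSubnet: "10.244.0.0/16"               # flannel's default pod network
  serviceSubnet: 10.96.0.0/12
---
# Appended block: run kube-proxy in IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```
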
7. With the config file edited, pull the images

kubeadm config images pull --config kubeadm-config.yaml

8. Initialize the k8s cluster

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#--upload-certs uploads the certificates so they can be distributed automatically when nodes join later
#tee kubeadm-init.log saves the output as a log

9. Configure kubectl

kubectl must be authenticated and authorized by the API server before it can perform management operations. A kubeadm-deployed cluster generates an admin kubeconfig, /etc/kubernetes/admin.conf, which kubectl loads from the default path $HOME/.kube/config.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  
kubectl get nodes

10. Edit the controller-manager and scheduler configuration files

cd /etc/kubernetes/manifests

vim kube-controller-manager.yaml

vim kube-scheduler.yaml

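
The edit shown in the screenshots is usually the same in both manifests: comment out the --port=0 flag so kube-controller-manager and kube-scheduler expose their health-check ports again, otherwise `kubectl get cs` reports them unhealthy. A sketch of the relevant lines, assuming the default 1.20 manifest layout:

```yaml
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
#    - --port=0
```
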
11. Restart the service

systemctl restart kubelet.service

kubectl get cs

12. On node01 and node02, run the join command from the init output to add each node to the cluster

kubeadm join 192.168.223.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:43e99155a338946abc7c7450a8178956e9e1a471c7fe7a783b07d9f37dfcaf5d

On master01, check the nodes:
kubectl get nodes

13. Install the flannel network plugin: copy the flannel package onto the nodes and unzip it

unzip flannel-v0.21.5.zip 

14. Load the images on node01 and node02

docker load -i flannel.tar

docker load -i flannel-cni-plugin.tar

15. Back up the original CNI plugin files and install the new CNI plugins; the operation is the same on node01 and node02

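
The screenshots show the idea: keep the old CNI plugin directory as a backup and unpack the new plugins in its place. A parameterised sketch (the tarball name is hypothetical; use the one shipped in the flannel package):

```shell
# backup_and_replace_cni DIR TARBALL
# Renames DIR to DIR.bak as a backup, then unpacks TARBALL into a fresh DIR.
backup_and_replace_cni() {
    local dir="$1" tarball="$2"
    mv "$dir" "$dir.bak"            # keep the original plugins as a backup
    mkdir -p "$dir"
    tar zxf "$tarball" -C "$dir"
}
# On node01/node02 this would be run as, e.g.:
# backup_and_replace_cni /opt/cni/bin cni-plugins-linux-amd64-v1.2.0.tgz
```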
16. Load the images on master01 as well

docker load -i flannel.tar

docker load -i flannel-cni-plugin.tar

17. Apply the yaml file

kubectl apply -f kube-flannel.yml

kubectl get pods -A

18. Verify

kubectl get nodes

2. High availability for a kubeadm k8s cluster

2.1 Lab environment:

master01: 192.168.223.100

master02: 192.168.223.103

master03: 192.168.223.104

node01: 192.168.223.101

node02: 192.168.223.102

Notes:

  • Master nodes need more than 2 CPU cores
  • The newest version is not necessarily best: relative to older releases the core features are stable, but newly added features and interfaces are less so
  • Once you have learned an HA deployment for one version, the others work much the same way
  • Upgrade the hosts to CentOS 7.9 if possible
  • Upgrade the kernel to a stable release such as 4.19+
  • When choosing a k8s version, prefer patch releases like 1.xx.5 or later (patch level of 5 or more), which tend to be stable

2.2 Environment preparation

// On all nodes: stop the firewall, disable SELinux, and turn off swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab


// Set the hostnames (run the matching command on each node)
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02

// On all nodes, edit the hosts file
vim /etc/hosts
192.168.223.100 master01
192.168.223.103 master02
192.168.223.104 master03
192.168.223.101 node01
192.168.223.102 node02


// Synchronize time on all nodes
yum -y install ntpdate

ntpdate time2.aliyun.com

crontab -e
*/30 * * * * /usr/sbin/ntpdate time2.aliyun.com

// Raise Linux resource limits on all nodes
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
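
These limits only apply to sessions started after the change; after logging in again, they can be checked with ulimit as a quick sanity check against the values above:

```shell
# Soft limits for the current shell; with the limits.conf above in effect,
# a fresh login session should report 65536 and 65535 respectively.
ulimit -n    # max open files
ulimit -u    # max user processes
```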

// Upgrade the kernel on all nodes
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

cd /opt/
yum localinstall -y kernel-ml*

#Make the new kernel the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
reboot

// Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

#Apply the parameters
sysctl --system 

// Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

2.3 Install docker on all nodes

1. Install docker

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

#Alternatively, install this pinned docker version; a docker version that is too new may conflict with kubeadm
yum install -y docker-ce-18.06.1.ce-3.el7 docker-ce-cli-18.06.1.ce-3.el7 containerd.io docker-compose-plugin

2. Create the configuration file

mkdir /etc/docker

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://r5uulkvq.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  }
}

Use the systemd-managed cgroup driver for resource control: compared with cgroupfs, systemd's handling of CPU, memory, and other limits is simpler and more mature and stable.
#Logs are stored in json-file format, capped at 500 MB per file with at most 3 files; container logs appear under /var/log/containers, which makes them easy for log systems such as ELK to collect and manage.

3. Restart docker

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 

docker info | grep "Cgroup Driver"

2.4 Install kubeadm, kubelet, and kubectl on all nodes

1. Define the kubernetes repo

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15

2. Configure kubelet to use the Aliyun pause image

cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF

3. Enable kubelet at boot

systemctl enable --now kubelet

2.5 Deploy haproxy and keepalived on all master nodes

1. Install the packages

yum -y install haproxy keepalived

2. Edit the haproxy configuration file

cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local0 info
    log         127.0.0.1 local1 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
    maxconn                 3000

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

frontend k8s-master
    bind *:6444
    mode tcp
    option tcplog
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    server k8s-master1 192.168.223.100:6443  check inter 10000 fall 2 rise 2 weight 1
    server k8s-master2 192.168.223.103:6443  check inter 10000 fall 2 rise 2 weight 1
    server k8s-master3 192.168.223.104:6443  check inter 10000 fall 2 rise 2 weight 1
EOF

3. Edit the keepalived configuration file

cd /etc/keepalived/

vim keepalived.conf

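
The screenshot showed the file contents; a sketch of a typical keepalived.conf for this topology, assuming the VIP 192.168.223.200 used in the join commands below (on master02/master03, change state to BACKUP and lower the priority):

```conf
! Configuration File for keepalived
global_defs {
    router_id LVS_MASTER01          ! unique per node
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER                    ! BACKUP on master02/master03
    interface ens33                 ! adjust to the actual NIC name
    virtual_router_id 51
    priority 100                    ! lower on the backups, e.g. 90/80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.223.200             ! the cluster VIP
    }
    track_script {
        check_haproxy
    }
}
```
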
4. Write a health-check script to prevent split-brain

cd /etc/keepalived
vim check_haproxy.sh

#!/bin/bash
# killall -0 sends no signal; it only checks whether a haproxy process exists.
# If haproxy is gone, stop keepalived so the VIP fails over to another master.
if ! killall -0 haproxy
then
    systemctl stop keepalived
fi

chmod +x check_haproxy.sh

5. Enable and start haproxy and keepalived

systemctl enable --now haproxy
systemctl enable --now keepalived

2.6 Deploy the k8s cluster

1. On master01, generate the cluster init configuration file and edit it

kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt
vim kubeadm-config.yaml

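
The screenshots showed the edits. Compared with the single-master config, the key addition is a control-plane endpoint pointing at the haproxy VIP and frontend port, alongside the usual image repository, version, subnets, and IPVS settings. A sketch, assuming the VIP 192.168.223.200:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.223.100            # this master's own address
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.15
controlPlaneEndpoint: "192.168.223.200:6444"   # haproxy VIP and frontend port
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```
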
2. Migrate the init configuration file to the newer format

kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

3. Copy the yaml config to the other hosts, then pull the images on all nodes using it

for i in master02 master03 node01 node02; do scp /opt/new.yaml $i:/opt/; done

kubeadm config images pull --config /opt/new.yaml

4. Run the initialization on master01

kubeadm init --config new.yaml --upload-certs | tee kubeadm-init.log

5. Save the join commands printed at the end of the init output

6. Configure kubectl on master01

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

7. Edit the controller-manager and scheduler configuration files

vim /etc/kubernetes/manifests/kube-scheduler.yaml 

vim /etc/kubernetes/manifests/kube-controller-manager.yaml

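
As in part 1, the edit shown in the screenshots for both manifests is typically commenting out the --port=0 flag in each file's command list so that `kubectl get cs` reports the components healthy. A sketch of the relevant lines:

```yaml
    - --leader-elect=true
#    - --port=0
```
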
8. Restart the service

systemctl restart kubelet

kubectl get cs

9. Join the remaining nodes to the cluster: the first command (with --control-plane) is for the other master nodes, the second for the worker nodes

kubeadm join 192.168.223.200:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:409bc6c9d7deaaf75b8f6222cfe6dee4d166a186813d096421b897183ef1824d \
    --control-plane --certificate-key 62f7d6dff1a30608fbca49c7f0f46f7ce886bbd632e3c8af45aea641fc5b714b

kubeadm join 192.168.223.200:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:409bc6c9d7deaaf75b8f6222cfe6dee4d166a186813d096421b897183ef1824d

2.7 Deploy the flannel network plugin

The steps are the same as before, so they are not repeated here.

3. Fixing kubeadm certificate expiry

1. First, check the certificates' expiry dates

cd /etc/kubernetes/pki

kubeadm alpha certs check-expiration
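
check-expiration reads the expiry dates out of the certificates themselves; the same information can be read for any single certificate with openssl, which is a handy cross-check. Shown here on a freshly generated throwaway certificate; on the master, point it at /etc/kubernetes/pki/apiserver.crt instead:

```shell
# Generate a throwaway self-signed cert, then print its notAfter (expiry) date.
crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -out "$crt" -days 365 -subj "/CN=demo" 2>/dev/null
openssl x509 -enddate -noout -in "$crt"    # prints: notAfter=<expiry date>
```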

2. Back up the certificates first

cd /etc/kubernetes

cp -r pki/ pki.bak/

3. Renew all certificates

kubeadm alpha certs renew all --config=/opt/k8s/kubeadm-config.yaml

4. Restart the service

systemctl restart kubelet.service 

5. Move all files out of the manifests directory, wait 1-2 minutes, then move them back to recreate the control-plane static pods

cd /etc/kubernetes

mv manifests/* /tmp/

#After 1-2 minutes, move the files back

cd /tmp/
mv ./* /etc/kubernetes/manifests/
