Manually Upgrading K8s
1. K8s Version Upgrade and New-Version Cluster Deployment
Options:
- Method 1: upgrade the test environment first; once the pods run without problems, upgrade production.
- Method 2: deploy a new K8s cluster at the new version, migrate the workloads from the old cluster, and after testing passes point the load balancer's backend servers at the new environment.
Binary download site: github.com/kubernetes/…
Note: download the amd64 builds:
kubernetes
kubernetes-client
kubernetes-server
kubernetes-node
On Master 101
Upload the packages:
cd /usr/local/src/
mkdir 1.14.7
cd 1.14.7
# download
wget https://dl.k8s.io/v1.14.7/kubernetes.tar.gz
wget https://dl.k8s.io/v1.14.7/kubernetes-client-darwin-amd64.tar.gz
wget https://dl.k8s.io/v1.14.7/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.14.7/kubernetes-node-linux-amd64.tar.gz
# extract
tar xf kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-client-darwin-amd64.tar.gz
tar xf kubernetes-node-linux-amd64.tar.gz
tar xf kubernetes.tar.gz
First remove the node from the cluster:
# kubectl get node
NAME             STATUS                     ROLES    AGE     VERSION
192.168.37.101   Ready,SchedulingDisabled   master   3d11h   v1.13.5
192.168.37.102   Ready,SchedulingDisabled   master   3d10h   v1.13.5
192.168.37.110   Ready                      node     3d10h   v1.13.5
192.168.37.111   Ready                      node     3d10h   v1.13.5
# kubectl delete node 192.168.37.102
# kubectl delete node 192.168.37.102
On Master 102
Manual upgrade. Note that a running service cannot be upgraded in place; stop the services first:
systemctl stop kubelet kube-proxy
On Master 101
Copy the new binaries over:
cd kubernetes/server/bin/
scp kubelet kube-proxy 192.168.37.102:/usr/bin
On Master 102
Start the services:
systemctl start kubelet kube-proxy
The node's reported version should now change. If this step errors, or the version does not change, check that the docker version is compatible, swap is disabled, the firewall is disabled, SELinux is disabled, and time is synchronized.
# kubectl get nodes
NAME             STATUS                     ROLES    AGE     VERSION
192.168.37.101   Ready,SchedulingDisabled   master   3d12h   v1.13.5
192.168.37.102   Ready                      <none>   52m     v1.14.7
192.168.37.110   Ready                      node     3d12h   v1.13.5
192.168.37.111   Ready                      node     3d11h   v1.13.5
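The troubleshooting checklist above (swap, SELinux, firewall, plus docker version and time sync) can be turned into a quick pre-flight script. A minimal sketch, not part of the original walkthrough; it assumes a systemd-based host, and the firewall service name (firewalld vs ufw) depends on the distro:

```shell
#!/bin/bash
# Pre-flight checks before a kubelet upgrade: each function prints one
# "OK: ..." or "WARN: ..." line so the output is easy to scan.
# Also check 'docker version' compatibility and time sync (e.g. 'timedatectl')
# by hand; those need a running daemon and are not scripted here.

check_swap() {
    # kubelet refuses to start by default while swap is enabled
    if swapon --show 2>/dev/null | grep -q .; then
        echo "WARN: swap is enabled - run 'swapoff -a' and remove it from /etc/fstab"
    else
        echo "OK: swap is off"
    fi
}

check_selinux() {
    if command -v getenforce >/dev/null 2>&1 && [ "$(getenforce)" = "Enforcing" ]; then
        echo "WARN: SELinux is enforcing"
    else
        echo "OK: SELinux is disabled or not installed"
    fi
}

check_firewall() {
    local svc
    for svc in firewalld ufw; do
        if systemctl is-active "$svc" >/dev/null 2>&1; then
            echo "WARN: $svc is active"
            return
        fi
    done
    echo "OK: no firewall service active"
}

check_swap
check_selinux
check_firewall
```

Run it on each node before swapping binaries; any WARN line should be fixed first.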
The node now reports v1.14.7, but the master components have not been upgraded yet; the master binaries still need to be replaced:
systemctl stop kube-apiserver kube-controller-manager kube-scheduler
On Master 101
scp kube-apiserver kube-controller-manager kube-scheduler 192.168.37.102:/usr/bin
On Master 102
After starting the services, the upgrade of 102 is complete:
systemctl start kube-apiserver kube-controller-manager kube-scheduler
On Master 101
Upgrade procedure:
kubectl delete node 192.168.37.101
kubectl get node
NAME             STATUS   ROLES    AGE    VERSION
192.168.37.102   Ready    <none>   4m9s   v1.14.7
192.168.37.110   Ready    node     8h     v1.13.5
192.168.37.111   Ready    node     8h     v1.13.5
systemctl stop kubelet kube-proxy kube-apiserver kube-controller-manager kube-scheduler
cd /usr/local/src/1.14.7/kubernetes/server/bin
scp kubelet kube-proxy kube-apiserver kube-controller-manager kube-scheduler 192.168.37.101:/usr/bin
systemctl start kubelet kube-proxy kube-apiserver kube-controller-manager kube-scheduler
# upgrade complete
kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.37.101   Ready    <none>   6s    v1.14.7
192.168.37.102   Ready    <none>   11m   v1.14.7
192.168.37.110   Ready    node     8h    v1.13.5
192.168.37.111   Ready    node     8h    v1.13.5
Nodes 110/111 (upgrade them one at a time; 110 is used as the example)
Upgrade procedure
On Master 101, remove the node first:
kubectl delete node 192.168.37.110
On 110/111, stop the services:
systemctl stop kubelet kube-proxy
On Master 101, copy the files:
scp kubelet kube-proxy 192.168.37.110:/usr/bin
On 110/111, start the services:
systemctl start kubelet kube-proxy
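The per-node sequence (delete the node, stop the services, copy the binaries, start the services) repeats for every worker, so it can be wrapped in a small helper. A sketch only, not from the original notes: it assumes passwordless SSH from the master to each node, and a DRY_RUN switch (on by default) prints the commands instead of running them.

```shell
#!/bin/bash
# Repeatable worker upgrade: delete node -> stop services -> copy binaries
# -> start services. DRY_RUN=1 (the default here) only prints each command.
BIN_DIR=/usr/local/src/1.14.7/kubernetes/server/bin
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

upgrade_node() {
    local node=$1
    run kubectl delete node "$node"                     # stop scheduling onto it
    run ssh "$node" systemctl stop kubelet kube-proxy   # in-use binaries cannot be replaced
    run scp "$BIN_DIR/kubelet" "$BIN_DIR/kube-proxy" "$node:/usr/bin"
    run ssh "$node" systemctl start kubelet kube-proxy
}

# Upgrade the workers one at a time, as in the walkthrough:
for n in 192.168.37.110 192.168.37.111; do
    upgrade_node "$n"
done
```

Set DRY_RUN=0 to actually execute the commands once the printed plan looks right.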
Upgrading K8s with kubeasz (docker 18.09.9)
For example, upgrading to version 2.0.3: github.com/easzlab/kub…
On Master 101
Clone the repository:
git clone -b 2.0.3 https://github.com/easzlab/kubeasz.git
Roles: master {101, 102} - etcd {105, 106, 107} - ha {108, 109} - node {110, 111}
Install the dependency tools on every node.
Install python2.7:
apt-get install python2.7 -y
Create a symlink:
ln -s /usr/bin/python2.7 /usr/bin/python
On Master 101
# empty the directory
rm -rf /etc/ansible/*
# move the files over
mv kubeasz/* /etc/ansible/
Passwordless SSH key authentication:
vim /root/scp.sh
#!/bin/bash
# target host list
IP="
192.168.37.101
192.168.37.102
192.168.37.105
192.168.37.106
192.168.37.107
192.168.37.108
192.168.37.109
192.168.37.110
192.168.37.111
"
# install sshpass
apt install sshpass -y
for node in ${IP};do
    # copy the key; every host must share the same root password, e.g. '123.com'
    sshpass -p 123.com ssh-copy-id ${node} -o StrictHostKeyChecking=no
    if [ $? -eq 0 ];then
        echo "${node} key copied successfully"
    else
        echo "${node} key copy failed"
    fi
done
Orchestrate the k8s installation from the ansible control node
It is recommended to use the easzup script to download the files needed in steps 4.0/4.1/4.2; once it runs successfully, all files (kubeasz code, binaries, offline images) are laid out under `/etc/ansible`
# download the easzup tool script; this example uses kubeasz version 2.0.3
export release=2.0.3
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup
# make it executable
chmod +x ./easzup
# download with the tool script
./easzup -D
After the download completes, check the images:
docker images
Configure the cluster parameters
# required configuration
cd /etc/ansible
cp example/hosts.multi-node ./hosts
# edit the config file
vim hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
192.168.37.105 NODE_NAME=etcd1
192.168.37.106 NODE_NAME=etcd2
192.168.37.107 NODE_NAME=etcd3
# master node(s)
[kube-master]
192.168.37.101
192.168.37.102
# work node(s)
[kube-node]
192.168.37.110
192.168.37.111
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with existed one
[harbor]
#192.168.37.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no
# [optional] loadbalance for accessing k8s from outside
[ex-lb]
192.168.37.108 LB_ROLE=backup EX_APISERVER_VIP=192.168.37.240 EX_APISERVER_PORT=6443
192.168.37.109 LB_ROLE=master EX_APISERVER_VIP=192.168.37.240 EX_APISERVER_PORT=6443
# [optional] ntp server for the cluster
[chrony]
#192.168.37.1
[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.20.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"
# NodePort Range
NODE_PORT_RANGE="30000-65000"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="linux01.local."
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/bin"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
[01 - Create certificates and installation prep](https://github.com/easzlab/kubeasz/blob/2.0.3/docs/setup/01-CA_and_prerequisite.md)
# comment out the entry marked below
vim 01.prepare.yml
# [optional] to synchronize system time of nodes with 'chrony'
- hosts:
- kube-master
- kube-node
- etcd
- ex-lb <--
# - chrony <--
Verify the ansible setup; every node should return SUCCESS:
ansible all -m ping
Run the playbook:
ansible-playbook 01.prepare.yml
[02 - Install the etcd cluster](https://github.com/easzlab/kubeasz/blob/2.0.3/docs/setup/02-install_etcd.md)
Check version information:
pwd
/etc/ansible
./bin/etcd --version
etcd Version: 3.3.10
Git SHA: 27fc7e2
Go Version: go1.10.4
Go OS/Arch: linux/amd64
ansible-playbook 02.etcd.yml
Stop the service and replace the docker version:
systemctl stop docker
cd down/
tar xvf docker-18.09.9.tgz
cp docker/* /usr/bin/
Start the service and check the docker version:
systemctl start docker
# docker version
Client: Docker Engine - Community
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep 4 16:50:02 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df9ba
  Built:            Wed Sep 4 16:55:50 2019
  OS/Arch:          linux/amd64
  Experimental:     false
[03 - Install the docker service](https://github.com/easzlab/kubeasz/blob/2.0.3/docs/setup/03-install_docker.md)
cd /etc/ansible/
ansible-playbook 03.docker.yml
Verify on **node {110/111}**:
# docker version
Client: Docker Engine - Community
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May 4 02:33:34 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May 4 02:41:08 2019
  OS/Arch:          linux/amd64
  Experimental:     false
[04 - Install the master nodes](https://github.com/easzlab/kubeasz/blob/2.0.3/docs/setup/04-install_kube_master.md)
ansible-playbook 04.kube-master.yml
[05 - Install the worker nodes](https://github.com/easzlab/kubeasz/blob/2.0.3/docs/setup/05-install_kube_node.md)
ansible-playbook 05.kube-node.yml
# kubectl get nodes
NAME             STATUS                     ROLES    AGE     VERSION
192.168.37.101   Ready,SchedulingDisabled   master   6m35s   v1.15.2
192.168.37.102   Ready,SchedulingDisabled   master   6m35s   v1.15.2
192.168.37.110   Ready                      node     3m32s   v1.15.2
192.168.37.111   Ready                      node     3m32s   v1.15.2
[06 - Install the cluster network](https://github.com/easzlab/kubeasz/blob/2.0.3/docs/setup/06-install_network_plugin.md)
# calicoctl node status
Calico process is running.
IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+-------------+
| 192.168.37.102 | node-to-node mesh | up | 19:52:03 | Established |
| 192.168.37.110 | node-to-node mesh | up | 19:52:03 | Established |
| 192.168.37.111 | node-to-node mesh | up | 19:52:03 | Established |
+----------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
Create test pods to check whether pods can communicate across hosts:
# kubectl run net-test --image=alpine --replicas=2 sleep 360000
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/net-test created
# kubectl get pod
NAME                        READY   STATUS              RESTARTS   AGE
net-test-54ddf4f6c7-5hmvh   0/1     ContainerCreating   0          9s
net-test-54ddf4f6c7-wwhf2   0/1     ContainerCreating   0          9s
# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
net-test-54ddf4f6c7-5hmvh   1/1     Running   0          17s   172.20.104.1     192.168.37.111   <none>           <none>
net-test-54ddf4f6c7-wwhf2   1/1     Running   0          17s   172.20.166.129   192.168.37.110   <none>           <none>
# exec into one of the containers
# kubectl exec -it net-test-54ddf4f6c7-wwhf2 sh
# ping the other pod to test
/ # ping 172.20.104.1 -c 2
PING 172.20.104.1 (172.20.104.1): 56 data bytes
64 bytes from 172.20.104.1: seq=0 ttl=62 time=0.479 ms
64 bytes from 172.20.104.1: seq=1 ttl=62 time=0.470 ms
--- 172.20.104.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.470/0.474/0.479 ms
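The manual exec-and-ping check can be scripted across all test pods at once. A hypothetical helper, not part of the original walkthrough; KUBECTL is overridable (e.g. KUBECTL="echo kubectl") so the loop can be previewed without a cluster:

```shell
#!/bin/bash
# Ping each given pod IP from inside a source pod to verify the calico
# overlay works across hosts. KUBECTL defaults to the real binary but can
# be overridden to dry-run the commands.
KUBECTL=${KUBECTL:-kubectl}

ping_all_pods() {
    local src=$1; shift      # first argument: source pod name
    local ip
    for ip in "$@"; do       # remaining arguments: target pod IPs
        $KUBECTL exec "$src" -- ping -c 2 "$ip" || echo "FAIL: $src -> $ip"
    done
}
```

Example usage, with the pod name and IPs taken from `kubectl get pod -o wide` above: `ping_all_pods net-test-54ddf4f6c7-wwhf2 172.20.104.1`.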