[CKA-Exercises] Cluster

Original article: blog.kii.la

Cluster (11%)

Kubernetes.io > Documentation > Reference > Kubectl CLI > kubectl cheat sheet

Kubernetes.io > Documentation > Reference > Setup Tools Reference > kubeadm > kubeadm upgrade

Kubernetes.io > Documentation > Tasks > Administer a Cluster with kubeadm > Upgrading kubeadm clusters from v1.13 to v1.14

Kubernetes.io > Documentation > Tasks > Administer a cluster > cluster management

Kubernetes.io > Documentation > Tasks > Administer a Cluster > Operating etcd clusters for Kubernetes

etcd recovery (GitHub)

Understand Kubernetes Cluster upgrade process

Permitted version skew between Kubernetes component binaries:

  • kube-apiserver - version X
  • controller-manager - version X-1
  • kube-scheduler - version X-1
  • kubelet - version X-2
  • kube-proxy - version X-2
  • kubectl - version X+1 through X-1

At any point in time, components up to two minor versions (X-2) behind kube-apiserver remain supported.
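
The skew rules above can be sketched as a small check (a hypothetical helper, not part of any Kubernetes tooling; it only models components running at or behind the apiserver's minor version):

```shell
# Hypothetical helper: does a component's minor version fall within the
# allowed skew window behind the kube-apiserver's minor version?
skew_ok() {
  api_minor=$1; comp_minor=$2; max_behind=$3
  diff=$((api_minor - comp_minor))
  if [ "$diff" -ge 0 ] && [ "$diff" -le "$max_behind" ]; then
    echo yes
  else
    echo no
  fi
}

skew_ok 14 12 2   # kubelet at X-2 -> yes
skew_ok 14 11 2   # kubelet at X-3 -> no
```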

Master Upgrade

$ apt-get install -y kubeadm=1.12.0-00
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.12.0
$ apt-get install -y kubelet=1.12.0-00
$ systemctl restart kubelet
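
The kubeadm=1.12.0-00 pin above follows the version-00 Debian package naming used by the Kubernetes apt repository. A tiny sketch (hypothetical helper) that derives the pin from a release tag:

```shell
# Hypothetical helper: turn a release tag like v1.12.0 into the
# kubeadm/kubelet Debian package pin used with apt-get (1.12.0-00).
pkg_pin() {
  echo "${1#v}-00"
}

pkg_pin v1.12.0    # -> 1.12.0-00
# e.g. apt-get install -y kubeadm="$(pkg_pin v1.12.0)"
```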

Node Upgrade

$ kubectl drain node01
$ apt-get install -y kubeadm=1.12.0-00
$ apt-get install -y kubelet=1.12.0-00
$ kubeadm upgrade node config --kubelet-version v1.12.0
$ systemctl restart kubelet
$ kubectl uncordon node01

Facilitate OS upgrades




$ kubectl drain node01 --ignore-daemonsets

Now apply the OS patches on node01. Once it comes back up, make it schedulable again.

If a Pod running on the node is not managed by a ReplicaSet (or another controller), you have to force the eviction, and that Pod is gone for good:



$ kubectl drain node01 --ignore-daemonsets --force
$ kubectl uncordon node01

But if you only want to make the node unschedulable without evicting the Pods already running on it, just cordon the node:



$ kubectl cordon node01
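
A cordoned node shows SchedulingDisabled in the STATUS column of kubectl get nodes. A sketch of checking for it, using a hypothetical output line (on a real cluster the line would come from kubectl itself):

```shell
# Hypothetical helper: classify a `kubectl get nodes` output line.
node_sched_state() {
  case "$1" in
    *SchedulingDisabled*) echo cordoned ;;
    *)                    echo schedulable ;;
  esac
}

# Hypothetical output line after cordoning node01.
node_sched_state 'node01   Ready,SchedulingDisabled   <none>   12d   v1.12.0'
# -> cordoned
```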

Implement backup and restore methodologies




$ kubectl get all --all-namespaces -o yaml > all-services.yaml

etcd stores all of its data under the directory given by its --data-dir flag:



$ cat /etc/systemd/system/etcd.service
# --data-dir=/var/lib/etcd
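
The same --data-dir value can be pulled out of the unit file programmatically. A sketch over a hypothetical ExecStart line (on a real node, grep /etc/systemd/system/etcd.service instead):

```shell
# Hypothetical ExecStart line from etcd.service; on a real node this would
# come from /etc/systemd/system/etcd.service.
execstart='/usr/local/bin/etcd --name master --data-dir=/var/lib/etcd'

# Extract the flag value: match the flag, then take what follows '='.
datadir=$(echo "$execstart" | grep -oe '--data-dir=[^ ]*' | cut -d= -f2)
echo "$datadir"   # -> /var/lib/etcd
```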

$ ETCDCTL_API=3 etcdctl snapshot save snapshot.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.crt \
    --cert=/etc/etcd/etcd-server.crt \
    --key=/etc/etcd/etcd-server.key

$ ETCDCTL_API=3 etcdctl snapshot status snapshot.db

$ service kube-apiserver stop
$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
    --name=master \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    --data-dir /var/lib/etcd-from-backup \
    --initial-cluster master=https://127.0.0.1:2380 \
    --initial-cluster-token etcd-cluster-1 \
    --initial-advertise-peer-urls https://127.0.0.1:2380
    

$ cat /etc/systemd/system/etcd.service

# --initial-cluster-token etcd-cluster-1
# --data-dir /var/lib/etcd-from-backup

$ systemctl daemon-reload
$ service etcd restart
$ service kube-apiserver start