Multi-Cluster Management with K8s


1. Background

In real production environments you often have to maintain several k8s clusters. Constantly switching between environments and nodes hurts efficiency and runs against the DevOps mindset, so here I try to manage multiple k8s clusters from a single node.

2. Prerequisites

  • Understand k8s contexts (a quick way to inspect them is shown right after this list)
  • Understand the k8s kubeconfig file
  • At least two k8s clusters
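Assuming kubectl is already configured on a node, the following commands show which contexts the current kubeconfig defines and which one is active:

# List every context in the current kubeconfig; the one marked with * is active
kubectl config get-contexts

# Print only the name of the active context
kubectl config current-context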

3. Experiment

3.1 The k8s clusters

  • Cluster on node t34
[root@t34 ~]# kubectl  get nodes 
NAME   STATUS   ROLES                      AGE    VERSION
t31    Ready    worker                     156d   v1.14.3
t32    Ready    worker                     70d    v1.14.3
t34    Ready    controlplane,etcd,worker   199d   v1.14.3
t90    Ready    worker                     156d   v1.14.3
t91    Ready    worker                     169d   v1.14.3
  • Cluster on node node43
[root@node43 ~]# kubectl  get nodes 
NAME     STATUS   ROLES                      AGE    VERSION
node43   Ready    controlplane,etcd,worker   121d   v1.14.3

3.2 The kubeconfig file

The kubeconfig file can be inspected with kubectl, or by reading /root/.kube/config (the default location) directly.
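For example, you can read the file itself, or point kubectl at a non-default kubeconfig for a single invocation; the alternate path below is only a placeholder:

# Read the default kubeconfig directly
cat /root/.kube/config

# Or have kubectl use a different kubeconfig file for this one command
KUBECONFIG=/path/to/other/config kubectl config view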

  • node43 cluster
[root@node43 ~]# kubectl  config view 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.5.43/k8s/clusters/c-mg6wm
  name: test
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.5.43:6443
  name: test-node43
contexts:
- context:
    cluster: test
    user: user-twwt4
  name: test
- context:
    cluster: test-node43
    user: user-twwt4
  name: test-node43
current-context: test
kind: Config
preferences: {}
users:
- name: user-twwt4
  user:
    token: kubeconfig-user-twwt4.c-mg6wm:r7bk54gw2h5vpx6wqwbqrldzhp2nz5lppvf5cfgbgnwffsj7rfkjdp
  • t34 cluster
[root@t34 canary]# kubectl config view 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.4.34/k8s/clusters/c-6qgsl
  name: test
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.4.34:6443
  name: test-t34
contexts:
- context:
    cluster: test
    user: user-czbv6
  name: test
- context:
    cluster: test-t34
    user: user-czbv6
  name: test-t34
current-context: test
kind: Config
preferences: {}
users:
- name: user-czbv6
  user:
    token: kubeconfig-user-czbv6.c-6qgsl:tznvpqkdw7mz6r8276h8zs5hbl45h2bv2g8jwfjqc8qckhgfwwz9rd
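
Stripping away the real names in the two outputs above, the structure of a kubeconfig is simple: a context is just a named binding of one cluster entry to one user entry. The skeleton below uses purely illustrative names:

apiVersion: v1
kind: Config
clusters:
- name: my-cluster              # where to connect
  cluster:
    server: https://example.com:6443
users:
- name: my-user                 # how to authenticate
  user:
    token: <token>
contexts:
- name: my-context              # binds a cluster entry to a user entry
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context     # the context kubectl uses by default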

3.3 Configuration

On t34, add node43's cluster, user, and context entries.

  • Add the cluster (TLS verification is skipped here; see the note after these steps)
[root@t34 canary]# kubectl config set-cluster node43 --server=https://192.168.5.43:6443 --insecure-skip-tls-verify=true
Cluster "node43" set.
  • Add the user
[root@t34 canary]# kubectl config set-credentials node43-user --token=kubeconfig-user-twwt4.c-mg6wm:r7bk54gw2h5vpx6wqwbqrldzhp2nz5lppvf5cfgbgnwffsj7rfkjdp
User "node43-user" set.
  • Add the context
[root@t34 canary]# kubectl config set-context node43-context --cluster=node43 --user=node43-user
Context "node43-context" created.
  • Verify
[root@t34 canary]# kubectl config view 
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.5.43:6443
  name: node43
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.4.34/k8s/clusters/c-6qgsl
  name: test
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.4.34:6443
  name: test-t34
contexts:
- context:
    cluster: node43
    user: node43-user
  name: node43-context
- context:
    cluster: test
    user: user-czbv6
  name: test
- context:
    cluster: test-t34
    user: user-czbv6
  name: test-t34
current-context: test
kind: Config
preferences: {}
users:
- name: node43-user
  user:
    token: kubeconfig-user-twwt4.c-mg6wm:r7bk54gw2h5vpx6wqwbqrldzhp2nz5lppvf5cfgbgnwffsj7rfkjdp
- name: user-czbv6
  user:
    token: kubeconfig-user-czbv6.c-6qgsl:tznvpqkdw7mz6r8276h8zs5hbl45h2bv2g8jwfjqc8qckhgfwwz9rd
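
A note on the set-cluster step above: --insecure-skip-tls-verify=true disables server certificate checking. If the node43 cluster's CA certificate is available, a safer variant is to embed it instead; the certificate path below is only a placeholder:

# Recreate the cluster entry with the CA embedded rather than skipping verification
kubectl config set-cluster node43 \
  --server=https://192.168.5.43:6443 \
  --certificate-authority=/path/to/node43-ca.crt \
  --embed-certs=true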

3.4 Testing

The current context is test, which points at the test cluster entry (i.e., the t34 cluster) with user user-czbv6.

[root@t34 canary]# kubectl config current-context
test

[root@t34 canary]# kubectl get nodes
NAME   STATUS   ROLES                      AGE    VERSION
t31    Ready    worker                     156d   v1.14.3
t32    Ready    worker                     70d    v1.14.3
t34    Ready    controlplane,etcd,worker   199d   v1.14.3
t90    Ready    worker                     156d   v1.14.3
t91    Ready    worker                     169d   v1.14.3

Switch the context to node43-context, which points at the node43 cluster entry (i.e., the node43 cluster) with user node43-user.

[root@t34 canary]# kubectl config use-context node43-context 
Switched to context "node43-context".

[root@t34 canary]# kubectl config  current-context
node43-context
[root@t34 canary]# kubectl get nodes
NAME     STATUS   ROLES                      AGE    VERSION
node43   Ready    controlplane,etcd,worker   121d   v1.14.3

At this point, node t34 manages two k8s clusters. More clusters can be added the same way, and switching between them is simply a matter of choosing a different context.
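
For one-off commands, switching is not even necessary: kubectl accepts a --context flag per invocation, which leaves the current context untouched:

# Query the node43 cluster without changing the current context
kubectl --context=node43-context get nodes

# The active context is still whatever use-context last selected
kubectl config current-context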

By the way: within a single cluster, contexts can also be used to separate production and development environments (for example, via different default namespaces), as sketched below.
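
A minimal sketch of that idea, assuming the same cluster (test-t34) and user, with hypothetical dev and prod namespaces:

# Two contexts that differ only in their default namespace
kubectl config set-context dev --cluster=test-t34 --user=user-czbv6 --namespace=dev
kubectl config set-context prod --cluster=test-t34 --user=user-czbv6 --namespace=prod

# Switch environments without adding -n to every command
kubectl config use-context dev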