2022 CKA Exam Questions — Passed at the End of June 2022


I booked the exam back in April, originally for June 25, but when I started the exam the subject I was supposed to take simply would not load (really frustrating!!!). I later learned from support that the booking system was under maintenance and no exams could be taken during that period. After some back-and-forth over email, the relevant staff fixed the backend data and I was able to rebook, so this exam turned out to be quite a hassle.


The exam has 17 questions and lasts 120 minutes; the order of the questions may vary. You can join half an hour early to go through the environment check with the proctor. Do not wear a watch or other accessories, and keep books and paper items off your desk and out of reach. Good luck to everyone preparing for the exam!!!

01 RBAC

Create a new ClusterRole named deployment-clusterrole that only allows the creation of the following resource types:

  • Deployment
  • StatefulSet
  • DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1.

Limited to namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.

Analysis:

  • Create a ClusterRole named deployment-clusterrole that only allows creating Deployment, StatefulSet and DaemonSet resources
  • Create a ServiceAccount named cicd-token in the app-team1 namespace
  • Limited to the app-team1 namespace, bind the ClusterRole to the ServiceAccount created above
#--> Switch to the specified cluster
kubectl config use-context [NAME]
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployment,statefulset,daemonset
#--> The app-team1 namespace already exists in the cluster
kubectl create sa -n app-team1 cicd-token
#--> The binding is limited to a single namespace, so a RoleBinding is the right choice
kubectl create rolebinding -n app-team1 clusterrole-token --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
#--> Check
kubectl describe rolebinding -n app-team1 clusterrole-token
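As an optional sanity check, you can impersonate the ServiceAccount to confirm the permissions took effect:

#--> Should print "yes"
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1
#--> Should print "no", since only the create verb was granted
kubectl auth can-i delete deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1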

02 Drain Node

Set the node named ek8s-node-1 as unavailable and reschedule all pods running on it.

Analysis:

  • Mark the node named ek8s-node-1 as unschedulable and evict all Pods running on it
#--> Switch to the specified cluster
kubectl config use-context [NAME]
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --delete-emptydir-data --force --ignore-daemonsets
#--> Check:
kubectl get node

Notes:

--force also evicts pods that are not managed by a controller (RC, RS, Job, DaemonSet or StatefulSet)

--ignore-daemonsets skips DaemonSet-managed pods instead of trying to delete them

--delete-emptydir-data proceeds even if pods use emptyDir volumes; their local data is deleted (the older name for this flag was --delete-local-data)
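The task only asks to drain the node; if you later need to bring it back into scheduling after maintenance, uncordon it:

kubectl uncordon ek8s-node-1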

03 K8s Upgrade

Given an existing Kubernetes cluster running version 1.22.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.22.2.
You are also expected to upgrade kubelet and kubectl on the master node.

Be sure to drain the master node before upgrading it and uncordon it after the upgrade. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other addons.

## Note: the cluster being upgraded has only one master node
#--> Switch to the specified cluster
kubectl config use-context [NAME]
#--> Check the current cluster version and the name of the node to upgrade
kubectl get node
#--> Log in to the master node being upgraded
ssh master01
sudo su -
apt update && apt-cache madison kubeadm
apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.22.2-00
#--> Verify the kubeadm version
kubeadm version
#--> Drain the control-plane node
kubectl drain master01 --force --delete-emptydir-data --ignore-daemonsets
#--> List the versions available to upgrade to
kubeadm upgrade plan
#--> Upgrade the control plane; do not upgrade etcd
kubeadm upgrade apply v1.22.2 --etcd-upgrade=false
#--> Upgrade kubelet and kubectl on the control-plane node
apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.22.2-00 kubectl=1.22.2-00
#--> Restart kubelet
systemctl daemon-reload && systemctl restart kubelet
#--> Uncordon the control-plane node
kubectl uncordon master01
#--> Check the cluster version
kubectl get node
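As an extra sanity check (not part of the task), you can confirm the component versions on the master node after the upgrade:

#--> Both should report v1.22.2
kubelet --version
kubectl version --short
#--> The kubelet version as seen by the API server
kubectl get node master01 -o jsonpath='{.status.nodeInfo.kubeletVersion}'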

04 ETCD

First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /var/lib/backup/etcd-snapshot.db

Creating a snapshot of the given instance is expected to complete in seconds. If the operation seems to hang, something's likely wrong with your command. Use CTRL+C to cancel the operation and try again.

Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db .

The following TLS certificates/key are supplied for connecting to the server with etcdctl:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key

## No cluster switch is needed here; the etcdctl tool is already installed on the host
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /var/lib/backup/etcd-snapshot.db 
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt  --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db 

Notes:
etcdctl is the client tool for talking to etcd
--endpoints: the etcd endpoint address
--cacert: the etcd CA certificate
--key: the etcd client key
--cert: the etcd client certificate
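On the exam the restore command above is what gets graded, but note that etcdctl snapshot restore only writes data into a directory; it does not touch the running etcd. A common follow-up on a real kubeadm cluster, sketched here with an example data directory and the default static-pod manifest path, is:

#--> Restore into a fresh data directory (the path below is an example)
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore
#--> Then point the etcd static pod's data hostPath at the new directory
sudo vim /etc/kubernetes/manifests/etcd.yaml   # change /var/lib/etcd to /var/lib/etcd-restore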

05 NetworkPolicy

Create a NetworkPolicy named allow-port-from-namespace in the existing namespace foobar that allows Pods in the namespace corp-bar to access port 9200 of the Pods in foobar.

#--> Switch to the specified cluster
kubectl config use-context [NAME]
#--> Check the labels of namespace corp-bar, e.g. kubernetes.io/metadata.name=corp-bar
kubectl get ns --show-labels
#--> vim 05.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: foobar
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: corp-bar
    ports:
    - protocol: TCP
      port: 9200
#--> Save and exit with :wq
kubectl apply -f 05.yaml
#--> Check
kubectl describe networkpolicy -n foobar allow-port-from-namespace
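If the cluster does not apply the automatic kubernetes.io/metadata.name label to namespaces (older clusters), you can add your own label and match on it instead; the label key and value below are examples:

#--> Only needed when the automatic label is missing
kubectl label namespace corp-bar name=corp-bar
#--> and in the policy use:
#    namespaceSelector:
#      matchLabels:
#        name: corp-bar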

06 SVC

Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

Analysis:

  • Reconfigure the existing deployment front-end: add a port named http exposing 80/tcp on the existing container nginx
  • Create a Service named front-end-svc exposing the container port http
  • Configure the Service as type NodePort
#--> Switch to the specified cluster
kubectl config use-context [NAME]
#--> Edit the deployment and add the ports section as required
kubectl edit deployment front-end
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec: 
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
# vim 06.yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
 
kubectl apply -f 06.yaml
#--> Or do it directly from the command line, which is the recommended approach
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=80 --type=NodePort
#--> Check
kubectl get endpoints front-end-svc
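To double-check the service, inspect the assigned NodePort and, optionally, curl it from a node; the IP and port below are placeholders to fill in from your own output:

#--> Shows the NodePort assigned to the service
kubectl get svc front-end-svc
#--> Optional: curl from a node, replacing the placeholders with real values
curl http://<node-ip>:<node-port>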

07 Ingress

Create a new nginx Ingress resource as follows:

  • Name: ping
  • Namespace: ing-internal
  • Exposing service hi on path /hi using service port 5678

The availability of service hi can be checked using the following command, which should return hi: curl -kL /hi

#--> Switch to the specified cluster
kubectl config use-context [NAME]
#--> vim 07.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
# kubectl apply -f 07.yaml
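A quick way to verify (the address comes from the ingress controller in the exam environment, so the value below is a placeholder):

#--> Check the Ingress and its assigned address
kubectl get ingress ping -n ing-internal
#--> Then curl it, replacing <ingress-ip> with the ADDRESS from the previous command
curl -kL <ingress-ip>/hi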

08 Scale

Scale the deployment presentation to 3 pods.

#--> Switch to the specified cluster
kubectl config use-context [NAME]
kubectl scale deployment presentation --replicas=3
#--> Check
kubectl get deployment presentation

Notes:
kubectl scale changes the number of pod replicas of a deployment, rs, rc or job
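Shown only as an alternative, the same change can also be made with a declarative patch:

kubectl patch deployment presentation -p '{"spec":{"replicas":3}}'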

09 NodeSelector

Schedule a pod as follows:

  • name: nginx-kusc00401
  • Image: nginx
  • Node selector: disk=spinning
#--> Switch to the specified cluster
kubectl config use-context [NAME]
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > 09.yaml
vim 09.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
  - image: nginx
    name: nginx
kubectl apply -f 09.yaml
#--> Check
kubectl get pod
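An optional check that the pod was scheduled onto a node carrying the required label:

#--> Nodes that carry the label
kubectl get nodes -l disk=spinning
#--> The NODE column should be one of them
kubectl get pod nginx-kusc00401 -o wide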

10 Ready Node

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt

#--> Switch to the specified cluster
kubectl config use-context [NAME]
## Number of Ready nodes, minus the Ready nodes tainted with NoSchedule
kubectl get node | grep -w 'Ready'
kubectl describe node | grep -i 'taints'
echo '2' >> /opt/KUSC00402/kusc00402.txt
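The value 2 above is just the count for this particular cluster; take the number from your own output. If you want everything in one view, this sketch prints each node's taint effects so you can cross-reference them with the Ready status shown above:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].effect}{"\n"}{end}'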

11 Pod

Create a pod named kucc8 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + memcached

#--> Switch to the specified cluster
kubectl config use-context [NAME]
kubectl run kucc8 --image=nginx --dry-run=client -o yaml > 11.yaml
vim 11.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: memcached
    name: memcached
kubectl apply -f 11.yaml
#--> Check
kubectl get pod kucc8
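The READY column should show 2/2; you can also list the container names directly:

kubectl get pod kucc8 -o jsonpath='{.spec.containers[*].name}'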

12 PV

Create a persistent volume with name app-config, of capacity 1Gi and access mode ReadOnlyMany. The type of volume is hostPath and its location is /srv/app-config.

#--> Switch to the specified cluster
kubectl config use-context [NAME]
# vim 12.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity: 
    storage: 1Gi
  accessModes: 
  - ReadOnlyMany
  hostPath:
    path: "/srv/app-config"


kubectl apply -f 12.yaml
#--> Check
kubectl get pv app-config

13 PVC

Create a new PersistentVolumeClaim:

  • Name: pv-volume
  • Class: csi-hostpath-sc
  • Capacity: 10Mi

Create a new Pod which mounts the PersistentVolumeClaim as a volume:

  • Name: web-server
  • Image: nginx
  • Mount Path: /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume.

Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.

#--> Switch to the specified cluster
kubectl config use-context [NAME]
# vim 13.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes: 
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - name: my-volume
      mountPath: "/usr/share/nginx/html"
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pv-volume
        
kubectl create -f 13.yaml
#--> Use edit to change the requested storage to 70Mi
kubectl edit pvc pv-volume --record
#--> Check; after the edit it can take a little while for the capacity to change to 70Mi
kubectl get pvc pv-volume
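The same expansion can also be done with kubectl patch instead of edit (a sketch; it assumes the storage class allows volume expansion, which the task implies):

kubectl patch pvc pv-volume --record -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'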

14 Logs

Monitor the logs of pod bar and:

  • Extract log lines corresponding to error unable-to-access-website
  • Write them to /opt/KUTR00101/bar
#--> Switch to the specified cluster
kubectl config use-context [NAME]
kubectl logs bar | grep 'unable-to-access-website' >> /opt/KUTR00101/bar

15 Sidecar

Context
Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.
Task

Add a busybox sidecar container to the existing Pod big-corp-app. The new sidecar container has to run the following command:

/bin/sh -c tail -n+1 -f /var/log/big-corp-app.log

Use a volume mount named logs to make the file /var/log/big-corp-app.log available to the sidecar container.

Don't modify the existing container. Don't modify the path of the log file; both containers must access it at /var/log/big-corp-app.log

#--> Switch to the specified cluster
kubectl config use-context [NAME]
# The original Pod spec looks roughly like this
# kubectl get pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done  

Then proceed as follows:

# Dump the existing Pod, delete it, and recreate it with the sidecar
# kubectl get pod big-corp-app -o yaml > 15.yaml
# kubectl delete -f 15.yaml
# vim 15.yaml: add the sidecar container and configure the volume
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done      
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: count-log
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/big-corp-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
    
kubectl apply -f 15.yaml
kubectl get pod big-corp-app 
#--> Check the sidecar's logs
kubectl logs big-corp-app -c count-log

16 CPU

From the pods with label name=cpu-loader, find the pods running high CPU workloads and write the name of the pod consuming the most CPU to the file /opt/KUTR00401/KURT00401.txt (which already exists).

#--> Switch to the specified cluster
kubectl config use-context [NAME]
kubectl top pod -A -l name=cpu-loader --sort-by=cpu
#--> Replace [pod_name] with the name of the top pod from the output above
echo "[pod_name]" >> /opt/KUTR00401/KURT00401.txt

17 kubelet

A Kubernetes worker node named wk8s-node-0 is in state NotReady.
Investigate why this is the case, and perform the appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.

You can ssh to the failed node using:
ssh wk8s-node-0
You can assume elevated privileges on the node with the following command: sudo -i

#--> Switch to the specified cluster
kubectl config use-context [NAME]
# Confirm which node is NotReady
kubectl get node
# Log in to the faulty node
ssh wk8s-node-0
sudo -i
# Check whether the kubelet service is running
systemctl status kubelet
# If it is not running, start it
systemctl restart kubelet
# Enable kubelet on boot so the fix is permanent
systemctl enable kubelet
#--> Check
kubectl get node
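If kubelet does not come back up after the restart, its logs usually say why (a wrong config path, a bad flag, a missing certificate); assuming a systemd-based node, a quick look:

journalctl -u kubelet --no-pager | tail -n 30
systemctl cat kubelet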

Finally, if you found this helpful, feel free to leave a comment or send a tip via WeChat~~
