07 k8s Cluster UI and Host Resource Monitoring


Verification is done on the cluster built in "Deploying a single-Master-node k8s 1.21 cluster with kubeadm".

What the Kubernetes dashboard does

  • The dashboard gives a visual overview of the resource objects running in the Kubernetes cluster
  • The dashboard can also manage resource objects directly (create, delete, restart, and so on)

Deploying the Kubernetes dashboard

# Download the Kubernetes dashboard manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
# Edit the manifest
vim recommended.yaml
...
The content above this point stays unchanged.

To make the dashboard easy to reach from the cluster hosts, the Service below needs type NodePort and a nodePort added (around line 30 of recommended.yaml):
---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
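
If you prefer not to edit recommended.yaml by hand, the same change can be made after the stock manifest has been applied. A sketch using kubectl patch (assumes the Service name and namespace from recommended.yaml):

# Switch the dashboard Service to NodePort and pin the node port to 30000
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30000}]}}'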

Do not comment out this certs Secret; only on earlier dashboard versions did it need to be commented out.
---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---
...

The content in between stays unchanged.
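
By default the dashboard starts with --auto-generate-certificates and serves a self-signed certificate, so the kubernetes-dashboard-certs Secret above stays empty. If the dashboard should serve your own certificate instead, a sketch (assumes you already have dashboard.crt/dashboard.key and add the matching --tls-cert-file/--tls-key-file args to the dashboard container):

# Replace the empty Secret with one that carries the real cert and key
kubectl -n kubernetes-dashboard delete secret kubernetes-dashboard-certs
kubectl -n kubernetes-dashboard create secret generic kubernetes-dashboard-certs \
  --from-file=dashboard.crt --from-file=dashboard.key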

The identity the dashboard user is bound to also has to be changed, otherwise no resources are displayed after logging in (around line 155):
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # change the original kubernetes-dashboard to cluster-admin here, otherwise the UI reports permission errors after login
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

...

The rest of the file stays unchanged.
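
Binding cluster-admin to the built-in kubernetes-dashboard ServiceAccount is the quickest way to get a fully working UI, but it also hands that account full cluster rights. A slightly cleaner variant (a sketch; the admin-user name is an arbitrary choice, not part of recommended.yaml) is to leave the roleRef as it was and log in with a dedicated ServiceAccount instead:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

The login token would then come from admin-user's token Secret rather than the kubernetes-dashboard one used below.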

# Deploy
kubectl apply -f recommended.yaml

# Wait for the pods to reach Running
watch kubectl get pods -n kubernetes-dashboard
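
Once the pods are Running, it is also worth confirming that the Service picked up the NodePort change made earlier:

# Should show kubernetes-dashboard as type NodePort mapping 443:30000/TCP
kubectl get svc -n kubernetes-dashboard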

kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-sf657                kubernetes.io/service-account-token   3      4m59s
kubernetes-dashboard-certs         Opaque                                0      5m
kubernetes-dashboard-csrf          Opaque                                1      5m
kubernetes-dashboard-key-holder    Opaque                                2      5m
kubernetes-dashboard-token-m9dm8   kubernetes.io/service-account-token   3      5m

# Get the token used to log in
kubectl describe secret kubernetes-dashboard-token-m9dm8 -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-m9dm8
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 93250c37-ec10-4639-aac8-68c10def1c75

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkZtRUlCVXZEM3gxbmFlbFFRcG9zU0lrSXJxMlBEeFJEbkNDR2NMNzRuWGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1tOWRtOCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjkzMjUwYzM3LWVjMTAtNDYzOS1hYWM4LTY4YzEwZGVmMWM3NSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.3vLAmnx0MBwa0tvgps4kEhCWsEMbNhltGqsvw1JGuHbdJw1Qu3sx1Yc_WDkeDqDAV1gkyJH5ELlLiWHWQtDpuVFeAWsqbAhWlWlFZ9ITzMdR_mxiIva0et_XOFkwRtbwP_AnalUlbkISh-vdHDdc9zsrjqamc-PLCZYxMQrEqjifnCCJnyauZ-6QUM2J2PAXXlY1O0jjTa8PYtlu9LBDSln1JG0f6h5FPOtPAuf7agEj1vPT1q5Mkh2MMUI42nQZtXTsjpI9TCanUPcdDTH403n1FFtx77dtYjJBaiS4glmlZMi08rnY7gaQnPd6_YpDHXVuKCscr-i2EKMjAKFb6g
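
The token can also be extracted non-interactively. A sketch that works on clusters of this vintage, where a token Secret is still auto-created for every ServiceAccount (from v1.24 on it no longer is, and kubectl create token would be used instead):

# Find the token Secret attached to the kubernetes-dashboard ServiceAccount and decode it
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d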

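With the NodePort Service in place, the dashboard is reachable at https://<node-ip>:30000 from a browser (any node's address works; the self-signed certificate triggers a warning). Choose the Token login method and paste the token obtained above. A quick reachability check from the command line, with <node-ip> standing in for one of the cluster nodes:

# -k skips verification of the dashboard's self-signed certificate
curl -k https://<node-ip>:30000/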

Host resource monitoring with metrics-server

# Download the metrics-server manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
# Edit the metrics-server manifest
vim components.yaml
...
    spec:                                       # around line 132
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls               # add this line so metrics-server accepts the kubelets' self-signed serving certificates
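
If you would rather not edit components.yaml, the same flag can be added after the stock manifest has been deployed. A sketch using a JSON patch (assumes the Deployment name and namespace from the upstream manifest):

# Append --kubelet-insecure-tls to the metrics-server container args
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'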

# Before metrics-server is deployed, kubectl top fails because the Metrics API is not available yet
kubectl top nodes
W1025 16:44:20.589369   43262 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
error: Metrics API not available

kubectl top pods
W1025 16:44:54.364681   43953 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
error: Metrics API not available

kubectl apply -f components.yaml

# The pod may fail to start because the image cannot be pulled (network restrictions); if so, pull the image manually as shown below
kubectl get pods -n kube-system | grep metrics
metrics-server-8bb87844c-jjsth     0/1     ImagePullBackOff   0          18m

kubectl describe pod metrics-server-8bb87844c-jjsth -n kube-system
...
Node:                 worker01/192.168.91.171
...
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  19m                   default-scheduler  Successfully assigned kube-system/metrics-server-8bb87844c-jjsth to worker01
  Warning  Failed     19m                   kubelet            Failed to pull image "k8s.gcr.io/metrics-server/metrics-server:v0.6.1": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": dial tcp 64.233.188.82:443: i/o timeout
...

# The events above show that pulling "k8s.gcr.io/metrics-server/metrics-server:v0.6.1" failed on the worker01 node
# On worker01: pull the image from a reachable registry mirror and re-tag it
docker pull registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
docker tag registry.aliyuncs.com/google_containers/metrics-server:v0.6.1 k8s.gcr.io/metrics-server/metrics-server:v0.6.1
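
With the image now present on worker01, the kubelet's next pull retry succeeds on its own (as the output below shows, the original pod eventually starts). To skip the ImagePullBackOff delay, the pod could also be deleted so the Deployment recreates it immediately; if the replacement pod lands on a different node, the pull/tag step has to be repeated there:

# Optional: force an immediate retry (the pod name is the one from the describe output above)
kubectl -n kube-system delete pod metrics-server-8bb87844c-jjsth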

# Back on master01: confirm the pod is now Running
kubectl get pods -o wide -n kube-system | grep metrics
metrics-server-8bb87844c-jjsth     1/1     Running   0          24m   10.244.5.1       worker01   <none>           <none>

kubectl top nodes
W1026 10:30:09.865550   56825 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   126m         3%     1738Mi          45%
worker01   45m          1%     829Mi           45%
worker02   40m          1%     744Mi           40%

kubectl top pods -A
W1026 10:31:40.178114   58724 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAMESPACE              NAME                                        CPU(cores)   MEMORY(bytes)
calico-apiserver       calico-apiserver-5878cc867d-llwfd           3m           42Mi
calico-apiserver       calico-apiserver-5878cc867d-xshbk           3m           45Mi
calico-system          calico-kube-controllers-988c95d46-bjw9p     1m           30Mi
calico-system          calico-node-2229b                           16m          216Mi
calico-system          calico-node-2fw9l                           15m          231Mi
calico-system          calico-node-wm7wq                           13m          223Mi
calico-system          calico-typha-8585796b6c-ggx9z               2m           29Mi
calico-system          calico-typha-8585796b6c-gplz2               1m           38Mi
kube-system            coredns-558bd4d5db-4s7v8                    2m           17Mi
kube-system            coredns-558bd4d5db-98thk                    2m           19Mi
kube-system            etcd-master01                               11m          59Mi
kube-system            kube-apiserver-master01                     32m          455Mi
kube-system            kube-controller-manager-master01            8m           63Mi
kube-system            kube-proxy-6pcrc                            1m           31Mi
kube-system            kube-proxy-c95ck                            1m           30Mi
kube-system            kube-proxy-qhw2v                            1m           26Mi
kube-system            kube-scheduler-master01                     2m           25Mi
kube-system            metrics-server-8bb87844c-jjsth              3m           19Mi
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-zxsqb   1m           8Mi
kubernetes-dashboard   kubernetes-dashboard-79b5779bf4-sp98f       1m           44Mi
tigera-operator        tigera-operator-9f6fb5887-8rrh7             2m           48Mi
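
kubectl top is just a client of the Metrics API that metrics-server registers. The API can also be queried directly, which helps when debugging a "Metrics API not available" error:

# Confirm the metrics.k8s.io APIService is registered and Available
kubectl get apiservices v1beta1.metrics.k8s.io
# Read node metrics straight from the API
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"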
