4-Kubernetes: Installing the Dashboard and Monitoring on CentOS 7 (Part 4)



Environment Preparation

  1. Prepare three virtual machines, each built by following "Kubernetes: Building the Base Environment on CentOS 7", parts (1), (2), and (3)

  2. Install Git

I. Configuration of the Three Virtual Machines

  1. Server configuration

| Server IP | Domain | Alias | Role | Login User | Password | CPU | Memory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 192.168.1.55 | master55.xincan.cn | master55 | master | root | root | 2 cores | 4 GB |
| 192.168.1.56 | slave56.xincan.cn | slave56 | slave | root | root | 4 cores | 8 GB |
| 192.168.1.57 | slave57.xincan.cn | slave57 | slave | root | root | 4 cores | 8 GB |
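For the domain names in the table to resolve on each node, the `/etc/hosts` file on every machine would need entries along these lines (a sketch reproducing the IPs above; skip this if your DNS already serves these names):

```
192.168.1.55  master55.xincan.cn  master55
192.168.1.56  slave56.xincan.cn   slave56
192.168.1.57  slave57.xincan.cn   slave57
```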
  2. Tool versions

II. Installing Git

  1. Install the packages required to build git
  2. Switch to /usr/local/src/
[root@master55 /]# yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel gcc perl-ExtUtils-MakeMaker
[root@master55 /]# cd /usr/local/src/

  3. Download git
    • version: git-2.23.0.tar.xz
    • extract it and switch into the extracted directory
[root@master55 src]# wget https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.23.0.tar.xz
[root@master55 src]# tar -xvf git-2.23.0.tar.xz && cd git-2.23.0/
[root@master55 src]# 
  4. Build and install
[root@master55 src]# make prefix=/usr/local/git all
[root@master55 src]# make prefix=/usr/local/git install
[root@master55 src]# echo 'export PATH=$PATH:/usr/local/git/bin' >> /etc/profile
[root@master55 src]# source /etc/profile
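One subtlety in the `echo` line: with double quotes the shell expands `$PATH` at the moment the line is written, freezing the current value into /etc/profile; single quotes write the literal string, so `$PATH` expands fresh at each login. A quick comparison (plain echo, nothing is written to /etc/profile here):

```shell
# double quotes: the shell substitutes $PATH before echo ever runs
echo "PATH=$PATH:/usr/local/git/bin"
# single quotes: the literal text survives, to be expanded when the profile is sourced
echo 'PATH=$PATH:/usr/local/git/bin'   # prints: PATH=$PATH:/usr/local/git/bin
```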

III. Deploying kube-prometheus

  1. Download the source: switch to the k8s directory created in part three
  2. Clone the kube-prometheus repository from GitHub (full command below)
  3. The manifests all live under kube-prometheus/manifests/; switch to that directory
[root@master55 k8s]# git clone https://github.com/coreos/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 8381, done.
remote: Total 8381 (delta 0), reused 0 (delta 0), pack-reused 8381
Receiving objects: 100% (8381/8381), 4.70 MiB | 421.00 KiB/s, done.
Resolving deltas: 100% (5082/5082), done.
[root@master55 k8s]# ls
calico  kube-prometheus


# switch to kube-prometheus/manifests/
[root@master55 k8s]# cd kube-prometheus/manifests/
[root@master55 manifests]# ls
alertmanager-alertmanager.yaml              node-exporter-clusterRoleBinding.yaml                       prometheus-clusterRole.yaml
alertmanager-secret.yaml                    node-exporter-clusterRole.yaml                              prometheus-operator-serviceMonitor.yaml
alertmanager-serviceAccount.yaml            node-exporter-daemonset.yaml                                prometheus-prometheus.yaml
alertmanager-serviceMonitor.yaml            node-exporter-serviceAccount.yaml                           prometheus-roleBindingConfig.yaml
alertmanager-service.yaml                   node-exporter-serviceMonitor.yaml                           prometheus-roleBindingSpecificNamespaces.yaml
grafana-dashboardDatasources.yaml           node-exporter-service.yaml                                  prometheus-roleConfig.yaml
grafana-dashboardDefinitions.yaml           prometheus-adapter-apiService.yaml                          prometheus-roleSpecificNamespaces.yaml
grafana-dashboardSources.yaml               prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml  prometheus-rules.yaml
grafana-deployment.yaml                     prometheus-adapter-clusterRoleBindingDelegator.yaml         prometheus-serviceAccount.yaml
grafana-serviceAccount.yaml                 prometheus-adapter-clusterRoleBinding.yaml                  prometheus-serviceMonitorApiserver.yaml
grafana-serviceMonitor.yaml                 prometheus-adapter-clusterRoleServerResources.yaml          prometheus-serviceMonitorCoreDNS.yaml
grafana-service.yaml                        prometheus-adapter-clusterRole.yaml                         prometheus-serviceMonitorKubeControllerManager.yaml
kube-state-metrics-clusterRoleBinding.yaml  prometheus-adapter-configMap.yaml                           prometheus-serviceMonitorKubelet.yaml
kube-state-metrics-clusterRole.yaml         prometheus-adapter-deployment.yaml                          prometheus-serviceMonitorKubeScheduler.yaml
kube-state-metrics-deployment.yaml          prometheus-adapter-roleBindingAuthReader.yaml               prometheus-serviceMonitor.yaml
kube-state-metrics-serviceAccount.yaml      prometheus-adapter-serviceAccount.yaml                      prometheus-service.yaml
kube-state-metrics-serviceMonitor.yaml      prometheus-adapter-service.yaml                             setup
kube-state-metrics-service.yaml             prometheus-clusterRoleBinding.yaml
[root@master55 manifests]#
  4. Upstream keeps every manifest in a single directory; here I copy them and sort them into categories. Work from the k8s directory
[root@master55 k8s]# mkdir prometheus
[root@master55 k8s]# cp kube-prometheus/manifests/* prometheus/
[root@master55 k8s]# cd prometheus/
[root@master55 prometheus]# mkdir -p operator node-exporter alertmanager grafana kube-state-metrics prometheus serviceMonitor adapter
  1. kube-prometheus/manifests/setup下的文件都复制到/prometheus/operator/
[root@master55 k8s]# cp -f kube-prometheus/manifests/setup/* /k8s/prometheus/operator/
  1. prometheus/operator/0namespace-namespace.yaml文件移动到/k8s/prometheus/
[root@master55 k8s]# mv prometheus/operator/0namespace-namespace.yaml prometheus/
  7. Switch to the prometheus directory and sort the files into place
[root@master55 k8s]# cd prometheus/
[root@master55 prometheus]# mv *-serviceMonitor* serviceMonitor/
[root@master55 prometheus]# mv grafana-* grafana/
[root@master55 prometheus]# mv kube-state-metrics-* kube-state-metrics/
[root@master55 prometheus]# mv alertmanager-* alertmanager/
[root@master55 prometheus]# mv node-exporter-* node-exporter/
[root@master55 prometheus]# mv prometheus-adapter* adapter/
[root@master55 prometheus]# mv prometheus-* prometheus/
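The order of these `mv` commands matters: the `*-serviceMonitor*` and `prometheus-adapter*` globs must run before the catch-all `prometheus-*`, or those files would be swept into prometheus/ first. A throwaway demonstration with three hypothetical file names:

```shell
# sandbox directory with dummy manifest files
tmp=$(mktemp -d) && cd "$tmp"
touch prometheus-service.yaml prometheus-adapter-service.yaml prometheus-serviceMonitor.yaml
mkdir serviceMonitor adapter prometheus
mv *-serviceMonitor* serviceMonitor/  # grabs prometheus-serviceMonitor.yaml before the catch-all can
mv prometheus-adapter* adapter/       # grabs the adapter file next
mv prometheus-* prometheus/           # only prometheus-service.yaml is left by now
ls serviceMonitor adapter prometheus
```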
  8. Note: newer releases changed the default node label, so the nodeSelector must be changed to beta.kubernetes.io/os, otherwise the install will hang. Update the selectors:
[root@master55 prometheus]# sed -ri '/linux/s#kubernetes.io#beta.&#' \
     alertmanager/alertmanager-alertmanager.yaml \
     prometheus/prometheus-prometheus.yaml \
     node-exporter/node-exporter-daemonset.yaml \
     kube-state-metrics/kube-state-metrics-deployment.yaml
[root@master55 prometheus]#
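The sed expression above prepends `beta.` to `kubernetes.io` on any line mentioning `linux`; the `&` in the replacement stands for the matched text. Applied to a sample nodeSelector line (indentation illustrative):

```shell
echo '        kubernetes.io/os: linux' | sed -r '/linux/s#kubernetes.io#beta.&#'
# prints:         beta.kubernetes.io/os: linux
```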
  9. Note: pull the prometheus images from the prom/ mirrors on Docker Hub instead of quay.io
[root@master55 prometheus]# sed -ri '/quay.io/s#quay.io/prometheus#prom#' \
  alertmanager/alertmanager-alertmanager.yaml \
  prometheus/prometheus-prometheus.yaml \
  node-exporter/node-exporter-daemonset.yaml
[root@master55 prometheus]#
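This substitution rewrites `quay.io/prometheus/<image>` references to the `prom/<image>` mirrors on Docker Hub. For example (the image tag here is illustrative):

```shell
echo 'image: quay.io/prometheus/alertmanager:v0.18.0' | sed -r '/quay.io/s#quay.io/prometheus#prom#'
# prints: image: prom/alertmanager:v0.18.0
```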
  10. Note: replace k8s.gcr.io with an accessible mirror in every manifest
[root@master55 prometheus]# find -type f -exec sed -ri 's#k8s.gcr.io#gcr.azk8s.cn/google_containers#' {} \; 
[root@master55 prometheus]#
  11. Create the namespace
[root@master55 prometheus]# kubectl apply -f .
namespace/monitoring created
[root@master55 prometheus]#
  12. Install the operator
[root@master55 prometheus]# kubectl apply -f operator/
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
[root@master55 prometheus]#
  13. Install the remaining components in order
[root@master55 prometheus]# kubectl apply -f adapter/
[root@master55 prometheus]# kubectl apply -f alertmanager/
[root@master55 prometheus]# kubectl apply -f node-exporter/
[root@master55 prometheus]# kubectl apply -f kube-state-metrics/
[root@master55 prometheus]# kubectl apply -f grafana/
[root@master55 prometheus]# kubectl apply -f prometheus/
[root@master55 prometheus]# kubectl apply -f serviceMonitor/
[root@master55 prometheus]#
  14. Check the overall status
[root@master55 /]# kubectl -n monitoring get all


# Pods
NAME                                       READY   STATUS    RESTARTS   AGE
pod/alertmanager-main-0                    2/2     Running   0          28m
pod/alertmanager-main-1                    2/2     Running   0          28m
pod/alertmanager-main-2                    2/2     Running   0          28m
pod/grafana-5c55845445-hzhbk               1/1     Running   0          38m
pod/kube-state-metrics-665c856fb9-7ggrg    3/3     Running   0          38m
pod/node-exporter-dqfd7                    2/2     Running   0          39m
pod/node-exporter-gf8gr                    2/2     Running   0          39m
pod/node-exporter-mcl79                    2/2     Running   0          39m
pod/prometheus-adapter-5cdcdf9c8d-665fj    1/1     Running   0          39m
pod/prometheus-operator-6f98f66b89-6spkk   2/2     Running   0          32m


# Services
NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-main       ClusterIP   10.111.117.158   <none>        9093/TCP                     39m
service/alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   28m
service/grafana                 ClusterIP   10.111.25.63     <none>        3000/TCP                     38m
service/kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            38m
service/node-exporter           ClusterIP   None             <none>        9100/TCP                     39m
service/prometheus-adapter      ClusterIP   10.109.99.195    <none>        443/TCP                      39m
service/prometheus-k8s          ClusterIP   10.110.243.126   <none>        9090/TCP                     38m
service/prometheus-operator     ClusterIP   None             <none>        8443/TCP                     32m


# DaemonSets
NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
daemonset.apps/node-exporter   3         3         3       3            3           beta.kubernetes.io/os=linux   39m


# Deployments
NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana               1/1     1            1           38m
deployment.apps/kube-state-metrics    1/1     1            1           38m
deployment.apps/prometheus-adapter    1/1     1            1           39m
deployment.apps/prometheus-operator   1/1     1            1           32m


# ReplicaSets
NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-5c55845445               1         1         1       38m
replicaset.apps/kube-state-metrics-665c856fb9    1         1         1       38m
replicaset.apps/prometheus-adapter-5cdcdf9c8d    1         1         1       39m
replicaset.apps/prometheus-operator-6f98f66b89   1         1         1       32m


# StatefulSets
NAME                                 READY   AGE
statefulset.apps/alertmanager-main   3/3     28m
[root@master55 /]#
  15. List all services in the monitoring namespace
[root@master55 prometheus]# kubectl -n monitoring get svc
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.111.117.158   <none>        9093/TCP                     115m
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   104m
grafana                 NodePort    10.111.25.63     <none>        3000:31533/TCP               114m
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            115m
node-exporter           ClusterIP   None             <none>        9100/TCP                     115m
prometheus-adapter      ClusterIP   10.109.99.195    <none>        443/TCP                      115m
prometheus-k8s          ClusterIP   10.110.243.126   <none>        9090/TCP                     114m
prometheus-operator     ClusterIP   None             <none>        8443/TCP                     108m
[root@master55 prometheus]#
  16. Expose Grafana on an external port
    • Change `type: ClusterIP` to `type: NodePort`; the NodePort assigned here is 31533. Locate the following section:
[root@master55 prometheus]# kubectl -n monitoring edit svc grafana

spec:
  clusterIP: 10.111.25.63
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31533
    port: 3000
    protocol: TCP
    targetPort: http
  selector:
    app: grafana
  sessionAffinity: None
  type: NodePort
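For a repeatable setup, the same change can be kept declaratively instead of editing the live object; a minimal sketch of the resulting Service spec, with field values taken from the output above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 31533
```

Applying a manifest like this would produce the same NodePort exposure; note that fields managed by kube-prometheus may be overwritten on its next sync.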

  17. Final result (selected screenshots)


IV. Installing kubernetes-dashboard

  1. Download the resources the dashboard needs
    • clone the repository (full command below)
    • the complete k8s directory then contains:
    • calico: network plugin manifests
    • kube-prometheus: the prometheus manifests
    • kubernetes-dashboard: the Kubernetes dashboard manifests
    • prometheus: the reorganized monitoring manifests
[root@xincan /]#cd /k8s/
[root@xincan k8s]#git clone https://github.com/xincan/kubernetes.git
[root@xincan k8s]#ls
calico  kube-prometheus  kubernetes-dashboard  prometheus
  2. Install the Dashboard
[root@master55 k8s]# cd kubernetes-dashboard/
[root@master55 kubernetes-dashboard]# ls
login-token  recommended.yaml
[root@master55 kubernetes-dashboard]#kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master55 kubernetes-dashboard]#
  3. Check the pods and services in the kubernetes-dashboard namespace
[root@master55 kubernetes-dashboard]# kubectl get pod,svc -n kubernetes-dashboard
NAME                                             READY   STATUS              RESTARTS   AGE
pod/dashboard-metrics-scraper-779f5454cb-hzgc4   1/1     ContainerCreating   0          30s
pod/kubernetes-dashboard-857bb4c778-gsf2q        1/1     ContainerCreating   0          30s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.105.54.169   <none>        8000/TCP   30s
service/kubernetes-dashboard        ClusterIP   10.110.40.170   <none>        443/TCP    31s
[root@master55 kubernetes-dashboard]#
  4. Add a nodePort entry to pin the dashboard's external port, and change `type` to `NodePort`; saving the edit applies the update automatically
    • alternatively, run kubectl patch svc -n kubernetes-dashboard kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
[root@master55 kubernetes-dashboard]#kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
spec:
  clusterIP: 10.110.40.170
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30000
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort


  5. After the change, the kubernetes-dashboard service's `type` is `NodePort` and the node port is exposed:
[root@master55 kubernetes-dashboard]# kubectl get svc -n kubernetes-dashboard kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.110.40.170   <none>        443:30001/TCP   14m
[root@master55 kubernetes-dashboard]#

V. Logging in with a Token

  1. kube-system命名空间下创建xincan-dashboard-admin用户
[root@master55 k8s]# kubectl create serviceaccount xincan-dashboard-admin -n kube-system
serviceaccount/xincan-dashboard-admin created
  2. Verify the ServiceAccount
[root@master55 k8s]# kubectl get serviceaccount xincan-dashboard-admin -n kube-system
NAME                     SECRETS   AGE
xincan-dashboard-admin   1         7s
[root@master55 k8s]#
  1. kube-system命名空间下,创建xincan-dashboard-admin权限
[root@master55 k8s]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:xincan-dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created
  4. Confirm the binding took effect: the service account's token secret should now exist
[root@master55 k8s]# kubectl get secret -n kube-system
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-fk6c9              kubernetes.io/service-account-token   3      91m
bootstrap-signer-token-kjw62                     kubernetes.io/service-account-token   3      91m
bootstrap-token-90s06v                           bootstrap.kubernetes.io/token         7      91m
calico-kube-controllers-token-6824q              kubernetes.io/service-account-token   3      85m
calico-node-token-22rkl                          kubernetes.io/service-account-token   3      85m
certificate-controller-token-4zhbc               kubernetes.io/service-account-token   3      91m
clusterrole-aggregation-controller-token-vzbqs   kubernetes.io/service-account-token   3      91m
coredns-token-xrqs4                              kubernetes.io/service-account-token   3      91m
xincan-dashboard-admin-token-jkhl2               kubernetes.io/service-account-token   3      3m35s
[root@master55 k8s]#
  5. Get the token from the xincan-dashboard-admin-token-jkhl2 secret
    • the token is shown below
    • copy it and paste it into the login screen
[root@master55 k8s]# kubectl describe secret -n kube-system xincan-dashboard-admin-token-jkhl2
Name:         xincan-dashboard-admin-token-jkhl2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: xincan-dashboard-admin
              kubernetes.io/service-account.uid: c257c2f8-57cf-41d7-b8e6-b833a8ef0790

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkFfZDQ5YTBkcTIwVG1xdF9rWFJxWDJfblFMd1lfdWQwdllVVjFxZTVtcTQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJ4aW5jYW4tZGFzaGJvYXJkLWFkbWluLXRva2VuLWpraGwyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InhpbmNhbi1kYXNoYm9hcmQtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMjU3YzJmOC01N2NmLTQxZDctYjhlNi1iODMzYThlZjA3OTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06eGluY2FuLWRhc2hib2FyZC1hZG1pbiJ9.rPDqRpuVrQmTXJ7FHKz8isG4IqYs7aN5QgZCLVRLK1Ul0677pmPfFTgZpVzf5R9f6_h7kFLGFej9wq5QJ3NZn71jv1qhn9rYgabyZ9KsDZsE6SDwQQuHcVDz7iQ9prFezyqdiyBIxqoFEFhZyZe8pn-Ua53a7-P4Dm2xs2xMgbvrLt6b_b8--H_plV-6xrLKrM5BhG15HDi5MA7MXBZJzxTyuNC8CtjQ6ShuQFv5I3Fwqgugu9tqxGBk9Xjy82JGrdnvoSNRNThCMzlVZmClzrsT6CZ4BUw4t0x4dYhvbdo6IS5nnW0u_EOsaDdi1gqBjpeMX_tNT1ChqN55TnnPPw
[root@master55 k8s]#


VI. Logging in with a KubeConfig

  1. Set a variable holding the decoded token
[root@master55 k8s]# DASH_TOKEN=$(kubectl get secret -n kube-system xincan-dashboard-admin-token-jkhl2 -o jsonpath={.data.token}|base64 -d)
  2. Create a kubeconfig cluster entry pointing at the API server
[root@master55 k8s]# kubectl config set-cluster kubernetes --server=https://kubernetes.docker.internal:6443 --kubeconfig=/k8s/xincan-dashbord-admin.conf
Cluster "kubernetes" set.
  3. Add the credentials using the variable set above
[root@master55 k8s]# kubectl config set-credentials xincan-dashboard-admin --token=$DASH_TOKEN --kubeconfig=/k8s/xincan-dashbord-admin.conf
User "xincan-dashboard-admin" set.
[root@master55 k8s]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=xincan-dashboard-admin --kubeconfig=/k8s/xincan-dashbord-admin.conf
Context "dashboard-admin@kubernetes" created.
[root@master55 k8s]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/k8s/xincan-dashbord-admin.conf
Switched to context "dashboard-admin@kubernetes".
[root@master55 k8s]# ll
total 710420
-rw-r--r-- 1 root root     21079 Jul 12 22:24 calico-3.13.1.yaml
-rwxr-xr-x 1 root root      2051 Jul 16 14:40 get-k8s-images.sh
-rw-r--r-- 1 root root 727422976 Jul 16 14:57 k8s-imagesV1.18.5.tar
-rw-r--r-- 1 root root       901 Jul 16 15:54 k8s-token
drwxr-xr-x 3 root root      4096 Jul 16 17:47 kube-prometheus
drwxr-xr-x 2 root root        30 Jul 16 17:47 kubernetes-dashboard
-rw------- 1 root root      1321 Jul 16 17:01 xincan-dashbord-admin.conf
[root@master55 k8s]#
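The token stored in the secret is base64-encoded, which is why step 1 pipes the jsonpath output through `base64 -d`. A quick round trip showing the decode reverses the encoding (the sample string is arbitrary):

```shell
# encode, then decode: the original string comes back unchanged
encoded=$(printf 'my-sample-token' | base64)
printf '%s' "$encoded" | base64 -d   # prints: my-sample-token
```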


# Copy the generated xincan-dashbord-admin.conf to the machine you log in from; upload this file when the dashboard asks for it


With that, the Kubernetes dashboard and monitoring are installed successfully.