Deploying the k8s core add-ons


This document is part of the production-system K8S migration series, number 6 in the set; it covers four components: flannel, coredns, traefik, and dashboard...

flannel

CNI network plugin. CNI (Container Network Interface) is what lets containers on different hosts reach each other. flannel is chosen here. Why not calico? This k8s cluster is small and its network is simple, so flannel's host-gw backend is more than enough; there is no need for calico and BGP in an environment like this.

flannel must be deployed on every node.

  • Prepare the software and directories
wget https://github.com/flannel-io/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz

mkdir /server/src/flannel-v0.12.0 -p

tar zxvf flannel-v0.12.0-linux-amd64.tar.gz -C /server/src/flannel-v0.12.0/

ln -s /server/src/flannel-v0.12.0 /server/k8s/flannel

mkdir -p /server/logs/k8s/flannel

cd /server/k8s/flannel
  • Environment file /server/k8s/flannel/subnet.env
FLANNEL_NETWORK=10.8.0.0/16
FLANNEL_SUBNET=10.8.10.1/24  # this node's subnet
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

Note: this subnet must match the bip configured in docker's daemon.json.

If the subnet is misconfigured, delete the stale entries under /coreos.com/network/subnets in etcd.
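
A sketch of that cleanup, using the same etcd v2 commands as the rest of this document and the 10.8.10.0/24 subnet from the example above (point the rm at whichever entry is wrong):

etcdctl ls /coreos.com/network/subnets
# e.g. /coreos.com/network/subnets/10.8.10.0-24
etcdctl rm /coreos.com/network/subnets/10.8.10.0-24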

  • Startup script /server/k8s/flannel/startup-flannel.sh
#!/bin/sh
# --public-ip is this node's host IP
./flanneld \
    --public-ip=172.27.20.11 \
    --etcd-endpoints=https://172.27.0.20:2379,https://172.27.0.21:2379,https://172.27.0.22:2379 \
    --etcd-cafile /server/k8s/certs/ca.pem \
    --etcd-certfile /server/k8s/certs/client.pem \
    --etcd-keyfile /server/k8s/certs/client-key.pem \
    --iface=eth0 \
    --subnet-file=./subnet.env \
    --healthz-port=2401
  • supervisor config /etc/supervisord.d/flannel.ini
[program:flanneld-10.1]
command=/server/k8s/flannel/startup-flannel.sh
numprocs=1
directory=/server/k8s/flannel
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/server/logs/k8s/flannel/flannel-run.log
stdout_logfile_maxbytes=200MB
stdout_logfile_backups=3
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
  • etcd operations
  1. The host-gw backend

On any one of the etcd servers:

etcdctl set /coreos.com/network/config '{"Network": "10.8.0.0/16", "Backend": {"Type": "host-gw"}}'

How host-gw works: say two nodes, 172.27.10.1 and 172.27.10.2, run containers in the subnets 10.8.1.0/24 and 10.8.2.0/24 respectively. For 10.8.1.1 to reach 10.8.2.1, traffic first follows a static route to 172.27.10.2, and the same in reverse; in other words, a static route per host directs traffic to the containers running on that host.

Note: the host-gw backend requires all nodes to be on the same layer-2 network; this static-route approach is the most efficient. If connectivity across layer-3 boundaries is needed, use the VxLAN backend.
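
A quick way to see what host-gw does on a node (an illustrative sketch using the node/subnet pairs from the paragraph above; real output depends on your environment):

# on node 172.27.10.1, flannel adds one static route per peer node
ip route show | grep 10.8.
# 10.8.2.0/24 via 172.27.10.2 dev eth0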

  2. The VxLAN backend
etcdctl set /coreos.com/network/config '{"Network": "10.8.0.0/16", "Backend": {"Type": "VxLAN"}}'

The VxLAN backend works roughly like this: a virtual device flannel.1 is created in network 1, and every packet that has to cross a layer-3 boundary is encapsulated (a header is added) by flannel.1, travels through the flannel tunnel to the flannel.1 device in target network 2, and is decapsulated there, achieving connectivity. Because of the encapsulate/decapsulate work this mode is not very efficient, which is the main criticism levelled at flannel.
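
If you use the VxLAN backend, the tunnel device flannel creates can be inspected (flannel.1 is flannel's default device name):

ip -d link show flannel.1
# the vxlan line shows the VNI, local address and UDP port of the tunnel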

  3. Direct routing
etcdctl set /coreos.com/network/config '{"Network": "10.8.0.0/16", "Backend": {"Type": "VxLAN", "DirectRouting": true}}'

Direct routing: traffic that must cross layer 3 goes over VxLAN while traffic within the same layer-2 segment uses host-gw, mixing the two backends; the DirectRouting option enables it. This mode is the better fit for complex network environments.
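
Under DirectRouting the routing table shows the mix (an illustrative sketch, assuming one peer on the same layer-2 segment and one behind a router):

ip route show | grep 10.8.
# 10.8.2.0/24 via 172.27.10.2 dev eth0              # same segment: host-gw style
# 10.8.3.0/24 via 10.8.3.0 dev flannel.1 onlink     # remote segment: vxlan tunnel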

  • Start flannel
supervisorctl start flanneld-10.1
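
Once supervisor reports RUNNING, flanneld's health endpoint (port 2401, set in the startup script above) should answer:

supervisorctl status flanneld-10.1
curl -s http://127.0.0.1:2401/healthz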
  • Optimize source-address SNAT

After the steps above, containers on different nodes can already reach each other; however, the access logs inside containers show the node IP as the client address rather than the container IP. The next step uses iptables so that inter-container traffic carries the container IP instead of the node IP.

systemctl start iptables
systemctl enable iptables

iptables -F
iptables -X 
iptables -Z 

iptables -t nat -I POSTROUTING -s 10.8.10.0/24 ! -d 10.8.0.0/16 ! -o docker0 -j MASQUERADE

iptables-save > /etc/sysconfig/iptables

What the iptables rule means: only traffic whose source is 10.8.10.0/24, whose destination is NOT 10.8.0.0/16, and which leaves through an interface other than docker0 gets SNAT'ed; traffic between containers inside the flannel network is no longer masqueraded.
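
To confirm the rule took effect (a sketch; the curl test assumes an nginx container running on another node):

iptables -t nat -S POSTROUTING | head -2   # the new 10.8.10.0/24 rule should be listed first
# from a container on this node, curl an nginx container on another node,
# then check that nginx's access log shows a 10.8.x.x client, not the node IP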

coredns

Service discovery is the process by which services locate one another. In a k8s cluster pod IPs change constantly, so the traditional ip:port / host:port approach cannot be used. To solve this, k8s abstracts the service resource: a label selector associates a group of pods, and the cluster network is abstracted behind a relatively fixed cluster IP, giving the service a stable access point. To automatically tie a service's name to its cluster IP, so that services are discovered by the cluster automatically, the coredns add-on is introduced.

Note: DNS inside k8s only maintains the service name -> ClusterIP mapping automatically; it has nothing to do with the infrastructure DNS service.

Note: from this add-on onward, components are delivered as containers.

  • Prepare the artifacts
  1. nginx

On the ops host (10.10), configure an nginx virtual host that serves the yaml files as raw text. It provides the HTTP entry point for the k8s resource manifests; all manifests are kept here for easy management.

mkdir -p /server/www/k8s-yaml
server {
    listen      80;
    server_name k8s-yaml.ylls.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /server/www/k8s-yaml;
    }
}
  2. DNS records

On the bind host, edit /server/named/ylls.com.zone

 $TTL 1D
@       IN SOA  ylls.com. email.com. (
                                        2      ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
                IN  NS   master
test  IN  A    172.17.0.2
harbor     IN  A    172.27.10.10  ; ops nginx, used for non-business functions
k8s-yaml   IN  A    172.27.10.10

Note: remember to bump the serial every time you edit the zone.
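
A quick sanity check after each edit (a sketch; substitute your bind server's address in the dig command):

named-checkzone ylls.com /server/named/ylls.com.zone
dig @<bind-server-ip> k8s-yaml.ylls.com +short   # expect 172.27.10.10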

  3. Prepare the docker image
docker pull coredns/coredns:1.6.5

docker tag coredns/coredns:1.6.5 harbor.ylls.com/base/coredns:1.6.5

docker push harbor.ylls.com/base/coredns:1.6.5
  4. Prepare the resource manifests
mkdir -p /server/www/k8s-yaml/coredns

cd /server/www/k8s-yaml/coredns

rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: harbor.ylls.com/base/coredns:1.6.5
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 160Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 192.168.254.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
  • Deploy coredns
kubectl apply -f http://k8s-yaml.ylls.com/coredns/rbac.yaml   	
     	
kubectl apply -f http://k8s-yaml.ylls.com/coredns/configmap.yaml
  
kubectl apply -f http://k8s-yaml.ylls.com/coredns/deployment.yaml
   
kubectl apply -f http://k8s-yaml.ylls.com/coredns/service.yaml
  • Verify coredns
kubectl create ns kube-test

kubectl create deploy nginx-coredns --image=harbor.ylls.com/base/nginx:v1.9.0  -n  kube-test    	
     	
kubectl expose deploy nginx-coredns --port=80 -n kube-test

dig -t A nginx-coredns.kube-test.svc.cluster.local. @192.168.254.10 +short

Note: nginx-coredns.kube-test.svc.cluster.local. in the command above is the full DNS name of the nginx-coredns service; it cannot be resolved from outside the cluster.
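
Inside the cluster, pods can use short names because kubelet injects the cluster DNS server and search domains into each pod. A sketch (pick any pod name from the first command):

kubectl -n kube-test get pods
kubectl -n kube-test exec <nginx-coredns-pod> -- cat /etc/resolv.conf
# nameserver 192.168.254.10
# search kube-test.svc.cluster.local svc.cluster.local cluster.local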

traefik

Service exposure. A pod has a fixed IP for its lifetime, but that IP usually belongs to a docker-managed virtual layer-2 network and cannot be reached from outside. Pod lifetimes are also not fixed, nor is the set of pods backing a given service. To make a service reachable, k8s introduces the service concept: a service gets a fixed clusterIP and binds one or a group of pods through a label selector. But the clusterIP is itself a virtual IP managed and allocated by k8s and is unreachable from outside the cluster; to solve this, k8s introduces the concept of service exposure.

Common ways to expose a service in k8s:

  • hostPort: a way of defining the pod network directly; it maps a container port to a port on the node the pod is scheduled to, so users can reach the pod via the host's IP plus that port.

  • nodePort: maps a pod's ip:port outside the cluster via iptables, with an external LB such as nginx doing the load balancing; today it is mostly used at layer 4. nodePort has a default port range of 30000-32767, defined with --service-node-port-range in the API server's configuration. This exposure method doesn't let you pick the well-known port your application wants, though you can put a reverse proxy in front of the cluster as the traffic entry point.

Note: a service using nodePort cannot use kube-proxy's ipvs mode, only iptables; kube-proxy forwards traffic to each pod of the service in round-robin fashion.
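
As a quick nodePort illustration, the test deployment from the coredns verification above can be exposed a second time as a NodePort service (a sketch; the name nginx-np is arbitrary):

kubectl expose deploy nginx-coredns --name=nginx-np --port=80 --type=NodePort -n kube-test
kubectl get svc nginx-np -n kube-test   # PORT(S) shows 80:3xxxx/TCP, reachable on any node IP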

  • LoadBalancer: can only be defined on a service. This is a public cloud provider's load balancer and is not considered here.

  • Ingress: a resource type introduced after kubernetes 1.1. An Ingress controller must be deployed before Ingress resources can take effect; the controller ships as a plugin. An Ingress controller is a Docker container running on Kubernetes; its image bundles a load balancer such as nginx or HAProxy plus a controller daemon. The daemon receives the desired Ingress configuration from Kubernetes, generates an nginx or HAProxy config file, and restarts the load-balancer process to apply the changes. In other words, an Ingress controller is a load balancer managed by Kubernetes.

Deploying the Ingress controller: traefik

  • Preparation

On the ops host 10.10:

  1. docker image
docker pull traefik:v1.7.2-alpine
docker tag traefik:v1.7.2-alpine harbor.ylls.com/base/traefik:1.7.2
docker push harbor.ylls.com/base/traefik:1.7.2
  2. Prepare the directory
cd /server/www/k8s-yaml
mkdir traefik
cd traefik
  3. rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
  4. daemonset.yaml

A DaemonSet means one pod per node; it is used here only as an example, there is no real need to run that many.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress
      name: traefik-ingress
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress
        name: traefik-ingress
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.ylls.com/base/traefik:1.7.2
        name: traefik-ingress
        ports:
        - name: controller
          containerPort: 80
          hostPort: 81
        - name: admin-web
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://172.27.10.19:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus
  5. service.yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress
  ports:
    - protocol: TCP
      port: 80
      name: controller
    - protocol: TCP
      port: 8080
      name: admin-web

  6. ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress-service
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.ylls.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080
  • Deploy traefik

On one of the node machines:

kubectl apply -f http://k8s-yaml.ylls.com/traefik/rbac.yaml

kubectl apply -f http://k8s-yaml.ylls.com/traefik/daemonset.yaml

kubectl apply -f http://k8s-yaml.ylls.com/traefik/service.yaml

kubectl apply -f http://k8s-yaml.ylls.com/traefik/ingress.yaml
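
A quick check that the controller is running (hostPort 81 comes from the daemonset above):

kubectl -n kube-system get pods -l k8s-app=traefik-ingress -o wide
curl -sI http://127.0.0.1:81/ | head -1   # on any node; a 404 for an unknown Host is normal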
  • DNS records

On the ops host:

  $TTL 1D
@       IN SOA  ylls.com. email.com. (
                                        3     ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
                IN  NS   master
test  IN  A    172.17.0.2
harbor     IN  A    172.27.10.10  ; ops nginx, used for non-business functions
k8s-yaml   IN  A    172.27.10.10
traefik    IN  A    172.27.10.1   ; VIP of the two business-traffic nginx boxes
  • nginx configuration

On the main business-scheduling nginx boxes, 172.27.10.1 and 172.27.10.2:

upstream traefik {
    server 172.27.10.5:81 max_fails=3 fail_timeout=10s;
    server 172.27.10.6:81 max_fails=3 fail_timeout=10s;
    # remaining nodes omitted
}

server {
    server_name *.ylls.com;

    location / {
        proxy_pass http://traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}

Note: the server_name in the nginx config above is *.ylls.com, which means all traffic for the ylls.com business domain is forwarded to the upstream traefik; for *.ylls.com this nginx only does layer-7 load balancing. But an ingress is only a simplified nginx and cannot express complex operations such as rewrite, so such customisation still has to be done on nginx. Since nginx matches a specific server_name before a wildcard, writing a config for the specific domain solves the problem; a concrete example follows in the dashboard section below.
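
An end-to-end test of the nginx -> traefik chain (assuming traefik.ylls.com resolves to the VIP as configured above):

curl -sI http://traefik.ylls.com/ | head -1   # the traefik web UI should answer (200, or a redirect to /dashboard/)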

Dashboard

  • Preparation
  1. docker image
docker pull k8scn/kubernetes-dashboard-amd64:v1.10.1

docker tag k8scn/kubernetes-dashboard-amd64:v1.10.1 harbor.ylls.com/base/k8s-dashboard:1.10.1

docker push harbor.ylls.com/base/k8s-dashboard:1.10.1
  2. Directories
mkdir /server/www/k8s-yaml/k8s-dashboard/

cd /server/www/k8s-yaml/k8s-dashboard/
  3. rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
  4. dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image:  harbor.ylls.com/base/k8s-dashboard:1.10.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        env: 
          - name: ACCEPT_LANGUAGE 
            value: english
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
  5. svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
  6. ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.ylls.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
  • Deploy the Dashboard
kubectl apply -f http://k8s-yaml.ylls.com/k8s-dashboard/rbac.yaml
kubectl apply -f http://k8s-yaml.ylls.com/k8s-dashboard/dp.yaml
kubectl apply -f http://k8s-yaml.ylls.com/k8s-dashboard/svc.yaml
kubectl apply -f http://k8s-yaml.ylls.com/k8s-dashboard/ingress.yaml
  • DNS records
 $TTL 1D
@       IN SOA  ylls.com. email.com. (
                                        4     ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
                IN  NS   master
test  IN  A    172.17.0.2
harbor     IN  A    172.27.10.10  ; ops nginx, used for non-business functions
k8s-yaml   IN  A    172.27.10.10
traefik    IN  A    172.27.10.1   ; VIP of the two business-traffic nginx boxes
dashboard  IN  A    172.27.10.1
  • Dashboard certificate

Certificate request file dashboard-csr.json

{
    "CN": "dashboard",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "ylls",
            "OU": "ops"
        }
    ]
}

Generate the certificate:

./cfssl gencert -ca=ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=server dashboard-csr.json | cfssl-json -bare dashboard
  • nginx configuration

Copy the certificates:

 scp ca.pem dashboard.pem dashboard-key.pem root@172.27.10.1:/server/nginx/conf/certs/

 scp ca.pem dashboard.pem dashboard-key.pem root@172.27.10.2:/server/nginx/conf/certs/

Config file /server/nginx/conf/conf.d/dashboard.com.conf

server {
    listen 80;
    server_name dashboard.ylls.com;
    rewrite ^(.*)$ https://${server_name}$1 permanent;
}

server {
    listen 443 ssl;
    server_name dashboard.ylls.com;
    ssl_certificate     /server/nginx/conf/certs/dashboard.pem;
    ssl_certificate_key /server/nginx/conf/certs/dashboard-key.pem;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}

Because rewrite and SSL are involved here and the ingress controller cannot handle them, a dedicated server block for the specific domain dashboard.ylls.com was written: nginx itself handles the rewrite and SSL termination, then forwards the traffic to traefik.

  • heapster all-in-one.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
subjects:
  - kind: ServiceAccount
    name: heapster
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: harbor.ylls.com/public/heapster:1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086

---

apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
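
The heapster manifest is applied like the others (a sketch; the URL assumes all-in-one.yaml was saved under the k8s-dashboard directory created above):

kubectl apply -f http://k8s-yaml.ylls.com/k8s-dashboard/all-in-one.yaml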
  • Logging in to the Dashboard with a token

List the secrets:

kubectl get secret -n kube-system

Inspect the token:

kubectl describe secret kubernetes-dashboard-admin-token-bwr3k2  -n kube-system

kubernetes-dashboard-admin-token-* carries cluster administrator privileges.

Copy the token into the dashboard login page to sign in.

Note: because the serviceaccount in the dashboard's rbac.yaml is bound to the cluster-admin role and we log in with the kubernetes-dashboard-admin-token token, the dashboard session has the highest privileges in the cluster.

For separation of privileges, create new roles bound to the appropriate permissions, e.g. cluster-wide read-only or read-only within a single namespace, and hand out those tokens instead.

At this point the k8s environment is fully deployed. What follows is a brief note on RBAC.

  • Logging in with a kubeconfig
DASH_TOKEN=$(kubectl get secret -n kube-system kubernetes-dashboard-admin-token-bwr3k2 -o jsonpath={.data.token}|base64 -d)

kubectl config set-cluster kubernetes --server=https://172.27.10.19:7443 --kubeconfig=/root/dashboard-admin.conf

kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN --kubeconfig=/root/dashboard-admin.conf

kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf

kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf

Then save the generated dashboard-admin.conf locally and use it to log in.

RBAC

Since version 1.6, k8s defaults to RBAC (role-based access control); earlier authorization mechanisms include ABAC (attribute-based access control) and webhook.

k8s has two kinds of accounts: user accounts (UserAccount, a.k.a. ua or user) and service accounts (ServiceAccount, sa). Every pod running in k8s must have a serviceaccount; if none is specified, k8s assigns the "default" one.

Roles come in two kinds: Role, an ordinary role that applies only within one specific namespace, and ClusterRole, a cluster role that applies across the whole cluster.

Binding a role is likewise done in two ways: RoleBinding and ClusterRoleBinding.

Roles are granted permissions (verbs).

Common verbs: get, list, watch, create, update, delete.

Accounts are bound to roles; roles are granted permissions.

The usual flow for k8s authorization: first create the account, second create the role, third grant the role its permissions, and finally bind the account to the role.
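
A minimal sketch of that workflow using kubectl's imperative commands, creating a hypothetical read-only account for one namespace (all names are illustrative):

# 1. create the account
kubectl create serviceaccount ns-reader -n kube-test
# 2 & 3. create the role and grant it read-only verbs
kubectl create role reader --verb=get,list,watch --resource=pods,services -n kube-test
# 4. bind the account to the role
kubectl create rolebinding ns-reader-binding --role=reader --serviceaccount=kube-test:ns-reader -n kube-test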

A k8s cluster ships with many predefined roles. For example, the dashboard rbac.yaml installed earlier defines a ServiceAccount named kubernetes-dashboard-admin and binds it, via a ClusterRoleBinding, to the built-in cluster role cluster-admin.

cluster-admin holds the highest privileges in the k8s cluster.

List the cluster roles:

kubectl get clusterrole [cluster-role-name] [-o yaml]