[K8S] Study Notes


1. Installing k8s with Rancher

1.1 Installation

yum install -y yum-utils
yum-config-manager  --add-repo  https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.09.1 docker-ce-cli-18.09.1 containerd.io
mkdir -p /etc/docker ; cd /etc/docker
cat > daemon.json << EOF
{
  "registry-mirrors": [
    "https://hub-mirror.c.163.com"
  ]
}
EOF
systemctl daemon-reload
systemctl restart docker
mkdir -p /opt/rancher
docker run -d --privileged --restart=unless-stopped -p 8080:80 -p 8443:443 -v /opt/rancher:/var/lib/rancher/ rancher/rancher:stable

1.2 Access

  • The first startup is fairly slow
https://172.17.38.31:8443/

1.3 Install docker-compose

curl -L https://github.com/docker/compose/releases/download/1.26.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

1.4 Install a registry

mkdir -p /opt/registry ;cd /opt/registry
mkdir -p /opt/registry/data
cat > config.yml << EOF
version: 0.1
log:
 fields:
  service: registry
storage:
 delete:
  enabled: true
 cache:
  blobdescriptor: inmemory
 filesystem:
  rootdirectory: /var/lib/registry
http:
 addr: :5000
 headers:
  X-Content-Type-Options: [nosniff]
health:
 storagedriver:
  enabled: true
  interval: 10s
  threshold: 3
EOF
 
docker run  -d --name registry \
 -v /opt/registry/config.yml:/etc/docker/registry/config.yml \
 -v /opt/registry/data:/var/lib/registry \
-p 5000:5000   registry 

2. Installing k8s with Kubeadm

2.1 Component overview

Kubernetes is made up of the following core components:

  • etcd stores the state of the entire cluster;
  • apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration and discovery;
  • controller manager maintains cluster state, e.g. failure detection, automatic scaling, and rolling updates;
  • scheduler handles resource scheduling and places Pods onto nodes according to the configured scheduling policies;
  • kubelet maintains the container lifecycle and also manages Volumes (CVI) and networking (CNI);
  • container runtime is responsible for image management and for actually running Pods and containers (CRI);
  • kube-proxy provides in-cluster service discovery and load balancing for Services;
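
For reference, on a kubeadm cluster most of these components run as static pods in the kube-system namespace; a quick way to look at them (pod names vary per cluster):

kubectl get pods -n kube-system -o wide

# coarse component health (deprecated since 1.19, but still works on 1.20)
kubectl get componentstatuses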

2.2 Installing with kubeadm

  • yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache
  • Install the binaries (on all nodes)
 yum -y install kubelet-1.20.2  kubeadm-1.20.2  kubectl-1.20.2
 rpm -qa kubelet kubectl kubeadm  // verify the packages are installed

2.2.1 Install k8s

  • kubectl auto-completion; without it your fingers will cramp from all the typing

    yum install -y bash-completion
    echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
    echo 'source <(kubectl completion bash)' >> ~/.bashrc
    source /usr/share/bash-completion/bash_completion
    kubectl completion bash >/etc/bash_completion.d/kubectl
    source ~/.bashrc
    
  • On all nodes

    // enable the services at boot
    systemctl enable docker.service
    systemctl enable kubelet
    systemctl restart kubelet
    
  • Fix the cgroup driver so it matches the kubelet's, i.e. systemd. The biggest difference between systemd and cgroupfs is that cgroupfs lets a pid be written directly into the corresponding cgroup file, whereas systemd does not allow cgroup files to be modified directly.

    vi /etc/docker/daemon.json
    {
        "registry-mirrors": [
            "https://hub-mirror.c.163.com"
        ],
        "exec-opts": ["native.cgroupdriver=systemd"]
    }
    systemctl daemon-reload
    systemctl restart docker
    
    docker info |grep Cgroup  // should show systemd
    
  • Disable swap

    swapoff -a
    
    vi /etc/fstab   // comment out the swap entry
    #/dev/mapper/centos-swap swap                    swap    defaults        0 0
    
  • Network configuration

    echo 1 > /proc/sys/net/ipv4/ip_forward
    modprobe br_netfilter
    echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
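
  • To make these settings survive a reboot, they can also be persisted via sysctl config (a sketch; the file name /etc/sysctl.d/k8s.conf is my own choice)

    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.ipv4.ip_forward = 1\n' > /etc/sysctl.d/k8s.conf
    sysctl --system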
    
    
  • Optional: clean up and reset some of the network configuration

    iptables -P INPUT ACCEPT  # accept all traffic first
    # flush the 4 tables
    iptables -F -t nat
    iptables -F -t filter
    iptables -F -t mangle
    iptables -F -t raw
    # bring down cni0
    ifconfig cni0 down 
    ifconfig flannel.1 down
    # delete the network devices
    ip link delete cni0
    ip link delete flannel.1
    
  • Initialize the cluster

    # pre-pull the control-plane images
    kubeadm config images pull
    
    # initialize the cluster
    kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers  --kubernetes-version=1.20.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
    
    # set up the kubeconfig
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
     
    # print the command for joining the cluster
    kubeadm token create --print-join-command
    
  • Install flannel and join the worker nodes

    Make sure there are no other CNI config files under /etc/cni; delete any existing ones first.

    # download the flannel yaml
    cd /etc/kubernetes/manifests/
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
        
    # set the kernel parameter (on all nodes)
    sysctl net.bridge.bridge-nf-call-iptables=1
    
    # adjust the cluster CIDR settings
    vim kube-controller-manager.yaml
    - kube-controller-manager
    ...
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
    
    # restart kubelet
    systemctl restart kubelet
    
    # install flannel
    kubectl apply -f kube-flannel.yml
    
    # remove the master taint so pods can be scheduled onto the control-plane node
    kubectl taint nodes --all node-role.kubernetes.io/master-
    
    # run on each worker node
    kubeadm join 172.17.38.31:6443 --token bw7b6m.n8vovrudr9wezsp4     --discovery-token-ca-cert-hash sha256:83cee948268d5909f77c7478abfc647623d229a09be36003ab0496a44c0172b5
    
    # check whether the nodes are Ready
    kubectl get nodes
    
  • Fix the component status

    # check the component status
    kubectl get cs 
    NAME                 STATUS      MESSAGE                                                                                     ERROR
    controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
    scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
    etcd-0               Healthy     {"health":"true"}
    
    • Remove (or comment out) --port=0 in the kube-controller-manager and kube-scheduler manifests under /etc/kubernetes/manifests

2.2.2 Install the Dashboard

  • Download the dashboard manifest

    mkdir -p /opt/kubeadm/dashboard
    cd /opt/kubeadm/dashboard
    wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
    
  • svc.yml

    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30043
      selector:
        k8s-app: kubernetes-dashboard
      type: NodePort
    
  • account.yml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
      
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kubernetes-dashboard
    
  • Apply the YAML files

    cd /opt/kubeadm/dashboard
    kubectl apply -f .
    
  • Access the Dashboard

    • Open https://172.17.38.31:30043 in a browser

    • Get the admin-user token, used to log in to the Dashboard

      kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
      

2.2.3 Drain and remove a node

kubectl drain shadowyd-k8s-01  --delete-local-data --force --ignore-daemonsets
kubectl delete nodes shadowyd-k8s-01
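
If the node was only drained for maintenance (not deleted), it can be put back into service with uncordon; a node that was deleted has to be reset and re-joined. A sketch:

# resume scheduling on a drained (but not deleted) node
kubectl uncordon shadowyd-k8s-01

# a deleted node must be cleaned up on the node itself and then re-joined
kubeadm reset
kubeadm token create --print-join-command   # run on the control plane, then run the printed join command on the node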

3. K8s Concepts and Operations

3.1 Account

3.1.1 UserAccount

  • RBAC rules bind roles to users, and permissions are bound to roles;

  • First create the certificate files needed for the user account;

    mkdir -p /opt/kubeadm/user_account/shadow_yd
    cd !$
    
    # generate the private key
    openssl genrsa -out client.key 2048
    
    # generate the signing request (CSR)
    openssl req -new -key client.key -out client.csr -subj "/CN=shadow_yd"
    
    # sign the CSR with the cluster CA, valid for 10 years
    openssl x509 -req -in client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out client.crt -days 3650
    
    # inspect the certificate
    openssl x509 -in client.crt -text
    
    # show only the certificate's subject CN
    openssl x509 -noout -subject -in client.crt
    
    # print the start and expiry dates
    openssl x509 -noout -dates  -in apiserver.crt
    
    
  • Add the user to the current cluster's config file; to add the user to a different kubeconfig, pass --kubeconfig=;

    • --embed-certs=true embeds the certificates inside the file; by default only the crt and key paths are recorded
    # add the user to the default config ~/.kube/config
    kubectl config  set-credentials shadow_yd  --client-certificate=/opt/kubeadm/user_account/shadow_yd/client.crt --client-key=/opt/kubeadm/user_account/shadow_yd/client.key --embed-certs=true
    
    • Set a context, i.e. bind the user to a cluster; a namespace can also be pinned with --namespace=
    kubectl config set-context  shadow_yd@kubernetes --cluster=kubernetes --user=shadow_yd
    
    • Switch context
    kubectl config view
    # the user has no permissions bound yet, so it cannot operate on the cluster;
    kubectl config use-context shadow_yd@kubernetes
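    
    • A quick sanity check of the new context (a sketch; on a kubeadm cluster the admin context is normally kubernetes-admin@kubernetes)
    kubectl get pods
    # expect a Forbidden error here, since shadow_yd has no RBAC bindings yet
    
    # switch back to the admin context before applying the RBAC objects below
    kubectl config use-context kubernetes-admin@kubernetes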
    

3.1.2 RBAC

  • There are two kinds of roles: Role and ClusterRole

    • A Role is scoped to a namespace (like within a single tenant);
    • A ClusterRole applies to the whole cluster, i.e. it is global;
  • Binding a role to a user is done with a RoleBinding or a ClusterRoleBinding

    • A RoleBinding can bind either a Role or a ClusterRole, but either way it only takes effect within its namespace;
    • A ClusterRoleBinding can only bind a ClusterRole
Role & RoleBinding
  • Continue with the shadow_yd user and context created above

    cd /opt/kubeadm/user_account/shadow_yd
    
  • role_pods.yml: a Role that can query pods in the default namespace

    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      namespace: default
      name: r-shadowyd-pods
    
    rules:
      - verbs: ["get", "list", "watch"]
        apiGroups:
          - "*"
        resources:
          - "pods"
    
  • rolebingding_pods.yml: bind the role to the user

    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      creationTimestamp: null
      name: rb-shadowyd-pods-r
      namespace: default
    
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: r-shadowyd-pods
    
    subjects:
      - kind: User
        name: shadow_yd
        apiGroup: rbac.authorization.k8s.io
    
  • Switch context to verify

    # apply these while still on the admin context
    kubectl apply -f role_pods.yml -f rolebingding_pods.yml 
    kubectl config use-context shadow_yd@kubernetes
    > Switched to context "shadow_yd@kubernetes".
    # pods in the default namespace are listed as expected
    kubectl get pods
    
ClusterRole & ClusterRoleBinding
  • First delete the binding that was just applied

    kubectl delete -f rolebingding_pods.yml 
    
  • clusterrole_pods.yml: this is a cluster-wide role

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: cr-shadowyd-pods
    
    rules:
      - verbs: ["get", "list", "watch"]
        apiGroups: ["*"]
        resources:
          - "pods"
    
    
  • clusterrolebingding_pods.yml: bind the user to the role

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: crb-shadowyd-pods
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cr-shadowyd-pods
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: shadow_yd
    
  • Switch context to verify

    # apply these while on the admin context
    kubectl apply -f clusterrole_pods.yml -f clusterrolebingding_pods.yml
    kubectl config use-context shadow_yd@kubernetes
    > Switched to context "shadow_yd@kubernetes".
    # pods in all namespaces are listed as expected
    kubectl get pods -A
    
  • Now test binding a ClusterRole with a RoleBinding

    • Delete crb-shadowyd-pods

      kubectl delete -f clusterrolebingding_pods.yml
      
    • rolebingding_bind_clusterrole.yml

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: rb-shadowyd-pods-cr
        namespace: default
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cr-shadowyd-pods
      subjects:
        - apiGroup: rbac.authorization.k8s.io
          kind: User
          name: shadow_yd
      
    • Switch context to verify

      kubectl apply -f rolebingding_bind_clusterrole.yml
      kubectl config use-context shadow_yd@kubernetes
      > Switched to context "shadow_yd@kubernetes".
      # only pods in the default namespace are listed (the ClusterRole is limited to the RoleBinding's namespace)
      kubectl get pods
      

3.1.3 Accessing the cluster with an API token

  • Generate a 16-byte random token

    head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    > 34f957ca7aa7c498f7c92650e7d1fa19
    
  • Allow the user account to authenticate with the token

    kubectl config set-credentials shadow_yd --token=34f957ca7aa7c498f7c92650e7d1fa19
    
    # create the token auth file
    vim /etc/kubernetes/pki/token_auth
    34f957ca7aa7c498f7c92650e7d1fa19,shadow_yd,1001
    
    # edit the apiserver manifest; after saving, wait for the apiserver to restart
    vim /etc/kubernetes/manifests/kube-apiserver.yaml
    command:
         ...
         - --token-auth-file=/etc/kubernetes/pki/token_auth
    
  • Test the access

    curl -H "Authorization: Bearer 34f957ca7aa7c498f7c92650e7d1fa19" https://172.17.38.31:6443/api/v1/namespaces/default/pods -k
    

3.1.4 ServiceAccount

  • A ServiceAccount is actually very similar to a UserAccount; the difference is that once a ServiceAccount is assigned to a Pod, its token can be seen inside the Pod at /var/run/secrets/kubernetes.io/serviceaccount/token

  • Create the SA, sa_shadow_yd.yml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: shadow-yd
      namespace: default
    
  • Bind the earlier ClusterRole cr-shadowyd-pods with a ClusterRoleBinding, so the SA can access pods in every namespace;

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: crb-shadow-yd
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cr-shadowyd-pods
    subjects:
      - kind: ServiceAccount
        name: shadow-yd
        namespace: default
    
  • A RoleBinding works the same way; this one is shown only as a demonstration;

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: rb-shadow-yd
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: r-shadowyd-pods
    subjects:
      - kind: ServiceAccount
        name: shadow-yd
        namespace: default
    
  • Start a pod and, from inside it, verify that the SA token can call the api-server API

    • test-pod.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: jt-goapi
        namespace: default
      spec:
        selector:
          matchLabels:
            app: jt-goapi
        template:
          metadata:
            labels:
              app: jt-goapi
          spec:
            serviceAccountName: shadow-yd
            containers:
              - name: jt-nginx
                image: nginx:1.18-alpine
                imagePullPolicy: IfNotPresent
                ports:
                  - containerPort: 80
      
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: jt-goapi-svc
      
      spec:
        selector:
          app: jt-goapi
        type: NodePort
        ports:
          - port: 8080
            targetPort: 80
            nodePort: 31000
      
      
    • Enter the pod

      kubectl exec -it jt-goapi-6688bc85b4-bpbfp -- /bin/sh
      
    • Compose the built-in variables that are injected when the pod starts

      APISERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT"
      TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`
      CA_CERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      
      # this path holds the credentials: CA cert, namespace, and token
      ls /var/run/secrets/kubernetes.io/serviceaccount/
      > ca.crt     namespace  token
      
    • Call the api-server

      # list pods in the kube-system namespace
      curl --header "Authorization: Bearer $TOKEN" --cacert $CA_CERT -s $APISERVER/api/v1/namespaces/kube-system/pods
      
      # list pods in the default namespace
      curl --header "Authorization: Bearer $TOKEN" --cacert $CA_CERT -s $APISERVER/api/v1/namespaces/default/pods
      

3.2 Volumes

3.2.1 Init container and shared directory


apiVersion: apps/v1
kind: Deployment
metadata:
  name: jt-goapi

spec:
  selector:
    matchLabels:
      app: jt-goapi
  template:
    metadata:
      labels:
        app: jt-goapi
    spec:
      initContainers:
        - name: busy-box
          image: busybox
          volumeMounts:
            - mountPath: /data
              name: share-dir

          command: ["sh", "-c", "echo hello > /data/a.txt"]

      containers:
        - name: jt-nginx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80

          volumeMounts:
            - mountPath: /data
              name: share-dir
      volumes:
        - name: share-dir
          emptyDir: {}


---
apiVersion: v1
kind: Service
metadata:
  name: jt-goapi-svc

spec:
  selector:
    app: jt-goapi
  type: NodePort
  ports:
    - port: 8080
      targetPort: 80
      nodePort: 31000


  • Apply and check
kubectl apply -f .
kubectl exec -it <pod-id> -- cat /data/a.txt

3.2.2 hostPath

  • To be filled in later; a minimal sketch follows below in the meantime
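
A minimal hostPath sketch (my own example, not from the original notes): it mounts a directory from the node into the pod; the names and paths are illustrative.

kubectl apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
    - name: busy
      image: busybox
      command: ["sh", "-c", "ls /host_opt && sleep 3600"]
      volumeMounts:
        - name: host-dir
          mountPath: /host_opt
  volumes:
    - name: host-dir
      hostPath:
        path: /opt/kubeadm
        type: DirectoryOrCreate
EOF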

3.3 ConfigMap

3.3.1 Create a ConfigMap

kind: ConfigMap
apiVersion: v1
metadata:
  name: cm-shadow

data:
  username: ShadowYD
  age: "18"
  user.info: |
    hello world:

      This is a file test.
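
Apply it and inspect the result (the file name cm-shadow.yml is my assumption; use whatever the manifest was saved as):

kubectl apply -f cm-shadow.yml
kubectl get cm cm-shadow -o yaml    # shows the full data, including the multi-line user.info key
kubectl describe cm cm-shadow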

3.3.2 Env, Directory, File

  • Create a pod that consumes the ConfigMap in all three forms
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jt-goapi
    
    spec:
      selector:
        matchLabels:
          app: jt-goapi
      template:
        metadata:
          labels:
            app: jt-goapi
        spec:
          initContainers:
            - name: busy-box
              image: busybox
              volumeMounts:
                - mountPath: /data
                  name: share-dir
    
              command: ["sh", "-c", "echo hello > /data/a.txt"]
    
          containers:
            - name: jt-nginx
              image: nginx:1.18-alpine
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 80
    
              volumeMounts:
                - mountPath: /share_data
                  name: share-dir
    
                # mount a single file - approach 1
                - mountPath: /config_data
                  name: config-data
    
                # mount a directory; it contains one file per key
                - mountPath: /config_all_keys
                  name: all-config-key
    
                # mount a single file - approach 2
                - mountPath: /config_single_key/user_info
                  name: all-config-key
                  subPath: user.info
    
    
              # Env
              env:
                - name: USERNAME
                  valueFrom:
                    configMapKeyRef:
                      name: cm-shadow
                      key: username
    
    
    
          volumes:
            - name: share-dir
              emptyDir: {}
    
            - name: config-data
              configMap:
                name: cm-shadow
                items:
                  - key: user.info
                    path: user_info
                    mode: 0644
    
            - name: all-config-key
              configMap:
                name: cm-shadow
    
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: jt-goapi-svc
    
    spec:
      selector:
        app: jt-goapi
      type: NodePort
      ports:
        - port: 8080
          targetPort: 80
          nodePort: 31000
    

3.3.3 Calling the ConfigMap API

3.3.3.1 Calling from outside the cluster
  • First start the API proxy, which allows unauthenticated calls

    kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --port=8009
    
    package main
    
    import (
       "context"
       "fmt"
       "k8s.io/apimachinery/pkg/apis/meta/v1"
       "k8s.io/client-go/kubernetes"
       "k8s.io/client-go/rest"
       "log"
    )
    
    func getClient() *kubernetes.Clientset {
       config := &rest.Config{
          Host: "http://172.17.38.31:8009",
       }
       c, err := kubernetes.NewForConfig(config)
       if err != nil {
          log.Fatal(err)
       }
    
       return c
    }
    
    func main() {
       cm, err := getClient().CoreV1().ConfigMaps("default").
          Get(context.Background(), "cm-shadow", v1.GetOptions{})
       if err != nil {
          log.Fatal(err)
       }
       for k, v := range cm.Data {
          fmt.Printf("key=%s,value=%s\n", k, v)
       }
    }
    
3.3.3.2 Calling from inside the cluster
  1. First an SA account is needed; reuse the shadow-yd account from earlier

  2. Create a role that can access configmaps resources, and bind it

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: cr-shadowyd-configmaps
    
    rules:
      - verbs: ["get", "list", "watch"]
        apiGroups: ["*"]
        resources:
          - "configmaps"
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: crb-shadow-configmaps
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cr-shadowyd-configmaps
    subjects:
      - kind: ServiceAccount
        name: shadow-yd
        namespace: default
    
  3. Cross-compile the code below and upload it to the node and directory referenced by the pod's nodeName and hostPath

    package main
    
    import (
       "context"
       "fmt"
       "io/ioutil"
       "k8s.io/apimachinery/pkg/apis/meta/v1"
       "k8s.io/client-go/kubernetes"
       "k8s.io/client-go/rest"
       "log"
       "os"
    )
    var api_server string
    var token string
    func init() {
       api_server=fmt.Sprintf("https://%s:%s",
          os.Getenv("KUBERNETES_SERVICE_HOST"),os.Getenv("KUBERNETES_PORT_443_TCP_PORT"))
       f,err:=os.Open("/var/run/secrets/kubernetes.io/serviceaccount/token")
       if err!=nil{
          log.Fatal(err)
       }
       b,_:=ioutil.ReadAll(f)
       token=string(b)
    }
    
    func getInnerClient() *kubernetes.Clientset{
       config:=&rest.Config{
          //Host:"http://124.70.204.12:8009",
          Host:api_server,
          BearerToken:token,
          TLSClientConfig:rest.TLSClientConfig{CAFile:"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"},
       }
       c,err:=kubernetes.NewForConfig(config)
       if err!=nil{
          log.Fatal(err)
       }
    
       return c
    }
    
    func main() {
       cm,err:=getInnerClient().CoreV1().ConfigMaps("default").
          Get(context.Background(),"cm-shadow",v1.GetOptions{})
       if err!=nil{
          log.Fatal(err)
       }
       for k,v:=range cm.Data{
          fmt.Printf("key=%s,value=%s\n",k,v)
       }
       select {}
    }
    
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app inner_call.go 
    
  4. Configure a Deployment to run the program above; the manifest below is adapted from the earlier one, mainly adding a hostPath volume so the binary can be launched directly.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jt-goapi
    
    spec:
      selector:
        matchLabels:
          app: jt-goapi
      template:
        metadata:
          labels:
            app: jt-goapi
        spec:
          nodeName: shadowyd-k8s-01
          serviceAccountName: shadow-yd
          initContainers:
            - name: busy-box
              image: busybox
              volumeMounts:
                - mountPath: /data
                  name: share-dir
    
              command: ["sh", "-c", "echo hello > /data/a.txt"]
          containers:
            - name: jt-nginx
              image: alpine
              imagePullPolicy: IfNotPresent
              command: ["/workdir/app"]
    
              ports:
                - containerPort: 80
    
              volumeMounts:
                - mountPath: /share_data
                  name: share-dir
    
                # mount a single file - approach 1
                - mountPath: /config_data
                  name: config-data
    
                # mount a directory; it contains one file per key
                - mountPath: /config_all_keys
                  name: all-config-key
    
                # mount a single file - approach 2
                - mountPath: /config_single_key/user_info
                  name: all-config-key
                  subPath: user.info
    
                # hostpath
                - name: app-dir
                  mountPath: /workdir
    
    
              # Env
              env:
                - name: USERNAME
                  valueFrom:
                    configMapKeyRef:
                      name: cm-shadow
                      key: username
    
    
    
          volumes:
            - name: app-dir
              hostPath:
                path: /opt/kubeadm
            - name: share-dir
              emptyDir: {}
    
            - name: config-data
              configMap:
                name: cm-shadow
                items:
                  - key: user.info
                    path: user_info
                    mode: 0644
    
            - name: all-config-key
              configMap:
                name: cm-shadow
    
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: jt-goapi-svc
    
    spec:
      selector:
        app: jt-goapi
      type: NodePort
      ports:
        - port: 8080
          targetPort: 80
          nodePort: 31000
    
  5. Finally, check the logs to confirm the configmap contents were printed

    kubectl logs jt-goapi-6bf855bc6f-scl94
    > key=age,value=18
    > key=user.info,value=hello world:
    >   This is a file test.
    
    > key=username,value=ShadowYD
    
  6. A program that watches for ConfigMap changes, using informers

    package main
    
    import (
       "k8s.io/api/core/v1"
       "k8s.io/apimachinery/pkg/util/wait"
       "k8s.io/client-go/informers"
       "k8s.io/client-go/kubernetes"
       "k8s.io/client-go/rest"
       "log"
    )
    
    func getCMClient() *kubernetes.Clientset {
       config := &rest.Config{
          Host: "172.17.38.31:8009",
       }
       c, err := kubernetes.NewForConfig(config)
       if err != nil {
          log.Fatal(err)
       }
    
       return c
    }
    
    type CmHandler struct{}
    
    func (this *CmHandler) OnAdd(obj interface{}) {}
    func (this *CmHandler) OnUpdate(oldObj, newObj interface{}) {
       if newObj.(*v1.ConfigMap).Name == "cm-shadow" {
          log.Println("cm-shadow changed")
       }
    }
    func (this *CmHandler) OnDelete(obj interface{}) {}
    
    func main() {
    
       fact := informers.NewSharedInformerFactory(getCMClient(), 0)
    
       cmInformer := fact.Core().V1().ConfigMaps()
       cmInformer.Informer().AddEventHandler(&CmHandler{})
    
       fact.Start(wait.NeverStop)
       select {}
    
    }
    

3.4 Secret

  • The Secret object type is used to hold sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

  • Command collection (a few more examples follow the tls one below)

# tls type
kubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key
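
A few more commonly used variants (all names and values below are placeholders):

# generic (Opaque) secret from literal values
kubectl create secret generic db-secret --from-literal=user=shadow-yd --from-literal=pass=a123456

# generic secret from a file
kubectl create secret generic app-conf --from-file=./config.ini

# image-pull secret for a private registry
kubectl create secret docker-registry regcred --docker-server=<registry> --docker-username=<user> --docker-password=<pass>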

3.4.1 Using a Secret

  • Create the Secret, shadow_secret.yml

    apiVersion: v1
    kind: Secret
    metadata:
      name: shadow-secret
    type: Opaque
    stringData:
      user: shadow-yd
      pass: a123456
    
  • Mount the Secret, test_secret_pod.yml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-secret-pod
      namespace: default
    
    spec:
      selector:
        matchLabels:
          app: test-secret-pod
      template:
        metadata:
          labels:
            app: test-secret-pod
    
        spec:
          containers:
            - name: test-secret-pod
              command:
                - "sh"
                - "-c"
                - "sleep 3600"
              image: busybox
              imagePullPolicy: IfNotPresent
    
              env:
                - name: User
                  valueFrom:
                    secretKeyRef:
                      key: user
                      name: shadow-secret
    
              volumeMounts:
                - mountPath: /secret
                  name: shadow-secret
    
          volumes:
            - name: shadow-secret
              secret:
                secretName: shadow-secret
    
    

3.4.2 Mounting an nginx basic-auth file

  • configmap_nginx.yml

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: nginxconf
    
    data:
      ngx: |
        server {
            listen       80;
            server_name  localhost;
            location / {
                auth_basic      "test auth";
                auth_basic_user_file /etc/nginx/basicauth;
                root   /usr/share/nginx/html;
                index  index.html index.htm;
            }
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   /usr/share/nginx/html;
            }
        }
    
  • Create the auth file

    htpasswd -c auth shadow-yd
    htpasswd  auth lisi
    
  • Create secret-basic-auth

    kubectl create secret generic secret-basic-auth --from-file=auth
    
  • bauth_nginx_pod.yml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myngx
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: ngx
              image: nginx:1.18-alpine
              imagePullPolicy: IfNotPresent
              volumeMounts:
                - name: nginxconf
                  mountPath: /etc/nginx/conf.d/default.conf
                  subPath: ngx
    
                - name: basicauth
                  mountPath: /etc/nginx/basicauth
                  subPath: auth
    
    
          volumes:
            - name: nginxconf
              configMap:
                defaultMode: 0655
                name: nginxconf
    
            - name: basicauth
              secret:
                secretName: secret-basic-auth
                defaultMode: 0655
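    
  • A quick way to test the basic auth (a sketch using port-forward; the password is whatever was set with htpasswd)

    kubectl port-forward deploy/myngx 8080:80 &
    curl -i http://127.0.0.1:8080/                            # expect 401 without credentials
    curl -i -u shadow-yd:<password> http://127.0.0.1:8080/    # expect 200 with valid credentials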
    
    

3.5 Service

3.5.1 ClusterIP

apiVersion: v1
kind: Service
metadata:
  name: jtthink-ngx-svc
  namespace: myweb

spec:
  selector:
    app: jtthink-ngx
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80

  type: ClusterIP

3.5.2 NodePort

apiVersion: v1
kind: Service
metadata:
  name: jt-goapi-svc

spec:
  selector:
    app: jt-goapi
  type: NodePort
  ports:
    - port: 8080
      targetPort: 80
      nodePort: 31000

3.5.3 Headless ClusterIP

  • Resolving the svc returns all of the Pod IPs;

  • Scenarios

    • You want to decide for yourself which Pod IP to use;
    • Pods accessing each other within a StatefulSet;
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  clusterIP: "None"
  ports:
    - port: 80
      targetPort: 80
  selector:  # the service is associated with pods via the selector
    app: nginx
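
To see the headless behaviour, resolve the service name from inside the cluster; every backing Pod IP comes back as its own A record. A sketch (busybox:1.28 is used because its nslookup output is more reliable than in newer busybox images):

kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup nginx-svc.default.svc.cluster.local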

3.6 HPA

3.6.1 metrics-server

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        #image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
        image: bitnami/metrics-server:0.4.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
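
The YAML above only installs metrics-server. Once it is running, an HPA can be created against an existing Deployment; a sketch using the jt-goapi deployment from earlier (CPU-based scaling also requires CPU requests to be set on the pods):

# confirm metrics-server is serving data
kubectl top nodes
kubectl top pods -A

# scale jt-goapi between 1 and 5 replicas, targeting 50% average CPU utilization
kubectl autoscale deployment jt-goapi --cpu-percent=50 --min=1 --max=5
kubectl get hpa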

3.7 kustomize

Official documentation

  • kustomize is a command-line tool for Kubernetes that provides customization of declarative configuration in a template-free, structured way.
  • "For Kubernetes" means that Kustomize understands API resources and Kubernetes concepts (such as names, labels, and namespaces) as well as resource patches. Kustomize is an implementation of DAM (Declarative Application Management).

3.7.1 kustomize structure revolves around a kustomization.yaml

  • Assume a service.yaml and a deployment.yaml have already been created in the same directory

  • kustomization.yaml

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    
    # set a default namespace for all resources
    namespace: default
    # add a common annotation to every resource
    commonAnnotations:
      myname: shenyi
    images:
      - name: nginx
        # newName/newTag override the existing image configuration
        newTag: 1.19-alpine
        newName: mysql
    # reference the yaml resources in this directory
    resources:
      - service.yaml
      - deployment.yaml
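    
  • Render or apply the directory (both have been built into kubectl since 1.14)

    # print the rendered manifests without applying them
    kubectl kustomize .
    
    # build and apply in one step
    kubectl apply -k .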
    
    
    

3.7.2 overlays

Official documentation

  • overlays are a way of referencing dependencies: by depending on a base directory, different variants of the yaml can be built
  • Rough overlays layout (see the sketch further below)

(figure: overlays directory layout)

  • Experiment example (script links to be added later)

(figure: overlays experiment example)
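
Since the original screenshots are missing, here is a rough sketch of such a layout plus an overlay kustomization.yaml (directory names are illustrative; newer kustomize versions also accept the base directory under resources: instead of bases:):

# .
# ├── base/
# │   ├── deployment.yaml
# │   ├── service.yaml
# │   └── kustomization.yaml
# └── overlays/
#     ├── dev/
#     │   └── kustomization.yaml
#     └── prod/
#         └── kustomization.yaml

cat > overlays/prod/kustomization.yaml << EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod
commonLabels:
  env: prod
bases:
  - ../../base
EOF

# render the prod variant
kubectl kustomize overlays/prod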

4. Flannel Networking

  • Flannel has a host gateway mode (host-gw) and a vxlan mode; only vxlan mode is covered here.

4.1 Install the brctl tool
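
brctl is provided by the bridge-utils package on CentOS:

yum install -y bridge-utils
brctl show   # lists bridges (e.g. cni0, docker0) and the interfaces attached to them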

4.2 vxlan mode

  • vxlan essentially encapsulates packets and stamps them with a VNI id, which serves as one of the means of route selection.

  • Two worker nodes are deployed, with node IPs 10.10.10.216 and 10.10.10.217. The containers on these two nodes are used as the example to explain how two containers communicate with each other.

    Node            Container CIDR    Container IP
    10.10.10.216    10.244.0.0/24     10.244.0.6
    10.10.10.217    10.244.1.0/24     10.244.1.2
  • cni0 is the node-local bridge (like a switch); the flannel.1 device implements the vxlan protocol (a veth pair can be thought of as a network cable, one end plugged into cni0 and the other into the virtual NIC inside the container's network namespace); the real egress device is em1

    (figure: flannel vxlan network topology)
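
  • The vxlan parameters of flannel.1 can be inspected directly (flannel's vxlan backend defaults to VNI 1 and UDP port 8472)

    ip -d link show flannel.1
    # look for "vxlan id 1 ... dstport 8472" and the underlying egress device in the output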

  • Walking through the flow, part 1

    • Check a pod's IP; you can see eth0@if237

      (figure: ip addr output inside the pod, showing eth0@if237)

    • Check the host's network links: interface index 237 has MAC address 52:c1:7b:a0:a2:ac and device name veth49919d73

      # ip link
      236: vethf0bc28d9@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT
      link/ether 0a:be:c4:9e:e5:a4 brd ff:ff:ff:ff:ff:ff link-netnsid 2
      237: veth49919d73@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT
      link/ether 52:c1:7b:a0:a2:ac brd ff:ff:ff:ff:ff:ff link-netnsid 3
      239: veth665c293c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT
      link/ether d2:f9:14:27:a1:22 brd ff:ff:ff:ff:ff:ff link-netnsid 4
      
    • You can see that the veth49919d73 device is attached to the cni0 bridge.

      # brctl show
      bridge name	bridge id		STP enabled	interfaces
      cni0		8000.3a777e49f865	no	       veth0daef9e3
                                                             veth2e987d92
                                                             veth3dd36f03
                                                             veth410ec2b8
                                                             veth49919d73
                                                             veth665c293c
                                                             veth709e550b
                                                             veth70ddd7d1
                                                             veth80dcdbe8
                                                             veth8da15924
                                                             veth986599c0
                                                             vetha4469fae
                                                             vethb01b1a4f
                                                             vethda4f51d8
                                                             vethe7c48ced
                                                             vethe94f1841
                                                             vethf0bc28d9
      docker0		8000.0242e8a98cf9	no
      
  • Walking through the flow, part 2

    • Pod-to-pod routing: ping 10.244.4.67 successfully

      (figure: pinging 10.244.4.67 from inside the pod)

    • ip route shows which device the routes go through; 10.244.4.0 goes via the flannel.1 device

      # ip route
      default via 172.17.38.1 dev eth0 proto static metric 100
      10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
      10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
      10.244.4.0/24 via 10.244.4.0 dev flannel.1 onlink
      172.17.38.0/24 dev eth0 proto kernel scope link src 172.17.38.31 metric 100
      172.18.0.0/16 dev docker0 proto kernel scope link src 172.18.0.1
      
    • Check the MAC addresses of flannel.1's neighbours (ARP entries)

      # ip neigh show dev flannel.1
      10.244.3.0 lladdr a6:f3:b0:4b:56:63 PERMANENT
      10.244.4.0 lladdr d2:f0:78:f4:17:65 PERMANENT
      
    • Check the MAC address (FDB) table recorded for the flannel.1 device

      # bridge fdb show dev flannel.1
      d2:f0:78:f4:17:65 dst 192.168.6.102 self permanent
      a6:f3:b0:4b:56:63 dst 192.168.6.106 self permanent
      
    • Check the flannel.1 MAC address on 192.168.6.102: d2:f0:78:f4:17:65

      # ip link
      ...
      5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT
      link/ether d2:f0:78:f4:17:65 brd ff:ff:ff:ff:ff:ff
      ...