Kubernetes from Scratch


Part 1: Kubernetes Fundamentals

Chapter 1: Introducing Kubernetes

The design philosophy of Kubernetes is a declarative API plus the controller pattern: users declare a desired state, and the system runs control loops that continuously adjust the actual state until it matches. Understand this core mechanism and you have grasped the essence of Kubernetes.

1.1 Why Kubernetes Exists
graph TB
    subgraph Traditional deployment era
        A1["Physical server"] -->|"deploy directly"| B1["App 1"]
        A1 -->|"deploy directly"| B2["App 2"]
        A1 -->|"resource contention"| B3["App 3"]
        A1_note["Low resource utilization<br>Inconsistent environments<br>Hard to scale"]
        A1_note -.-> A1
    end

    subgraph Virtualization era
        C1["Physical server"] -->|"virtualization"| D1["VM 1"]
        C1 -->|"virtualization"| D2["VM 2"]
        D1 -->|"deploy"| E1["App 1"]
        D2 -->|"deploy"| E2["App 2"]
        C1_note["Good isolation<br>But VMs are heavy<br>Slow to boot<br>Large images"]
        C1_note -.-> C1
    end

    subgraph Containerization era
        F1["Physical server/VM"] -->|"container engine"| G1["Container 1<br>lightweight"]
        F1 -->|"container engine"| G2["Container 2<br>lightweight"]
        F1 -->|"container engine"| G3["Container 3<br>lightweight"]
        G1 --> H1["App 1"]
        G2 --> H2["App 2"]
        F1_note["Lightweight<br>Fast startup<br>But how do you manage<br>many containers?"]
        F1_note -.-> F1
    end

    subgraph Orchestration era
        I1["Kubernetes"] -->|"orchestrates"| J1["Pod 1"]
        I1 -->|"orchestrates"| J2["Pod 2"]
        I1 -->|"orchestrates"| J3["Pod 3"]
        I1 -->|"auto scheduling"| K1["Autoscaling"]
        I1 -->|"self-healing"| K2["Failure recovery"]
        I1_note["Container orchestration<br>Automated management<br>Operations at scale"]
        I1_note -.-> I1
    end

Why container orchestration is necessary:

| Pain point | Description | Kubernetes solution |
| --- | --- | --- |
| Container sprawl | Microservices mean hundreds or thousands of containers | Automated scheduling and management |
| Service discovery | Container IPs change dynamically | Service abstraction, DNS-based discovery |
| Load balancing | Distributing traffic across many containers | Built-in load balancing |
| Rolling updates | Updating versions without downtime | Deployment rolling-update strategy |
| Resource management | Allocating CPU and memory | ResourceQuota / LimitRange |
| Failure recovery | What happens when a container dies | Automatic restart and rescheduling |

1.2 What Kubernetes Is
graph TB
    subgraph Kubernetes positioning
        A[Kubernetes] --> B[Container orchestration platform<br/>Container Orchestration]
        A --> C[Automated operations platform<br/>Automated Operations]
        A --> D[Cloud native infrastructure<br/>Cloud Native Infrastructure]
    end
    
    subgraph Core capabilities
        B --> B1[Deployment management]
        B --> B2[Service discovery]
        B --> B3[Load balancing]
        B --> B4[Storage orchestration]
        
        C --> C1[Autoscaling]
        C --> C2[Self-healing]
        C --> C3[Configuration management]
        
        D --> D1[Declarative API]
        D --> D2[Extensible architecture]
    end
    
    style A fill:#e3f2fd
    style B fill:#c8e6c9
    style C fill:#fff3e0
    style D fill:#fce4ec

Core design philosophy of Kubernetes:

graph LR
    A[Declarative API<br/>Declarative] -->|user declares desired state| B[etcd storage]
    B -->|controllers read| C[Control loop<br/>Control Loop]
    C -->|compare against current state| D[Actual state]
    C -->|act on the diff| E[Execute operations]
    E -->|converge to| F[Desired state]
    F -->|continuous monitoring| C
    
    style A fill:#e3f2fd
    style C fill:#fff3e0
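The control loop above can be sketched in a few lines of plain Python. This is an illustrative toy (not client-go or any real Kubernetes API): the "desired state" is a replica count, the "actual state" is a list of fake Pod names, and one reconcile pass creates or removes replicas until they match.

```python
# Toy sketch of the Kubernetes reconcile idea: diff desired vs actual,
# then act until they converge. Names like "pod-0" are purely illustrative.

def reconcile(desired: int, actual: list) -> list:
    """One pass of the control loop for a replica controller."""
    actual = list(actual)                     # don't mutate the caller's state
    while len(actual) < desired:
        actual.append(f"pod-{len(actual)}")   # create a missing replica
    while len(actual) > desired:
        actual.pop()                          # remove a surplus replica
    return actual

state = []                     # actual state: no Pods yet
state = reconcile(3, state)    # user declared replicas=3
print(state)                   # -> ['pod-0', 'pod-1', 'pod-2']
```

A real controller runs this loop forever, re-triggered by watch events, which is why deleting a Pod by hand simply causes a replacement to appear.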
1.3 A Brief History of Kubernetes
graph LR
    A[2014.6<br/>Google open-sources it] --> B[2015.7<br/>v1.0 released]
    B --> C[2015.7<br/>CNCF founded]
    C --> D[2018<br/>becomes the orchestration standard]
    D --> E[2020+<br/>cloud native infrastructure]
    E --> F[2024+<br/>expansion into AI and edge computing]
    
    style A fill:#e3f2fd
    style B fill:#c8e6c9
    style C fill:#fff3e0
    style D fill:#fce4ec
    style E fill:#e8f5e9
1.4 Core Concepts
graph TB
    subgraph Kubernetes core objects
        P[Pod<br/>smallest schedulable unit] --> C1[Container]
        P --> C2[Container]
        P --> V[Volume]
        
        RC[ReplicationController<br/>replica controller] -->|manages| P
        RS[ReplicaSet<br/>next-generation replica set] -->|manages| P
        
        D[Deployment<br/>deployment controller] -->|manages| RS
        
        S[Service<br/>service abstraction] -->|exposes| P
        
        L[Label<br/>labels] -->|select| P
        L -->|select| S
        
        N[Node<br/>worker node] -->|runs| P
    end
    
    style P fill:#e3f2fd
    style RC fill:#c8e6c9
    style RS fill:#c8e6c9
    style D fill:#fff3e0
    style S fill:#fce4ec
    style L fill:#e8f5e9
    style N fill:#f3e5f5

Core concepts in detail:

| Concept | Definition | Role | Example |
| --- | --- | --- | --- |
| Pod | Smallest deployable unit; holds one or more containers | Shared network/storage for tightly coupled containers | An Nginx Pod with an Nginx container plus a log-collector container |
| ReplicationController | Keeps a specified number of Pod replicas running | Recreates Pods automatically on failure | Keep 3 Nginx instances running |
| ReplicaSet | Successor to RC with set-based selectors | More flexible label selection | Match Pods with app=nginx,tier=frontend |
| Service | Unified access point for a group of Pods | Service discovery, load balancing | Expose Nginx as a ClusterIP |
| Label | Key-value pairs attached to objects | Identification, selector matching | app: nginx, version: v1 |
| Node | Cluster worker node (physical machine/VM) | Where Pods run | worker-node-1 |

Chapter 2: Kubernetes Architecture and Deployment

2.1 Architecture and Components
graph TB
    subgraph Master - control plane
        API[API Server<br/>cluster entry point] --> ETCD[etcd<br/>distributed store]
        
        API --> SCH[Scheduler<br/>scheduler]
        API --> CM[Controller Manager<br/>controller manager]
        
        SCH -->|scheduling decisions| API
        CM -->|control loops| API
        
        subgraph Controllers
            CM --> RC[Replication Controller]
            CM --> DEP[Deployment Controller]
            CM --> END[Endpoint Controller]
            CM --> NS[Namespace Controller]
        end
    end
    
    subgraph Node - worker
        KUBELET[Kubelet<br/>node agent] -->|reports status| API
        KUBELET -->|manages| POD1[Pod]
        KUBELET -->|manages| POD2[Pod]
        
        KPROXY[kube-proxy<br/>network proxy] -->|maintains| IPTABLES[iptables/ipvs<br/>service rules]
        IPTABLES -->|load balancing| POD1
        IPTABLES -->|load balancing| POD2
        
        RUNTIME[Container Runtime<br/>container runtime<br/>docker/containerd] -->|creates/manages| POD1
        RUNTIME -->|creates/manages| POD2
    end
    
    CLI[kubectl] -->|REST API| API
    
    style API fill:#e3f2fd
    style ETCD fill:#c8e6c9
    style SCH fill:#fff3e0
    style CM fill:#fff3e0
    style KUBELET fill:#fce4ec
    style KPROXY fill:#e8f5e9
    style RUNTIME fill:#f3e5f5

Component interaction flow (note that only the API Server talks to etcd directly; the Scheduler and controllers go through the API Server):

sequenceDiagram
    participant U as User
    participant K as kubectl
    participant A as API Server
    participant E as etcd
    participant C as Controller Manager
    participant S as Scheduler
    participant N as Kubelet
    participant R as Container Runtime
    
    U->>K: kubectl apply -f deployment.yaml
    K->>A: POST /apis/apps/v1/namespaces/default/deployments
    A->>E: write Deployment object
    E-->>A: ack write
    
    A-->>C: watch event - new Deployment
    C->>C: create ReplicaSet
    C->>A: POST ReplicaSet
    A->>E: store ReplicaSet
    C->>A: POST Pods for the ReplicaSet
    
    A-->>S: watch event - new Pod needs scheduling
    S->>A: read node information
    S->>S: scheduling algorithm picks the best node
    S->>A: bind Pod to Node
    A->>E: update Pod binding
    
    A-->>N: watch event - new Pod assigned to this node
    N->>R: create containers
    R-->>N: containers created
    N->>A: report Pod status Running
    A->>E: update Pod status
    
    A-->>U: Deployment created
    
    
2.2 Deploying Kubernetes
graph TB
    subgraph Deployment options
        A[Binary deployment] -->|manual setup| B[Production<br/>deep customization]
        C[kubeadm] -->|official tool| D[Learning/testing<br/>fast setup]
        E[Managed cloud service] -->|EKS/ACK/GKE| F[Production<br/>no ops burden]
        G[Minikube/kind] -->|single machine| H[Local development<br/>quick trial]
    end
    
    style B fill:#ffcdd2
    style D fill:#c8e6c9
    style F fill:#e3f2fd
    style H fill:#fff3e0

kubeadm deployment flow:

sequenceDiagram
    participant A as Admin
    participant M as Master node
    participant N1 as Node1
    participant N2 as Node2
    
    A->>M: 1. kubeadm init
    M->>M: 1.1 pull images
    M->>M: 1.2 start kubelet
    M->>M: 1.3 start static Pods<br/>(API Server/Scheduler/CM/etcd)
    M->>M: 1.4 configure kubeconfig
    M-->>A: 1.5 return join token
    
    A->>N1: 2. kubeadm join --token xxx
    N1->>M: 2.1 register with the API Server
    M-->>N1: 2.2 return certificates
    N1->>N1: 2.3 start kubelet
    N1->>N1: 2.4 start kube-proxy
    
    A->>N2: 3. kubeadm join --token xxx
    N2->>M: 3.1 register
    M-->>N2: 3.2 return certificates
    
    A->>M: 4. kubectl get nodes
    M-->>A: 4.1 return node list<br/>master Ready<br/>node1 Ready<br/>node2 Ready
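The init and join steps above can also be driven by a configuration file instead of command-line flags. A minimal sketch (the version number, endpoint, and CIDRs are illustrative assumptions; pick values for your own cluster):

```yaml
# kubeadm-config.yaml -- used as: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0                   # assumed version
controlPlaneEndpoint: "192.168.1.100:6443"   # a VIP/LB address for HA setups
networking:
  podSubnet: "10.244.0.0/16"                 # must match the CNI plugin (Flannel's default)
  serviceSubnet: "10.96.0.0/12"
```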
    
    
2.3 Installing Kubernetes Add-ons
graph TB
    subgraph Core add-ons
        A[CoreDNS] -->|cluster DNS| B[Service discovery]
        C[Cluster Monitoring] -->|Prometheus/Grafana| D[Monitoring and alerting]
        E[Cluster Logging] -->|EFK/PLG| F[Log collection]
        G[Kube UI/Dashboard] -->|web UI| H[Visual management]
    end
    
    style A fill:#e3f2fd
    style C fill:#c8e6c9
    style E fill:#fff3e0
    style G fill:#fce4ec

Chapter 3: Kubernetes Quick Start

3.1 The Guestbook Sample Application
graph TB
    subgraph Guestbook application architecture
        FE[Frontend<br/>PHP app<br/>3 replicas] -->|reads/writes| REDIS[Redis Master<br/>single instance]
        REDIS -->|replication| RS[Redis Slave<br/>2 replicas]
        
        SVC_FE[frontend-service<br/>LoadBalancer] -->|exposes| FE
        SVC_REDIS[redis-master-service<br/>ClusterIP] -->|exposes| REDIS
        SVC_SLAVE[redis-slave-service<br/>ClusterIP] -->|exposes| RS
    end
    
    style FE fill:#e3f2fd
    style REDIS fill:#c8e6c9
    style RS fill:#fff3e0
    style SVC_FE fill:#fce4ec
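The frontend tier from the diagram can be sketched as a Deployment plus a LoadBalancer Service. The image tag below is the one used by the upstream Guestbook example and may differ across versions; treat the labels as illustrative:

```yaml
# Frontend: 3 PHP replicas behind a LoadBalancer Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4   # upstream sample image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer      # public entry point, as in the diagram
  selector:
    app: guestbook
    tier: frontend
  ports:
  - port: 80
```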

Chapter 4: Pods in Depth

4.1 The Obligatory Hello World
# hello-world-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: hello
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
# Create the Pod
$ kubectl apply -f hello-world-pod.yaml
pod/hello-world created

# List Pods
$ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
hello-world   1/1     Running   0          10s   10.244.1.15   node-1

# Show details
$ kubectl describe pod hello-world
Name:         hello-world
Namespace:    default
Node:         node-1/192.168.1.101
Status:       Running
IP:           10.244.1.15
Containers:
  nginx:
    Image:          nginx:1.21
    Port:           80/TCP
    State:          Running
    Ready:          True
4.2 Basic Pod Operations
graph LR
    A[Pod lifecycle] -->|kubectl create| B[Pending]
    B -->|scheduled to a node| C[ContainerCreating]
    C -->|containers started| D[Running]
    D -->|kubectl delete| E[Terminating]
    E -->|cleanup complete| F[Deleted]
    
    D -->|container crashes| G[CrashLoopBackOff]
    G -->|restart policy| D
    
    style B fill:#fff3e0
    style D fill:#c8e6c9
    style E fill:#ffcdd2
    style G fill:#ffcdd2

Pod commands:

# Create
kubectl apply -f pod.yaml
kubectl run nginx --image=nginx:1.21

# Query
kubectl get pods
kubectl get pods -o yaml  # full YAML
kubectl get pods --show-labels
kubectl describe pod <pod-name>

# Delete
kubectl delete pod <pod-name>
kubectl delete -f pod.yaml

# Update (Pods cannot be updated in place; delete and recreate, or use a Deployment)
kubectl replace -f pod.yaml

# Exec into a container
kubectl exec -it <pod-name> -- /bin/bash

# Logs
kubectl logs <pod-name>
kubectl logs <pod-name> -f  # follow in real time
kubectl logs <pod-name> --previous  # logs from the previous crashed instance
4.3 Pods and Containers
graph TB
    subgraph Pod internals
        P[Pod<br/>shared namespaces] --> N[Network Namespace<br/>shared IP and port space]
        P --> UTS[UTS Namespace<br/>shared hostname]
        P --> IPC[IPC Namespace<br/>shared memory and semaphores]
        
        N --> C1[Container 1<br/>nginx]
        N --> C2[Container 2<br/>log-collector]
        
        V[Volume<br/>shared storage] --> C1
        V --> C2
    end
    
    style P fill:#e3f2fd
    style N fill:#c8e6c9
    style C1 fill:#fff3e0
    style C2 fill:#fce4ec
    style V fill:#e8f5e9

Multi-container Pod example:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-logging
spec:
  containers:
  # Main container: the web app
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  
  # Sidecar container: log collection
  - name: log-collector
    image: fluentd:v1.14
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
    # reads logs from the shared directory and ships them to Elasticsearch
  
  # Init container: runs to completion before the main containers start
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do sleep 2; done']
  
  volumes:
  - name: shared-logs
    emptyDir: {}  # ephemeral shared directory
4.4 Pod Networking

Kubernetes networking principles:

| Principle | Description | Implementation |
| --- | --- | --- |
| Every Pod gets its own IP | A Pod behaves like a VM or physical host | CNI plugin allocates IPs |
| Pods communicate directly | No NAT in between | Overlay network or routing |
| Nodes can reach Pods | Every node can reach every Pod | Route configuration |
| Pods can reach Services | Access via ClusterIP | kube-proxy / iptables |
4.5 Pod的重启策略
apiVersion: v1
kind: Pod
metadata:
  name: restart-policy-demo
spec:
  restartPolicy: Always  # Always | OnFailure | Never
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 10; exit 1"]  # 模拟崩溃
策略说明适用场景
Always无论退出码如何都重启(默认)长期运行的服务
OnFailure退出码非0时重启Job/CronJob
Never不重启一次性任务调试
4.6 Pod Status and Lifecycle
graph TB
    subgraph Pod lifecycle
        A[Pending] -->|scheduled| B[ContainerCreating]
        B -->|containers started| C[Running]
        C -->|all containers succeed| D[Succeeded]
        C -->|a container fails| E[Failed]
        C -->|crash and restart| C
        
        A -->|scheduling fails| F[Failed]
        A -->|image pull fails| G[ImagePullBackOff]
        A -->|insufficient resources| H[Stuck in Pending]
    end
    
    style A fill:#fff3e0
    style C fill:#c8e6c9
    style D fill:#e3f2fd
    style E fill:#ffcdd2
    style G fill:#ffcdd2

Lifecycle hooks:

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle
    image: nginx:1.21
    lifecycle:
      postStart:  # runs right after the container starts
        exec:
          command: ["/bin/sh", "-c", "echo 'Started' >> /var/log/nginx/start.log"]
      preStop:    # runs before the container stops (graceful shutdown)
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
4.7 Custom Pod Health Checks
apiVersion: v1
kind: Pod
metadata:
  name: health-check
spec:
  containers:
  - name: app
    image: myapp:v1
    ports:
    - containerPort: 8080
    
    # Liveness probe: the container is restarted on failure
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 30  # wait 30s after startup before probing
      periodSeconds: 10        # probe every 10s
      timeoutSeconds: 5        # 5s probe timeout
      failureThreshold: 3      # unhealthy only after 3 consecutive failures
    
    # Readiness probe: the Pod is removed from Service endpoints on failure
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
    
    # Startup probe (1.16+): protects slow-starting containers from liveness kills
    startupProbe:
      httpGet:
        path: /health
        port: 8080
      failureThreshold: 30     # allow 30 failures (300 seconds)
      periodSeconds: 10

Probe comparison:

| Probe | Checks | On failure | Use case |
| --- | --- | --- | --- |
| Liveness | Is the container alive? | Restart the container | Application-level faults such as deadlocks and infinite loops |
| Readiness | Can the container accept traffic? | Remove from Service endpoints | Dependencies not ready, e.g. a DB still connecting |
| Startup | Has the application finished starting? | Kill the container | Slow-starting apps; shields the liveness probe |
4.8 调度Pod
graph TB
    subgraph 调度流程
        A[待调度Pod] -->|1. 过滤阶段| B[节点筛选]
        B -->|资源充足| C[Node 1]
        B -->|资源充足| D[Node 2]
        B -->|资源不足| E[Node 3排除]
        
        C -->|2. 打分阶段| F[优先级计算]
        D -->|2. 打分阶段| F
        
        F -->|分数最高| G[选择Node 1]
    end
    
    style A fill:#e3f2fd
    style C fill:#c8e6c9
    style D fill:#fff3e0
    style G fill:#c8e6c9

Scheduling constraint examples:

apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  # Node affinity: prefer nodes labeled disktype=ssd
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
    
    # Pod anti-affinity: avoid co-locating with Pods of the same app
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["web"]
        topologyKey: kubernetes.io/hostname
  
  # Toleration: allow scheduling onto nodes carrying this taint
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
  
  # Node selector: hard constraint
  nodeSelector:
    environment: production

Chapter 5: Replication Controller

graph TB
    subgraph How RC works
        A[ReplicationController<br/>replicas=3] -->|creates/maintains| P1[Pod 1]
        A -->|creates/maintains| P2[Pod 2]
        A -->|creates/maintains| P3[Pod 3]
        
        P4[Pod 4 crashes] -.->|detected by| A
        A -->|automatically creates| P5[Replacement Pod]
    end
    
    style A fill:#e3f2fd
    style P1 fill:#c8e6c9
    style P2 fill:#c8e6c9
    style P3 fill:#c8e6c9
    style P5 fill:#fff3e0
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3  # desired replica count
  selector:
    app: nginx  # selector matching the Pod labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
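ReplicationController is largely legacy; as noted in 1.4, a Deployment (which manages ReplicaSets) is the modern way to express the same spec, adding rolling updates and rollback. The equivalent of the RC above:

```yaml
# Deployment equivalent of nginx-rc
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:       # Deployments require matchLabels/matchExpressions
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
```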

Chapter 6: Services in Depth

6.1 Services Proxy Pods
graph TB
    subgraph Service architecture
        S[Service<br/>ClusterIP: 10.96.100.100] -->|Endpoints| P1[Pod 1<br/>10.244.1.2:80]
        S -->|Endpoints| P2[Pod 2<br/>10.244.1.3:80]
        S -->|Endpoints| P3[Pod 3<br/>10.244.2.2:80]
        
        C[Client Pod] -->|access via<br/>curl http://nginx-svc| S
        S -->|iptables/ipvs<br/>load balancing| P1
        S -->|iptables/ipvs<br/>load balancing| P2
        S -->|iptables/ipvs<br/>load balancing| P3
    end
    
    style S fill:#e3f2fd
    style P1 fill:#c8e6c9
    style P2 fill:#c8e6c9
    style P3 fill:#c8e6c9
    style C fill:#fff3e0
6.2 Service的类型
graph TB
    subgraph Service类型
        A[ClusterIP<br/>默认] -->|集群内部访问| B[10.96.x.x]
        C[NodePort] -->|节点端口暴露| D[NodeIP:30080]
        E[LoadBalancer] -->|云厂商LB| F[公网IP]
        G[ExternalName] -->|CNAME记录| H[外部域名]
        I[Headless] -->|直接返回Pod IP| J[DNS解析为所有Pod IP]
    end
    
    style A fill:#c8e6c9
    style C fill:#fff3e0
    style E fill:#e3f2fd
    style G fill:#fce4ec
    style I fill:#e8f5e9

Service type comparison:

| Type | Access | Use case | Example |
| --- | --- | --- | --- |
| ClusterIP | Cluster-internal IP | Service-to-service traffic | nginx.default.svc.cluster.local |
| NodePort | NodeIP:Port | Dev/test, small clusters | 192.168.1.101:30080 |
| LoadBalancer | Public IP from the cloud provider | External exposure in production | 203.0.113.1:80 |
| ExternalName | DNS CNAME | Mapping external services | database.example.com |
| Headless | Pod IPs returned directly | Stateful services such as Redis clusters | Used with StatefulSet |

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort  # ClusterIP | NodePort | LoadBalancer | ExternalName
  selector:
    app: nginx
  ports:
  - port: 80          # Service port
    targetPort: 80    # Pod port
    nodePort: 30080   # node port (NodePort only; range 30000-32767)
  sessionAffinity: ClientIP  # session stickiness
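The table mentions headless Services but the YAML above only shows NodePort. A headless Service is declared by setting clusterIP: None; a minimal sketch (names and labels are illustrative):

```yaml
# Headless Service: no virtual IP is allocated; the cluster DNS name
# resolves to the IPs of all matching Pods, so clients (or a StatefulSet's
# per-Pod DNS records) can address individual endpoints directly.
apiVersion: v1
kind: Service
metadata:
  name: redis-headless
spec:
  clusterIP: None
  selector:
    app: redis
  ports:
  - port: 6379
```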
6.4 Service Discovery

Services are discoverable in two ways: through environment variables injected into Pods, and through cluster DNS (CoreDNS resolves names such as nginx-service.default.svc.cluster.local).

Environment-variable service discovery:

# Kubernetes injects Service environment variables into Pods at startup
$ kubectl exec nginx-pod -- env | grep NGINX
NGINX_SERVICE_SERVICE_HOST=10.96.100.100
NGINX_SERVICE_SERVICE_PORT=80
NGINX_SERVICE_PORT=tcp://10.96.100.100:80
NGINX_SERVICE_PORT_80_TCP=tcp://10.96.100.100:80
NGINX_SERVICE_PORT_80_TCP_ADDR=10.96.100.100
NGINX_SERVICE_PORT_80_TCP_PORT=80
6.5 发布Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080  # 指定节点端口,不指定则自动分配

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"  # AWS NLB
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80

Chapter 7: Volumes

7.1 Kubernetes Volumes
graph TB
    subgraph Volume types
        A[emptyDir] -->|ephemeral dir<br/>gone when the Pod is deleted| B[Caches and temp files]
        C[hostPath] -->|mounts a host path| D[Access to host files]
        E[NFS] -->|network filesystem| F[Shared storage]
        G[PersistentVolume] -->|persistent storage| H[Long-lived data]
    end
    
    style A fill:#fff3e0
    style C fill:#ffcdd2
    style E fill:#c8e6c9
    style G fill:#e3f2fd
7.2 本地数据卷
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do echo $(date) >> /data/log.txt; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  
  - name: reader
    image: busybox
    command: ["sh", "-c", "tail -f /data/log.txt"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  
  volumes:
  - name: shared-data
    emptyDir: {}  # 容器间共享的临时目录

---
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: app
    image: nginx:1.21
    volumeMounts:
    - name: host-logs
      mountPath: /var/log/nginx
  volumes:
  - name: host-logs
    hostPath:
      path: /var/log/nginx  # 挂载宿主机目录(安全风险,生产慎用)
      type: DirectoryOrCreate
7.3 网络数据卷
apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo
spec:
  containers:
  - name: app
    image: nginx:1.21
    volumeMounts:
    - name: nfs-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-volume
    nfs:
      server: 192.168.1.100    # NFS服务器地址
      path: /data/nfs/nginx    # NFS共享路径
7.4 PersistentVolume和PersistentVolumeClaim
graph TB
    subgraph PV/PVC架构
        A[StorageClass<br/>存储类<br/>动态供给模板] -->|创建| B[PersistentVolume<br/>持久卷<br/>管理员/动态供给]
        
        C[PersistentVolumeClaim<br/>持久卷声明<br/>用户申请] -->|绑定| B
        
        D[Pod] -->|使用| C
        C -->|挂载| B
        
        E[管理员] -->|静态创建| B
        F[动态供给控制器] -->|根据StorageClass| B
    end
    
    style A fill:#e3f2fd
    style B fill:#c8e6c9
    style C fill:#fff3e0
    style D fill:#fce4ec
# Static PV (created by an administrator)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany  # RWO/ROX/RWX
  persistentVolumeReclaimPolicy: Retain  # Retain/Recycle/Delete
  storageClassName: nfs-slow
  nfs:
    server: 192.168.1.100
    path: /data/nfs/pv001

---
# PVC (requested by a user)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi  # requests 5Gi; binds to pv-nfs-001
  storageClassName: nfs-slow

---
# Pod using the PVC
apiVersion: v1
kind: Pod
metadata:
  name: web-with-pvc
spec:
  containers:
  - name: web
    image: nginx:1.21
    volumeMounts:
    - name: web-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: web-data
    persistentVolumeClaim:
      claimName: pvc-web  # references the PVC

Chapter 8: Accessing the Kubernetes API

graph TB
    subgraph API access methods
        A[kubectl CLI] -->|REST API| API[API Server]
        B[SDK clients] -->|client-go| API
        C[Direct curl] -->|HTTPS + certificates| API
        D[Dashboard] -->|web UI| API
    end
    
    API -->|authentication| AUTH[Authentication<br/>certificates/tokens/webhooks]
    AUTH -->|authorization| RBAC[Authorization<br/>RBAC/ABAC]
    RBAC -->|admission control| ADM[Admission Controllers<br/>validate/mutate requests]
    
    style API fill:#e3f2fd
    style AUTH fill:#c8e6c9
    style RBAC fill:#fff3e0
    style ADM fill:#fce4ec
# Using kubectl
$ kubectl get pods
$ kubectl create -f deployment.yaml
$ kubectl delete service nginx

# Calling the API directly with curl (client certificates required)
$ curl -k \
  --cert /path/to/client.crt \
  --key /path/to/client.key \
  https://192.168.1.100:6443/api/v1/namespaces/default/pods

# Simplify with kubectl proxy
$ kubectl proxy --port=8080 &
$ curl http://localhost:8080/api/v1/namespaces/default/pods

Part 2: Advanced Kubernetes

Chapter 9: Kubernetes Networking

9.1 The Docker Network Model
graph TB
    subgraph Docker default network
        A[docker0 bridge<br/>172.17.0.1/16] -->|veth pair| C1[Container 1<br/>172.17.0.2]
        A -->|veth pair| C2[Container 2<br/>172.17.0.3]
        
        C1 -->|NAT| ETH[Host NIC<br/>192.168.1.100]
        C2 -->|NAT| ETH
    end
    
    style A fill:#e3f2fd
    style C1 fill:#c8e6c9
    style C2 fill:#c8e6c9
9.2 Kubernetes网络模型
graph TB
    subgraph K8s网络要求
        P1[Pod 1<br/>10.244.1.2] -->|直接通信<br/>无需NAT| P2[Pod 2<br/>10.244.2.2]
        P1 -->|直接通信| P3[Pod 3<br/>10.244.1.3]
        
        N1[Node 1<br/>192.168.1.101] -->|路由| N2[Node 2<br/>192.168.1.102]
        
        P1 -.->|跨节点直接访问| P2
    end
    
    style P1 fill:#e3f2fd
    style P2 fill:#c8e6c9
    style P3 fill:#fff3e0
9.4 Pod间通信:Flannel实现
graph TB
    subgraph Flannel Overlay网络
        P1[Pod 1<br/>10.244.1.2] -->|cni0<br/>网桥| F1[flannel.1<br/>VXLAN设备]
        F1 -->|VXLAN封装| ETH1[eth0<br/>192.168.1.101]
        
        ETH1 -->|UDP 8472| ETH2[eth0<br/>192.168.1.102]
        
        ETH2 -->|VXLAN解封装| F2[flannel.1]
        F2 -->|cni0| P2[Pod 2<br/>10.244.2.2]
    end
    
    style P1 fill:#e3f2fd
    style F1 fill:#c8e6c9
    style P2 fill:#fff3e0
    style F2 fill:#c8e6c9

Flannel backend comparison:

| Backend | Mechanism | Performance | Use case |
| --- | --- | --- | --- |
| VXLAN | UDP encapsulation; the default | Good | Most environments |
| Host-GW | Route table entries, no encapsulation | Best | Nodes on the same L2 network |
| UDP | User-space encapsulation | Poor | Debugging |
| Alloc | Subnet allocation only | - | Paired with cloud-provider routing |
9.5 Service到Pod通信
graph TB
    subgraph kube-proxy模式
        subgraph "userspace模式已废弃"
            A1[Service IP] -->|iptables转发| P1[kube-proxy<br/>用户态代理]
            P1 -->|轮询| POD1[Pod]
            P1 -->|轮询| POD2[Pod]
        end
        
        subgraph iptables模式默认
            A2[Service IP] -->|iptables规则| POD3[Pod]
            A2 -->|iptables规则| POD4[Pod]
        end
        
        subgraph ipvs模式推荐
            A3[Service IP] -->|ipvs负载均衡| POD5[Pod]
            A3 -->|ipvs负载均衡| POD6[Pod]
            A3 -->|ipvs负载均衡| POD7[Pod]
        end
    end
    
    style P1 fill:#ffcdd2
    style A2 fill:#fff3e0
    style A3 fill:#c8e6c9

Chapter 10: Kubernetes Security

10.1 Security Principles
graph TB
    subgraph Security layers
        A[Transport security] -->|TLS| B[API Server HTTPS]
        C[Authentication] -->|identity| D[Certificates/tokens/OIDC]
        E[Authorization] -->|access control| F[RBAC]
        G[Admission control] -->|request validation| H[LimitRanger/ResourceQuota]
        I[Container security] -->|runtime protection| J[SecurityContext/AppArmor]
        K[Network security] -->|isolation| L[NetworkPolicy]
    end
    
    style A fill:#e3f2fd
    style C fill:#c8e6c9
    style E fill:#fff3e0
    style G fill:#fce4ec
    style I fill:#e8f5e9
    style K fill:#f3e5f5
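The diagram lists NetworkPolicy for network isolation; a minimal sketch of what such a policy looks like (the app labels are illustrative):

```yaml
# Allow ingress to app=web Pods only from app=frontend Pods on port 80.
# Once any policy selects a Pod, all other ingress to it is denied;
# enforcement requires a CNI plugin that supports NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80
```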
10.2 Securing API Access
# RBAC example: a read-only role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]  # read-only permissions

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane  # the user being granted access
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
10.3 Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: default
automountServiceAccountToken: true  # mount the token automatically

---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sa
spec:
  serviceAccountName: app-sa  # use the custom ServiceAccount
  containers:
  - name: app
    image: myapp:v1
10.4 Container Security
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000        # run as a non-root user
    runAsGroup: 3000
    fsGroup: 2000          # group ownership of mounted volumes
    seccompProfile:
      type: RuntimeDefault  # use the runtime's default seccomp profile
  
  containers:
  - name: app
    image: nginx:1.21
    securityContext:
      allowPrivilegeEscalation: false  # forbid privilege escalation
      readOnlyRootFilesystem: true     # read-only root filesystem
      capabilities:
        drop: ["ALL"]                  # drop all capabilities
        add: ["NET_BIND_SERVICE"]      # add back only what is needed
    resources:                         # sibling of securityContext, not nested inside it
      limits:
        memory: "256Mi"
        cpu: "500m"

Chapter 11: Resource Management

graph TB
    subgraph Resource management model
        A[ResourceQuota<br/>namespace-scoped] -->|caps| B[Total CPU/memory]
        A -->|caps| C[Pod count]
        A -->|caps| D[Service count]
        
        E[LimitRange<br/>defaults and bounds] -->|constrains| F[Individual Pods/containers]
        E -->|sets| G[Default requests/limits]
        
        H[Pod] -->|requests| I[Guaranteed at scheduling time]
        H -->|limits| J[Runtime ceiling]
    end
    
    style A fill:#e3f2fd
    style E fill:#c8e6c9
    style H fill:#fff3e0
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: production
spec:
  hard:
    requests.cpu: "20"        # sum of CPU requests in the namespace
    requests.memory: 100Gi    # sum of memory requests
    limits.cpu: "40"          # sum of CPU limits
    limits.memory: 200Gi      # sum of memory limits
    pods: "100"               # max Pods
    services: "10"            # max Services

---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: production
spec:
  limits:
  - default:                  # default limits (applied when unspecified)
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:           # default requests
      cpu: "100m"
      memory: "128Mi"
    max:                      # maximum allowed
      cpu: "2"
      memory: "4Gi"
    min:                      # minimum required
      cpu: "50m"
      memory: "64Mi"
    type: Container

---
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:               # scheduling basis; guaranteed resources
        memory: "128Mi"
        cpu: "100m"
      limits:                 # runtime ceiling
        memory: "256Mi"       # exceeding this gets the container OOMKilled
        cpu: "200m"           # CPU is compressible; exceeding this gets throttled

QoS classes:

| QoS class | Condition | Eviction priority | Notes |
| --- | --- | --- | --- |
| Guaranteed | Every container sets limits equal to requests | Lowest | Strongest guarantee |
| Burstable | At least one container sets requests | Medium | Most Pods fall here |
| BestEffort | No requests or limits set at all | Highest | Evicted first |
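The Guaranteed class from the table can be seen in a concrete spec: a Pod qualifies only when every container's limits equal its requests for both CPU and memory.

```yaml
# This Pod lands in the Guaranteed QoS class: limits == requests everywhere.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "250m"       # equal to the request
        memory: "256Mi"   # equal to the request
```

Dropping the requests block entirely would make it BestEffort; setting requests lower than limits would make it Burstable.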

Chapter 12: Managing and Operating Kubernetes

12.1 Daemon Pods
graph TB
    subgraph DaemonSet
        DS[DaemonSet] -->|one per node| N1[Node 1]
        DS -->|one per node| N2[Node 2]
        DS -->|one per node| N3[Node 3]
        
        N1 --> P1[Pod<br/>log collection]
        N2 --> P2[Pod<br/>log collection]
        N3 --> P3[Pod<br/>log collection]
    end
    
    style DS fill:#e3f2fd
    style P1 fill:#c8e6c9
    style P2 fill:#c8e6c9
    style P3 fill:#c8e6c9
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule  # tolerate the control-plane taint
      containers:
      - name: fluentd-elasticsearch
        image: fluentd:v1.14
        resources:
          limits:
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log  # collect host logs
12.2 Kubernetes的高可用性
graph TB
    subgraph 高可用架构
        LB[负载均衡器<br/>Keepalived/HAProxy] -->|VIP| M1[Master 1<br/>API Server]
        LB -->|VIP| M2[Master 2<br/>API Server]
        LB -->|VIP| M3[Master 3<br/>API Server]
        
        M1 -->|etcd集群| E1[etcd 1]
        M2 -->|etcd集群| E2[etcd 2]
        M3 -->|etcd集群| E3[etcd 3]
        
        M1 -->|调度/控制| N1[Node 1]
        M2 -->|调度/控制| N2[Node 2]
        M3 -->|调度/控制| N3[Node 3]
    end
    
    style LB fill:#e3f2fd
    style M1 fill:#c8e6c9
    style M2 fill:#c8e6c9
    style M3 fill:#c8e6c9
    style E1 fill:#fff3e0
    style E2 fill:#fff3e0
    style E3 fill:#fff3e0
12.3 Platform Monitoring
# Prometheus + Grafana monitoring deployment
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  selector:
    app: prometheus
  ports:
  - port: 9090

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:v2.40
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
      volumes:
      - name: config
        configMap:
          name: prometheus-config

Part 3: The Kubernetes Ecosystem

Chapter 13: CoreOS

graph TB
    subgraph CoreOS ecosystem
        A[CoreOS] -->|container-optimized OS| B[Lightweight<br/>read-only filesystem]
        B --> C[Automatic updates<br/>A/B dual partitions]
        
        D[Etcd] -->|distributed KV store| E[Kubernetes data store]
        F[Flannel] -->|overlay network| G[Pod connectivity]
        H[rkt - Rocket] -->|container runtime| I[OCI standard]
        J[Systemd] -->|system management| K[Service/container management]
        L[Fleet] -->|cluster scheduling| M[Cross-node service management]
    end
    
    style A fill:#e3f2fd
    style D fill:#c8e6c9
    style F fill:#fff3e0
    style H fill:#fce4ec

Chapter 14: Etcd

graph TB
    subgraph Etcd architecture
        A[Etcd cluster<br/>Raft protocol] -->|Leader| B[Node 1]
        A -->|Follower| C[Node 2]
        A -->|Follower| D[Node 3]
        
        B -->|log replication| C
        B -->|log replication| D
        
        E[API Server] -->|read/write| B
        E -->|read| C
        E -->|read| D
    end
    
    style A fill:#e3f2fd
    style B fill:#c8e6c9
    style C fill:#fff3e0
    style D fill:#fff3e0

Etcd core features:

| Feature | Description |
| --- | --- |
| Raft consensus | Leader election and log replication guarantee strong consistency |
| Watch | Notifies clients of data changes; the basis of Kubernetes event notification |
| TTL | Keys can expire; used for service registration and discovery |
| Transactions | Atomic multi-key operations |
| Versioning | Every key carries a version, enabling historical reads |

Chapter 15: Mesos

graph TB
    subgraph Mesos architecture
        A[Mesos Master<br/>ZooKeeper election] -->|resource allocation| B[Mesos Agent 1]
        A -->|resource allocation| C[Mesos Agent 2]
        
        D[Framework<br/>Marathon/Chronos] -->|registers with| A
        D -->|receives resource offers| B
        D -->|launches tasks| B
        
        E[Kubernetes] -->|as a framework| A
    end
    
    style A fill:#e3f2fd
    style D fill:#c8e6c9
    style E fill:#fff3e0

Summary: Kubernetes Object Relationships

graph TB
    subgraph Kubernetes object landscape
        NS[Namespace] -->|contains| DEP[Deployment]
        NS -->|contains| SVC[Service]
        NS -->|contains| CM[ConfigMap/Secret]
        
        DEP -->|manages| RS[ReplicaSet]
        RS -->|manages| POD[Pod]
        
        POD -->|mounts| PVC[PersistentVolumeClaim]
        PVC -->|binds to| PV[PersistentVolume]
        
        POD -->|uses| CM
        POD -->|uses| SEC[Secret]
        
        SVC -->|exposes| POD
        ING[Ingress] -->|routes to| SVC
        
        SA[ServiceAccount] -->|identity for| POD
        RBAC[Role/ClusterRole] -->|grants to| SA
        
        HPA[HorizontalPodAutoscaler] -->|scales| DEP
    end
    
    style NS fill:#e3f2fd
    style DEP fill:#c8e6c9
    style POD fill:#fff3e0
    style SVC fill:#fce4ec
    style PVC fill:#e8f5e9
    style ING fill:#f3e5f5
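The diagram shows an HPA scaling a Deployment; a minimal sketch of one (the target name and thresholds are illustrative, and CPU-based scaling requires the metrics-server add-on):

```yaml
# Scale the "web" Deployment between 2 and 10 replicas based on CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70% of requests
```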

kubectl quick reference:

# Basics
kubectl get <resource>                    # list resources
kubectl describe <resource> <name>        # show details
kubectl create -f <file>                  # create resources
kubectl apply -f <file>                   # apply/update configuration
kubectl delete -f <file>                  # delete resources
kubectl edit <resource> <name>            # edit a resource

# Pod operations
kubectl logs <pod> [-f]                   # view logs
kubectl exec -it <pod> -- <cmd>           # exec into a container
kubectl port-forward <pod> 8080:80        # port forwarding

# Advanced
kubectl rollout status deployment/<name>  # rolling update status
kubectl rollout history deployment/<name> # revision history
kubectl rollout undo deployment/<name>    # roll back
kubectl scale deployment <name> --replicas=5  # scale

# Debugging
kubectl get events --sort-by='.lastTimestamp'  # list events
kubectl top nodes                         # resource usage (also: kubectl top pods)
kubectl get pod <name> -o yaml            # dump a resource's config (--export was removed in 1.18)