System Design in Practice 194: Container Orchestration System (Kubernetes)

10 min read

Abstract: This article dissects the core architecture, key algorithms, and engineering practices of a container orchestration system, and provides a complete design walkthrough plus interview talking points.

Have you ever wondered how complex the technical challenges behind designing a container orchestration system really are?

1. System Overview

1.1 Business Background

A container orchestration system automates container deployment, scaling, service discovery, and load balancing. It provides declarative configuration, self-healing, and resource management, and serves as the foundational infrastructure for modern cloud-native applications.

1.2 Core Features

  • Container scheduling: Pod placement, resource allocation, affinity rules
  • Service discovery: DNS resolution, load balancing, service mesh
  • Autoscaling: HPA, VPA, and cluster autoscaling (see the HPA sketch after this list)
  • Rolling updates: zero-downtime deployments and rollback
  • Resource management: quotas and limits for CPU, memory, and storage

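To make the autoscaling bullet concrete, here is a minimal sketch of the HPA replica calculation. The formula desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric) and the 10% tolerance band follow documented HPA behavior; the function and variable names are illustrative, not Kubernetes source code.

// Minimal, self-contained HPA scaling-rule sketch (illustrative names).
package main

import (
    "fmt"
    "math"
)

// desiredReplicas implements: desired = ceil(current * currentMetric / targetMetric),
// with a tolerance band so tiny deviations do not cause replica flapping.
func desiredReplicas(current int32, currentMetric, targetMetric float64) int32 {
    ratio := currentMetric / targetMetric
    const tolerance = 0.1 // default HPA tolerance
    if math.Abs(ratio-1.0) <= tolerance {
        return current
    }
    return int32(math.Ceil(float64(current) * ratio))
}

func main() {
    // 4 replicas at 80% average CPU with a 50% target -> scale up to ceil(6.4) = 7.
    fmt.Println(desiredReplicas(4, 80, 50))
}
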
1.3 Technical Challenges

  • Scheduling: finding optimal placements across multiple resource dimensions
  • State management: consistency guarantees in a distributed system
  • Networking: cross-node container communication
  • Storage orchestration: dynamic provisioning of persistent storage
  • Security isolation: safety guarantees in multi-tenant environments

2. Architecture Design

2.1 Overall Architecture

┌─────────────────────────────────────────────────────────────┐
│        Container Orchestration System Architecture         │
├─────────────────────────────────────────────────────────────┤
│  Control Plane                                              │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐           │
│  │ API Server  │ │ Scheduler   │ │ Controller  │           │
│  │ etcd        │ │             │ │ Manager     │           │
│  └─────────────┘ └─────────────┘ └─────────────┘           │
├─────────────────────────────────────────────────────────────┤
│  Worker Nodes                                               │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐           │
│  │ Node 1      │ │ Node 2      │ │ Node N      │           │
│  │ - kubelet   │ │ - kubelet   │ │ - kubelet   │           │
│  │ - kube-proxy│ │ - kube-proxy│ │ - kube-proxy│           │
│  │ - Container │ │ - Container │ │ - Container │           │
│  │   Runtime   │ │   Runtime   │ │   Runtime   │           │
│  └─────────────┘ └─────────────┘ └─────────────┘           │
└─────────────────────────────────────────────────────────────┘
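
Everything in this diagram follows the same declarative, level-triggered pattern: components watch desired state through the API Server and continuously drive actual state toward it. A minimal sketch of that reconcile-loop idea, with illustrative types (assumes the standard library time package):

// Illustrative sketch of the level-triggered reconcile loop used throughout the system.
type DesiredState struct{ Replicas int }
type ActualState struct{ Replicas int }

type Reconciler interface {
    Observe() (DesiredState, ActualState, error) // read desired and actual state
    Act(diff int) error                          // create or delete instances to close the gap
}

func reconcileLoop(r Reconciler, stopCh <-chan struct{}) {
    for {
        select {
        case <-stopCh:
            return
        default:
        }
        desired, actual, err := r.Observe()
        if err == nil {
            if diff := desired.Replicas - actual.Replicas; diff != 0 {
                _ = r.Act(diff) // errors are simply retried on the next pass
            }
        }
        time.Sleep(time.Second) // real components react to watch events instead of polling
    }
}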

3. Core Component Design

3.1 API Server

type APIServer struct {
    storage      Storage
    authenticator Authenticator
    authorizer   Authorizer
    validator    Validator
    handlers     map[string]ResourceHandler
    watchers     *WatchManager
}

type Resource struct {
    APIVersion string
    Kind       string
    Metadata   ObjectMeta
    Spec       interface{}
    Status     interface{}
}

type ObjectMeta struct {
    Name            string
    Namespace       string
    UID             string
    ResourceVersion string
    Generation      int64
    CreationTimestamp time.Time
    Labels          map[string]string
    Annotations     map[string]string
}

func (api *APIServer) CreateResource(ctx context.Context, resource *Resource) (*Resource, error) {
    // 1. Authenticate the caller
    user, err := api.authenticator.Authenticate(ctx)
    if err != nil {
        return nil, err
    }
    
    // 2. Authorize the operation
    if !api.authorizer.Authorize(user, "create", resource) {
        return nil, ErrForbidden
    }
    
    // 3. Validate the resource
    if err := api.validator.Validate(resource); err != nil {
        return nil, err
    }
    
    // 4. Fill in object metadata
    resource.Metadata.UID = generateUID()
    resource.Metadata.CreationTimestamp = time.Now()
    resource.Metadata.ResourceVersion = "1"
    
    // 5. Persist to storage
    if err := api.storage.Create(resource); err != nil {
        return nil, err
    }
    
    // 6. Notify watchers
    api.watchers.NotifyWatchers(WatchEvent{
        Type:   EventTypeAdded,
        Object: resource,
    })
    
    return resource, nil
}

func (api *APIServer) UpdateResource(ctx context.Context, resource *Resource) (*Resource, error) {
    // Fetch the currently stored version
    current, err := api.storage.Get(resource.Metadata.Namespace, resource.Metadata.Name)
    if err != nil {
        return nil, err
    }
    
    // Optimistic concurrency: reject updates based on a stale ResourceVersion
    if resource.Metadata.ResourceVersion != current.Metadata.ResourceVersion {
        return nil, ErrConflict
    }
    
    // Bump the resource version
    newVersion, _ := strconv.ParseInt(current.Metadata.ResourceVersion, 10, 64)
    resource.Metadata.ResourceVersion = strconv.FormatInt(newVersion+1, 10)
    resource.Metadata.Generation = current.Metadata.Generation + 1
    
    // Persist the update
    if err := api.storage.Update(resource); err != nil {
        return nil, err
    }
    
    // Notify watchers
    api.watchers.NotifyWatchers(WatchEvent{
        Type:   EventTypeModified,
        Object: resource,
    })
    
    return resource, nil
}
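
The watchers field above is only referenced, never defined. Below is a hedged sketch of what a channel-based WatchManager could look like; in a real API server, watch streams are backed by etcd watches and resourceVersion bookmarks rather than this in-memory fan-out (assumes the standard library sync package).

// Hypothetical in-memory watch fan-out; illustrative only.
type WatchManager struct {
    mu       sync.RWMutex
    nextID   int
    watchers map[int]chan WatchEvent
}

func NewWatchManager() *WatchManager {
    return &WatchManager{watchers: make(map[int]chan WatchEvent)}
}

// Watch registers a subscriber and returns its event channel plus a cancel function.
func (wm *WatchManager) Watch() (<-chan WatchEvent, func()) {
    wm.mu.Lock()
    defer wm.mu.Unlock()
    id := wm.nextID
    wm.nextID++
    ch := make(chan WatchEvent, 64)
    wm.watchers[id] = ch
    cancel := func() {
        wm.mu.Lock()
        defer wm.mu.Unlock()
        if c, ok := wm.watchers[id]; ok {
            delete(wm.watchers, id)
            close(c)
        }
    }
    return ch, cancel
}

// NotifyWatchers fans an event out to all subscribers without blocking on slow ones.
func (wm *WatchManager) NotifyWatchers(event WatchEvent) {
    wm.mu.RLock()
    defer wm.mu.RUnlock()
    for _, ch := range wm.watchers {
        select {
        case ch <- event:
        default: // subscriber is too slow; it should re-list and re-watch
        }
    }
}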

3.2 Scheduler

type Scheduler struct {
    nodeCache     *NodeCache
    podQueue      *PodQueue
    algorithms    []SchedulingAlgorithm
    predicates    []PredicateFunction
    priorities    []PriorityFunction
    extenders     []SchedulerExtender
}

type Pod struct {
    ObjectMeta
    Spec   PodSpec
    Status PodStatus
}

type PodSpec struct {
    Containers       []Container
    NodeSelector     map[string]string
    Affinity         *Affinity
    Tolerations      []Toleration
    ResourceRequests ResourceRequirements
    ResourceLimits   ResourceRequirements
}

type Node struct {
    ObjectMeta
    Spec   NodeSpec
    Status NodeStatus
}

type NodeStatus struct {
    Capacity        ResourceList
    Allocatable     ResourceList
    Conditions      []NodeCondition
    AllocatedPods   []*Pod
}

func (s *Scheduler) SchedulePod(pod *Pod) (*ScheduleResult, error) {
    // 1. Filtering (predicates): remove nodes that cannot run the pod
    feasibleNodes, err := s.findFeasibleNodes(pod)
    if err != nil {
        return nil, err
    }
    
    if len(feasibleNodes) == 0 {
        return nil, ErrNoFeasibleNodes
    }
    
    // 2. Scoring (priorities): rank the feasible nodes
    prioritizedNodes, err := s.prioritizeNodes(pod, feasibleNodes)
    if err != nil {
        return nil, err
    }
    
    // 3. Pick the highest-scoring node
    selectedNode := s.selectBestNode(prioritizedNodes)
    
    // 4. Bind the pod to the selected node
    binding := &Binding{
        ObjectMeta: pod.ObjectMeta,
        Target: ObjectReference{
            Kind: "Node",
            Name: selectedNode.Name,
        },
    }
    
    return &ScheduleResult{
        SuggestedHost: selectedNode.Name,
        Binding:       binding,
    }, nil
}

func (s *Scheduler) findFeasibleNodes(pod *Pod) ([]*Node, error) {
    allNodes := s.nodeCache.GetNodes()
    var feasibleNodes []*Node
    
    for _, node := range allNodes {
        // Apply every predicate; a node must pass all of them
        feasible := true
        for _, predicate := range s.predicates {
            if !predicate(pod, node) {
                feasible = false
                break
            }
        }
        
        if feasible {
            feasibleNodes = append(feasibleNodes, node)
        }
    }
    
    return feasibleNodes, nil
}

func (s *Scheduler) prioritizeNodes(pod *Pod, nodes []*Node) ([]NodeScore, error) {
    scores := make([]NodeScore, len(nodes))
    
    for i, node := range nodes {
        totalScore := 0
        
        // Apply every priority function and sum the scores
        for _, priority := range s.priorities {
            score := priority(pod, node)
            totalScore += score
        }
        
        scores[i] = NodeScore{
            Node:  node,
            Score: totalScore,
        }
    }
    
    // Sort nodes by score, highest first
    sort.Slice(scores, func(i, j int) bool {
        return scores[i].Score > scores[j].Score
    })
    
    return scores, nil
}

// Resource-fit predicate: does the node have enough free resources for the pod?
func ResourceFitPredicate(pod *Pod, node *Node) bool {
    // Remaining resources on the node
    available := calculateAvailableResources(node)
    
    // Resources requested by the pod
    required := calculatePodResourceRequirements(pod)
    
    // CPU
    if required.CPU > available.CPU {
        return false
    }
    
    // Memory
    if required.Memory > available.Memory {
        return false
    }
    
    // Storage
    if required.Storage > available.Storage {
        return false
    }
    
    return true
}

// Node-affinity predicate
func NodeAffinityPredicate(pod *Pod, node *Node) bool {
    if pod.Spec.Affinity == nil || pod.Spec.Affinity.NodeAffinity == nil {
        return true
    }
    
    nodeAffinity := pod.Spec.Affinity.NodeAffinity
    
    // Hard requirement: the node must match at least one nodeSelectorTerm
    // (terms are ORed; expressions within a term are ANDed)
    if nodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution != nil {
        matched := false
        for _, term := range nodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms {
            if matchNodeSelectorTerm(node, term) {
                matched = true
                break
            }
        }
        if !matched {
            return false
        }
    }
    
    return true
}

// Resource-balance priority: prefer nodes whose CPU and memory usage stay balanced
func ResourceBalancePriority(pod *Pod, node *Node) int {
    available := calculateAvailableResources(node)
    capacity := node.Status.Capacity
    
    // Current utilization ratios
    cpuUsage := float64(capacity.CPU-available.CPU) / float64(capacity.CPU)
    memUsage := float64(capacity.Memory-available.Memory) / float64(capacity.Memory)
    
    // The more balanced the two ratios, the higher the score
    balance := 1.0 - math.Abs(cpuUsage-memUsage)
    
    return int(balance * 100)
}
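
Another common scoring function simply prefers nodes with more free resources. A sketch in the same style, reusing the illustrative calculateAvailableResources helper from above:

// Least-requested style priority: more free CPU and memory means a higher score (0-100).
func LeastRequestedPriority(pod *Pod, node *Node) int {
    available := calculateAvailableResources(node)
    capacity := node.Status.Capacity

    if capacity.CPU == 0 || capacity.Memory == 0 {
        return 0
    }

    cpuFree := float64(available.CPU) / float64(capacity.CPU)
    memFree := float64(available.Memory) / float64(capacity.Memory)

    return int((cpuFree + memFree) / 2 * 100)
}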

3.3 Controller Manager

type ControllerManager struct {
    controllers map[string]Controller
    workqueue   workqueue.RateLimitingInterface
    informers   map[string]cache.SharedIndexInformer
}

type Controller interface {
    Run(stopCh <-chan struct{}) error
    Reconcile(key string) error
}
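
The interface declares Run, but the wiring between informers, the workqueue, and Reconcile is not shown. A minimal sketch of that worker loop, following the client-go workqueue pattern already referenced in the struct fields above:

// Generic worker loop: drain keys from the rate-limited queue, call Reconcile,
// and requeue with backoff on failure.
func runWorker(c Controller, queue workqueue.RateLimitingInterface, stopCh <-chan struct{}) {
    go func() {
        <-stopCh
        queue.ShutDown()
    }()
    for {
        item, shutdown := queue.Get()
        if shutdown {
            return
        }
        key := item.(string)
        if err := c.Reconcile(key); err != nil {
            queue.AddRateLimited(key) // retry later with exponential backoff
        } else {
            queue.Forget(key) // success: reset the backoff counter
        }
        queue.Done(item)
    }
}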

type ReplicaSetController struct {
    client        ClientInterface
    rsInformer    cache.SharedIndexInformer
    podInformer   cache.SharedIndexInformer
    workqueue     workqueue.RateLimitingInterface
}

type ReplicaSet struct {
    ObjectMeta
    Spec   ReplicaSetSpec
    Status ReplicaSetStatus
}

type ReplicaSetSpec struct {
    Replicas int32
    Selector *LabelSelector
    Template PodTemplateSpec
}

type ReplicaSetStatus struct {
    Replicas             int32
    FullyLabeledReplicas int32
    ReadyReplicas        int32
    AvailableReplicas    int32
    ObservedGeneration   int64
}

func (rsc *ReplicaSetController) Reconcile(key string) error {
    namespace, name, err := cache.SplitMetaNamespaceKey(key)
    if err != nil {
        return err
    }
    
    // Fetch the ReplicaSet
    rs, err := rsc.getReplicaSet(namespace, name)
    if err != nil {
        return err
    }
    
    // List the pods matched by its selector
    pods, err := rsc.getPodsForReplicaSet(rs)
    if err != nil {
        return err
    }
    
    // Compare desired and current replica counts (Spec.Replicas is an int32 here, not a pointer)
    desiredReplicas := rs.Spec.Replicas
    currentReplicas := int32(len(pods))
    
    if currentReplicas < desiredReplicas {
        // Scale up: create the missing pods
        diff := desiredReplicas - currentReplicas
        return rsc.createPods(rs, int(diff))
    } else if currentReplicas > desiredReplicas {
        // Scale down: delete the surplus pods
        diff := currentReplicas - desiredReplicas
        return rsc.deletePods(rs, pods, int(diff))
    }
    
    // Replica count matches: just refresh the status
    return rsc.updateReplicaSetStatus(rs, pods)
}

func (rsc *ReplicaSetController) createPods(rs *ReplicaSet, count int) error {
    for i := 0; i < count; i++ {
        pod := rsc.createPodFromTemplate(rs)
        
        if err := rsc.client.CreatePod(pod); err != nil {
            return err
        }
    }
    return nil
}

func (rsc *ReplicaSetController) deletePods(rs *ReplicaSet, pods []*Pod, count int) error {
    // Pick victims (prefer pods that are not yet ready)
    podsToDelete := rsc.selectPodsToDelete(pods, count)
    
    for _, pod := range podsToDelete {
        if err := rsc.client.DeletePod(pod.Namespace, pod.Name); err != nil {
            return err
        }
    }
    
    return nil
}

type DeploymentController struct {
    client           ClientInterface
    deployInformer   cache.SharedIndexInformer
    rsInformer       cache.SharedIndexInformer
    workqueue        workqueue.RateLimitingInterface
}

func (dc *DeploymentController) Reconcile(key string) error {
    namespace, name, err := cache.SplitMetaNamespaceKey(key)
    if err != nil {
        return err
    }
    
    deployment, err := dc.getDeployment(namespace, name)
    if err != nil {
        return err
    }
    
    // Fetch the ReplicaSets owned by this Deployment
    replicaSets, err := dc.getReplicaSetsForDeployment(deployment)
    if err != nil {
        return err
    }
    
    // Execute the rollout strategy
    switch deployment.Spec.Strategy.Type {
    case DeploymentStrategyRecreate:
        return dc.rolloutRecreate(deployment, replicaSets)
    case DeploymentStrategyRollingUpdate:
        return dc.rolloutRollingUpdate(deployment, replicaSets)
    default:
        return fmt.Errorf("unsupported deployment strategy: %s", deployment.Spec.Strategy.Type)
    }
}

func (dc *DeploymentController) rolloutRollingUpdate(deployment *Deployment, replicaSets []*ReplicaSet) error {
    newRS := dc.findNewReplicaSet(deployment, replicaSets)
    oldRSs := dc.findOldReplicaSets(deployment, replicaSets)
    
    // Resolve the rolling-update parameters
    maxUnavailable := dc.calculateMaxUnavailable(deployment)
    maxSurge := dc.calculateMaxSurge(deployment)
    
    // Scale up the new ReplicaSet first, within the maxSurge budget
    if newRS != nil {
        scaledUp, err := dc.scaleUpNewReplicaSet(deployment, newRS, maxSurge)
        if err != nil {
            return err
        }
        
        if scaledUp {
            return nil // wait for the next reconcile pass
        }
    }
    
    // Then scale down the old ReplicaSets, within the maxUnavailable budget
    scaledDown, err := dc.scaleDownOldReplicaSets(deployment, oldRSs, maxUnavailable)
    if err != nil {
        return err
    }
    
    if scaledDown {
        return nil // wait for the next reconcile pass
    }
    
    // Finally, clean up fully scaled-down old ReplicaSets
    return dc.cleanupOldReplicaSets(deployment, oldRSs)
}
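
calculateMaxUnavailable and calculateMaxSurge are referenced but not shown. In Kubernetes both fields accept either an absolute number or a percentage of spec.replicas, with maxSurge rounding up and maxUnavailable rounding down. A hedged sketch of that resolution with an illustrative IntOrPercent type (Kubernetes itself uses intstr.IntOrString; assumes the standard library math package):

// Illustrative stand-in for intstr.IntOrString.
type IntOrPercent struct {
    IsPercent bool
    Value     int
}

// resolveRollingUpdateValue converts maxSurge / maxUnavailable into an absolute count.
func resolveRollingUpdateValue(v IntOrPercent, desiredReplicas int32, roundUp bool) int32 {
    if !v.IsPercent {
        return int32(v.Value)
    }
    scaled := float64(desiredReplicas) * float64(v.Value) / 100.0
    if roundUp {
        return int32(math.Ceil(scaled)) // maxSurge rounds up
    }
    return int32(math.Floor(scaled)) // maxUnavailable rounds down
}

For example, with 10 desired replicas and both parameters set to 25%, maxSurge resolves to 3 and maxUnavailable to 2, so the rollout may run up to 13 pods while keeping at least 8 available.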

3.4 Kubelet (Node Agent)

type Kubelet struct {
    nodeName        string
    nodeStatus      *NodeStatus
    podManager      *PodManager
    containerRuntime ContainerRuntime
    volumeManager   *VolumeManager
    networkPlugin   NetworkPlugin
    imageManager    *ImageManager
    statusManager   *StatusManager
}

type PodManager struct {
    pods          map[string]*Pod
    podWorkers    map[string]*PodWorker
    workQueue     workqueue.RateLimitingInterface
    secretManager *SecretManager
    configMapManager *ConfigMapManager
}

func (k *Kubelet) SyncPod(pod *Pod) error {
    // 1. Create the pod sandbox
    podSandbox, err := k.containerRuntime.CreatePodSandbox(pod)
    if err != nil {
        return err
    }
    
    // 2. Set up pod networking
    if err := k.networkPlugin.SetUpPod(pod.Namespace, pod.Name, podSandbox.ID); err != nil {
        return err
    }
    
    // 3. Mount volumes
    if err := k.volumeManager.MountVolumes(pod); err != nil {
        return err
    }
    
    // 4. Pull container images
    for _, container := range pod.Spec.Containers {
        if err := k.imageManager.PullImage(container.Image); err != nil {
            return err
        }
    }
    
    // 5. Create and start containers
    for _, container := range pod.Spec.Containers {
        containerConfig := k.buildContainerConfig(pod, &container)
        
        containerID, err := k.containerRuntime.CreateContainer(podSandbox.ID, containerConfig)
        if err != nil {
            return err
        }
        
        if err := k.containerRuntime.StartContainer(containerID); err != nil {
            return err
        }
    }
    
    // 6. Report pod status
    return k.statusManager.UpdatePodStatus(pod)
}

func (k *Kubelet) buildContainerConfig(pod *Pod, container *Container) *ContainerConfig {
    config := &ContainerConfig{
        Name:    container.Name,
        Image:   container.Image,
        Command: container.Command,
        Args:    container.Args,
        Env:     k.buildEnvironmentVariables(pod, container),
        Mounts:  k.buildMounts(pod, container),
        Resources: &ContainerResources{
            CPULimit:    container.Resources.Limits.CPU,
            MemoryLimit: container.Resources.Limits.Memory,
            CPURequest:  container.Resources.Requests.CPU,
            MemoryRequest: container.Resources.Requests.Memory,
        },
    }
    
    return config
}

func (k *Kubelet) buildEnvironmentVariables(pod *Pod, container *Container) []EnvVar {
    var envVars []EnvVar
    
    for _, env := range container.Env {
        switch {
        case env.Value != "":
            envVars = append(envVars, EnvVar{
                Name:  env.Name,
                Value: env.Value,
            })
        case env.ValueFrom != nil:
            value, err := k.resolveEnvVarSource(pod, env.ValueFrom)
            if err == nil {
                envVars = append(envVars, EnvVar{
                    Name:  env.Name,
                    Value: value,
                })
            }
        }
    }
    
    return envVars
}
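
resolveEnvVarSource is called above but never defined. A hedged sketch that resolves ConfigMap and Secret key references through the managers held by PodManager; the EnvVarSource fields and the managers' Get methods are assumptions modeled on Kubernetes, not definitions from this article.

// Hypothetical EnvVarSource resolution; fieldRef/resourceFieldRef are omitted for brevity.
func (k *Kubelet) resolveEnvVarSource(pod *Pod, src *EnvVarSource) (string, error) {
    switch {
    case src.ConfigMapKeyRef != nil:
        cm, err := k.podManager.configMapManager.Get(pod.Namespace, src.ConfigMapKeyRef.Name)
        if err != nil {
            return "", err
        }
        value, ok := cm.Data[src.ConfigMapKeyRef.Key]
        if !ok {
            return "", fmt.Errorf("key %q not found in configmap %q", src.ConfigMapKeyRef.Key, src.ConfigMapKeyRef.Name)
        }
        return value, nil
    case src.SecretKeyRef != nil:
        secret, err := k.podManager.secretManager.Get(pod.Namespace, src.SecretKeyRef.Name)
        if err != nil {
            return "", err
        }
        value, ok := secret.Data[src.SecretKeyRef.Key]
        if !ok {
            return "", fmt.Errorf("key %q not found in secret %q", src.SecretKeyRef.Key, src.SecretKeyRef.Name)
        }
        return string(value), nil
    default:
        return "", fmt.Errorf("unsupported env var source")
    }
}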

3.5 Service Discovery and Load Balancing

type ServiceController struct {
    client         ClientInterface
    serviceInformer cache.SharedIndexInformer
    endpointsInformer cache.SharedIndexInformer
    workqueue      workqueue.RateLimitingInterface
}

type Service struct {
    ObjectMeta
    Spec   ServiceSpec
    Status ServiceStatus
}

type ServiceSpec struct {
    Selector        map[string]string
    Ports          []ServicePort
    Type           ServiceType
    ClusterIP      string
    ExternalIPs    []string
    LoadBalancerIP string
}

type Endpoints struct {
    ObjectMeta
    Subsets []EndpointSubset
}

type EndpointSubset struct {
    Addresses         []EndpointAddress
    NotReadyAddresses []EndpointAddress
    Ports            []EndpointPort
}

func (sc *ServiceController) Reconcile(key string) error {
    namespace, name, err := cache.SplitMetaNamespaceKey(key)
    if err != nil {
        return err
    }
    
    service, err := sc.getService(namespace, name)
    if err != nil {
        return err
    }
    
    // Find the pods matched by the service selector
    pods, err := sc.getPodsForService(service)
    if err != nil {
        return err
    }
    
    // Build the Endpoints object
    endpoints := sc.buildEndpoints(service, pods)
    
    // Write the Endpoints back through the API
    return sc.updateEndpoints(endpoints)
}

func (sc *ServiceController) buildEndpoints(service *Service, pods []*Pod) *Endpoints {
    endpoints := &Endpoints{
        ObjectMeta: ObjectMeta{
            Name:      service.Name,
            Namespace: service.Namespace,
        },
    }
    
    var readyAddresses []EndpointAddress
    var notReadyAddresses []EndpointAddress
    
    for _, pod := range pods {
        address := EndpointAddress{
            IP: pod.Status.PodIP,
            TargetRef: &ObjectReference{
                Kind:      "Pod",
                Namespace: pod.Namespace,
                Name:      pod.Name,
                UID:       pod.UID,
            },
        }
        
        if sc.isPodReady(pod) {
            readyAddresses = append(readyAddresses, address)
        } else {
            notReadyAddresses = append(notReadyAddresses, address)
        }
    }
    
    if len(readyAddresses) > 0 || len(notReadyAddresses) > 0 {
        subset := EndpointSubset{
            Addresses:         readyAddresses,
            NotReadyAddresses: notReadyAddresses,
            Ports:            sc.buildEndpointPorts(service.Spec.Ports),
        }
        endpoints.Subsets = append(endpoints.Subsets, subset)
    }
    
    return endpoints
}
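
isPodReady is used above to split ready from not-ready addresses. A minimal sketch based on the pod's Ready condition; the PodStatus fields here are assumed to mirror Kubernetes conventions.

// A pod backs a ready endpoint only if it has an IP and its Ready condition is True.
func (sc *ServiceController) isPodReady(pod *Pod) bool {
    if pod.Status.PodIP == "" {
        return false
    }
    for _, cond := range pod.Status.Conditions {
        if cond.Type == "Ready" {
            return cond.Status == "True"
        }
    }
    return false
}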

type KubeProxy struct {
    serviceInformer   cache.SharedIndexInformer
    endpointsInformer cache.SharedIndexInformer
    iptablesManager   *IPTablesManager
    ipvsManager       *IPVSManager
    proxyMode         ProxyMode
}

func (kp *KubeProxy) syncService(service *Service) error {
    switch kp.proxyMode {
    case ProxyModeIPTables:
        return kp.syncServiceIPTables(service)
    case ProxyModeIPVS:
        return kp.syncServiceIPVS(service)
    default:
        return fmt.Errorf("unsupported proxy mode: %s", kp.proxyMode)
    }
}

func (kp *KubeProxy) syncServiceIPTables(service *Service) error {
    // Fetch the service's Endpoints
    endpoints, err := kp.getEndpointsForService(service)
    if err != nil {
        return err
    }
    
    // Generate the iptables rules
    rules := kp.generateIPTablesRules(service, endpoints)
    
    // Apply the rules
    return kp.iptablesManager.ApplyRules(rules)
}

func (kp *KubeProxy) generateIPTablesRules(service *Service, endpoints *Endpoints) []IPTablesRule {
    var rules []IPTablesRule
    
    for _, port := range service.Spec.Ports {
        // Per-service chain
        serviceChain := fmt.Sprintf("KUBE-SVC-%s", generateChainName(service, port))
        
        // Entry rule: send ClusterIP traffic for this port to the service chain
        mainRule := IPTablesRule{
            Table: "nat",
            Chain: "KUBE-SERVICES",
            Rule: fmt.Sprintf("-d %s/32 -p %s -m %s --dport %d -j %s",
                service.Spec.ClusterIP, port.Protocol, port.Protocol, port.Port, serviceChain),
        }
        rules = append(rules, mainRule)
        
        // Per-endpoint chains
        for i, subset := range endpoints.Subsets {
            for j, address := range subset.Addresses {
                endpointChain := fmt.Sprintf("KUBE-SEP-%s", generateEndpointChainName(service, port, i, j))
                
                // Load-balancing rule: rules are evaluated in order, so giving the
                // j-th endpoint probability 1/(n-j) yields an even split and the
                // final rule always matches
                probability := 1.0 / float64(len(subset.Addresses)-j)
                lbRule := IPTablesRule{
                    Table: "nat",
                    Chain: serviceChain,
                    Rule: fmt.Sprintf("-m statistic --mode random --probability %.6f -j %s",
                        probability, endpointChain),
                }
                rules = append(rules, lbRule)
                
                // DNAT to the endpoint
                dnatRule := IPTablesRule{
                    Table: "nat",
                    Chain: endpointChain,
                    Rule: fmt.Sprintf("-p %s -j DNAT --to-destination %s:%d",
                        port.Protocol, address.IP, port.TargetPort),
                }
                rules = append(rules, dnatRule)
            }
        }
    }
    
    return rules
}
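
The IPVS path dispatched earlier (syncServiceIPVS) is not shown. Below is a hedged sketch of what it could look like: one virtual server per (ClusterIP, port) and one real server per ready endpoint. The EnsureVirtualServer / EnsureRealServer methods on IPVSManager are assumptions for illustration, not a real library API.

// Hypothetical IPVS sync; "rr" selects round-robin scheduling, wrr/lc are common alternatives.
func (kp *KubeProxy) syncServiceIPVS(service *Service) error {
    endpoints, err := kp.getEndpointsForService(service)
    if err != nil {
        return err
    }

    for _, port := range service.Spec.Ports {
        // One IPVS virtual server per service port.
        vs, err := kp.ipvsManager.EnsureVirtualServer(service.Spec.ClusterIP, port.Port, port.Protocol, "rr")
        if err != nil {
            return err
        }

        // One real server per ready endpoint address.
        for _, subset := range endpoints.Subsets {
            for _, address := range subset.Addresses {
                if err := kp.ipvsManager.EnsureRealServer(vs, address.IP, port.TargetPort, 1); err != nil {
                    return err
                }
            }
        }
    }
    return nil
}

Compared with the iptables mode above, IPVS keeps an in-kernel hash table of endpoints, so rule count and sync time stay roughly constant as the number of services and endpoints grows.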

Through its declarative API, intelligent scheduling, and automated management, a container orchestration system provides a powerful infrastructure platform for cloud-native applications.


🎯 Scenario

You open an app backed by a container orchestration platform. Behind this seemingly simple action, the system faces three core challenges:

  • Challenge 1 (high concurrency): how do you keep latency low at millions of QPS?
  • Challenge 2 (high availability): how do you keep the service up when nodes fail?
  • Challenge 3 (data consistency): how do you keep data correct in a distributed environment?

📈 Capacity Estimation

Assume 10 million DAU and 50 requests per user per day.

| Metric | Value |
| --- | --- |
| Total data volume | 10 TB+ |
| Daily write volume | ~100 GB |
| Write TPS | ~50K/s |
| Read QPS | ~200K/s |
| P99 read latency | < 10 ms |
| Node count | 10-50 |
| Replication factor | 3 |
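
A quick sanity check on these figures: 10 million DAU × 50 requests per user per day ≈ 500 million requests/day, which is only about 5,800 QPS on average (÷ 86,400 s). The ~200K read QPS and ~50K write TPS above should therefore be read as peak design targets that also allow for internal read amplification (watch streams, controller re-lists, per-request fan-out), not as daily averages.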

❓ Frequently Asked Interview Questions

Q1: What are the core design principles of a container orchestration system?

See the architecture design section above. The core principles are high availability (automatic failure recovery), high performance (low latency and high throughput), scalability (horizontal scaling), and consistency (data correctness guarantees). In an interview, expand on each with concrete scenarios.

Q2: What are the main challenges for a container orchestration system at large scale?

  1) Performance bottlenecks: a single node cannot keep up as data and request volumes grow; 2) consistency: guaranteeing data correctness in a distributed environment; 3) failure recovery: automatic failover and data recovery when nodes fail; 4) operational complexity: cluster management, monitoring, and upgrades.

Q3: How do you ensure high availability for a container orchestration system?

  1) Multi-replica redundancy (at least 3 replicas); 2) automatic failure detection and failover (heartbeats + leader election; see the sketch below); 3) data persistence and backups; 4) rate limiting and graceful degradation (to prevent cascading failures); 5) multi-datacenter / active-active deployment.
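
For the "heartbeats + leader election" point, here is a minimal lease-based election sketch. The LeaseStore interface is an assumption for illustration; Kubernetes components typically do this with client-go's leaderelection package on top of a coordination.k8s.io Lease object (assumes the standard library time package).

// Hypothetical lease-based leader election.
type LeaseStore interface {
    // TryAcquire atomically takes or renews the lease when it is free, expired,
    // or already held by id; it returns true on success.
    TryAcquire(id string, ttl time.Duration) (bool, error)
}

func runAsLeader(id string, store LeaseStore, ttl time.Duration, lead func(stop <-chan struct{})) {
    for {
        ok, err := store.TryAcquire(id, ttl)
        if err != nil || !ok {
            time.Sleep(ttl / 3) // not the leader: keep standing by
            continue
        }
        stop := make(chan struct{})
        go lead(stop) // run the controllers only while holding the lease
        for {
            time.Sleep(ttl / 3)
            if ok, err := store.TryAcquire(id, ttl); err != nil || !ok {
                close(stop) // lost the lease: stop leading immediately
                break
            }
        }
    }
}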

Q4: What are the key levers for optimizing the performance of a container orchestration system?

  1) Caching (avoid repeated computation and IO); 2) asynchronous processing (move non-critical work off the request path); 3) batching (fewer network round trips); 4) data sharding (parallel processing); 5) connection pooling and reuse.

Q5: How does a container orchestration system compare with alternative approaches?

See the option comparison table below. When choosing, consider the team's technology stack, data scale, latency requirements, consistency needs, and operational cost. There is no silver bullet; weigh the trade-offs against your business scenario.



| Option | Implementation complexity | Cost | Applicability |
| --- | --- | --- | --- |
| Option 1 | Simple | Low | Suited to small scale |
| Option 2 | Moderate | Medium | Suited to medium scale |
| Option 3 | High ⭐ recommended | High | Suited to large-scale production |

🚀 Architecture Evolution Path

Stage 1: Single-node MVP (< 100K users)

  • Monolithic application + single database; prioritize validating functionality
  • Applicable scenario: early product validation and rapid iteration

Stage 2: Basic distributed setup (100K - 1M users)

  • Horizontally scale the application tier + primary/replica database split
  • Introduce Redis to cache hot data and relieve database pressure
  • Applicable scenario: business growth phase

Stage 3: Production-grade high availability (> 1M users)

  • Split into microservices with independent deployment and scaling
  • Database sharding + message queues for decoupling
  • Multi-datacenter deployment with cross-region disaster recovery
  • End-to-end monitoring + automated operations

✅ Architecture Design Checklist

| Check item | Status |
| --- | --- |
| Caching strategy | |
| Distributed architecture | |
| Data consistency | |
| Security design | |
| Horizontal scaling | |

📡 Monitoring and Observability

Key Metrics

| Metric | Alert threshold | Notes |
| --- | --- | --- |
| P99 latency | > 500 ms → Warning | Response time of core APIs |
| Error rate | > 1% → Critical | Share of 5xx errors |
| QPS | > 80% of capacity → Warning | Early warning for scale-out |

The Three Pillars of Observability

  • Metrics: Prometheus + Grafana for system and business metrics
  • Logging: centralized logging with ELK; structured logs for easy search
  • Tracing: distributed tracing with Jaeger to locate cross-service bottlenecks