Overview
Manual Horizontal Pod Scaling
The ReplicaSet controller keeps the specified number of Pod replicas running at all times. That replica count is declared in a workload resource object such as a Deployment.
The Pod replica count defined in a Deployment's .spec.replicas field is usually a static value. If the application needs more capacity, we can scale it by manually issuing commands to Kubernetes. Based on how the business changes, we can prepare ahead of time, e.g., scale the application out before a foreseeable e-commerce promotion and scale it back in after the promotion ends. Manual scaling requires humans to observe and predict the application's behavior and then decide how far to scale; it is not suited to dynamic workload patterns that change frequently and need continuous adaptation.
Imperative manual scaling:
kubectl scale deployment/my-app --replicas=4
Horizontal Pod Autoscaling
Kubernetes provides Horizontal Pod Autoscaling (HorizontalPodAutoscaler, HPA for short), which lets us define a variable, dynamic application capacity. The capacity is not fixed; it may grow or shrink, but Kubernetes ensures there is enough of it to handle the varying load.
The following example triggers horizontal scale-out when the Pods' average CPU utilization reaches 50%, while keeping the number of running Pods at no more than 10.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef: ①
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1 ②
  maxReplicas: 10 ③
  metrics:
  - type: Resource ④
    resource:
      name: cpu ⑤
      target:
        type: Utilization
        averageUtilization: 50
① A reference to the Deployment associated with this HPA.
② The minimum number of running Pod replicas.
③ The maximum number of running Pod replicas.
④ Sets the metric source type to Resource; CPU utilization, for example, is a Resource-type metric. Other available types include ContainerResource, Pods, External, and Object; see the official Kubernetes documentation or the source code for details.
⑤ Declares the target CPU utilization, i.e., the used CPU as a percentage of the requested CPU. If a Pod's .spec.resources.requests.cpu is 200m, then an average CPU usage above 100m, i.e., over 50%, triggers a horizontal scale-out.[1]
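For a quick experiment, a roughly equivalent HPA can also be created imperatively (assuming the target is the Deployment my-app):
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10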
The above is only a basic introduction to the HPA; how to use it is not the focus of this article. This article is an analysis of the HPA controller's source code. It is a fairly long read, so thank you for your patience.
The HPA is a Kubernetes resource, and every resource has a corresponding controller. A controller keeps a resource's actual state in line with the desired state declared in the resource object. If an error occurs along the way, the controller re-adds the work item to the work queue and retries until the states converge. This process is called reconciliation (reconcile), and the HPA's reconciliation logic is the core of this series.
Note: due to space constraints and for readability, parts of the quoted Kubernetes source code have been omitted. The omissions do not affect the understanding of the main logic, and comments mark where code was left out. Each code excerpt also gives its relative path within the Kubernetes project. This article is based on the release-1.24 branch.
MetricsClient
The HPA control logic is built on top of monitoring metrics, so a metrics client is needed to fetch monitoring data for the various metric types, such as the Pods' cpu and memory usage ratios.
MetricsClient interface definition
Code path: pkg/controller/podautoscaler/metrics/interfaces.go
// MetricsClient knows how to query a remote interface to retrieve container-level
// resource metrics as well as pod-level arbitrary metrics
type MetricsClient interface {
// GetResourceMetric gets the given resource metric (and an associated oldest timestamp)
// for the specified named container in all pods matching the specified selector in the given namespace and when
// the container is an empty string it returns the sum of all the container metrics.
GetResourceMetric(ctx context.Context, resource v1.ResourceName, namespace string, selector labels.Selector, container string) (PodMetricsInfo, time.Time, error)
// GetRawMetric gets the given metric (and an associated oldest timestamp)
// for all pods matching the specified selector in the given namespace
GetRawMetric(metricName string, namespace string, selector labels.Selector, metricSelector labels.Selector) (PodMetricsInfo, time.Time, error)
// GetObjectMetric gets the given metric (and an associated timestamp) for the given
// object in the given namespace
GetObjectMetric(metricName string, namespace string, objectRef *autoscaling.CrossVersionObjectReference, metricSelector labels.Selector) (int64, time.Time, error)
// GetExternalMetric gets all the values of a given external metric
// that match the specified selector.
GetExternalMetric(metricName string, namespace string, selector labels.Selector) ([]int64, time.Time, error)
}
Broadly, the interface declares the following capabilities:
- fetching Resource- and Pods-related metrics
- fetching metrics for objects inside the cluster
- fetching metrics from outside the cluster
The comments in the source already describe this in detail, so there is no need to repeat them here.
Creating the metrics client
The MetricsClient implementation is provided by three concrete metrics clients. They map to the capabilities described by the MetricsClient interface, each with a single, well-defined responsibility.
① MetricsV1beta1Client integrates with the API group metrics.k8s.io and fetches the cluster's built-in metrics, such as Pod resource metrics.
② CustomMetricsClient, the custom metrics client.
③ ExternalMetricsClient, the client for metrics external to the cluster.
Code path: cmd/kube-controller-manager/app/autoscaling.go
func startHPAControllerWithRESTClient(ctx context.Context, controllerContext ControllerContext) (controller.Interface, bool, error) {
clientConfig := controllerContext.ClientBuilder.ConfigOrDie("horizontal-pod-autoscaler")
hpaClient := controllerContext.ClientBuilder.ClientOrDie("horizontal-pod-autoscaler")
apiVersionsGetter := custom_metrics.NewAvailableAPIsGetter(hpaClient.Discovery())
// ...some code omitted
// create the metrics client
metricsClient := metrics.NewRESTMetricsClient(
// ① client for the cluster's built-in resource metrics
resourceclient.NewForConfigOrDie(clientConfig),
// ② custom metrics client
custom_metrics.NewForConfig(clientConfig, controllerContext.RESTMapper, apiVersionsGetter),
// ③ external metrics client
external_metrics.NewForConfigOrDie(clientConfig),
)
// ...
}
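As a rough illustration of how this interface is consumed, here is a minimal, hypothetical sketch (the helper sumPodCPU and its use are made up, and it assumes code living inside the Kubernetes tree, since pkg/controller/podautoscaler/metrics is an internal package). It relies only on the GetResourceMetric signature shown above and on PodMetricsInfo being a map keyed by pod name whose entries carry a Value in milli-units, as the GetResourceReplicas listing later in the article also shows:

package podautoscaler

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"

	"k8s.io/kubernetes/pkg/controller/podautoscaler/metrics"
)

// sumPodCPU adds up the sampled CPU usage (milli-cores) of every pod matched by the selector.
func sumPodCPU(ctx context.Context, mc metrics.MetricsClient, namespace string, selector labels.Selector) (int64, error) {
	// container == "" means: sum the metrics of all containers in each pod
	podMetrics, _, err := mc.GetResourceMetric(ctx, v1.ResourceCPU, namespace, selector, "")
	if err != nil {
		return 0, err
	}
	var total int64
	for _, m := range podMetrics {
		total += m.Value // the pod's sampled usage in milli-units
	}
	return total, nil
}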
HorizontalController
The HPA controller drives the Pod horizontal autoscaling logic.
Creating the HPA controller
A HorizontalController is created and its Run function is started. The HPA controller depends on several APIs and components to do its work.
① The Scale client, used to read and update the scales of objects that implement the scale subresource; a scale can be thought of as the replica count the workload's Pods should be scaled to.
The Scale type is defined in staging/src/k8s.io/api/autoscaling/v1/types.go
// Scale represents a scaling request for a resource.
type Scale struct {
metav1.TypeMeta
metav1.ObjectMeta
Spec ScaleSpec
Status ScaleStatus
}
type ScaleSpec struct {
// desired number of instances for the scaled object.
Replicas int32
}
② HorizontalPodAutoscalersGetter, which exposes the Create, Update, Delete, Get, List, Watch, and related methods for HPA objects.
③ The metrics client, already covered in detail above.
④ The HPA informer, which watches add, update, and delete events on HPA objects and maintains an in-memory cache for lookups.
⑤ The Pod informer, which watches add, update, and delete events on Pod objects and maintains an in-memory cache for lookups.
Code path: cmd/kube-controller-manager/app/autoscaling.go
func startHPAControllerWithMetricsClient(ctx context.Context, controllerContext ControllerContext, metricsClient metrics.MetricsClient) (controller.Interface, bool, error) {
hpaClient := controllerContext.ClientBuilder.ClientOrDie("horizontal-pod-autoscaler")
hpaClientConfig := controllerContext.ClientBuilder.ConfigOrDie("horizontal-pod-autoscaler")
scaleKindResolver := scale.NewDiscoveryScaleKindResolver(hpaClient.Discovery())
scaleClient, err := scale.NewForConfig(hpaClientConfig, controllerContext.RESTMapper, dynamic.LegacyAPIPathResolverFunc, scaleKindResolver)
// ...some code omitted
go podautoscaler.NewHorizontalController(
// v1.CoreV1Interface
// see the "k8s.io/client-go/kubernetes/typed/core/v1" package
hpaClient.CoreV1(),
// ① Scale client
scaleClient,
// ② HorizontalPodAutoscalersGetter
hpaClient.AutoscalingV2(),
// RESTMapper, used to resolve mappings between GroupVersionResource and GroupVersionKind
controllerContext.RESTMapper,
// ③ metrics client
metricsClient,
// ④ HPA informer
controllerContext.InformerFactory.Autoscaling().V2().HorizontalPodAutoscalers(),
// ⑤ Pod informer
controllerContext.InformerFactory.Core().V1().Pods(),
// ...
).Run(ctx)
return nil, true, nil
}
⑥ The replica calculator, which plays a very important role: from the collected metrics and the target resource utilization declared in the HPA, it runs a series of calculations to derive the Pods' scale, which drives the scaling decision.
Code path: pkg/controller/podautoscaler/horizontal.go
// note: parameter definitions omitted for brevity
// NewHorizontalController creates a new HorizontalController.
func NewHorizontalController() *HorizontalController {
hpaController := &HorizontalController{}
// ...some code omitted
// ⑥ replica calculator
replicaCalc := NewReplicaCalculator(
metricsClient,
hpaController.podLister,
tolerance,
cpuInitializationPeriod,
delayOfInitialReadinessStatus,
)
hpaController.replicaCalc = replicaCalc
return hpaController
}
Starting the HPA controller
Start the HPA controller, which watches HPA object change events and syncs them.
Code path: pkg/controller/podautoscaler/horizontal.go
// Run begins watching and syncing.
func (a *HorizontalController) Run(ctx context.Context) {
defer utilruntime.HandleCrash()
defer a.queue.ShutDown()
klog.Infof("Starting HPA controller")
defer klog.Infof("Shutting down HPA controller")
if !cache.WaitForNamedCacheSync("HPA", ctx.Done(), a.hpaListerSynced, a.podListerSynced) {
return
}
// start a single worker (we may wish to start more in the future)
go wait.UntilWithContext(ctx, a.worker, time.Second)
<-ctx.Done()
}
The worker function
It keeps pulling items off the work queue and processing them until the queue is shut down.
func (a *HorizontalController) worker(ctx context.Context) {
// keep pulling items off the work queue and processing them
for a.processNextWorkItem(ctx) {
}
klog.Infof("horizontal pod autoscaler controller worker shutting down")
}
func (a *HorizontalController) processNextWorkItem(ctx context.Context) bool {
key, quit := a.queue.Get()
// the work queue has been shut down; end the for loop
if quit {
return false
}
defer a.queue.Done(key)
deleted, err := a.reconcileKey(ctx, key.(string))
if err != nil {
utilruntime.HandleError(err)
}
if !deleted {
a.queue.AddRateLimited(key)
}
return true
}
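Note that processNextWorkItem re-adds the key via AddRateLimited whenever the HPA still exists, and the queue's rate limiter is derived from the controller's sync period (the --horizontal-pod-autoscaler-sync-period flag, 15s by default), so each HPA object is, in effect, reconciled on a roughly fixed interval rather than only when it changes.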
The reconcileKey function
Each item in the work queue is a string key made up of an HPA object's namespace and name (for example, "default/my-app"), identifying a specific HPA object in the cluster.
① Split the key into namespace and name.
② Fetch the corresponding HPA object from the cluster.
③ If the HPA object has been deleted, clean up its associated data and release resources.
④ Reconcile the HPA object.
func (a *HorizontalController) reconcileKey(ctx context.Context, key string) (deleted bool, err error) {
// ① split the key into namespace and name
namespace, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
return true, err
}
// ② fetch the corresponding HPA object from the cluster
hpa, err := a.hpaLister.HorizontalPodAutoscalers(namespace).Get(name)
// ③ if the HPA object has been deleted, clean up its associated data and release resources
if errors.IsNotFound(err) {
klog.Infof("Horizontal Pod Autoscaler %s has been deleted in %s", name, namespace)
delete(a.recommendations, key)
delete(a.scaleUpEvents, key)
delete(a.scaleDownEvents, key)
return true, nil
}
if err != nil {
return false, err
}
// ④ reconcile the HPA object
return false, a.reconcileAutoscaler(ctx, hpa, key)
}
The reconcileAutoscaler function
This function lays out the overall HPA reconciliation logic, i.e., the main body of the autoscaling implementation. It breaks down as follows:
① Get the current scale of the HPA's target, i.e., the current pod replica count.
For the HPA YAML example above, this means fetching the Deployment named my-app in the same namespace as the HPA.
② Boundary handling.
- If the target's current pod replica count is 0, autoscaling is disabled for that target.
- If the target's current replica count is below the HPA's minimum, adjust it up to the HPA minimum.
- If the target's current replica count is above the HPA's maximum, adjust it down to the HPA maximum.
③ Compute the adjusted scale, i.e., the desired pod replica count, from the metrics.
④ Further adjust the desired replica count from the previous step to determine the final desired replicas.
The example above does not configure the .spec.behavior field, so the default rules (HPAScalingRules) are applied to the target's scaling; this is covered later in the article.
⑤ Update the target object's scale.
⑥ Update the HPA object's status.
func (a *HorizontalController) reconcileAutoscaler(ctx context.Context, hpaShared *autoscalingv2.HorizontalPodAutoscaler, key string) error {
// make a copy so that we never mutate the shared informer cache (conversion can mutate the object)
hpa := hpaShared.DeepCopy()
hpaStatusOriginal := hpa.Status.DeepCopy()
// reference to the target object associated with the HPA
reference := fmt.Sprintf("%s/%s/%s", hpa.Spec.ScaleTargetRef.Kind, hpa.Namespace, hpa.Spec.ScaleTargetRef.Name)
targetGV, err := schema.ParseGroupVersion(hpa.Spec.ScaleTargetRef.APIVersion)
// ...error handling omitted
targetGK := schema.GroupKind{
Group: targetGV.Group,
Kind: hpa.Spec.ScaleTargetRef.Kind,
}
// resolve the mappings between GroupVersionResource and GroupVersionKind
mappings, err := a.mapper.RESTMappings(targetGK)
// ...
// ① get the current scale of the HPA's target
scale, targetGR, err := a.scaleForResourceMappings(ctx, hpa.Namespace, hpa.Spec.ScaleTargetRef.Name, mappings)
// ...
setCondition(hpa, autoscalingv2.AbleToScale, v1.ConditionTrue, "SucceededGetScale", "the HPA controller was able to get the target's current scale")
currentReplicas := scale.Spec.Replicas
a.recordInitialRecommendation(currentReplicas, key)
var (
metricStatuses []autoscalingv2.MetricStatus
metricDesiredReplicas int32
metricName string
)
desiredReplicas := int32(0)
rescaleReason := ""
var minReplicas int32
if hpa.Spec.MinReplicas != nil {
minReplicas = *hpa.Spec.MinReplicas
} else {
minReplicas = 1
}
rescale := true
if scale.Spec.Replicas == 0 && minReplicas != 0 { // ② boundary handling
// Autoscaling is disabled for this resource
desiredReplicas = 0
rescale = false
setCondition(hpa, autoscalingv2.ScalingActive, v1.ConditionFalse, "ScalingDisabled", "scaling is disabled since the replica count of the target is zero")
} else if currentReplicas > hpa.Spec.MaxReplicas { // ② boundary handling
rescaleReason = "Current number of replicas above Spec.MaxReplicas"
desiredReplicas = hpa.Spec.MaxReplicas
} else if currentReplicas < minReplicas { // ② boundary handling
rescaleReason = "Current number of replicas below Spec.MinReplicas"
desiredReplicas = minReplicas
} else {
var metricTimestamp time.Time
// ③ compute the adjusted scale
metricDesiredReplicas, metricName, metricStatuses, metricTimestamp, err = a.computeReplicasForMetrics(ctx, hpa, scale, hpa.Spec.Metrics)
if err != nil {
a.setCurrentReplicasInStatus(hpa, currentReplicas)
if err := a.updateStatusIfNeeded(ctx, hpaStatusOriginal, hpa); err != nil {
utilruntime.HandleError(err)
}
a.eventRecorder.Event(hpa, v1.EventTypeWarning, "FailedComputeMetricsReplicas", err.Error())
return fmt.Errorf("failed to compute desired number of replicas based on listed metrics for %s: %v", reference, err)
}
klog.V(4).Infof("proposing %v desired replicas (based on %s from %s) for %s", metricDesiredReplicas, metricName, metricTimestamp, reference)
rescaleMetric := ""
if metricDesiredReplicas > desiredReplicas {
desiredReplicas = metricDesiredReplicas
rescaleMetric = metricName
}
if desiredReplicas > currentReplicas {
rescaleReason = fmt.Sprintf("%s above target", rescaleMetric)
}
if desiredReplicas < currentReplicas {
rescaleReason = "All metrics below target"
}
if hpa.Spec.Behavior == nil {
// ④ further adjust the desired replica count computed above to determine the final desired replicas
desiredReplicas = a.normalizeDesiredReplicas(hpa, key, currentReplicas, desiredReplicas, minReplicas)
} else {
desiredReplicas = a.normalizeDesiredReplicasWithBehaviors(hpa, key, currentReplicas, desiredReplicas, minReplicas)
}
rescale = desiredReplicas != currentReplicas
}
// ⑤ update the target object's scale
if rescale {
scale.Spec.Replicas = desiredReplicas
_, err = a.scaleNamespacer.Scales(hpa.Namespace).Update(ctx, targetGR, scale, metav1.UpdateOptions{})
// ...some code omitted
} else {
klog.V(4).Infof("decided not to scale %s to %v (last scale time was %s)", reference, desiredReplicas, hpa.Status.LastScaleTime)
desiredReplicas = currentReplicas
}
// ⑥ update the HPA object's status
a.setStatus(hpa, currentReplicas, desiredReplicas, metricStatuses, rescale)
return a.updateStatusIfNeeded(ctx, hpaStatusOriginal, hpa)
}
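Step ⑤ goes through the scale client created earlier (the scaleNamespacer field). As a standalone, hedged sketch of that mechanism, the snippet below reads and rewrites a Deployment's scale subresource directly via k8s.io/client-go/scale; the function name resizeDeployment is made up, and the ScalesGetter is assumed to be constructed the same way as in startHPAControllerWithMetricsClient.

package demo

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// resizeDeployment shows what the controller's step ⑤ boils down to:
// read the target's scale subresource and write back a new replica count.
func resizeDeployment(ctx context.Context, scales scale.ScalesGetter, namespace, name string, replicas int32) error {
	gr := schema.GroupResource{Group: "apps", Resource: "deployments"}
	// read the Deployment's current scale
	s, err := scales.Scales(namespace).Get(ctx, gr, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("current replicas of %s/%s: %d\n", namespace, name, s.Spec.Replicas)
	// write the desired replica count back through the scale subresource
	s.Spec.Replicas = replicas
	_, err = scales.Scales(namespace).Update(ctx, gr, s, metav1.UpdateOptions{})
	return err
}

Because the scale subresource is generic, the same call works for any scalable workload (Deployment, StatefulSet, a CRD with a scale subresource, and so on), which is exactly why the HPA controller only needs this one client to resize arbitrary targets.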
The computeReplicasForMetrics function
Computes the desired pod replica count from the metrics. An HPA object may track several metrics, and each of them has to be evaluated to assess the pod capacity.
① Parse the selector field stored in the scale object's status into a Selector from the k8s.io/apimachinery/pkg/labels package, i.e., a label selector used to pick out the pods whose metrics this HPA cares about.
② Iterate over each metric spec defined in the HPA object:
- select the target pod list with the selector
- fetch that metric's values for the target pods, work them against the metric spec to obtain a scaling ratio, and compute the replica count this metric calls for
- return the maximum replica count produced across the loop
③ If every metric defined in the HPA fails to compute, or some metrics fail and the remaining ones would lead to a scale-down, the error of the first failed metric is returned and this reconciliation leaves the capacity untouched. Otherwise, scaling proceeds.
// computeReplicasForMetrics computes the desired number of replicas for the metric specifications listed in the HPA,
// returning the maximum of the computed replica counts, a description of the associated metric, and the statuses of
// all metrics computed.
func (a *HorizontalController) computeReplicasForMetrics(ctx context.Context, hpa *autoscalingv2.HorizontalPodAutoscaler, scale *autoscalingv1.Scale,
metricSpecs []autoscalingv2.MetricSpec) (replicas int32, metric string, statuses []autoscalingv2.MetricStatus, timestamp time.Time, err error) {
if scale.Status.Selector == "" {
errMsg := "selector is required"
a.eventRecorder.Event(hpa, v1.EventTypeWarning, "SelectorRequired", errMsg)
setCondition(hpa, autoscalingv2.ScalingActive, v1.ConditionFalse, "InvalidSelector", "the HPA target's scale is missing a selector")
return 0, "", nil, time.Time{}, fmt.Errorf(errMsg)
}
// ① the label selector used to filter pods
selector, err := labels.Parse(scale.Status.Selector)
// ...error handling omitted
specReplicas := scale.Spec.Replicas
statusReplicas := scale.Status.Replicas
statuses = make([]autoscalingv2.MetricStatus, len(metricSpecs))
invalidMetricsCount := 0
var invalidMetricError error
var invalidMetricCondition autoscalingv2.HorizontalPodAutoscalerCondition
// ② iterate over each metric spec defined in the HPA object
for i, metricSpec := range metricSpecs {
replicaCountProposal, metricNameProposal, timestampProposal, condition, err := a.computeReplicasForMetric(ctx, hpa, metricSpec, specReplicas, statusReplicas, selector, &statuses[i])
if err != nil {
if invalidMetricsCount <= 0 {
invalidMetricCondition = condition
invalidMetricError = err
}
invalidMetricsCount++
}
// keep the maximum replica count proposed across the loop
if err == nil && (replicas == 0 || replicaCountProposal > replicas) {
timestamp = timestampProposal
replicas = replicaCountProposal
metric = metricNameProposal
}
}
// If all metrics are invalid or some are invalid and we would scale down,
// return an error and set the condition of the hpa based on the first invalid metric.
// Otherwise set the condition as scaling active as we're going to scale
if invalidMetricsCount >= len(metricSpecs) || (invalidMetricsCount > 0 && replicas < specReplicas) {
setCondition(hpa, invalidMetricCondition.Type, invalidMetricCondition.Status, invalidMetricCondition.Reason, invalidMetricCondition.Message)
return 0, "", statuses, time.Time{}, fmt.Errorf("invalid metrics (%v invalid out of %v), first error is: %v", invalidMetricsCount, len(metricSpecs), invalidMetricError)
}
setCondition(hpa, autoscalingv2.ScalingActive, v1.ConditionTrue, "ValidMetricFound", "the HPA was able to successfully calculate a replica count from %s", metric)
return replicas, metric, statuses, timestamp, nil
}
The computeReplicasForMetric function
Computes the desired pod replica count for a single metric, dispatching on the metric's type. The focus here is the Resource type; ContainerResource follows much the same logic, except that one looks at pod-level resource usage while the other looks at the usage of a single container within each pod.
func (a *HorizontalController) computeReplicasForMetric(ctx context.Context, hpa *autoscalingv2.HorizontalPodAutoscaler, spec autoscalingv2.MetricSpec,
specReplicas, statusReplicas int32, selector labels.Selector, status *autoscalingv2.MetricStatus) (replicaCountProposal int32, metricNameProposal string,
timestampProposal time.Time, condition autoscalingv2.HorizontalPodAutoscalerCondition, err error) {
// error handling inside the case branches omitted for brevity
switch spec.Type {
// Object type
case autoscalingv2.ObjectMetricSourceType:
metricSelector, err := metav1.LabelSelectorAsSelector(spec.Object.Metric.Selector)
replicaCountProposal, timestampProposal, metricNameProposal, condition, err = a.computeStatusForObjectMetric(specReplicas, statusReplicas, spec, hpa, selector, status, metricSelector)
// Pods type
case autoscalingv2.PodsMetricSourceType:
metricSelector, err := metav1.LabelSelectorAsSelector(spec.Pods.Metric.Selector)
replicaCountProposal, timestampProposal, metricNameProposal, condition, err = a.computeStatusForPodsMetric(specReplicas, spec, hpa, selector, status, metricSelector)
// Resource type
case autoscalingv2.ResourceMetricSourceType:
replicaCountProposal, timestampProposal, metricNameProposal, condition, err = a.computeStatusForResourceMetric(ctx, specReplicas, spec, hpa, selector, status)
// ContainerResource type
case autoscalingv2.ContainerResourceMetricSourceType:
replicaCountProposal, timestampProposal, metricNameProposal, condition, err = a.computeStatusForContainerResourceMetric(ctx, specReplicas, spec, hpa, selector, status)
// External type
case autoscalingv2.ExternalMetricSourceType:
replicaCountProposal, timestampProposal, metricNameProposal, condition, err = a.computeStatusForExternalMetric(specReplicas, statusReplicas, spec, hpa, selector, status)
default:
errMsg := fmt.Sprintf("unknown metric source type %q", string(spec.Type))
err = fmt.Errorf(errMsg)
condition := a.getUnableComputeReplicaCountCondition(hpa, "InvalidMetricSourceType", err)
return 0, "", time.Time{}, condition, err
}
return replicaCountProposal, metricNameProposal, timestampProposal, autoscalingv2.HorizontalPodAutoscalerCondition{}, nil
}
Metric source types: MetricSourceType
The MetricSourceType constants are defined as follows.
Code path: pkg/apis/autoscaling/types.go
// MetricSourceType indicates the type of metric.
type MetricSourceType string
const (
// ObjectMetricSourceType is a metric describing a kubernetes object
// (for example, hits-per-second on an Ingress object).
ObjectMetricSourceType MetricSourceType = "Object"
// PodsMetricSourceType is a metric describing each pod in the current scale
// target (for example, transactions-processed-per-second). The values
// will be averaged together before being compared to the target value.
PodsMetricSourceType MetricSourceType = "Pods"
// ResourceMetricSourceType is a resource metric known to Kubernetes, as
// specified in requests and limits, describing each pod in the current
// scale target (e.g. CPU or memory). Such metrics are built in to
// Kubernetes, and have special scaling options on top of those available
// to normal per-pod metrics (the "pods" source).
ResourceMetricSourceType MetricSourceType = "Resource"
// ContainerResourceMetricSourceType is a resource metric known to Kubernetes, as
// specified in requests and limits, describing a single container in each pod in the current
// scale target (e.g. CPU or memory). Such metrics are built in to
// Kubernetes, and have special scaling options on top of those available
// to normal per-pod metrics (the "pods" source).
ContainerResourceMetricSourceType MetricSourceType = "ContainerResource"
// ExternalMetricSourceType is a global metric that is not associated
// with any Kubernetes object. It allows autoscaling based on information
// coming from components running outside of cluster
// (for example length of queue in cloud messaging service, or
// QPS from loadbalancer running outside of cluster).
ExternalMetricSourceType MetricSourceType = "External"
)
The computeStatusForResourceMetricGeneric function
Computes the desired pod replica count for a Resource-type metric.
① Handles the case where the HPA targets the resource's average value.
② Handles the case where the HPA targets the resource's average utilization.
The HPA object in the earlier example targets average CPU utilization, so the following focuses on that case.
func (a *HorizontalController) computeStatusForResourceMetricGeneric(ctx context.Context, currentReplicas int32, target autoscalingv2.MetricTarget,
resourceName v1.ResourceName, namespace string, container string, selector labels.Selector) (replicaCountProposal int32,
metricStatus *autoscalingv2.MetricValueStatus, timestampProposal time.Time, metricNameProposal string,
condition autoscalingv2.HorizontalPodAutoscalerCondition, err error) {
// ① the HPA targets the resource's average value
if target.AverageValue != nil {
var rawProposal int64
replicaCountProposal, rawProposal, timestampProposal, err := a.replicaCalc.GetRawResourceReplicas(ctx, currentReplicas, target.AverageValue.MilliValue(), resourceName, namespace, selector, container)
if err != nil {
return 0, nil, time.Time{}, "", condition, fmt.Errorf("failed to get %s utilization: %v", resourceName, err)
}
metricNameProposal = fmt.Sprintf("%s resource", resourceName.String())
status := autoscalingv2.MetricValueStatus{
AverageValue: resource.NewMilliQuantity(rawProposal, resource.DecimalSI),
}
return replicaCountProposal, &status, timestampProposal, metricNameProposal, autoscalingv2.HorizontalPodAutoscalerCondition{}, nil
}
if target.AverageUtilization == nil {
errMsg := "invalid resource metric source: neither a utilization target nor a value target was set"
return 0, nil, time.Time{}, "", condition, fmt.Errorf(errMsg)
}
// ② the HPA targets the resource's average utilization
targetUtilization := *target.AverageUtilization
replicaCountProposal, percentageProposal, rawProposal, timestampProposal, err := a.replicaCalc.GetResourceReplicas(ctx, currentReplicas, targetUtilization, resourceName, namespace, selector, container)
if err != nil {
return 0, nil, time.Time{}, "", condition, fmt.Errorf("failed to get %s utilization: %v", resourceName, err)
}
metricNameProposal = fmt.Sprintf("%s resource utilization (percentage of request)", resourceName)
status := autoscalingv2.MetricValueStatus{
AverageUtilization: &percentageProposal,
AverageValue: resource.NewMilliQuantity(rawProposal, resource.DecimalSI),
}
return replicaCountProposal, &status, timestampProposal, metricNameProposal, autoscalingv2.HorizontalPodAutoscalerCondition{}, nil
}
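In other words, averageValue compares the pods' raw average usage against an absolute quantity (for example, 500Mi of memory per pod), whereas averageUtilization compares usage as a percentage of the pods' resource requests; the utilization form therefore only works when .spec.resources.requests is set on the containers being measured.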
The GetResourceReplicas function
When the HPA targets a resource's average utilization, the desired pod replica count is computed from the target utilization percentage for that resource.
① Fetch the pods' metrics for the given resource through the MetricsClient.
② Query the target pod list with the selector.
③ Group the pods and drop the metrics of pods that would add noise; these pods' metrics are excluded from the subsequent calculation. They are:
- pods with .Status.Phase == "Failed"
- pods with pod.DeletionTimestamp != nil, i.e., pods being deleted
- pods whose "Ready" PodCondition has status False, i.e., unready pods. PodReady means the pod is able to serve requests and should be added to the load-balancing pools of all matching Services; pods that cannot yet serve traffic are removed to avoid skewing the result.
④ Sum the pods' requests for the target resource (e.g., cpu).
- At the pod level, the requests of all containers in the pod are accumulated.
- If a specific container name is given, only that container's request is counted.
⑤ If no resource metrics were obtained for the pods, return an error.
⑥ From the collected metrics, the pods' resource requests, and the target utilization percentage defined by the HPA, compute the ratio of the actual utilization to the target utilization (called usageRatio below), among other values.
⑦ If there are no pods with missing metrics and no unready pods:
- if usageRatio falls within the tolerance, do not scale; keep the current replica count
- if usageRatio is outside the tolerance, compute and return the scaled replica count
⑧ If usageRatio < 1.0, i.e., a scale-down is indicated:
- set the metric value of every pod with missing metrics to its resource request, i.e., assume 100% utilization
- this keeps the recomputed usageRatio from shrinking further and producing a capacity lower than actually intended
⑨ If usageRatio > 1.0, i.e., a scale-up is indicated:
- set the metric value of every pod with missing metrics to 0, i.e., assume 0% utilization
- set the metric value of every unready pod to 0, i.e., assume 0% utilization
- this keeps the recomputed usageRatio from growing further and producing a capacity higher than actually intended
⑩ Recompute the ratio usageRatio.
- if the recomputed usageRatio falls within the tolerance, do not scale; keep the current replica count
- if the two calculations point in opposite scaling directions (the first suggested a scale-up and the recomputation a scale-down, or vice versa), keep the current replica count
- if the recomputed usageRatio and the resulting desired replica count agree on the scaling direction, scale the pods; otherwise keep the current capacity
Code path: pkg/controller/podautoscaler/replica_calculator.go
// GetResourceReplicas calculates the desired replica count based on a target resource utilization percentage
// of the given resource for pods matching the given selector in the given namespace, and the current replica count
func (c *ReplicaCalculator) GetResourceReplicas(ctx context.Context, currentReplicas int32, targetUtilization int32, resource v1.ResourceName, namespace string, selector labels.Selector, container string) (replicaCount int32, utilization int32, rawUtilization int64, timestamp time.Time, err error) {
// some error handling omitted for brevity
// ① fetch the pods' resource metrics
metrics, timestamp, err := c.metricsClient.GetResourceMetric(ctx, resource, namespace, selector, container)
// ② query the target pod list with the selector
podList, err := c.podLister.Pods(namespace).List(selector)
// ...
itemsLen := len(podList)
if itemsLen == 0 {
return 0, 0, 0, time.Time{}, fmt.Errorf("no pods returned by selector while calculating replica count")
}
// ③ group the pods
// 1. readyPodCount: pods that are ready
// 2. unreadyPods: pods that are not ready
// 3. missingPods: pods with missing metrics
// 4. ignoredPods: pods being deleted or in Failed phase
readyPodCount, unreadyPods, missingPods, ignoredPods := groupPods(podList, metrics, resource, c.cpuInitializationPeriod, c.delayOfInitialReadinessStatus)
// ③ drop the metrics of ignored pods
removeMetricsForPods(metrics, ignoredPods)
// ③ drop the metrics of unready pods
removeMetricsForPods(metrics, unreadyPods)
// ④ sum the pods' requests for the target resource (e.g. cpu)
requests, err := calculatePodRequests(podList, container, resource)
// ...
// ⑤ no pod metrics were received; return an error
if len(metrics) == 0 {
return 0, 0, 0, time.Time{}, fmt.Errorf("did not receive metrics for any ready pods")
}
// ⑥ compute the ratio of actual utilization to target utilization
usageRatio, utilization, rawUtilization, err := metricsclient.GetResourceUtilizationRatio(metrics, requests, targetUtilization)
if err != nil {
return 0, 0, 0, time.Time{}, err
}
rebalanceIgnored := len(unreadyPods) > 0 && usageRatio > 1.0
// ⑦ no pods with missing metrics and no unready pods
if !rebalanceIgnored && len(missingPods) == 0 {
// ⑦ within the tolerance, keep the current pod capacity
if math.Abs(1.0-usageRatio) <= c.tolerance {
// return the current replicas if the change would be too small
return currentReplicas, utilization, rawUtilization, timestamp, nil
}
// ⑦ round up to compute the new replica count
// if we don't have any unready or missing pods, we can calculate the new replica count now
return int32(math.Ceil(usageRatio * float64(readyPodCount))), utilization, rawUtilization, timestamp, nil
}
if len(missingPods) > 0 {
// ⑧ & ⑨ fill in defaults for pods with missing metrics
if usageRatio < 1.0 {
// on a scale-down, treat missing pods as using 100% of the resource request
for podName := range missingPods {
metrics[podName] = metricsclient.PodMetric{Value: requests[podName]}
}
} else if usageRatio > 1.0 {
// on a scale-up, treat missing pods as using 0% of the resource request
for podName := range missingPods {
metrics[podName] = metricsclient.PodMetric{Value: 0}
}
}
}
// ⑨ on a scale-up, treat unready pods as using 0% of the resource
if rebalanceIgnored {
// on a scale-up, treat unready pods as using 0% of the resource request
for podName := range unreadyPods {
metrics[podName] = metricsclient.PodMetric{Value: 0}
}
}
// ⑩ compute the new ratio
// re-run the utilization calculation with our new numbers
newUsageRatio, _, _, err := metricsclient.GetResourceUtilizationRatio(metrics, requests, targetUtilization)
if err != nil {
return 0, utilization, rawUtilization, time.Time{}, err
}
// ⑩ keep the current pod capacity if the two calculations point in opposite scaling directions
if math.Abs(1.0-newUsageRatio) <= c.tolerance || (usageRatio < 1.0 && newUsageRatio > 1.0) || (usageRatio > 1.0 && newUsageRatio < 1.0) {
// return the current replicas if the change would be too small,
// or if the new usage ratio would cause a change in scale direction
return currentReplicas, utilization, rawUtilization, timestamp, nil
}
newReplicas := int32(math.Ceil(newUsageRatio * float64(len(metrics))))
if (newUsageRatio < 1.0 && newReplicas > currentReplicas) || (newUsageRatio > 1.0 && newReplicas < currentReplicas) {
// return the current replicas if the change of metrics length would cause a change in scale direction
return currentReplicas, utilization, rawUtilization, timestamp, nil
}
// ⑩ return the computed desired pod capacity
// return the result, where the number of replicas considered is
// however many replicas factored into our calculation
return newReplicas, utilization, rawUtilization, timestamp, nil
}
The GetResourceUtilizationRatio function
Computes the ratio of the actual resource utilization to the target utilization, which is then used to assess pod capacity.
① Sum the collected pod metric values and their corresponding resource requests.
② Compute the actual resource utilization percentage.
③ Compute the ratio of the actual utilization to the target utilization, along with the other return values.
Code path: pkg/controller/podautoscaler/metrics/utilization.go
// GetResourceUtilizationRatio takes in a set of metrics, a set of matching requests,
// and a target utilization percentage, and calculates the ratio of
// desired to actual utilization (returning that, the actual utilization, and the raw average value)
func GetResourceUtilizationRatio(metrics PodMetricsInfo, requests map[string]int64, targetUtilization int32) (utilizationRatio float64, currentUtilization int32, rawAverageValue int64, err error) {
metricsTotal := int64(0)
requestsTotal := int64(0)
numEntries := 0
// ① sum the collected pod metric values and their corresponding resource requests
for podName, metric := range metrics {
request, hasRequest := requests[podName]
if !hasRequest {
// we check for missing requests elsewhere, so assuming missing requests == extraneous metrics
continue
}
metricsTotal += metric.Value
requestsTotal += request
numEntries++
}
// if the set of requests is completely disjoint from the set of metrics,
// then we could have an issue where the requests total is zero
if requestsTotal == 0 {
return 0, 0, 0, fmt.Errorf("no metrics returned matched known pods")
}
// ② compute the actual resource utilization percentage
currentUtilization = int32((metricsTotal * 100) / requestsTotal)
// ③ compute the ratio of actual utilization to target utilization, along with the other return values
return float64(currentUtilization) / float64(targetUtilization), currentUtilization, metricsTotal / int64(numEntries), nil
}
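As a hypothetical worked example: suppose the HPA from the beginning of the article targets averageUtilization: 50 and selects three ready pods, each requesting 200m of CPU, with sampled usage of 150m, 180m, and 90m. Then metricsTotal = 420, requestsTotal = 600, currentUtilization = 70(%), and usageRatio = 70 / 50 = 1.4. Back in GetResourceReplicas, 1.4 lies outside the default tolerance of 0.1, so the proposed replica count is ceil(1.4 × 3) = 5.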
Applying scaling behavior rules
The desired pod replica count computed from the metrics is adjusted further to determine the final desired replicas.
- When the .spec.behavior field is not specified, the default behavior rules are applied.
- When the .spec.behavior field is specified, the configured behavior rules are applied.
// recap of the code above: the HPA reconciliation logic
func (a *HorizontalController) reconcileAutoscaler(ctx context.Context, hpaShared *autoscalingv2.HorizontalPodAutoscaler, key string) error {
// ...
if hpa.Spec.Behavior == nil { // `.spec.behavior` not specified: apply the default HPAScalingRules
desiredReplicas = a.normalizeDesiredReplicas(hpa, key, currentReplicas, desiredReplicas, minReplicas)
} else { // apply the configured behavior rules
desiredReplicas = a.normalizeDesiredReplicasWithBehaviors(hpa, key, currentReplicas, desiredReplicas, minReplicas)
}
rescale = desiredReplicas != currentReplicas
}
The normalizeDesiredReplicas function
When the .spec.behavior field is not specified, the default behavior rules are applied.
① Take the maximum recommended pod replica count within the stabilization window. The window defaults to 5 minutes and can be set with the controller manager flag --horizontal-pod-autoscaler-downscale-stabilization.
② The capacity must not fall below the HPA's minimum or exceed the HPA's maximum.
func (a *HorizontalController) normalizeDesiredReplicas(hpa *autoscalingv2.HorizontalPodAutoscaler, key string, currentReplicas int32, prenormalizedDesiredReplicas int32, minReplicas int32) int32 {
// ① take the maximum recommended replica count within the stabilization window
stabilizedRecommendation := a.stabilizeRecommendation(key, prenormalizedDesiredReplicas)
// ...some code omitted
desiredReplicas, condition, reason := convertDesiredReplicasWithRules(currentReplicas, stabilizedRecommendation, minReplicas, hpa.Spec.MaxReplicas)
// ...some code omitted
return desiredReplicas
}
// ① take the maximum recommended capacity within the stabilization window
func (a *HorizontalController) stabilizeRecommendation(key string, prenormalizedDesiredReplicas int32) int32 {
maxRecommendation := prenormalizedDesiredReplicas
foundOldSample := false
oldSampleIndex := 0
cutoff := time.Now().Add(-a.downscaleStabilisationWindow)
for i, rec := range a.recommendations[key] {
if rec.timestamp.Before(cutoff) {
foundOldSample = true
oldSampleIndex = i
} else if rec.recommendation > maxRecommendation {
maxRecommendation = rec.recommendation
}
}
// replace the outdated recommendation sample
if foundOldSample {
a.recommendations[key][oldSampleIndex] = timestampedRecommendation{prenormalizedDesiredReplicas, time.Now()}
} else {
a.recommendations[key] = append(a.recommendations[key], timestampedRecommendation{prenormalizedDesiredReplicas, time.Now()})
}
return maxRecommendation
}
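For example (hypothetical numbers): if the recommendations recorded for an HPA over the last 5 minutes are 5, 3, and 4, and the current calculation proposes 3, the stabilized recommendation is still 5. A scale-down therefore only takes effect once every recommendation within the window agrees that fewer replicas are enough, while scale-ups are not delayed.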
var (
scaleUpLimitFactor = 2.0
scaleUpLimitMinimum = 4.0
)
func calculateScaleUpLimit(currentReplicas int32) int32 {
return int32(math.Max(scaleUpLimitFactor*float64(currentReplicas), scaleUpLimitMinimum))
}
// convertDesiredReplicas performs the actual normalization, without depending on `HorizontalController` or `HorizontalPodAutoscaler`
func convertDesiredReplicasWithRules(currentReplicas, desiredReplicas, hpaMinReplicas, hpaMaxReplicas int32) (int32, string, string) {
var minimumAllowedReplicas int32
var maximumAllowedReplicas int32
var possibleLimitingCondition string
var possibleLimitingReason string
minimumAllowedReplicas = hpaMinReplicas
// Do not upscale too much to prevent incorrect rapid increase of the number of master replicas caused by
// bogus CPU usage report from heapster/kubelet (like in issue #32304).
scaleUpLimit := calculateScaleUpLimit(currentReplicas)
// cap the replica count added in this scale-up to keep it from growing too fast
if hpaMaxReplicas > scaleUpLimit {
maximumAllowedReplicas = scaleUpLimit
possibleLimitingCondition = "ScaleUpLimit"
possibleLimitingReason = "the desired replica count is increasing faster than the maximum scale rate"
} else {
maximumAllowedReplicas = hpaMaxReplicas
possibleLimitingCondition = "TooManyReplicas"
possibleLimitingReason = "the desired replica count is more than the maximum replica count"
}
// ② the capacity must not fall below the HPA's minimum or exceed the HPA's maximum
if desiredReplicas < minimumAllowedReplicas {
possibleLimitingCondition = "TooFewReplicas"
possibleLimitingReason = "the desired replica count is less than the minimum replica count"
return minimumAllowedReplicas, possibleLimitingCondition, possibleLimitingReason
} else if desiredReplicas > maximumAllowedReplicas {
return maximumAllowedReplicas, possibleLimitingCondition, possibleLimitingReason
}
return desiredReplicas, "DesiredWithinRange", "the desired count is within the acceptable range"
}
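As a hypothetical example of this cap: with currentReplicas = 3, calculateScaleUpLimit returns max(2 × 3, 4) = 6, so even if the metrics propose 20 replicas and maxReplicas is 10, this reconciliation raises the target to at most 6; later reconciliations can keep scaling up from there.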
This concludes the walkthrough of the overall HPA logic.
Summary
In broad strokes, the HPA works as follows:
- Based on the HPA object's definition, read the Pod metrics relevant to scaling.
- Evaluate the Pod capacity from the current metric values and the target values defined in the HPA.
Expressed as a formula:
desiredReplicas = ceil(currentReplicas × (currentMetricValue / desiredMetricValue))
Thanks for reading. Given my limited expertise, mistakes are inevitable; corrections are welcome.
References
[1] Kubernetes Patterns: Elastic Scale