What are Custom Metrics, and what problem do they solve?
juejin.cn/post/729935… covered the Aggregator APIServer extension mechanism,
and also covered how Pod metrics are collected along the Metrics Server ---> cAdvisor path.
Scenario
We want to monitor the number of requests a Pod receives and use that to decide whether to scale out the Pod's replicas to handle more traffic.
So we need to implement a custom metric that counts the HTTP requests a Pod receives, and then configure an autoscaling policy based on the number of HTTP requests the specified Pods receive.
For example, requesting the following URL returns the metric. Here the query targets a Pod object; querying a Service object is covered later.
https://<apiserver_ip>/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/sample-metrics-app/http_requests
The underlying access path when this URL is requested:
client ---> Aggregator (kube-apiserver) ---> routed to the backend Custom Metrics APIServer ---> queries Prometheus for the http_requests metric of sample-metrics-app
The http_requests metric of sample-metrics-app must already exist in Prometheus, which means it has to be scraped from the Pod beforehand.
How is the value of the http_requests metric exposed?
Have the application inside the Pod expose a /metrics API itself, and return the number of HTTP requests it has received from that API.
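For example, the /metrics response served by the Pod might contain a Prometheus counter like the one below (the counter name http_requests_total, the port, and the value are illustrative assumptions, not taken from the sample app's source; the adapter derives the http_requests rate metric from a counter like this):

curl http://<pod_ip>:8080/metrics
# HELP http_requests_total The total number of HTTP requests served
# TYPE http_requests_total counter
http_requests_total 3875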
Using Custom Metrics, step by step
Prerequisite: a working k8s cluster is already deployed.
step1 Deploy Prometheus
This is done with the Prometheus Operator.
For the detailed steps, see juejin.cn/post/729818…
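Step2 below points the adapter at sample-metrics-prom.default.svc:9090, so the Operator is expected to run a Prometheus instance reachable under that Service name. A minimal sketch of such a Prometheus custom resource (all names and fields here are assumptions for illustration; the actual manifests, including the Service in front of Prometheus, are in the workshop repo linked in step2):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: sample-metrics-prom
  namespace: default
spec:
  serviceAccountName: prometheus            # assumed ServiceAccount with scrape RBAC
  serviceMonitorSelector:
    matchLabels:
      service-monitor: sample-metrics-app   # picks up the ServiceMonitor shown later
  resources:
    requests:
      memory: 400Mi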
step2 Deploy the Custom Metrics APIServer
For the full YAML, see github.com/resouer/kub…
Under the hood this uses the k8s-prometheus-adapter adapter.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metrics-apiserver
  namespace: custom-metrics
  labels:
    app: custom-metrics-apiserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-apiserver
  template:
    metadata:
      labels:
        app: custom-metrics-apiserver
    spec:
      tolerations:
      - key: beta.kubernetes.io/arch
        value: arm
        effect: NoSchedule
      - key: beta.kubernetes.io/arch
        value: arm64
        effect: NoSchedule
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-server
        image: luxas/k8s-prometheus-adapter:v0.2.0-beta.0
        args:
        - --prometheus-url=http://sample-metrics-prom.default.svc:9090
        - --metrics-relist-interval=30s
        - --rate-interval=60s
        - --v=10
        - --logtostderr=true
        ports:
        - containerPort: 443
        securityContext:
          runAsUser: 0
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: custom-metrics
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: custom-metrics-apiserver
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: api
    namespace: custom-metrics
  version: v1beta1
As you can see, this creates an APIService resource named v1beta1.custom.metrics.k8s.io. The next question is how this API gets accessed: v1beta1.custom.metrics.k8s.io/endpoint/xx…
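A quick way to confirm that the aggregator has registered the new group (illustrative checks, not part of the original steps):

kubectl get apiservice v1beta1.custom.metrics.k8s.io
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"   # lists the metric resources the adapter exposes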
step3 Create the corresponding ClusterRoleBinding for the Custom Metrics APIServer
kubectl create clusterrolebinding allowall-cm --clusterrole custom-metrics-server-resources --user system:anonymous
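This binding grants the custom-metrics-server-resources ClusterRole to unauthenticated requests (system:anonymous), so the custom metrics API can be read anonymously, which the curl test later relies on. A hedged way to check it (assuming that ClusterRole covers the services/http_requests custom metric resource):

kubectl auth can-i get services.custom.metrics.k8s.io/http_requests --as=system:anonymous -n default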
step4 Deploy the sample-metrics-app application
For the full YAML, see github.com/resouer/kub…
HPA configuration
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: sample-metrics-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: sample-metrics-app
      metricName: http_requests
      targetValue: 100
Field explanations
scaleTargetRef identifies the object to be scaled: once the policy is triggered, the sample-metrics-app Deployment is scaled out or in.
minReplicas: 2 is the minimum number of replicas
maxReplicas: 10 is the maximum number of replicas
metrics defines the scaling policy; the object being watched is the Service named sample-metrics-app
With these fields, the HPA can issue a request to the following URL to fetch the Custom Metrics value:
https://<apiserver_ip>/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/sample-metrics-app/http_requests
Testing
Use a load-testing tool called hey to put some request pressure on the application.
docker run -it -v /usr/local/bin:/go/bin golang:1.8 go get github.com/rakyll/hey
export APP_ENDPOINT=$(kubectl get svc sample-metrics-app -o template --template {{.spec.clusterIP}}); echo ${APP_ENDPOINT}
hey -n 50000 -c 1000 http://${APP_ENDPOINT}
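While hey is running, you can watch the HPA pick up the metric and scale the Deployment (illustrative commands; the app label below is an assumption about how the sample app is labeled):

kubectl get hpa sample-metrics-app-hpa -w
kubectl get pods -l app=sample-metrics-app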
The Custom Metrics URL for the application's Service:
curl -sSLk https://<apiserver_ip>/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/sample-metrics-app/http_requests
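The adapter answers with a MetricValueList; an illustrative response (the value and timestamp are made up) looks roughly like this:

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/sample-metrics-app/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Service",
        "namespace": "default",
        "name": "sample-metrics-app"
      },
      "metricName": "http_requests",
      "timestamp": "2023-08-01T10:00:00Z",
      "value": "501484m"
    }
  ]
}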
The HPA (HorizontalPodAutoscaler) queries the custom API (the Custom Metrics APIServer).
The custom API is in fact a Prometheus adapter that reads the metric from Prometheus.
Prometheus uses ServiceMonitor objects to configure which Pods and Endpoints to monitor, i.e. which Pods' metrics get scraped.
That is the rough request path.
How does Prometheus know which Pods' /metrics APIs to scrape as the source of the monitoring data?
For the full YAML, see github.com/resouer/kub…
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-metrics-app
  labels:
    service-monitor: sample-metrics-app
spec:
  selector:
    matchLabels:
      app: sample-metrics-app
  endpoints:
  - port: web
This ServiceMonitor object was already created back in step4.
It is a Prometheus monitoring policy: the Prometheus Operator uses it to specify which Pods get monitored. Here the selector picks the Service labeled app: sample-metrics-app, and the web port of its Endpoints is scraped, as the Service sketch below illustrates.
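For this selector to pick anything up, the application's Service needs the app: sample-metrics-app label and a port named web. A minimal sketch of such a Service (the real one is in the repo linked in step4; the port numbers here are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: sample-metrics-app
  labels:
    app: sample-metrics-app
spec:
  selector:
    app: sample-metrics-app
  ports:
  - name: web          # matches the endpoint port name in the ServiceMonitor
    port: 80
    targetPort: 8080   # assumed container port serving / and /metrics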
How does the HPA construct the URL it requests?
Recall the HPA configuration:
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: sample-metrics-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: sample-metrics-app
      metricName: http_requests
      targetValue: 100
Given the URL the HPA constructs, how does it know which APIService, and therefore which backend Service, the request should go to?
https://<apiserver_ip>/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/sample-metrics-app/http_requests
See this configuration: github.com/resouer/kub…
The aggregation layer resolves this from the group/version in the URL path: /apis/custom.metrics.k8s.io/v1beta1 matches the APIService v1beta1.custom.metrics.k8s.io defined in step2, which is bound to the api Service in the custom-metrics namespace, so that is where the request gets proxied.
The built-in resource metrics API works the same way: an APIService named v1beta1.metrics.k8s.io is bound to the metrics-server Service, so requests under /apis/metrics.k8s.io/v1beta1 are proxied to metrics-server, as the following manifests show.
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
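To confirm which backend Service a given group is bound to, you can inspect the APIService objects directly (illustrative commands):

kubectl get apiservice v1beta1.custom.metrics.k8s.io -o jsonpath='{.spec.service.namespace}/{.spec.service.name}{"\n"}'
kubectl get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.spec.service.namespace}/{.spec.service.name}{"\n"}'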