Memory Resource Limits in Practice
1. Setting a container's memory request and limit
1. Create a Pod with a single container. The container requests 100MiB of memory, and its memory usage is limited to 200MiB.
[root@k8s-master /k8s/deploy]# vim memory-requests.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-requests
spec:
  containers:
  - name: memory-requests-container
    image: polinux/stress
    command: ["stress"]
    args: ["--vm","1","--vm-bytes","150M","--vm-hang","1"]
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
[root@k8s-master /k8s/deploy]# kubectl apply -f memory-requests.yaml
pod/memory-requests created
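For reference, the stress arguments tell the tool to spawn one worker (--vm 1) that allocates 150MB (--vm-bytes 150M) and then pauses for one second before freeing it (--vm-hang 1), so the container holds roughly 150MiB: above the request but below the limit.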
2. Check the Pod's information
[root@k8s-master /k8s/deploy]# kubectl get pod memory-requests
NAME              READY   STATUS    RESTARTS   AGE
memory-requests   1/1     Running   0          50s
[root@k8s-master /k8s/deploy]# kubectl get pod memory-requests -o yaml
......
spec:
  containers:
  - args:
    - --vm
    - "1"
    - --vm-bytes
    - 150M
    - --vm-hang
    - "1"
    command:
    - stress
    image: polinux/stress
    imagePullPolicy: Always
    name: memory-requests-container
    resources:
      limits:
        memory: 200Mi
      requests:
        memory: 100Mi
......
The output shows that the container in this Pod has a memory request of 100MiB and a memory limit of 200MiB.
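To check just the resources stanza instead of scanning the full YAML, a jsonpath query also works (this query assumes the Pod's first container is the one of interest):
[root@k8s-master /k8s/deploy]# kubectl get pod memory-requests -o jsonpath='{.spec.containers[0].resources}'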
3. Fetch the Pod's metrics
[root@k8s-master /k8s/deploy]# kubectl top pod
NAME              CPU(cores)   MEMORY(bytes)
memory-requests   46m          150Mi
The output shows the Pod is using about 150MiB of memory: more than the 100MiB it requested, but within its 200MiB limit.
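Note that kubectl top depends on the cluster's metrics pipeline. If the command errors out, verify that metrics-server is running; it is commonly deployed as a Deployment in the kube-system namespace:
[root@k8s-master /k8s/deploy]# kubectl get deployment metrics-server -n kube-system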
4. Delete the Pod
[root@k8s-master /k8s/deploy]# kubectl delete pod memory-requests
pod "memory-requests" deleted
2. Running an application that exceeds the container's memory limit
A container may use the memory it requested as long as the node has enough available memory, but it is not allowed to use more than its limit. If a container allocates memory beyond its limit, it becomes a candidate for termination; if it keeps consuming memory beyond the limit, it is terminated. If the terminated container can be restarted, the kubelet restarts it.
1. Create a Pod with a single container whose memory request is 100MiB and whose memory limit is 200MiB, and have it try to allocate more memory than the limit allows.
[root@k8s-master /k8s/deploy]# vim memory-requests.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-requests
spec:
  containers:
  - name: memory-requests-container
    image: polinux/stress
    command: ["stress"]
    args: ["--vm","1","--vm-bytes","250M","--vm-hang","1"]
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
[root@k8s-master /k8s/deploy]# kubectl apply -f memory-requests.yaml
pod/memory-requests created
2. Check the Pod. At this point the container may still be running or may already have been killed. Repeat the command until the container is killed:
[root@k8s-master /k8s/deploy]# kubectl get pod memory-requests
NAME              READY   STATUS             RESTARTS      AGE
memory-requests   0/1     CrashLoopBackOff   2 (18s ago)   84s
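Instead of rerunning kubectl get by hand, you can also watch the Pod's status continuously (press Ctrl+C to stop):
[root@k8s-master /k8s/deploy]# kubectl get pod memory-requests --watch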
3. View the container's detailed state
[root@k8s-master /k8s/deploy]# kubectl describe pod memory-requests
.....
State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       OOMKilled
.....
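The termination reason can also be pulled out directly with jsonpath (this assumes the Pod has a single container); given the state above, it should print OOMKilled:
[root@k8s-master /k8s/deploy]# kubectl get pod memory-requests -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'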
4. Delete the Pod
[root@k8s-master ~]# kubectl delete pod memory-requests
pod "memory-requests" deleted
3. Requesting memory beyond any node's capacity
Pod scheduling is based on requests: a Pod is scheduled onto a node only if that node has enough free memory to satisfy the Pod's memory request.
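To see how much memory each node can actually offer to Pods, you can list the allocatable memory per node (the column names here are just illustrative labels):
[root@k8s-master /k8s/deploy]# kubectl get nodes -o custom-columns='NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory'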
1. Create a Pod with a single container that requests 100GiB of memory, which should exceed the capacity of any node in the cluster.
[root@k8s-master /k8s/deploy]# vim memory-requests.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-requests
spec:
  containers:
  - name: memory-requests-container
    image: polinux/stress
    command: ["stress"]
    args: ["--vm","1","--vm-bytes","250M","--vm-hang","1"]
    resources:
      requests:
        memory: "100Gi"
      limits:
        memory: "200Gi"
2. Apply the manifest, then check the Pod's status. It is Pending, which means the Pod has not been scheduled onto any node:
[root@k8s-master /k8s/deploy]# kubectl get pod memory-requests
NAME              READY   STATUS    RESTARTS   AGE
memory-requests   0/1     Pending   0          4m56s
3. View the Pod's details. The output shows that the container cannot be scheduled because no node has enough memory:
[root@k8s-master /k8s/deploy]# kubectl describe pod memory-requests
......
Events:
Type     Reason            Age    From               Message
----     ------            ----   ----               -------
Warning  FailedScheduling  6m13s  default-scheduler  0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 Insufficient memory.
Warning  FailedScheduling  4m51s  default-scheduler  0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 Insufficient memory.
4. Delete the Pod
[root@k8s-master /k8s/deploy]# kubectl delete -f memory-requests.yaml
pod "memory-requests" deleted
4. If no memory limit is specified
If you do not specify a memory limit for a container, it can use all of the available memory on the node where it runs, which may trigger the node's OOM Killer. Moreover, when an OOM kill does happen, containers running without resource limits are more likely to be the ones killed.
In Kubernetes, a LimitRange can automatically set default, minimum, and maximum memory values for the containers in a namespace, as sketched below.
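A minimal sketch of such a LimitRange; the object name and all of the values here are illustrative, and the object takes effect in the namespace it is created in:
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-limit-range   # illustrative name
spec:
  limits:
  - type: Container
    default:            # limit applied to containers that declare none
      memory: 200Mi
    defaultRequest:     # request applied to containers that declare none
      memory: 100Mi
    max:                # upper bound on what a container may set
      memory: 500Mi
    min:                # lower bound on what a container may set
      memory: 50Mi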