Notes taken from the video chapter: Chapter 7, Storage (PV/PVC)
Topic
Install NFS, create PVs, control the pod startup flow with a StatefulSet, and match PV resources through PVCs
Operations
NFS server preparation
apt-get install nfs-kernel-server
mkdir -p /nfs
chmod -R 777 /nfs
echo "/nfs *(rw,no_root_squash,no_all_squash,sync)" >> /etc/exports
/etc/init.d/rpcbind restart
[ ok ] Restarting rpcbind (via systemctl): rpcbind.service.
/etc/init.d/nfs-kernel-server restart
[ ok ] Restarting nfs-kernel-server (via systemctl): nfs-kernel-server.service.
# Check that the export is configured
showmount -e 172.16.13.127
Export list for 172.16.13.127:
/nfs *
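The multi-PV manifest further below also references /nfs1, /nfs2 and /nfs3, so those directories have to be created and exported in the same way before the corresponding PVs can be mounted; a minimal sketch following the pattern above:
mkdir -p /nfs1 /nfs2 /nfs3
chmod -R 777 /nfs1 /nfs2 /nfs3
echo "/nfs1 *(rw,no_root_squash,no_all_squash,sync)" >> /etc/exports
echo "/nfs2 *(rw,no_root_squash,no_all_squash,sync)" >> /etc/exports
echo "/nfs3 *(rw,no_root_squash,no_all_squash,sync)" >> /etc/exports
exportfs -ra                 # re-export without restarting the services
showmount -e 172.16.13.127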
Create a PV
Based on my NFS server, create the following PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs
    server: 172.16.13.127
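To create and verify the PV (the manifest file name pv-demo.yaml is assumed here for illustration):
kubectl apply -f pv-demo.yaml
kubectl get pv pv0003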
- Create multiple PVs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0000
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs
    server: 172.16.13.127
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs1
    server: 172.16.13.127
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0002
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs2
    server: 172.16.13.127
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs3
    server: 172.16.13.127
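For reference, a PVC binds to a PV whose storageClassName matches, whose accessModes include the requested mode, and whose capacity is at least the requested size. A minimal standalone PVC sketch (the name pvc-demo is hypothetical) that would bind to pv0001 above, the only PV of class nfs at this point:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo             # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce          # pv0001 offers ReadWriteOnce
  storageClassName: nfs      # must match the PV's storageClassName
  resources:
    requests:
      storage: 1Gi           # any matching PV with capacity >= 1Gi qualifies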
PVC
- Create PVCs (generated by the StatefulSet's volumeClaimTemplates below)
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: wangyanglinux/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs"
        resources:
          requests:
            storage: 1Gi
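Apply the manifests and watch the rollout (the file name statefulset-demo.yaml is assumed for illustration):
kubectl apply -f statefulset-demo.yaml
kubectl get pods -o wide -w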
- Apply the manifests and watch the result: the rollout gets stuck.
- Check PV usage: only one PV is actually in use.
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get pods -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
emptydir-demo 2/2 Running 89 (70m ago) 6d5h 10.244.1.169 jjh-k8s-demo-node1 <none> <none>
web-0 1/1 Running 0 6m22s 10.244.3.191 jjh-k8s-node-2 <none> <none>
web-1 0/1 Pending 0 6m18s <none> <none> <none> <none>
# Investigate the problem
kubectl describe pod web-1
...
Warning FailedScheduling 29s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
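A reasonable next check (not captured in the original notes) is to look at the claims themselves; the unbound ones show up as Pending:
kubectl get pvc
kubectl describe pvc www-web-1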
- Analysis: the existing PVs do not match the claims. Change storageClassName to nfs and accessModes to [ "ReadWriteOnce" ] in the PV definitions, then re-apply (see the sketch after the output below).
- Keep watching: the StatefulSet now finishes creating successfully.
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get pv -w
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0000 1Gi RWO Recycle Bound default/www-web-1 nfs 29m
pv0001 5Gi RWO Recycle Bound default/www-web-0 nfs 29m
pv0002 5Gi RWO Recycle Bound default/www-web-2 nfs 29m
pv0003 10Gi RWX Recycle Available nfs 29m
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get pods -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
emptydir-demo 2/2 Running 89 (70m ago) 6d5h 10.244.1.169 jjh-k8s-demo-node1 <none> <none>
web-0 1/1 Running 0 6m50s 10.244.3.191 jjh-k8s-node-2 <none> <none>
web-1 1/1 Running 0 6m46s 10.244.1.170 jjh-k8s-demo-node1 <none> <none>
web-2 1/1 Running 0 9s 10.244.3.192 jjh-k8s-node-2 <none> <none>
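The PV edit described above amounts to the following fields in each PersistentVolume (sketch, only the relevant lines; per the kubectl get pv output, the decisive change was storageClassName, while pv0003 kept ReadWriteMany and therefore stayed Available):
spec:
  accessModes:
    - ReadWriteOnce          # must include the mode requested by the claim template
  storageClassName: nfs      # must match volumeClaimTemplates.spec.storageClassName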
About StatefulSet
- The pattern for Pod names (network identities) is:
$(statefulset name)-$(ordinal), e.g. in the example above: web-0, web-1, web-2
# Inspect the web-0 pod
root@jjh-k8s-demo-master:~# kubectl get pods web-0 -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 29m 10.244.3.191 jjh-k8s-node-2 <none> <none>
# Test with ping - it succeeds
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl exec web-0 -it -- /bin/sh
/ # ping web-0.nginx
PING web-0.nginx (10.244.3.191): 56 data bytes
64 bytes from 10.244.3.191: seq=0 ttl=64 time=0.029 ms
64 bytes from 10.244.3.191: seq=1 ttl=64 time=0.056 ms
- StatefulSet creates a DNS domain name for every Pod replica, in the format $(podname).$(headless service name). This means services talk to each other via Pod domain names rather than Pod IPs: when the Node a Pod runs on fails, the Pod is rescheduled onto another Node and its Pod IP changes, but its Pod domain name does not.
- StatefulSet uses a headless Service to control the Pods' domain names; the FQDN of that Service is $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain.
# Find the coredns IPs
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-65c54cc984-smdtc 1/1 Running 0 23d 10.244.0.2 jjh-k8s-demo-master <none> <none>
coredns-65c54cc984-tcbfb 1/1 Running 0 23d 10.244.0.3 jjh-k8s-demo-master <none> <none>
# Test
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# dig -t A nginx.default.svc.cluster.local @10.244.0.2
; <<>> DiG 9.16.1-Ubuntu <<>> -t A nginx.default.svc.cluster.local @10.244.0.2
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20723
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 1f8c810273eb7589 (echoed)
;; QUESTION SECTION:
;nginx.default.svc.cluster.local. IN A
;; ANSWER SECTION:
nginx.default.svc.cluster.local. 30 IN A 10.244.3.192
nginx.default.svc.cluster.local. 30 IN A 10.244.3.191
nginx.default.svc.cluster.local. 30 IN A 10.244.1.170
;; Query time: 0 msec
;; SERVER: 10.244.0.2#53(10.244.0.2)
;; WHEN: Sat Apr 02 14:46:03 UTC 2022
;; MSG SIZE rcvd: 213
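Each Pod also gets its own A record under the headless Service; a per-Pod query sketch against the same CoreDNS instance:
dig -t A web-0.nginx.default.svc.cluster.local @10.244.0.2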
- Based on volumeClaimTemplates, one PVC is created per Pod. The PVC names follow the pattern $(volumeClaimTemplates.name)-$(pod name); with volumeClaimTemplates.name=www and Pod names web-[0-2] above, the resulting PVCs are www-web-0, www-web-1 and www-web-2 (see the command sketch below).
- Deleting a Pod does not delete its PVC; manually deleting the PVC automatically releases the PV.
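The generated claims can be listed directly (command sketch, output omitted):
kubectl get pvc
# expected names, per the pattern above: www-web-0, www-web-1, www-web-2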
StatefulSet start and stop ordering (see the scaling sketch after this list):
- Ordered deployment: when a StatefulSet with multiple Pod replicas is deployed, the Pods are created sequentially (from 0 to N-1), and each Pod starts only after all Pods before it are Running and Ready.
- Ordered deletion: when Pods are deleted, they are terminated in reverse order, from N-1 to 0.
- Ordered scaling: when scaling Pods up, as with deployment, all Pods before the new one must be Running and Ready.
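The ordering is easy to observe by scaling the StatefulSet (command sketch; the replica counts are chosen for illustration):
kubectl scale statefulset web --replicas=5   # web-3 starts only after web-0..web-2 are Ready, then web-4
kubectl scale statefulset web --replicas=3   # web-4 terminates first, then web-3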
StatefulSet use cases:
- Stable persistent storage: a Pod can still access the same persisted data after being rescheduled, implemented with PVCs.
- Stable network identity: a Pod keeps its PodName and HostName after being rescheduled.
- Ordered deployment and ordered scale-up, implemented with init containers.
- Ordered scale-down.
Release the PVs
kubectl delete svc nginx
kubectl delete pvc --all
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0000 1Gi RWO Recycle Available nfs 78m
pv0001 5Gi RWO Recycle Available nfs 77m
pv0002 5Gi RWO Recycle Available nfs 78m
pv0003 10Gi RWX Recycle Available nfs 78m
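For a complete teardown the StatefulSet itself also has to be deleted (a step the notes above do not show); PVCs that are still mounted by running Pods are protected by a finalizer and are only removed once those Pods are gone:
kubectl delete statefulset web   # pods terminate in reverse order: web-2, web-1, web-0
kubectl get pv                   # with the Recycle policy the released PVs return to Available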