1. First, you need a k8s cluster
I have a one-master, two-worker cluster installed with kubeadm (see: kubeadm部署一主二从k8s集群 - 掘金 (juejin.cn)).
2. MySQL is stateful, so its data must be persisted — you need a PV and a PVC
In a real production setup you would not use a hostPath PV, because you can't control which node the pod gets scheduled to. Here I simply create the PV directory on node01, label that node with app=mysql, and use a nodeSelector in the Deployment to select it, so MySQL is guaranteed to land on node01.
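The node preparation described above can be done with a couple of commands. A sketch, assuming the node is named node01 and is reachable as user ubuntu (adjust both for your environment):

```shell
# Create the directory on node01 that will back the hostPath PV
ssh ubuntu@node01 'mkdir -p /home/ubuntu/mysql-volume'

# Label node01 so the Deployment's nodeSelector (app=mysql) matches it
kubectl label node node01 app=mysql

# Verify the label was applied
kubectl get node node01 --show-labels
```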
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-mysql-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/ubuntu/mysql-volume"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
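With both manifests in place, apply them and check the result; the PVC should bind to the PV right away, since the storageClassName (manual), access mode, and size all match:

```shell
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# Both objects should show STATUS "Bound"
kubectl get pv auth-mysql-pv
kubectl get pvc auth-mysql-pv-claim
```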
3. Deployment and Service
The service is first exposed via NodePort for testing, then changed to ClusterIP: once the application itself is deployed into the cluster, MySQL is only accessed from inside the cluster and no longer needs to be exposed externally.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mysql
  template:
    metadata:
      labels:
        app: auth-mysql
    spec:
      containers:
        - name: mysql
          image: mysql   # consider pinning a version, e.g. mysql:8.0, instead of the floating latest tag
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            - name: mysql-volume
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-volume
          persistentVolumeClaim:
            claimName: auth-mysql-pv-claim
      nodeSelector:
        app: mysql
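After applying the Deployment, you can confirm that the nodeSelector did its job and the pod landed on node01:

```shell
kubectl apply -f deployment.yaml

# The NODE column should show node01
kubectl get pods -l app=auth-mysql -o wide
```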
service.yaml (NodePort)
apiVersion: v1
kind: Service
metadata:
  name: auth-mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306
  selector:
    app: auth-mysql
service.yaml (ClusterIP)
apiVersion: v1
kind: Service
metadata:
  name: auth-mysql
spec:
  type: ClusterIP # optional (this is the default); exposes the service only inside the cluster
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: auth-mysql
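Instead of re-applying the edited file, the Service type can also be switched in place with a patch; when the type becomes ClusterIP, Kubernetes releases the allocated node port automatically. A sketch, assuming the NodePort Service from above is already applied:

```shell
kubectl patch svc auth-mysql -p '{"spec":{"type":"ClusterIP"}}'

# TYPE should now read ClusterIP and no node port is listed
kubectl get svc auth-mysql
```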
4. Connection verification (NodePort, access from outside the cluster)
You can see that the NodePort Service opened port 30306; connect to it from outside the cluster.
Create a database and a table.
The pod's /var/lib/mysql directory is now mapped into the PV.
Delete the pod; the Deployment automatically recreates it, and the data is still there — persistence works.
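The verification steps above can be sketched as a command sequence. `<node-ip>` is a placeholder for any node's address, and the database/table names are illustrative only:

```shell
# Connect from outside the cluster via the NodePort and create some data
mysql -h <node-ip> -P 30306 -uroot -p123456 \
  -e 'CREATE DATABASE demo; CREATE TABLE demo.t (id INT);'

# Delete the pod; the Deployment recreates it automatically
kubectl delete pod -l app=auth-mysql
kubectl get pods -l app=auth-mysql   # wait until Running again

# The database created before the restart should still be listed
mysql -h <node-ip> -P 30306 -uroot -p123456 -e 'SHOW DATABASES;'
```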
5. Change the Service to ClusterIP, start a temporary pod, and access MySQL from inside the cluster
kubectl run -it --rm --image=mysql --restart=Never mysql-client -- mysql -h auth-mysql -p123456