Notes:
A Kubernetes StatefulSet gives each Pod a stable identity and persistent storage. Elasticsearch needs stable storage so that a Pod's data survives rescheduling or restarts, which is why a StatefulSet is used here to manage the Pods.
We will deploy a three-node Elasticsearch cluster, with NFS providing persistent data storage through a StorageClass.
Environment:
Kubernetes version: v1.19.3
Docker version: 20.10.5
Virtual machines: 4
master:192.168.29.101
node1: 192.168.29.102
node2: 192.168.29.103
nfs server: 192.168.29.104
OS: all four VMs run CentOS 7.9
YAML file directory: /root/k8s/elasticsearch (use whatever path you like)
Steps:
cd /root/k8s/elasticsearch
1) Create a namespace, logging, and place all resources created below in it:
kubectl create ns logging
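You can confirm the namespace exists with:
kubectl get ns logging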
2) Create the StorageClass
vim elasticsearch-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: es-data-db-elasticsearch
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
Note:
The provisioner value here must match the PROVISIONER_NAME set later in the provisioner Deployment; otherwise the PVCs created for the Elasticsearch Pods (StatefulSet) will fail to bind.
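All the manifests are applied together in step 7, but once they are, a quick sanity check that the StorageClass was created:
kubectl get storageclass es-data-db-elasticsearch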
3) Create the RBAC resources
vim elasticsearch-storageclass-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner-elasticsearch
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner-elasticsearch
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner-elasticsearch
    namespace: logging
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner-elasticsearch
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner-elasticsearch
  namespace: logging
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner-elasticsearch
  namespace: logging
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner-elasticsearch
    namespace: logging
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner-elasticsearch
  apiGroup: rbac.authorization.k8s.io
Note:
1. Make sure the ServiceAccount, ClusterRoleBinding, ClusterRole, RoleBinding, and Role names do not clash with resources already in use by running Pods in your environment.
2. The namespace is set to logging.
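Once applied, you can verify that the bindings actually grant the ServiceAccount the permissions it needs, for example with an impersonation check (adjust the verb/resource as needed):
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:logging:nfs-provisioner-elasticsearch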
4) Deploy the NFS client provisioner that serves the StorageClass
vim elasticsearch-storageclass-deploy.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner-elasticsearch
  namespace: logging
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-elasticsearch
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner-elasticsearch
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner-elasticsearch
    spec:
      serviceAccountName: nfs-provisioner-elasticsearch
      containers:
        - name: nfs-provisioner-elasticsearch
          image: registry.cn-chengdu.aliyuncs.com/wangyunan_images_public/nfs-client-provisioner:v1
          volumeMounts:
            - name: nfs-client-root-elasticsearch
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.29.104
            - name: NFS_PATH
              value: /nfs/data/elasticsearch
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
            limits:
              cpu: 100m
              memory: 50Mi
      volumes:
        - name: nfs-client-root-elasticsearch
          nfs:
            server: 192.168.29.104
            path: /nfs/data/elasticsearch
Note:
1. The value of the PROVISIONER_NAME environment variable, fuseim.pri/ifs, is exactly the value given to the provisioner field in elasticsearch-storageclass.yaml.
2. NFS_SERVER is the IP address of your NFS server.
3. NFS_PATH must be created on the NFS server in advance, otherwise Pod creation will fail; a minimal server-side setup is sketched below.
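If the export is not set up yet, something like the following on the NFS server (192.168.29.104) should work; the export options here are an assumption, so adjust them to your environment:
mkdir -p /nfs/data/elasticsearch
# export the directory to the cluster nodes
echo "/nfs/data/elasticsearch *(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable nfs-server && systemctl restart nfs-server
exportfs -v   # confirm the export is active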
5) Create the Elasticsearch Service (headless)
vim elasticsearch-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - name: rest
      port: 9200
    - name: inter-node
      port: 9300
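Because clusterIP is None, this is a headless Service: instead of load-balancing, it gives each StatefulSet Pod a stable DNS record of the form <pod-name>.elasticsearch.logging.svc.cluster.local, which is exactly what the discovery settings in the next step rely on. Once the Pods are up, you can check resolution from a throwaway test Pod (the dns-test name is arbitrary):
kubectl run -n logging dns-test --rm -it --image=busybox:1.28 --restart=Never \
  -- nslookup elasticsearch.logging.svc.cluster.local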
6) Create the Elasticsearch Pods (StatefulSet)
vim elasticsearch-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 3
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: registry.cn-chengdu.aliyuncs.com/wangyunan_images_public/elasticsearch-oss:6.4.3
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - name: inter-node
              containerPort: 9300
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.ping.unicast.hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: discovery.zen.minimum_master_nodes
              value: "2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: network.host
              value: "0.0.0.0"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: es-data-db-elasticsearch
        resources:
          requests:
            storage: 8Gi
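A few things worth noting in this manifest: the fix-permissions init container chowns the NFS-backed data directory to UID/GID 1000 (the elasticsearch user in the image), increase-vm-max-map raises vm.max_map_count on the node to the 262144 minimum Elasticsearch requires, and volumeClaimTemplates stamps out one PVC per replica through the StorageClass. After the StatefulSet starts, the claims should be named data-es-cluster-0 through data-es-cluster-2 and show as Bound:
kubectl get pvc -n logging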
7) Create the resources:
kubectl create -f .
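StatefulSet Pods start one at a time in order (es-cluster-0, then -1, then -2), so it can take a few minutes for all three replicas to become Ready; you can watch the rollout with:
kubectl get pods -n logging -w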
8) Verify:
On the master node, check that the Elasticsearch cluster is up by making a request against its REST API:
kubectl port-forward es-cluster-0 9200:9200 --namespace=logging
Open a second terminal and run:
curl http://localhost:9200/_cluster/state?pretty
Normally this returns the cluster state as a block of JSON. (In my case, node1 and node2 ran out of memory after the deployment, the Pods kept restarting, and the response came back empty, so check that your worker nodes have enough free memory when you reach this step.)
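If the response is empty or the Pods keep restarting, these are the usual places to look (substitute whichever replica is unhealthy):
kubectl get pods -n logging
kubectl describe pod es-cluster-0 -n logging   # events: OOMKilled, scheduling, PVC binding
kubectl logs es-cluster-0 -n logging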