Installing Elasticsearch on k8s with persistent storage


Preface

I got k8s set up a while ago, but hadn't installed any applications beyond ingress-nginx and the dashboard. After taking a look at Helm 3, I decided to use it to install Elasticsearch.

Persistent storage

Pods can be destroyed and recreated, so data in ephemeral storage is lost. And if you mount data out with hostPath, the data can't follow a pod that gets rescheduled onto another node. That's why k8s has the concepts of PV and PVC, and a PVC can dynamically provision PVs through a StorageClass.
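A handy way to watch these objects as you go (nothing chart-specific, just kubectl):

# the default StorageClass is marked "(default)"
kubectl get storageclass
# PVs are cluster-scoped, PVCs are namespaced; both show binding status
kubectl get pv
kubectl get pvc --all-namespaces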

Creating an NFS StorageClass

I started out with the nfs-server-provisioner from the stable Helm charts.

Using nfs-server-provisioner

GitHub: github.com/helm/charts…

After reading its README on GitHub I had a rough idea of how to use it. The install command:

helm install storageclass-nfs stable/nfs-server-provisioner -f storageclass-config.yml

storageclass-config.yml:

persistence:
  ## enable persistent storage
  enabled: true
  storageClass: "-"
  ## storage size: 30Gi
  size: 30Gi

storageClass:
  ## make this the default StorageClass
  defaultClass: true

nodeSelector:
  ## which node to install on
  kubernetes.io/hostname: instance-8x864u54
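After installing, you can check the provisioner pod and the PVC the chart generates (the label selector is my guess at what the stable chart sets; a plain kubectl get pvc works regardless):

kubectl get pods -l app=nfs-server-provisioner
# this PVC stays Pending until we hand it a matching PV (next step)
kubectl get pvc data-storageclass-nfs-nfs-server-provisioner-0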

After installing this way, it turned out it hadn't actually succeeded: the chart needs a PV to back its storage volume, and that PV has to bind to the PVC that gets generated automatically on install. So we have to provide a PV by hand:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    ## where the data lives on the node
    path: /data/k8s/volumes/data-nfs-server-provisioner-0
  claimRef:
    namespace: default
    ## name of the auto-generated PVC
    name: data-storageclass-nfs-nfs-server-provisioner-0

Apply it:

kubectl apply -f nfs-server-pv.yml
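If the claimRef matched, the PV should now show up as Bound:

kubectl get pv data-nfs-server-provisioner-0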

At this point I thought everything was fine, until the es chart I'd written kept erroring after install. Check the events:

kubectl describe pod elasticsearch-01 -n elasticsearch

What, the volume can't be mounted? I tried plenty of fixes with no luck and resorted to searching around. In the end I had no choice but to run the following on every node:

yum -y install nfs-utils
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
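To confirm the fix took, check that the mount helper is now present on each node:

# mount.nfs comes from nfs-utils; without it every NFS mount fails
which mount.nfs
rpm -q nfs-utils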

I didn't want to do this, but without it the data wouldn't mount. The reason, in hindsight, is that the kubelet on each node is what actually performs the NFS mount, and it needs the mount.nfs helper that nfs-utils provides. After the change I reinstalled the chart and ran a test (a scheme borrowed from the web, link).

pvc:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Test deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-test
  labels:
    app.kubernetes.io/name: busybox-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: busybox-deployment
  template:
    metadata:
      labels:
        app.kubernetes.io/name: busybox-deployment
    spec:
      containers:
        - image: busybox
          command:
            - sh
            - -c
            - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep 5; done'
          imagePullPolicy: IfNotPresent
          name: busybox
          volumeMounts:
            - name: nfs
              mountPath: "/mnt"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs-pvc

Apply both files with kubectl apply -f.
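Concretely (the file names are whatever you saved them as; these are mine, and the label matches the deployment above):

kubectl apply -f nfs-pvc.yml
kubectl apply -f busybox-test.yml
# the PVC should reach Bound and the pod Running
kubectl get pvc nfs-pvc
kubectl get pods -l app.kubernetes.io/name=busybox-deployment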

Then look on the node I picked as the mount target.
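The PV directory name is generated by the provisioner, so search for the file rather than guessing the path (run on that node):

find /data/k8s/volumes -name index.html
# the busybox pod rewrites this file every 5 seconds
cat $(find /data/k8s/volumes -name index.html)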

The data is there on disk, so the mount works.

Using nfs-client-provisioner

nfs-server-provisioner deploys an NFS server of its own; you create a PV (I used hostPath, being on bare machines) to bind to it, and every PV dynamically created by PVCs using its StorageClass is carved out under that single volume.

nfs-client-provisioner instead binds to an existing NFS server and creates a StorageClass backed by it.

I picked a cloud server with a fairly large disk to run the NFS service:

yum -y install nfs-utils
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
mkdir -p /data/k8s
vim /etc/exports
## add this line:
/data/k8s *(rw,no_root_squash,sync)

# reload the export config
exportfs -r
# verify it took effect
exportfs
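From any k8s node you can confirm the export is reachable before wiring it into the cluster (substitute your NFS server's IP):

showmount -e <nfs-server-ip>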

Deploy it:

helm install storageclass-nfs stable/nfs-client-provisioner -f nfs-client.yml

nfs-client.yml

storageClass:
  name: nfs
  defaultClass: true
nfs:
  server: ******* ## your NFS server's IP
  path: /data/k8s
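To smoke-test the new class end to end, a throwaway PVC (the name here is mine) should go straight to Bound:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-smoke-test
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
EOF
kubectl get pvc nfs-smoke-test
kubectl delete pvc nfs-smoke-test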

Done!!!!

Creating the es chart

I'm on Helm 3 (never used Helm 2, but 3 is very lightweight); I installed helm directly on my Mac. Helm website

First, run this locally (I skipped the published charts because I wanted to play with Helm myself):

helm create elasticsearch

The generated structure:
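helm create lays out roughly this (details vary slightly between Helm 3 versions):

elasticsearch/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml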

values.yml

replicaCount: 3

image:
  repository: elasticsearch:7.5.2
  pullPolicy: IfNotPresent

ingress:
  host: es.xx.com
  name: es-xx


service:
  in:
    clusterIP: None
    port: 9300
    name: elasticsearch-in
  out:
    port: 9200
    name: elasticsearch-out
      

resources: 
  limits:
    cpu: 5
    memory: 5Gi
  requests:
    cpu: 1
    memory: 1Gi

deployment.yml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: {{ include "elasticsearch.fullname" . }}
  name: {{ include "elasticsearch.fullname" . }}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  serviceName: {{ include "elasticsearch.name" .}}
  selector:
    matchLabels:
      {{- include "elasticsearch.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "elasticsearch.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          ports:
            - containerPort: 9200
              name: es-http
            - containerPort: 9300
              name: es-transport
          volumeMounts:  ## mount the data and the config file
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
            - name: elasticsearch-config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml 
      volumes:
        - name: elasticsearch-config
          configMap:
            name: {{ include "elasticsearch.name" .}}
            items:
              - key: elasticsearch.yml
                path: elasticsearch.yml    
  volumeClaimTemplates: # one PVC per replica
    - metadata:
        name: es-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: nfs     # the key to dynamic PV provisioning
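The StatefulSet above mounts elasticsearch.yml from a ConfigMap named {{ include "elasticsearch.name" . }}, whose template isn't shown. A minimal sketch of what that configmap.yml could contain for a 3-node cluster — the discovery values assume the headless service elasticsearch-in and pod names elasticsearch-0/1/2, so adjust them to your actual names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "elasticsearch.name" . }}
  namespace: {{ include "elasticsearch.fullname" . }}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
data:
  elasticsearch.yml: |
    cluster.name: es-cluster
    # Elasticsearch substitutes ${...} from the container environment,
    # so this picks up the POD_NAME env var set in the StatefulSet
    node.name: ${POD_NAME}
    network.host: 0.0.0.0
    # assumed: the headless service in front of the StatefulSet pods
    discovery.seed_hosts: elasticsearch-in
    # assumed: pod names derived from a fullname of "elasticsearch"
    cluster.initial_master_nodes: elasticsearch-0,elasticsearch-1,elasticsearch-2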

service.yml

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.service.out.name }}
  namespace: {{ include "elasticsearch.name" .}}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
spec:
  ports:
    - port: {{ .Values.service.out.port }}
      protocol: TCP
      name: http
  selector:
    {{- include "elasticsearch.selectorLabels" . | nindent 4 }}

---

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.service.in.name }}
  namespace: {{ include "elasticsearch.name" .}}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
spec:
  clusterIP: {{.Values.service.in.clusterIP}}
  ports:
    - port: {{ .Values.service.in.port }}
      protocol: TCP
      name: http
  selector:
    {{- include "elasticsearch.selectorLabels" . | nindent 4 }}

ingress.yml (if es is only used inside the cluster, you can skip this)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingress.name }}
  namespace: {{ include "elasticsearch.name" .}}
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
        - path: /
          backend:
            serviceName: {{ .Values.service.out.name }}
            servicePort: {{ .Values.service.out.port }}

Check the chart for obvious errors (locally):

helm lint elasticsearch

Package it into a tarball:

helm package elasticsearch

Upload it to the server; I used the FTP tool FileZilla.

Then install it on the cluster:

helm install elasticsearch ./elasticsearch-0.1.0.tgz

Check whether the install succeeded:

kubectl get pods -A

If it didn't, the following command shows the details:

kubectl describe pod [pod] -n [namespace]

Of course, describe alone doesn't always reveal the cause of a failure. You can use describe to find which node the pod was scheduled on, then run docker logs there to see the container output (kubectl logs [pod] -n [namespace] also works once the container has started).

Once it's up, access es through the domain configured in the ingress (it's also usable from inside the k8s cluster).
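A quick check through the ingress (es.xx.com is the host from values.yml):

# basic node info
curl http://es.xx.com
# expect "status":"green" once all three replicas have joined
curl http://es.xx.com/_cluster/health?pretty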

Installing Kibana

With es installed, the rest of ELK was worth a try. Installing Kibana also hit a problem, so I'm writing it down.

helm create kibana

The chart layout is the same as the elasticsearch chart above.

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "kibana.fullname" . }}
  labels:
    {{- include "kibana.labels" . | nindent 4 }}
  namespace: {{ .Values.namespace.name}}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "kibana.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "kibana.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image.repository }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: "ELASTICSEARCH_HOSTS" #es k8s内部访问地址
              value: {{ .Values.elasticsearch.host }}
            - name: "I18N_LOCALE"   #汉化参数
              value: "zh-CN"
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      # Mounting kibana.yml from a ConfigMap gave me trouble: constant 502s,
      # and docker logs showed nothing, so it stays commented out
      #     volumeMounts:
      #       - name: kibana-config
      #         mountPath: /usr/share/kibana/config/kibana.yml
      #         subPath: kibana.yml  
      # volumes: 
      #   - name: kibana-config
      #     configMap:
      #       name: {{ include "kibana.name" .}}
      #       items:
      #         - key: kibana.yml
      #           path: kibana.yml  

values.yml

replicaCount: 1

image:
  repository: kibana:7.5.2
  pullPolicy: IfNotPresent

namespace:
  name: elasticsearch # same namespace as es

service:
  port: 5601
  name: kibana
  
elasticsearch:
  host: http://elasticsearch-out:9200 # in-cluster es address; elasticsearch-out is the es Service

ingress:
  name: kibana-xx  
  host: kibana.xx.com


resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 1
    memory: 512Mi

service.yml

apiVersion: v1
kind: Service
metadata:
  name: {{ include "kibana.fullname" . }}
  namespace: {{ .Values.namespace.name }}
  labels:
    {{- include "kibana.labels" . | nindent 4 }}
spec:
  ports:
    - port: {{ .Values.service.port }}
      protocol: TCP
      name: http
  selector:
    {{- include "kibana.selectorLabels" . | nindent 4 }}

ingress.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingress.name }}
  namespace: {{ .Values.namespace.name }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
        - path: /
          backend:
            serviceName: {{ include "kibana.fullname" . }}
            servicePort: {{ .Values.service.port }}

Linting and packaging are the same as before, so straight to installing on k8s. After the install, Kibana was unreachable. Checking the container with docker logs, it sat at waiting for elasticsearch the whole time. I assumed ELASTICSEARCH_HOSTS was misconfigured and tried many variants, to no effect. Then I suspected the es install itself, so I wrote a small test that writes some data.

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.3.1</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.3.1</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.3.1</version>
</dependency>
import org.apache.http.HttpHost;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import java.io.IOException;

try {
    // no port given, so this goes through the ingress on port 80
    RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("es.xx.com")));
    /* first run: index a test document
    Map<String, Object> jsonMap = new HashMap<>();
    jsonMap.put("user", "zhanghua");
    jsonMap.put("postDate", new Date());
    jsonMap.put("message", "trying out Elasticsearch");
    IndexRequest indexRequest = new IndexRequest("posts")
            .id("1").source(jsonMap);
    IndexResponse response = client.index(indexRequest, RequestOptions.DEFAULT);
    System.out.println(response); */
    // second run: read the document back
    GetRequest getRequest = new GetRequest("posts", "1");
    GetResponse getResponse = client.get(getRequest, RequestOptions.DEFAULT);
    System.out.println(getResponse);
    client.close();
} catch (IOException e) {
    e.printStackTrace();
}

The response:

{"_index":"posts","_type":"_doc","_id":"1","_version":1,"_seq_no":0,"_primary_term":1,"found":true,"_source":{"postDate":"2020-01-29T06:26:51.136Z","message":"trying out Elasticsearch","user":"zhanghua"}}

So es was clearly fine, and when I went back to Kibana it suddenly worked. I honestly don't know why; maybe because es had no data initialized before? Something to dig into.

The Kibana UI was then reachable.