Getting Started with Elasticsearch: Installation


Installing and Basic Configuration of Elasticsearch

Pull the image

docker pull elasticsearch:7.16.2

List local images

elasticsearch                      7.16.2                         e082d8ac7e5e   10 months ago   634MB

Run the container

docker run -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -e "discovery.type=single-node" -d -p 9200:9200 -p 9300:9300 --name elasticsearch e082d8ac7e5e
4bb3ee64e3dcf1f96ae3890e8f9897f7c5ff8ede63a065d60e342673232de8e8

  • -e: set an environment variable inside the container
  • -d: run the container in the background and print its ID
  • -p: map ports as host:container. 9200 is the HTTP port Elasticsearch uses to talk to external clients; 9300 is the transport port the nodes use to talk to each other.
  • --name: the container name

Open http://localhost:9200/ in a browser:

{
  "name" : "4bb3ee64e3dc",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "_Vj8K3TSTZm_oqiyuZicLg",
  "version" : {
    "number" : "7.16.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2b937c44140b6559905130a8650c64dbd0879cfb",
    "build_date" : "2021-12-18T19:42:46.604893745Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
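As a quick sanity check, the JSON returned by the node can be inspected programmatically. A minimal Python sketch using only the standard library, with field values copied from the sample output above:

```python
import json

# Sample body of GET http://localhost:9200/ (values copied from the output above).
# In practice you would fetch it, e.g. with urllib.request.urlopen.
response_body = """
{
  "name" : "4bb3ee64e3dc",
  "cluster_name" : "docker-cluster",
  "version" : { "number" : "7.16.2", "lucene_version" : "8.10.1" },
  "tagline" : "You Know, for Search"
}
"""

info = json.loads(response_body)
# If this parses and the version matches the tag you pulled, the node is up.
print(info["cluster_name"], info["version"]["number"])
```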

Enter the container

docker exec -it elasticsearch bash

Look at the directory layout

LICENSE.txt  NOTICE.txt  README.asciidoc  bin  config  data  jdk  lib  logs  modules  plugins
  • bin: scripts for starting Elasticsearch, installing plugins, running diagnostics, and so on
  • config: elasticsearch.yml (the cluster configuration file), plus user- and role-based security configuration
  • jdk: the bundled Java runtime
  • data: data files
  • lib: Java libraries
  • logs: log files
  • modules: all Elasticsearch modules
  • plugins: all installed plugins

Install plugins

See which plugins are installed:

root@4bb3ee64e3dc:/usr/share/elasticsearch# bin/elasticsearch-plugin list

Nothing is installed by default.

Now install analysis-icu, which adds ICU-based Unicode text analysis (restart the node afterwards so the plugin is loaded):

root@4bb3ee64e3dc:/usr/share/elasticsearch# bin/elasticsearch-plugin install analysis-icu

You can also list plugins through the API at http://localhost:9200/_cat/plugins:

4bb3ee64e3dc analysis-icu 7.16.2
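With analysis-icu installed, its icu_analyzer can be exercised through the _analyze API. A minimal Python sketch that builds such a request with the standard library (the sample text is illustrative; uncomment the last lines to send it to a running node):

```python
import json
import urllib.request

# Build a POST http://localhost:9200/_analyze request that runs the
# icu_analyzer shipped by the analysis-icu plugin on a piece of text.
body = {"analyzer": "icu_analyzer", "text": "Elasticsearch 核心技术"}
payload = json.dumps(body).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:9200/_analyze",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to send the request to a running node:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(payload.decode("utf-8"))
```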

Multiple instances

First look at the current nodes via http://localhost:9200/_cat/nodes:

172.17.0.7 69 47 2 0.06 0.23 0.15 cdfhilmrstw * 4bb3ee64e3dc

Use Docker Compose:

version: '3'
services:
  es01:
    image: elasticsearch:7.16.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      # JVM heap size
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      # no limit on locked memory (needed with bootstrap.memory_lock)
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: elasticsearch:7.16.2
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: elasticsearch:7.16.2
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  elastic:
    driver: bridge

The volumes section here (see the related Stack Overflow answer) is equivalent to creating named local volumes by hand:

docker volume create --driver local --name data01
docker volume create --driver local --name data02
docker volume create --driver local --name data03

That is, on the Docker host you can see them with docker volume ls:

DRIVER    VOLUME NAME
local     es-cluster_data01
local     es-cluster_data02
local     es-cluster_data03

Start everything with docker-compose up, or docker-compose up -d to run in the background, then check the status with docker-compose ps:

Name              Command               State                Ports              
--------------------------------------------------------------------------------
es01   /bin/tini -- /usr/local/bi ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
es02   /bin/tini -- /usr/local/bi ...   Up      9200/tcp, 9300/tcp              
es03   /bin/tini -- /usr/local/bi ...   Up      9200/tcp, 9300/tcp   

Query the Elasticsearch API at http://localhost:9200/_cat/nodes?v=true to see the cluster:

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
172.21.0.4           67          71   2    0.13    0.23     0.17 cdfhilmrstw *      es02
172.21.0.3           35          71   2    0.13    0.23     0.17 cdfhilmrstw -      es01
172.21.0.2           68          71   2    0.13    0.23     0.17 cdfhilmrstw -      es03
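The master column marks the elected master with *. Since the _cat APIs return whitespace-aligned plain text, turning this output into structured records is straightforward; a small Python sketch using the sample lines above:

```python
# Parse the plain-text _cat/nodes?v=true output (lines copied from above)
# into a list of dicts keyed by the header row.
cat_output = """\
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
172.21.0.4           67          71   2    0.13    0.23     0.17 cdfhilmrstw *      es02
172.21.0.3           35          71   2    0.13    0.23     0.17 cdfhilmrstw -      es01
172.21.0.2           68          71   2    0.13    0.23     0.17 cdfhilmrstw -      es03
"""

header, *rows = cat_output.strip().splitlines()
columns = header.split()
nodes = [dict(zip(columns, row.split())) for row in rows]

# The elected master carries "*" in the master column.
master = next(n["name"] for n in nodes if n["master"] == "*")
print(master)
```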

Installing Kibana and a Quick Tour of the Interface

Kibana is a visualization and analysis tool that helps users explore and answer questions about their data. Add a kibana service to the docker compose file (the Kibana version should match the Elasticsearch version):

kibana:
  image: docker.elastic.co/kibana/kibana:7.16.2
  container_name: kibana7
  environment:
    - I18N_LOCALE=zh-CN
    - XPACK_GRAPH_ENABLED=true
    - TIMELION_ENABLED=true
    - XPACK_MONITORING_COLLECTION_ENABLED="true"
  ports:
    - "5601:5601"
  networks:
    - elastic
  links:
    - es01:elasticsearch

Note: links here gives es01 the alias elasticsearch inside the container network. Kibana's default configuration expects to reach the cluster at elasticsearch.hosts: ["http://elasticsearch:9200"], so with the alias in place Kibana finds Elasticsearch without any extra configuration.

After it starts, visit http://localhost:5601/.


  • Dev Tools: a convenient console for issuing Elasticsearch API requests; if you are unsure of the syntax, the built-in Help panel explains it.

Installing and Using cerebro

cerebro is a web-based management tool for Elasticsearch. With cerebro you can administer the cluster from the browser: send REST requests, change Elasticsearch settings, and monitor disk usage, cluster load, memory usage, and more in real time.

Add the following to the docker compose file:

cerebro:
  image: lmenezes/cerebro:0.8.3
  container_name: cerebro
  ports:
    - "9000:9000"
  command:
    - -Dhosts.0.host=http://elasticsearch:9200
  networks:
    - elastic
  links:
    - es01:elasticsearch

After it starts, visit http://localhost:9000/.


Installing and Using Logstash

Logstash is an open-source, server-side data processing pipeline that ingests data from multiple sources, transforms it, and ships it to one or more destinations.

  1. Download the sample data (movies.csv).
  2. Add the following to the docker compose file:
logstash:
  image: docker.elastic.co/logstash/logstash:7.16.2
  container_name: logstash7
  volumes:
    - /Users/wanghaifeng/Documents/es-cluster/logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:rw
    - /Users/wanghaifeng/Documents/es-cluster/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    - /Users/wanghaifeng/Documents/es-cluster/logstash/data/movies.csv:/usr/share/logstash/data/movies.csv
  networks:
    - elastic
  links:
    - es01:elasticsearch
  3. logstash.yml
http.host: "0.0.0.0"
  4. logstash.conf
input {
  file {
    path => "/usr/share/logstash/data/movies.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["id","content","genre"]
  }

  mutate {
    split => {"genre" => "|"}
    remove_field => ["path","host","@timestamp","message"]
  }

  mutate {
    split => ["content", "("]
    add_field => { "title" => "%{[content][0]}"}
    add_field => { "year" => "%{[content][1]}"}
  }
  mutate {
    convert => {
        "year" => "integer"
    }
    strip => ["title"]
    remove_field => ["path", "host", "@timestamp","message","content"]
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "movies"
    document_id => "%{id}"
  }
  stdout {}
}
  5. Run it; you can watch the data being imported into Elasticsearch.

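To make the filter chain above concrete, here is a plain-Python sketch of what it does to a single CSV row. The sample movie row is illustrative, in the id/content/genre layout the csv filter expects; stripping the trailing parenthesis from the year is handled here explicitly, whereas the Logstash config leaves it to the integer conversion.

```python
import csv
import io

# One illustrative row in the id,content,genre layout the csv filter expects.
raw = '1,Toy Story (1995),Adventure|Animation|Children'
row = next(csv.reader(io.StringIO(raw)))
doc = dict(zip(["id", "content", "genre"], row))

# mutate { split => { "genre" => "|" } }
doc["genre"] = doc["genre"].split("|")

# mutate { split => ["content", "("] } plus the title/year add_field steps
parts = doc["content"].split("(")
doc["title"] = parts[0].strip()            # strip => ["title"]
doc["year"] = int(parts[1].rstrip(")"))    # convert => year to integer
del doc["content"]                         # remove_field => ["content"]

print(doc)
```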

