Learning Elasticsearch Internals (2): Setting Up Elasticsearch on CentOS



1. Single-Node Installation

1.1 Elasticsearch and Kibana

There are three common ways to install Elasticsearch and Kibana: yum, rpm, and tar.gz. The yum method requires internet access, so in an intranet-only environment use one of the latter two.

1.1.1 Installing Elasticsearch

This time we install from the rpm package, downloaded from the official Elasticsearch download page.

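If the server has internet access, the rpm can also be fetched straight from Elastic's artifact repository rather than through the download page; a minimal sketch (verify the exact URL for your version on the official site):

# Download the Elasticsearch 7.12.0 rpm used in this article; adjust the version as needed
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.12.0-x86_64.rpm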

Upload the rpm file to the server and install it with the following command:

[root@localhost opt]# rpm --install elasticsearch-7.12.0-x86_64.rpm 
warning: elasticsearch-7.12.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Creating elasticsearch group... OK
Creating elasticsearch user... OK
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
Future versions of Elasticsearch will require Java 11; your Java version from [/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.282.b08-1.el7_9.x86_64/jre] does not meet this requirement. Consider switching to a distribution of Elasticsearch with a bundled JDK. If you are already using a distribution with a bundled JDK, ensure the JAVA_HOME environment variable is not set.
Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore

Start Elasticsearch with:

systemctl start elasticsearch

Check its status with:

[root@localhost opt]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-04-10 16:40:42 CST; 2min 7s ago
     Docs: https://www.elastic.co
 Main PID: 35470 (java)
    Tasks: 56
   Memory: 2.1G
   CGroup: /system.slice/elasticsearch.service
           ├─35470 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitSt...
           └─35769 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Apr 10 16:40:23 localhost.localdomain systemd[1]: Starting Elasticsearch...
Apr 10 16:40:42 localhost.localdomain systemd[1]: Started Elasticsearch.

Stop it with:

systemctl stop elasticsearch

If the browser cannot reach the service, check whether port 9200 is open in the firewall:

[root@localhost opt]# firewall-cmd --list-ports
9092/tcp 2181/tcp 2888/tcp 3888/tcp

Add a firewall rule for port 9200 and reload:

[root@localhost opt]# firewall-cmd --zone=public --add-port=9200/tcp --permanent
success
[root@localhost opt]# systemctl reload firewalld
[root@localhost opt]# firewall-cmd --list-ports
9092/tcp 2181/tcp 2888/tcp 3888/tcp 9200/tcp

Accessing Elasticsearch from the browser still fails.


That is because Elasticsearch must be configured to accept external connections. Edit the configuration file:

[root@localhost opt]# vi /etc/elasticsearch/elasticsearch.yml 

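A minimal sketch of the change, assuming you want the node to listen on all interfaces (the cluster configuration later in this article binds to a specific IP instead):

# Back up the original config, then bind Elasticsearch to all interfaces
sudo cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
echo 'network.host: 0.0.0.0' | sudo tee -a /etc/elasticsearch/elasticsearch.yml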

After restarting, the service fails to start.


The error is:

bootstrap check failure [1] of [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Once Elasticsearch binds to a non-loopback address it applies the production bootstrap checks, which require at least one of the discovery settings listed in the error. Since the node has not been told how to find or bootstrap a master, we add the initial master-eligible nodes to the configuration file:

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application

# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1

# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2"]

Restart again and open ip:9200 in the browser; this time it responds with the standard cluster-information JSON.

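The same check works from the command line; a sketch assuming the node IP 192.168.184.134 used throughout this article:

# A healthy node answers with JSON containing the node name, cluster_name,
# version.number (7.12.0 here) and the "You Know, for Search" tagline
curl http://192.168.184.134:9200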

1.1.2 Installing Kibana

Kibana is also installed from an rpm package.

Upload the rpm to the server and install it with:

[root@localhost opt]# rpm --install kibana-7.12.0-x86_64.rpm 
warning: kibana-7.12.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Creating kibana group... OK
Creating kibana user... OK
Created Kibana keystore in /etc/kibana/kibana.keystore

Start the service and open Kibana's default port 5601 in the firewall:

[root@localhost opt]# systemctl start kibana
[root@localhost opt]# firewall-cmd --zone=public --add-port=5601/tcp --permanent
success
[root@localhost opt]# systemctl reload firewalld
[root@localhost opt]# firewall-cmd --list-ports
9092/tcp 2181/tcp 2888/tcp 3888/tcp 9200/tcp 5601/tcp

Allow external access to Kibana:

vi /etc/kibana/kibana.yml
# allow external access
server.host: 192.168.184.134
# addresses of the Elasticsearch cluster
elasticsearch.hosts: ["http://192.168.184.134:9200","http://192.168.184.135:9200","http://192.168.184.136:9200"]

Restart Kibana:

[root@localhost opt]# systemctl restart kibana

After the restart, Kibana is reachable at http://192.168.184.134:5601 in the browser.
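You can also confirm Kibana is up from the shell via its status endpoint; a sketch assuming the same host and port:

# Returns JSON describing Kibana's overall status and its connection to Elasticsearch
curl http://192.168.184.134:5601/api/status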

1.2 The IK Analyzer for Elasticsearch

Full-text search on Chinese text needs a Chinese analyzer, so we use elasticsearch-analysis-ik-7.12.0, again matching the Elasticsearch version. Download it from GitHub: github.com/medcl/elast…
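If the server can reach GitHub directly, the plugin CLI bundled with Elasticsearch can install IK in one step; a sketch, with the release URL assumed from the project's GitHub releases page. Otherwise, the manual installation below works offline.

# Install the IK plugin from its release zip, then restart Elasticsearch
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install \
  https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.12.0/elasticsearch-analysis-ik-7.12.0.zip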

First, go to the Elasticsearch installation directory:

[root@localhost /]# cd /usr/share/elasticsearch/
[root@localhost elasticsearch]# ll
total 560
drwxr-xr-x.  2 root root   4096 Apr 10 16:39 bin
drwxr-xr-x.  9 root root    107 Apr 10 16:39 jdk
drwxr-xr-x.  3 root root   4096 Apr 10 16:39 lib
-rw-r--r--.  1 root root   3860 Mar 18 14:15 LICENSE.txt
drwxr-xr-x. 61 root root   4096 Apr 10 16:39 modules
-rw-rw-r--.  1 root root 545323 Mar 18 14:19 NOTICE.txt
drwxr-xr-x.  2 root root      6 Mar 18 14:30 plugins
-rw-r--r--.  1 root root   7263 Mar 18 14:14 README.asciidoc

Go into plugins, create an ik directory, enter it, copy the IK zip package into it, unzip it, delete the zip, and list the directory:

[root@localhost elasticsearch]# cd plugins/
[root@localhost plugins]# mkdir ik
[root@localhost plugins]# cd ik
[root@localhost ik]# cp /opt/elasticsearch-analysis-ik-7.12.0.zip ./
[root@localhost ik]# ll
total 4400
-rw-r--r--. 1 root root 4504461 Apr 10 17:39 elasticsearch-analysis-ik-7.12.0.zip
[root@localhost ik]# unzip elasticsearch-analysis-ik-7.12.0.zip 
Archive:  elasticsearch-analysis-ik-7.12.0.zip
  inflating: elasticsearch-analysis-ik-7.12.0.jar  
  inflating: httpclient-4.5.2.jar    
  inflating: httpcore-4.4.4.jar      
  inflating: commons-logging-1.2.jar  
  inflating: commons-codec-1.9.jar   
   creating: config/
  inflating: config/main.dic         
  inflating: config/quantifier.dic   
  inflating: config/extra_single_word_full.dic  
  inflating: config/IKAnalyzer.cfg.xml  
  inflating: config/surname.dic      
  inflating: config/suffix.dic       
  inflating: config/stopword.dic     
  inflating: config/extra_main.dic   
  inflating: config/extra_stopword.dic  
  inflating: config/preposition.dic  
  inflating: config/extra_single_word_low_freq.dic  
  inflating: config/extra_single_word.dic  
  inflating: plugin-descriptor.properties  
  inflating: plugin-security.policy  
[root@localhost ik]# rm -rf elasticsearch-analysis-ik-7.12.0.zip 
[root@localhost ik]# ll
total 1432
-rw-r--r--. 1 root root 263965 May  6 2018 commons-codec-1.9.jar
-rw-r--r--. 1 root root  61829 May  6 2018 commons-logging-1.2.jar
drwxr-xr-x. 2 root root   4096 Dec 25 2019 config
-rw-r--r--. 1 root root  54625 Mar 29 23:08 elasticsearch-analysis-ik-7.12.0.jar
-rw-r--r--. 1 root root 736658 May  6 2018 httpclient-4.5.2.jar
-rw-r--r--. 1 root root 326724 May  6 2018 httpcore-4.4.4.jar
-rw-r--r--. 1 root root   1807 Mar 29 23:08 plugin-descriptor.properties
-rw-r--r--. 1 root root    125 Mar 29 23:08 plugin-security.policy

Next, restart Elasticsearch:

[root@localhost ik]# systemctl restart elasticsearch

Check the log to confirm the plugin was loaded. Since the cluster name is configured as my-application, the relevant log file is my-application.log:

[root@localhost elasticsearch]# cat /var/log/elasticsearch/my-application.log 


The log shows the IK analyzer plugin was loaded successfully.
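To verify it end to end, call the _analyze API with ik_max_word (or ik_smart); a sketch assuming the node IP used above:

# Tokenize a Chinese sentence with the IK analyzer; the response lists the produced tokens
curl -X POST 'http://192.168.184.134:9200/_analyze?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'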

1.3 elasticsearch-head

elasticsearch-head is a front-end project used mainly to monitor the state of an Elasticsearch cluster. Download the source from GitHub: github.com/mobz/elasti… If the machine can reach GitHub you can clone the repository directly, or download the zip package, which is what this article does. This tool is installed directly on Windows: unzip the archive, enter the directory, and install the dependencies with the command below (Node.js is required; install it first if you don't have it):

npm install

Start it:

PS E:\javasoft\es-7.12\elasticsearch-head-5.0.0> npm run start

> elasticsearch-head@0.0.0 start E:\javasoft\es-7.12\elasticsearch-head-5.0.0
> grunt server

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100

Open http://localhost:9100 in a browser.


The page loads, but it cannot connect to Elasticsearch. This is a cross-origin (CORS) problem that we fix in the Elasticsearch configuration.

Edit the Elasticsearch configuration file and append the following:

[root@localhost opt]# vi /etc/elasticsearch/elasticsearch.yml

# appended settings
http.cors.enabled: true
http.cors.allow-origin: "*"

Restart Elasticsearch and refresh the es-head page; it now connects successfully.
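To double-check the CORS setting from the shell, a quick sketch that sends an Origin header the way the browser does:

# With CORS enabled, the response should include an Access-Control-Allow-Origin header
curl -i -H 'Origin: http://localhost:9100' 'http://192.168.184.134:9200/'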


2. Adding Cluster Nodes

In es-head above we see only a single node, so next we add two more. Installation is exactly the same as before; only the configuration file differs slightly.

2.1 Configuration File

On the new nodes we only need to upload the Elasticsearch rpm and the IK plugin. Here is the complete cluster configuration file, with the redundant comments removed:

# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
# cluster name
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
# name of this node
node.name: node-0
# ----------------------------------- Paths ------------------------------------
# data storage path
path.data: /var/lib/elasticsearch
# log file path
path.logs: /var/log/elasticsearch
# ---------------------------------- Network -----------------------------------
# bind to an external address
network.host: 192.168.184.134
# HTTP port
http.port: 9200
# --------------------------------- Discovery ----------------------------------
# New in 7.x: addresses of the master-eligible nodes used for discovery
# Since 7.x, discovery.zen.ping.unicast.hosts is no longer needed; discovery.seed_hosts replaces it
discovery.seed_hosts: ["192.168.184.134", "192.168.184.135","192.168.184.136"]
#
# New in 7.x: needed when bootstrapping a brand-new cluster so a master can be elected
cluster.initial_master_nodes: ["node-0","node-1","node-2"]

# whether this node may be elected master
node.master: true

# whether this node stores data (true/false)
node.data: true

# enable CORS; required when using the head plugin
http.cors.enabled: true

# "*" allows requests from any origin
http.cors.allow-origin: "*"

Note: while converting to a cluster, clear each node's data directory before restarting it, otherwise the old single-node cluster state conflicts with the new cluster and startup fails; a sketch of clearing a node follows. Remember to restart Kibana as well.
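A minimal sketch of clearing a node before it joins the new cluster (this destroys any indices already stored on that node, so only do it on nodes you are rebuilding):

# Stop the node, remove the old single-node data under path.data, then start it again
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/nodes
sudo systemctl start elasticsearch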

2.2 Configuring es-head

Edit elasticsearch-head-5.0.0\_site\app.js and replace the original localhost with the IP:port of any node in the cluster; I configured three nodes here.

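On Windows the change can be made in any editor; on Linux a one-liner such as the following would do the same (a sketch, assuming the file still contains the default http://localhost:9200):

# Point es-head at one node of the cluster instead of localhost
sed -i 's|http://localhost:9200|http://192.168.184.134:9200|g' _site/app.js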

Edit Gruntfile.js and add a hostname entry so the server listens on all interfaces:

		connect: {
			server: {
				options: {
					port: 9100,
					hostname: '0.0.0.0',
					base: '.',
					keepalive: true
				}
			}
		}

Start it again and open localhost:9100.


2.3 Checking Elasticsearch Status with Kibana

Open http://192.168.184.134:5601/ and go to Dev Tools.


Check the cluster health:

GET /_cat/health?v

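The same request works over plain HTTP from any machine that can reach a node; a sketch:

# Prints a table with epoch, status (green/yellow/red), node.total, active shards, etc.
curl 'http://192.168.184.134:9200/_cat/health?v'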

Create an index named test with three primary shards and one replica:

PUT /test/
{
  "settings":{
    "index":{
      "number_of_shards" : "3",
      "number_of_replicas" : "1"
    }
  }
}
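The equivalent with curl, plus a quick look at how the shards were allocated across the nodes (a sketch using the same node IP):

# Create the test index with 3 primary shards and 1 replica per primary
curl -X PUT 'http://192.168.184.134:9200/test' \
  -H 'Content-Type: application/json' \
  -d '{"settings":{"index":{"number_of_shards":"3","number_of_replicas":"1"}}}'

# List the shards of the test index and the nodes they were assigned to
curl 'http://192.168.184.134:9200/_cat/shards/test?v'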

Check es-head:


At this point, the cluster is up and running.