This is day 11 of my participation in the 更文挑战 writing challenge; see the event page for details.
Setting up an ES cluster
Download
Version 7.11.1 is used here.
cd /usr/local
mkdir es
cd es
mkdir logs
mkdir data
yum install perl-Digest-SHA
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.11.1-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.11.1-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-7.11.1-linux-x86_64.tar.gz.sha512
tar -xzf elasticsearch-7.11.1-linux-x86_64.tar.gz
cd elasticsearch-7.11.1/
Add an es user
This step is required, because Elasticsearch cannot be started with root privileges.
useradd -u 80 es
passwd es
chown -R es:es /usr/local/es
su - es
Configure ES on server 1
Here, cluster.name is the name of the cluster and node.name is the name of this node. path.data is the data storage path and path.logs is the log storage path. network.publish_host must be set; without it the node cannot establish a channel with the other nodes, since this is the address the other cluster members use to communicate with it. discovery.seed_hosts lists the nodes of the cluster, and cluster.initial_master_nodes lists the initial master-eligible nodes. http.cors.enabled and http.cors.allow-origin are cross-origin (CORS) settings; without them the es-head plugin cannot be used properly.
cluster.name: myes
node.name: es1
path.data: /usr/local/es/data
path.logs: /usr/local/es/logs
network.host: 0.0.0.0
http.port: 9200
network.publish_host: 192.168.0.120
discovery.seed_hosts: ["192.168.0.120","192.168.0.121","192.168.0.122"]
cluster.initial_master_nodes: ["es1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
Configure ES on server 2
cluster.name: myes
node.name: es2
path.data: /usr/local/es/data
path.logs: /usr/local/es/logs
network.host: 0.0.0.0
http.port: 9200
network.publish_host: 192.168.0.121
discovery.seed_hosts: ["192.168.0.120","192.168.0.121","192.168.0.122"]
cluster.initial_master_nodes: ["es1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
Configure ES on server 3
cluster.name: myes
node.name: es3
path.data: /usr/local/es/data
path.logs: /usr/local/es/logs
network.host: 0.0.0.0
http.port: 9200
network.publish_host: 192.168.0.122
discovery.seed_hosts: ["192.168.0.120","192.168.0.121","192.168.0.122"]
cluster.initial_master_nodes: ["es1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
Fix file permissions
Grant ownership of everything under the directory to the es user.
chown -R es:es /usr/local/es
Configure the server
Without this setting, startup fails with the following error:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [655360]
Fix:
Edit /etc/sysctl.conf and add vm.max_map_count=655360; after saving, run sysctl -p for the change to take effect.
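The same fix as a shell sketch (run as root):

```shell
# Persist the kernel setting and reload sysctl so it takes effect
echo 'vm.max_map_count=655360' >> /etc/sysctl.conf
sysctl -p

# Verify the new value is active
sysctl vm.max_map_count
```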
Start
In the bin directory, run ./elasticsearch -d on each of the three servers in turn; -d means start in the background.
Check that the cluster came up
http://IP:9200/_cluster/health?pretty
The cluster I tested with was named es-dev; with the configuration above it should be myes.
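For example, querying one of the nodes configured above (192.168.0.120) with curl:

```shell
# Expect "cluster_name": "myes" and "number_of_nodes": 3 once all
# three nodes have joined; "status" should eventually be "green".
curl -s 'http://192.168.0.120:9200/_cluster/health?pretty'

# _cat/nodes lists every node; the elected master is marked with *
curl -s 'http://192.168.0.120:9200/_cat/nodes?v'
```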
Enable X-Pack with SSL authentication
Once this is enabled, ES can only be accessed with a password; unauthenticated access is no longer possible.
./bin/elasticsearch-certutil cert
This generates the file elastic-certificates.p12; copy it into the config directory on all three servers, then run:
./bin/elasticsearch-keystore create
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
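To confirm the two secure settings were actually stored, the keystore contents can be listed with the same tool:

```shell
# Should print the two secure settings added above
./bin/elasticsearch-keystore list
```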
Then add the following to the ES configuration file:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
The complete yml file then becomes:
cluster.name: myes
node.name: es1
path.data: /usr/local/es/data
path.logs: /usr/local/es/logs
network.host: 0.0.0.0
http.port: 9200
network.publish_host: 192.168.0.120
discovery.seed_hosts: ["192.168.0.120","192.168.0.121","192.168.0.122"]
cluster.initial_master_nodes: ["es1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
Restart and verify the cluster again
Reproducing a problem
Earlier, after generating the elastic-certificates.p12 file, I had not yet run
./bin/elasticsearch-keystore create
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
and startup failed with:
uncaught exception in thread [main]
ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]; nested: ElasticsearchException[failed to initialize SSL TrustManager]; nested: IOException[keystore password was incorrect]; nested: UnrecoverableKeyException[failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.];
Likely root cause: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2103)
at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:220)
at java.base/java.security.KeyStore.load(KeyStore.java:1472)
at org.elasticsearch.xpack.core.ssl.TrustConfig.getStore(TrustConfig.java:98)
at org.elasticsearch.xpack.core.ssl.StoreTrustConfig.createTrustManager(StoreTrustConfig.java:66)
at org.elasticsearch.xpack.core.ssl.SSLService.createSslContext(SSLService.java:438)
at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1224)
at org.elasticsearch.xpack.core.ssl.SSLService.lambda$loadSSLConfigurations$5(SSLService.java:527)
at java.base/java.util.HashMap.forEach(HashMap.java:1425)
at java.base/java.util.Collections$UnmodifiableMap.forEach(Collections.java:1521)
at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:525)
at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:143)
at org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:458)
at org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:290)
at org.elasticsearch.node.Node.lambda$new$16(Node.java:560)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at org.elasticsearch.node.Node.<init>(Node.java:564)
at org.elasticsearch.node.Node.<init>(Node.java:278)
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:216)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:216)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:387)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75)
<<<truncated>>>
For complete error details, refer to the log at /data/es/logs/es-dev.log
After running the commands above, startup works normally.
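One step worth noting: with X-Pack security enabled, the built-in users (elastic, kibana, and so on) need passwords before clients such as Kibana can authenticate. In 7.x this is usually done once, against a running node; a minimal sketch:

```shell
# Prompts for a password for each built-in user; use `auto`
# instead of `interactive` to have random passwords generated.
./bin/elasticsearch-setup-passwords interactive
```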
Install Kibana
Kibana can call ES APIs to operate on ES, and it also provides visualization; it is convenient to use.
Download and install
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.11.1-linux-x86_64.tar.gz
tar -xzf kibana-7.11.1-linux-x86_64.tar.gz
cd kibana-7.11.1-linux-x86_64/
Edit the configuration file
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.0.120:9200","http://192.168.0.121:9200","http://192.168.0.122:9200"]
If X-Pack is enabled on ES, you also need to configure:
elasticsearch.username: "kibana"
elasticsearch.password: "password"
Start
chown -R es:es /usr/local/es
./kibana >/dev/null 2>&1 &
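Once started, Kibana's status endpoint can be used to confirm it is up (host and port as configured above):

```shell
# Returns a JSON status document when Kibana is ready;
# with X-Pack enabled, pass credentials via -u user:password
curl -s 'http://192.168.0.120:5601/api/status'
```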
Install es-head
With es-head you can easily view cluster information and document data through a web interface. Remember to configure CORS on ES, otherwise es-head will not work properly.
yum module install nodejs/development
yum install git
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
chown -R es:es /usr/local/es
Configuration
Edit Gruntfile.js in the es-head directory and add a hostname property; you can change the listen address to whatever suits you.
To save yourself a step each time, you can also make the cluster's address the default one the page opens with.
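The Gruntfile change can look like the fragment below. The surrounding property names follow es-head's stock Gruntfile; hostname: '0.0.0.0' is an assumption meaning "listen on all interfaces", and you can substitute a specific address instead.

```javascript
// elasticsearch-head/Gruntfile.js — grunt-contrib-connect section
connect: {
    server: {
        options: {
            hostname: '0.0.0.0', // added: listen address for the web UI
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}
```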
Start
npm run start