This article explains how to use docker-compose to quickly set up a development environment for ibuy-portal-backend.
The companion front-end project is ibuy-portal.
1. Development Environment Configuration
1.1. .env.dev configuration
JWT_SECRET=xxx
JWT_EXPIRES_IN=xxx
# elasticsearch dev environment
ES_NODE=https://localhost:9200
ELASTIC_USERNAME=elastic
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=123456
# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=localhost:9200
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=123456
# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80
# Version of Elastic products
STACK_VERSION=8.14.2
# Set the cluster name
CLUSTER_NAME=docker-cluster
# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial
# Increase or decrease based on the available host memory (in bytes)
# 1GB
MEM_LIMIT=1073741824
# MEM_LIMIT=268435456
# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
# postgresql
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=xxx
POSTGRES_DATABASE=mall
# redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=xxx
# minio
MINIO_HOST=localhost
MINIO_PORT=9000
MINIO_ROOT_USER=minio
MINIO_ROOT_PASSWORD=xxx
MINIO_ACCESS_KEY=xxx
MINIO_SECRET_KEY=xxx
# rabbitmq
RMQ_HOST=localhost
RMQ_PORT=5672
RMQ_USERNAME=xxx
RMQ_PASSWORD=xxx
RMQ_VIRTUAL_HOST=/ibuy
# Alipay payment settings
# merchant app id
ALIPAY_APP_ID=xxx
# merchant private key
ALIPAY_MERCHANT_PRIVATE_KEY=xxx
# Alipay public key
ALIPAY_PUBLIC_KEY=xxx
# async payment-status notification URL
ALIPAY_NOTIFY_URL=xxx
# sync payment-status return URL
ALIPAY_RETURN_URL=xxx
# Alipay gateway for development (the sandbox gateway)
ALIPAY_GATEWAY_URL=xxx
ALIPAY_SIGN_TYPE=RSA2
#ALIPAY_CHARSET=utf-8
#ALIPAY_FORMAT=json
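These variables are read by the NestJS application at startup. Below is a minimal sketch of how they might be loaded with @nestjs/config; the module layout is an assumption for illustration, not the project's actual code.
// app.module.ts (illustrative sketch, assuming @nestjs/config is used)
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

@Module({
  imports: [
    // Load .env.dev during development and .env in production, and expose the values app-wide.
    ConfigModule.forRoot({
      isGlobal: true,
      envFilePath: process.env.NODE_ENV === 'production' ? '.env' : '.env.dev',
    }),
  ],
})
export class AppModule {}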
1.2. docker-compose.dev.yml configuration
This file uses docker-compose to manage, in one place, all the services needed during development.
version: "2.5"
services:
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
es01:
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
kibana:
depends_on:
es01:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
volumes:
- certs:/usr/share/kibana/config/certs
- kibanadata:/usr/share/kibana/data
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
mem_limit: ${MEM_LIMIT}
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
volumes:
certs:
driver: local
esdata01:
driver: local
kibanadata:
driver: local
1.3. Start the containers
Run the following command from the project root:
docker-compose -f ./docker-compose.dev.yml --env-file .env.dev up --build -d
- -f: specify the Docker Compose configuration file.
- --env-file: specify the environment-variable file.
- up: start the services.
- --build: rebuild the Docker images before starting the services. If you have changed the Dockerfile or dependency files (such as package.json), --build ensures the images are built from the latest code and dependencies.
- -d: run the containers in detached mode, i.e. in the background.
1.4. Configure the individual services
1.4.1. Elasticsearch and Kibana
1.4.1.1. Copy the ca.crt SSL certificate from the container to your local machine
Once Elasticsearch is up, run the following command from the project root:
docker cp ibuy-portal-backend-es01-1:/usr/share/elasticsearch/config/certs/ca/ca.crt .
1.4.1.2. Make a REST API call to Elasticsearch to confirm the container is running
curl --cacert ca.crt -u elastic:[ELASTIC_PASSWORD] https://localhost:9200
Replace [ELASTIC_PASSWORD] with the password you set in the .env.dev file.
If everything is working, you will see something like this:
{
"name" : "es01",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "t5DlPYalRAm_tceqPUo7gw",
"version" : {
"number" : "8.14.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "2afe7caceec8a26ff53817e5ed88235e90592a1b",
"build_date" : "2024-07-01T22:06:58.515911606Z",
"build_snapshot" : false,
"lucene_version" : "9.10.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
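For reference, the backend can connect to this node from Node.js with the official @elastic/elasticsearch client and the ca.crt copied above. This is only a sketch under the assumption that ca.crt sits in the project root; adjust the path and variable handling to your own code.
// es.client.ts (sketch; assumes ca.crt was copied into the project root)
import { Client } from '@elastic/elasticsearch';
import { readFileSync } from 'fs';

const esClient = new Client({
  node: process.env.ES_NODE ?? 'https://localhost:9200',
  auth: {
    username: process.env.ELASTIC_USERNAME ?? 'elastic',
    password: process.env.ELASTIC_PASSWORD ?? '',
  },
  tls: {
    ca: readFileSync('./ca.crt'), // the CA certificate copied out of the es01 container
  },
});

// Quick connectivity check, equivalent to the curl call above.
esClient.info().then((info) => console.log(info.cluster_name));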
1.4.2. MinIO configuration
MinIO's startup is already configured centrally in docker-compose.dev.yml, so while following along you can skip any steps that deploy it manually with docker.
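As a reference for how the backend can talk to this MinIO instance with the credentials from .env.dev, here is a small sketch using the official minio JavaScript SDK; the bucket name is a made-up example.
// minio.client.ts (sketch using the 'minio' npm package)
import { Client } from 'minio';

const minioClient = new Client({
  endPoint: process.env.MINIO_HOST ?? 'localhost',
  port: Number(process.env.MINIO_PORT ?? 9000),
  useSSL: false, // plain HTTP in this setup
  accessKey: process.env.MINIO_ACCESS_KEY ?? '',
  secretKey: process.env.MINIO_SECRET_KEY ?? '',
});

// Example: make sure a bucket exists before uploading files (bucket name is hypothetical).
export async function ensureBucket(bucket = 'ibuy-images') {
  if (!(await minioClient.bucketExists(bucket))) {
    await minioClient.makeBucket(bucket, 'us-east-1');
  }
}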
1.4.3. RabbitMQ configuration
1.4.3.1. Implementing a delayed queue with TTL + a dead-letter queue
With the configuration above in place, you can manage RabbitMQ at http://localhost:15672/#/queues.
Let's first look at the flow diagram of how RabbitMQ implements a delayed queue.
Our delayed queue handles orders that the user has submitted but not paid before the timeout.
1.4.3.1.1. Create the queue.order.check consume queue (the queues in the screenshot show what things look like after all queues have been created)
1.4.3.1.2. Create the dead-letter exchange exchange.order.delay
Switch to the Exchanges tab and create a dead-letter exchange named exchange.order.delay, choosing direct as the type.
Then open the exchange.order.delay exchange, bind the consume queue queue.order.check to it, and set the routing key to queue.order.check.
At this point you can publish a test message to check that queue.order.check receives it.
1.4.3.1.3. Create the dead-letter queue queue.order.delay
Here, the dead-letter queue's arguments route its expired messages into the queue.order.check consume queue.
x-message-ttl=10000; this argument is better configured in code (see the sketch below).
Finally, bind queue.order.delay to the dead-letter exchange exchange.order.delay.
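The same topology can also be declared from code, which keeps x-message-ttl out of the management UI as suggested above. Below is a sketch using the amqplib package that mirrors the queue and exchange names from this section; the connection URL, the credentials, and the routing key used for the delay-queue binding are assumptions.
// delay-queue.setup.ts (sketch with the 'amqplib' package)
import * as amqp from 'amqplib';

export async function setupOrderDelayQueue() {
  // Credentials are placeholders; %2F encodes the leading "/" of the /ibuy vhost.
  const conn = await amqp.connect('amqp://user:pass@localhost:5672/%2Fibuy');
  const ch = await conn.createChannel();

  // Direct exchange that routes both the delayed message and its expired (dead-lettered) copy.
  await ch.assertExchange('exchange.order.delay', 'direct', { durable: true });

  // Consume queue: the order-timeout checker listens here.
  await ch.assertQueue('queue.order.check', { durable: true });
  await ch.bindQueue('queue.order.check', 'exchange.order.delay', 'queue.order.check');

  // Delay queue: messages wait out the TTL here, then get dead-lettered to the consume queue.
  await ch.assertQueue('queue.order.delay', {
    durable: true,
    arguments: {
      'x-message-ttl': 10000, // 10 s, as in the article
      'x-dead-letter-exchange': 'exchange.order.delay',
      'x-dead-letter-routing-key': 'queue.order.check',
    },
  });
  await ch.bindQueue('queue.order.delay', 'exchange.order.delay', 'queue.order.delay');

  // Producer side: publish the order id with the delay routing key when an order is created.
  ch.publish('exchange.order.delay', 'queue.order.delay', Buffer.from(JSON.stringify({ orderId: 1 })));
}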
1.4.3.2. Create the payment queue
We use a queue here for two main reasons:
- To process payment results asynchronously, decouple the payment flow, and guarantee reliable delivery through the message queue so that messages are not lost (a publishing sketch follows at the end of this subsection).
- Peak shaving: if the order system writes directly to the database when an order is placed, it can only sustain roughly 1,000 concurrent writes per second before it risks going down. During peak hours, when concurrency suddenly spikes past 1,000 or more, the database may lock up, so we add MQ as a middle layer to buffer the writes.
1.4.3.2.1. Create a payment queue queue.order.pay
No arguments are needed.
1.4.3.2.2. Create the payment exchange exchange.order.pay
Open the exchange and bind it to the payment queue queue.order.pay.
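To make the reliability point above concrete, the producer can publish persistent messages on a confirm channel when pushing payment results into queue.order.pay. This is only a sketch; the payload shape, routing key, and connection details are assumptions.
// pay-queue.producer.ts (sketch; uses an amqplib confirm channel for reliable publishing)
import * as amqp from 'amqplib';

export async function publishPayResult(orderId: number) {
  const conn = await amqp.connect('amqp://user:pass@localhost:5672/%2Fibuy'); // placeholders
  const ch = await conn.createConfirmChannel();

  await ch.assertExchange('exchange.order.pay', 'direct', { durable: true });
  await ch.assertQueue('queue.order.pay', { durable: true });
  await ch.bindQueue('queue.order.pay', 'exchange.order.pay', 'queue.order.pay');

  // persistent: true writes the message to disk; waitForConfirms resolves once the broker acks it.
  ch.publish(
    'exchange.order.pay',
    'queue.order.pay',
    Buffer.from(JSON.stringify({ orderId, status: 'PAID' })),
    { persistent: true },
  );
  await ch.waitForConfirms();
  await conn.close();
}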
1.4.3.3. Other issues
The RabbitMQ settings are already handled in the docker-compose file; if something goes wrong, try running the commands below manually.
1.4.3.3.1. Wait for the server to start, then enable the stream and stream-management plugins:
docker exec rabbitmq rabbitmq-plugins enable rabbitmq_stream rabbitmq_stream_management
1.4.3.3.2. Enter the container as root
docker exec -u root -it rabbitmq /bin/bash
1.4.3.3.3. Add a user
rabbitmqctl add_user [username] [password]
1.4.3.3.4. Create a virtual host
rabbitmqctl add_vhost /ibuy
1.4.3.3.5. Grant the user access permissions
rabbitmqctl set_permissions -p /ibuy [username] ".*" ".*" ".*"
1.4.3.3.6. Check permissions
If you already have a user but are told it lacks management permissions, you can list users and their roles with:
rabbitmqctl list_users
The command lists all users and their tags. Make sure the user you log in with has the administrator tag; if it does not, assign it with:
rabbitmqctl set_user_tags [username] administrator
1.5. Run the project locally
yarn run start:dev
After running the command above, if all of your configuration is correct you can open localhost:8000 and start developing.
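The port itself comes from the application bootstrap. A minimal main.ts sketch is shown below, assuming a standard NestJS entry point that listens on 8000; the real project may add global pipes, CORS, Swagger and so on.
// main.ts (minimal sketch; assumes the app listens on port 8000)
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(8000); // matches the "8000:8000" mapping in the production compose file
}
bootstrap();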
2. Production Environment Configuration
2.1. .env configuration
JWT_SECRET=xxx
JWT_EXPIRES_IN=xxx
# elasticsearch
ES_NODE=https://es01:9200
ELASTIC_USERNAME=elastic
ELASTIC_PASSWORD=123456
ES_PORT=9200
# kibana
KIBANA_PASSWORD=123456
KIBANA_PORT=5601
# elastic stack
STACK_VERSION=8.14.2
CLUSTER_NAME=docker-cluster
LICENSE=basic
# 1GB
MEM_LIMIT=1073741824
# postgresql
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=xxx
POSTGRES_DATABASE=mall
# redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=xxx
# minio
MINIO_HOST=minio
MINIO_PORT=9000
MINIO_ROOT_USER=minio
MINIO_ROOT_PASSWORD=xxx
MINIO_ACCESS_KEY=xxx
MINIO_SECRET_KEY=xxx
# rabbitmq
RMQ_HOST=rabbitmq
RMQ_PORT=5672
RMQ_USERNAME=xxx
RMQ_PASSWORD=xxx
RMQ_VIRTUAL_HOST=/ibuy
# Alipay payment settings
# merchant app id
ALIPAY_APP_ID=xxx
# merchant private key
ALIPAY_MERCHANT_PRIVATE_KEY=xxx
# Alipay public key
ALIPAY_PUBLIC_KEY=xxx
# async payment-status notification URL
ALIPAY_NOTIFY_URL=xxx
# sync payment-status return URL
ALIPAY_RETURN_URL=xxx
# Alipay gateway (the sandbox gateway is used during development)
ALIPAY_GATEWAY_URL=zzz
# signature type
ALIPAY_SIGN_TYPE=RSA2
#ALIPAY_CHARSET=utf-8
#ALIPAY_FORMAT=json
Because the project itself is deployed with docker here, the difference from the .env.dev development file is that, when connecting to the other containers, the host part of each URL must be replaced with the service name managed by docker. The gateway-related callback URLs also need to be changed to the IP where your own project is deployed.
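Since only the host names differ between the two environments, it is convenient to build every connection purely from environment variables. Here is a sketch of that idea for the PostgreSQL connection, assuming TypeORM is used through @nestjs/typeorm; the helper name is illustrative.
// database.config.ts (sketch; assumes TypeORM via @nestjs/typeorm)
import { TypeOrmModuleOptions } from '@nestjs/typeorm';

export function buildPostgresOptions(): TypeOrmModuleOptions {
  return {
    type: 'postgres',
    // 'localhost' in .env.dev, the compose service/container name in the production .env
    host: process.env.POSTGRES_HOST ?? 'localhost',
    port: Number(process.env.POSTGRES_PORT ?? 5432),
    username: process.env.POSTGRES_USER,
    password: process.env.POSTGRES_PASSWORD,
    database: process.env.POSTGRES_DATABASE,
  };
}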
2.2. docker-compose.yml configuration
version: "2.5"
services:
# Elasticsearch setup service for generating certificates
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
# Elasticsearch main node
es01:
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
container_name: ibuy-service-es01
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
# - /usr/local/docker-volumes/es01/config:/usr/share/elasticsearch/config
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
# Kibana service
kibana:
depends_on:
es01:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
container_name: ibuy-service-kibana
volumes:
- certs:/usr/share/kibana/config/certs
- kibanadata:/usr/share/kibana/data
# FATAL CLI ERROR Error: ENOENT: no such file or directory, open '/usr/share/kibana/config/kibana.yml'
# - /usr/local/docker-volumes/kibana/config:/usr/share/kibana/config
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
mem_limit: ${MEM_LIMIT}
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
# PostgreSQL service
postgres:
image: postgres:17-alpine
container_name: ibuy-service-postgres
ports:
- "${POSTGRES_PORT}:5432"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DATABASE}
volumes:
- /usr/local/docker-volumes/postgre:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 10s
timeout: 5s
retries: 5
# Redis service
redis:
image: redis:7.2-alpine
container_name: ibuy-service-redis
ports:
- "${REDIS_PORT}:6379"
environment:
- REDIS_PASSWORD=${REDIS_PASSWORD}
command: redis-server --requirepass "${REDIS_PASSWORD}" --appendonly yes
volumes:
- /usr/local/docker-volumes/redis:/data
restart: always
healthcheck:
test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
interval: 10s
timeout: 5s
retries: 5
# MinIO service
minio:
image: minio/minio
container_name: ibuy-service-minio
ports:
- "${MINIO_PORT}:9000"
- "9090:9090"
environment:
- MINIO_ROOT_USER=${MINIO_ROOT_USER}
- MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
command: server /data --console-address ":9090" --address ":9000"
volumes:
- /usr/local/docker-volumes/minio/data:/data
- /usr/local/docker-volumes/minio/config:/root/.minio
restart: always
healthcheck:
test: ["CMD-SHELL", "curl -f http://127.0.0.1:9000/minio/health/live || exit 1"]
interval: 10s
timeout: 5s
retries: 5
# RabbitMQ service
rabbitmq:
image: rabbitmq:4.0
container_name: ibuy-service-rabbitmq
ports:
- "${RMQ_PORT}:5672"
- "15672:15672"
environment:
- RABBITMQ_DEFAULT_USER=${RMQ_USERNAME}
- RABBITMQ_DEFAULT_PASS=${RMQ_PASSWORD}
- RABBITMQ_DEFAULT_VHOST=${RMQ_VIRTUAL_HOST}
    # enable stream queues plus the stream web UI and CLI management tools
command: >
bash -c "
rabbitmq-plugins enable rabbitmq_stream rabbitmq_stream_management &&
rabbitmq-server
"
volumes:
- /usr/local/docker-volumes/rabbitmq:/var/lib/rabbitmq
restart: always
healthcheck:
test: ["CMD-SHELL", "rabbitmqctl status > /dev/null 2>&1 || exit 1"]
interval: 10s
timeout: 5s
retries: 10
# NestJS service
nestjs:
build:
context: .
dockerfile: Dockerfile
container_name: ibuy-service-nestjs
ports:
- "8000:8000"
volumes:
- /usr/local/docker-volumes/app/logs:/app/logs
environment:
- NODE_ENV=production
- POSTGRES_HOST=ibuy-service-postgres
- POSTGRES_PORT=${POSTGRES_PORT}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DATABASE}
- REDIS_HOST=ibuy-service-redis
- REDIS_PORT=${REDIS_PORT}
- REDIS_PASSWORD=${REDIS_PASSWORD}
- MINIO_HOST=ibuy-service-minio
- MINIO_PORT=${MINIO_PORT}
- RMQ_HOST=ibuy-service-rabbitmq
- RMQ_PORT=${RMQ_PORT}
- RMQ_USERNAME=${RMQ_USERNAME}
- RMQ_PASSWORD=${RMQ_PASSWORD}
- ALIPAY_APP_ID=${ALIPAY_APP_ID}
- ALIPAY_MERCHANT_PRIVATE_KEY=${ALIPAY_MERCHANT_PRIVATE_KEY}
- ALIPAY_PUBLIC_KEY=${ALIPAY_PUBLIC_KEY}
- ALIPAY_NOTIFY_URL=${ALIPAY_NOTIFY_URL}
- ALIPAY_RETURN_URL=${ALIPAY_RETURN_URL}
- ALIPAY_GATEWAY_URL=${ALIPAY_GATEWAY_URL}
- ALIPAY_SIGN_TYPE=${ALIPAY_SIGN_TYPE}
- JWT_SECRET=${JWT_SECRET}
- JWT_EXPIRES_IN=${JWT_EXPIRES_IN}
- ES_NODE=${ES_NODE}
- ELASTIC_USERNAME=${ELASTIC_USERNAME}
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    # note: depends_on refers to service names, not container names
depends_on:
# - postgres
# - redis
# - minio
# - rabbitmq
# - es01
rabbitmq:
        condition: service_healthy # make sure it starts only after rabbitmq is healthy
postgres:
condition: service_healthy
redis:
condition: service_healthy
minio:
condition: service_healthy
es01:
condition: service_healthy
restart: always
volumes:
certs:
driver: local
esdata01:
driver: local
kibanadata:
driver: local
postgre:
driver: local
redis:
driver: local
minio:
driver: local
rabbitmq:
driver: local
2.3. Start the containers
docker-compose up --build -d
2.4. Configure each service
Same as in the development environment.
2.5. Make sure the relevant service ports are open in the server's firewall
Configure this according to your cloud provider.
3. Other Issues
3.1. failed to authenticate user [kibana_system]
If you get a 401 error after starting Elasticsearch and Kibana, the kibana_system user's password was probably set incorrectly. Normally, stopping the containers and then deleting the corresponding volume fixes it; if the error persists, try the following.
3.1.1. Confirm that the .env file is loaded correctly
Make sure docker-compose loads the .env file at startup and injects its variables into the services.
Check that the .env file is loaded:
Run the following command to see whether docker-compose has correctly read the environment variables from the .env file:
docker-compose config
In the output, check that the following variables match your .env configuration:
ELASTIC_PASSWORD, KIBANA_PASSWORD
If these values are empty or wrong, the .env file was not loaded correctly. Check that:
- The .env file is in the same directory as docker-compose.yml.
- The file name is exactly .env (no extra extension such as .env.txt).
- The file contains no stray spaces or hidden characters.
3.1.2. Make sure the kibana_system password in Elasticsearch is correct
Elasticsearch stores a hash of each user's password. If the kibana_system password was changed or was never initialized correctly, authentication will fail.
Verify the kibana_system password:
Log in to the Elasticsearch container:
docker exec -it <es01_container_id> /bin/bash
Try to authenticate as the kibana_system user:
curl -u kibana_system:123456 https://localhost:9200 -k
If it returns 401 Unauthorized, the password does not match.
Reset the kibana_system password:
Run the following command inside the Elasticsearch container to reset it:
bin/elasticsearch-reset-password -u kibana_system -i
Enter a new password (keep it consistent with KIBANA_PASSWORD in the .env file) and confirm.
If the command above fails, you can explicitly specify which Elasticsearch node to connect to; our setup has only one node, es01:
bin/elasticsearch-reset-password -u kibana_system -i \
--url "https://es01:9200"