Setting Up a Nacos Cluster on Linux


1. Nacos Cluster

(Figure: Nacos cluster architecture overview)

2. Target cluster layout: 1 Nginx + 3 Nacos registry nodes + 1 MySQL

(Figure: target cluster layout)

2.1 My Nacos cluster structure:

(Figure: my Nacos cluster structure)

3. Installing the Nacos containers

3.1 Create the configuration directories

# Create the config directories
mkdir -p /opt/dev_env/nacos/logs/
mkdir -p /opt/dev_env/nacos/conf/
# Create the configuration file
vim /opt/dev_env/nacos/conf/application.properties
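Before moving on, it is worth verifying the layout; a small sketch (same paths as above), which also guards against a common bind-mount pitfall:

```shell
# If the mounted application.properties does not exist yet, docker will
# create the mount source as a *directory*, which breaks the container.
mkdir -p /opt/dev_env/nacos/logs /opt/dev_env/nacos/conf
if [ -f /opt/dev_env/nacos/conf/application.properties ]; then
  echo "config file present"
else
  echo "config file missing - create it before running docker"
fi
```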

The application.properties configuration file:

#
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#*************** Spring Boot Related Configurations ***************#
### Default web context path:
server.servlet.contextPath=/nacos
### Include message field
server.error.include-message=ALWAYS
### Default web server port:
### Known issue on Windows: node ports must not be consecutive (consecutive main ports collide on the derived gRPC ports)
server.port=8849

#*************** Network Related Configurations ***************#
### If prefer hostname over ip for Nacos server addresses in cluster.conf:
# nacos.inetutils.prefer-hostname-over-ip=false

### Specify local server's IP:
# nacos.inetutils.ip-address=


#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
spring.datasource.platform=mysql

### Count of DB:
db.num=1

### Connect URL of DB:
db.url.0=jdbc:mysql://192.168.1.19:3306/nacos_local?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=root
db.password.0=data_cube

### Connection pool configuration: hikariCP
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2

#*************** Naming Module Related Configurations ***************#
### Data dispatch task execution period in milliseconds: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.delayMs
# nacos.naming.distro.taskDispatchPeriod=200

### Data count of batch sync task: Will removed on v2.1.X. Deprecated
# nacos.naming.distro.batchSyncKeyCount=1000

### Retry delay in milliseconds if sync task failed: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.retryDelayMs
# nacos.naming.distro.syncRetryDelay=5000

### If enable data warmup. If set to false, the server would accept request without local data preparation:
# nacos.naming.data.warmup=true

### If enable the instance auto expiration, kind like of health check of instance:
# nacos.naming.expireInstance=true

### will be removed and replaced by `nacos.naming.clean` properties
nacos.naming.empty-service.auto-clean=true
nacos.naming.empty-service.clean.initial-delay-ms=50000
nacos.naming.empty-service.clean.period-time-ms=30000

### Add in 2.0.0
### The interval to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.interval=60000

### The expired time to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.expired-time=60000

### The interval to clean expired metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.interval=5000

### The expired time to clean metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.expired-time=60000

### The delay time before push task to execute from service changed, unit: milliseconds.
# nacos.naming.push.pushTaskDelay=500

### The timeout for push task execute, unit: milliseconds.
# nacos.naming.push.pushTaskTimeout=5000

### The delay time for retrying failed push task, unit: milliseconds.
# nacos.naming.push.pushTaskRetryDelay=1000

### Since 2.0.3
### The expired time for inactive client, unit: milliseconds.
# nacos.naming.client.expired.time=180000

#*************** CMDB Module Related Configurations ***************#
### The interval to dump external CMDB in seconds:
# nacos.cmdb.dumpTaskInterval=3600

### The interval of polling data change event in seconds:
# nacos.cmdb.eventTaskInterval=10

### The interval of loading labels in seconds:
# nacos.cmdb.labelTaskInterval=300

### If turn on data loading task:
# nacos.cmdb.loadDataAtStart=false


#*************** Metrics Related Configurations ***************#
### Metrics for prometheus
#management.endpoints.web.exposure.include=*

### Metrics for elastic search
management.metrics.export.elastic.enabled=false
#management.metrics.export.elastic.host=http://localhost:9200

### Metrics for influx
management.metrics.export.influx.enabled=false
#management.metrics.export.influx.db=springboot
#management.metrics.export.influx.uri=http://localhost:8086
#management.metrics.export.influx.auto-create-db=true
#management.metrics.export.influx.consistency=one
#management.metrics.export.influx.compressed=true

#*************** Access Log Related Configurations ***************#
### If turn on the access log:
server.tomcat.accesslog.enabled=true

### The access log pattern:
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i

### The directory of access log:
server.tomcat.basedir=file:.

#*************** Access Control Related Configurations ***************#
### If enable spring security, this option is deprecated in 1.2.0:
#spring.security.enabled=false

### The ignore urls of auth, is deprecated in 1.2.0:
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**

### The auth system to use, currently only 'nacos' and 'ldap' is supported:
nacos.core.auth.system.type=nacos

### If turn on auth system:
nacos.core.auth.enabled=false

### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.caching.enabled=true

### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version.
nacos.core.auth.enable.userAgentAuthWhite=false

### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false.
### The two properties is the white list for auth and used by identity the request from other server.
nacos.core.auth.server.identity.key=serverIdentity
nacos.core.auth.server.identity.value=security

### worked when nacos.core.auth.system.type=nacos
### The token expiration in seconds:
nacos.core.auth.plugin.nacos.token.expire.seconds=18000
### The default token:
nacos.core.auth.plugin.nacos.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789

### worked when nacos.core.auth.system.type=ldap,{0} is Placeholder,replace login username
#nacos.core.auth.ldap.url=ldap://localhost:389
#nacos.core.auth.ldap.basedc=dc=example,dc=org
#nacos.core.auth.ldap.userDn=cn=admin,${nacos.core.auth.ldap.basedc}
#nacos.core.auth.ldap.password=admin
#nacos.core.auth.ldap.userdn=cn={0},dc=example,dc=org
#nacos.core.auth.ldap.filter.prefix=uid


#*************** Istio Related Configurations ***************#
### If turn on the MCP server:
nacos.istio.mcp.server.enabled=false

#*************** Core Related Configurations ***************#

### set the WorkerID manually
# nacos.core.snowflake.worker-id=

### Member-MetaData
# nacos.core.member.meta.site=
# nacos.core.member.meta.adweight=
# nacos.core.member.meta.weight=

### MemberLookup
### Addressing pattern category, If set, the priority is highest
# nacos.core.member.lookup.type=[file,address-server]
## Set the cluster list with a configuration file or command-line argument
# nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
## for AddressServerMemberLookup
# Maximum number of retries to query the address server upon initialization
# nacos.core.address-server.retry=5
## Server domain name address of [address-server] mode
# address.server.domain=jmenv.tbsite.net
## Server port of [address-server] mode
# address.server.port=8080
## Request address of [address-server] mode
# address.server.url=/nacos/serverlist

#*************** JRaft Related Configurations ***************#

### Sets the Raft cluster election timeout, default value is 5 second
# nacos.core.protocol.raft.data.election_timeout_ms=5000
### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
# nacos.core.protocol.raft.data.snapshot_interval_secs=30
### raft internal worker threads
# nacos.core.protocol.raft.data.core_thread_num=8
### Number of threads required for raft business request processing
# nacos.core.protocol.raft.data.cli_service_thread_num=4
### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat
# nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
### rpc request timeout, default 5 seconds
# nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000

#*************** Distro Related Configurations ***************#

### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second.
# nacos.core.protocol.distro.data.sync.delayMs=1000

### Distro data sync timeout for one sync data, default 3 seconds.
# nacos.core.protocol.distro.data.sync.timeoutMs=3000

### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds.
# nacos.core.protocol.distro.data.sync.retryDelayMs=3000

### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds.
# nacos.core.protocol.distro.data.verify.intervalMs=5000

### Distro data verify timeout for one verify, default 3 seconds.
# nacos.core.protocol.distro.data.verify.timeoutMs=3000

### Distro data load retry delay when load snapshot data failed, default 30 seconds.
# nacos.core.protocol.distro.data.load.retryDelayMs=30000
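Before the first start, the nacos_local database referenced by db.url.0 must already contain the Nacos table schema. A sketch, assuming the mysql client is installed and the schema script shipped with the Nacos release is at hand (conf/mysql-schema.sql; it was named nacos-mysql.sql in 1.x):

```shell
# Host and credentials taken from db.url.0 above; adjust to your environment.
DB_HOST=192.168.1.19
if command -v mysql >/dev/null 2>&1; then
  msg=$(mysql --connect-timeout=3 -h "$DB_HOST" -uroot -pdata_cube \
          -e "CREATE DATABASE IF NOT EXISTS nacos_local DEFAULT CHARACTER SET utf8mb4;" \
          2>&1 \
        && mysql --connect-timeout=3 -h "$DB_HOST" -uroot -pdata_cube \
             nacos_local < mysql-schema.sql 2>&1 \
        && echo "schema initialized" \
        || echo "schema init failed: is MySQL reachable at $DB_HOST?")
else
  msg="mysql client not installed; run the schema script from any MySQL client"
fi
echo "$msg"
```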

3.2 Create and start the Nacos containers (Docker bridge mode)

docker run \
--name nacos -d \
-p 8849:8849 \
--privileged=true \
--restart=always \
-e JVM_XMS=256m \
-e JVM_XMX=256m \
-e MODE=cluster \
-e PREFER_HOST_MODE=hostname \
-v /opt/dev_env/nacos/logs:/home/nacos/logs \
-v /opt/dev_env/nacos/conf/application.properties:/home/nacos/conf/application.properties \
nacos/nacos-server

##192.168.81.129
docker run -it -d \
--name nacos \
-p 8849:8849 \
--privileged=true \
--restart=always \
-e PREFER_HOST_MODE=ip \
--ip 192.168.81.129 \
-e JVM_XMS=256m \
-e JVM_XMX=256m \
-e MODE=cluster \
-e NACOS_SERVERS="192.168.81.127:8847 192.168.81.129:8849 192.168.81.128:8851" \
-v /opt/dev_env/nacos/conf/application.properties:/home/nacos/conf/application.properties \
-v /opt/dev_env/nacos/logs:/home/nacos/logs \
nacos/nacos-server


##192.168.81.127
docker run -it -d \
--name nacos \
-p 8847:8847 \
--privileged=true \
--restart=always \
-e PREFER_HOST_MODE=ip \
--ip 192.168.81.127 \
-e JVM_XMS=256m \
-e JVM_XMX=256m \
-e MODE=cluster \
-e NACOS_SERVERS="192.168.81.127:8847 192.168.81.129:8849 192.168.81.128:8851" \
-v /opt/dev_env/nacos/conf/application.properties:/home/nacos/conf/application.properties \
-v /opt/dev_env/nacos/logs:/home/nacos/logs \
nacos/nacos-server


##192.168.81.128
docker run -it -d \
--name nacos \
-p 8851:8851 \
--privileged=true \
--restart=always \
-e PREFER_HOST_MODE=ip \
--ip 192.168.81.128 \
-e JVM_XMS=256m \
-e JVM_XMX=256m \
-e MODE=cluster \
-e NACOS_SERVERS="192.168.81.127:8847 192.168.81.129:8849 192.168.81.128:8851" \
-v /opt/dev_env/nacos/conf/application.properties:/home/nacos/conf/application.properties \
-v /opt/dev_env/nacos/logs:/home/nacos/logs \
nacos/nacos-server
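The NACOS_SERVERS value used above is what the official nacos-docker image writes into conf/cluster.conf at startup. A hand-written equivalent (one ip:port per line) would look like this sketch:

```shell
# Hand-written equivalent of what the image derives from NACOS_SERVERS;
# one ip:port per line, same members as the env var.
NACOS_CONF="${NACOS_CONF:-/opt/dev_env/nacos/conf}"
mkdir -p "$NACOS_CONF"
cat > "$NACOS_CONF/cluster.conf" <<'EOF'
192.168.81.127:8847
192.168.81.129:8849
192.168.81.128:8851
EOF
echo "wrote $NACOS_CONF/cluster.conf"
```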

Nacos (192.168.81.129) started successfully:

(Figure: startup log)

The Nacos cluster:

(Figure: cluster node list in the console)

3.3 Create and start the Nacos containers (Docker host mode)

## Same command on every node; each node's mounted application.properties sets its own server.port
docker run -it -d \
--name nacos \
--net=host \
--privileged=true \
--restart=always \
-e PREFER_HOST_MODE=ip \
-e JVM_XMS=256m \
-e JVM_XMX=256m \
-e MODE=cluster \
-e NACOS_SERVERS="192.168.81.127:8847 192.168.81.129:8849 192.168.81.128:8851" \
-v /opt/dev_env/nacos/conf/application.properties:/home/nacos/conf/application.properties \
-v /opt/dev_env/nacos/logs:/home/nacos/logs \
nacos/nacos-server

############################################
A note on --net=host:
This uses Docker's host network mode: there is no bridge, and the container binds the host's ports directly.
Many online examples run several Nacos instances on different ports of one machine to simulate a pseudo-cluster, so for them a plain -p 8848:8848 is enough.
Across separate machines, however, Nacos uses more than port 8848: it also needs derived ports such as the gRPC ports (offsets +1000 and +1001 in Nacos 2.x) and the Raft port 7848.
So either publish all of those ports explicitly, or simply use --net=host.
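If you stay on bridge mode instead, the extra Nacos ports must be published as well. In Nacos 2.x they follow a fixed offset rule (gRPC client = port + 1000, gRPC server-to-server = port + 1001; 7848 is the 1.x-era Raft port, i.e. the default 8848 minus 1000). A small sketch for this article's node ports:

```shell
# Compute the derived ports each node would also have to publish.
for main in 8847 8849 8851; do
  echo "node $main -> grpc $((main + 1000)), grpc-peer $((main + 1001))"
done
# In bridge mode, the 8849 node would therefore need:
#   -p 8849:8849 -p 9849:9849 -p 9850:9850
```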

--restart=always,
Automatically start the container again when the Docker daemon restarts. The available policies:
1. no (default): never restart the container after it exits.
2. on-failure[:max-retries]: restart only when the container exits with a non-zero code; the optional max-retries caps the number of restarts (unlimited if omitted).
3. always: always restart the container, regardless of exit status.
4. unless-stopped: always restart unless the container was stopped manually; even after a host reboot it restarts, as long as it was not explicitly stopped.
Syntax:
docker run --restart=<policy> [OPTIONS] IMAGE [COMMAND] [ARG...]

For example, to restart a container at most 5 times on failure:
docker run --restart=on-failure:5 your-image-name

To restart the container in all cases:
docker run --restart=always your-image-name


--privileged=true
Gives root inside the container real root privileges on the host.
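The --restart and --privileged settings can be confirmed on a running container; a sketch assuming the container name `nacos` used above (it degrades gracefully when docker or the container is absent):

```shell
# Print the restart policy and privileged flag docker actually applied.
if command -v docker >/dev/null 2>&1; then
  info=$(docker inspect \
      -f 'restart={{.HostConfig.RestartPolicy.Name}} privileged={{.HostConfig.Privileged}}' \
      nacos 2>/dev/null) || info="container 'nacos' not found"
else
  info="docker not available on this host"
fi
echo "$info"
```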

The Nacos cluster:

(Figure: cluster node list)

Of the two docker run approaches above, host network mode is the recommended one.

4. Installing Nginx for load balancing

###### Edit nginx.conf ######
## Directory
/usr/local/nginx/conf
## Back up first
cp nginx.conf nginx.conf.bak
## Add the upstream block
upstream nacos_server {
    server 192.168.81.127:8847 weight=1 max_fails=1 fail_timeout=10s;
    server 192.168.81.128:8851 weight=1 max_fails=1 fail_timeout=10s;
    server 192.168.81.129:8849 weight=1 max_fails=1 fail_timeout=10s;
}

## Update the location block
location / {
   # root   html;
   # index  index.html index.htm;
   proxy_pass http://nacos_server;
   proxy_set_header Host $host;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header REMOTE-HOST $remote_addr;
   add_header X-Cache $upstream_cache_status;
   add_header Cache-Control no-cache;
}
 
#### Restart nginx
/usr/local/nginx/sbin/nginx -t       # test the configuration first
/usr/local/nginx/sbin/nginx -s stop
/usr/local/nginx/sbin/nginx
## (a zero-downtime alternative: /usr/local/nginx/sbin/nginx -s reload)
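After the restart, each backend can be probed via the Nacos console readiness endpoint (the same /v1/console/health path already whitelisted in application.properties above). A sketch; `check` is a helper introduced here, and the IPs are this article's nodes:

```shell
# Probe one node's readiness endpoint without aborting the loop on failure.
check() {
  if curl -fsS -m 2 "http://$1/nacos/v1/console/health/readiness" >/dev/null 2>&1; then
    echo "$1 OK"
  else
    echo "$1 UNREACHABLE"
  fi
}
for node in 192.168.81.127:8847 192.168.81.128:8851 192.168.81.129:8849; do
  check "$node"
done
```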
