docker (Part 5: Common Docker Installations)


4. Installing Nexus3

Official docs: help.sonatype.com/repomanager…

The full name of Nexus is Nexus Repository Manager, a product of Sonatype. It is a powerful repository manager that greatly simplifies maintaining internal repositories and accessing external ones.

Here it is mainly used to set up an internal Maven private repository, but it is not limited to Maven: it can also serve as a private repository for nuget, docker, npm, bower, pypi, rubygems, git lfs, yum, go, apt, and more.

4.1 Search for and pull the image

[root@localhost ~]# docker search nexus --limit 5
NAME                       DESCRIPTION                           STARS     OFFICIAL   AUTOMATED
sonatype/nexus3            Sonatype Nexus Repository Manager 3   1183
truecharts/nexusoss                                              1
nexusjpl/solr-cloud                                              0
nexusjpl/nexus-webapp                                            0
nexusjpl/solr-cloud-init                                         0

[root@localhost ~]# docker pull sonatype/nexus3
Using default tag: latest
latest: Pulling from sonatype/nexus3
26f1167feaf7: Pull complete
adffa6963146: Pull complete
e88dfbe0ef6a: Pull complete
0d43c5e95446: Pull complete
8ff7b45a7e29: Pull complete
Digest: sha256:eff4fb12346ceb5cd424732ee9f2765c0db0f8d5032cdb2f7f7e17acc349f651
Status: Downloaded newer image for sonatype/nexus3:latest
docker.io/sonatype/nexus3:latest

4.2 Create the Nexus3 container

[root@localhost ~]# docker run -itd -p 8081:8081 --privileged=true --name nexus3 -v /data/nexus-data:/var/nexus-data --restart=always sonatype/nexus3
56cf91c3f171ed46bd32edaebc2ceebf4cecb0fb1c9fe1dcbca7c79fdab0d921

[root@localhost ~]# docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS         PORTS                                       NAMES
56cf91c3f171   sonatype/nexus3   "sh -c ${SONATYPE_DI…"   10 seconds ago   Up 9 seconds   0.0.0.0:8081->8081/tcp, :::8081->8081/tcp   nexus3
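
Nexus can take a few minutes to initialize on the first start. A small sketch for watching the progress (the exact log text may differ between versions) is to follow the container logs:

# Follow the Nexus container logs; startup is finished once a line similar to "Started Sonatype Nexus" appears
docker logs -f nexus3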

4.3 Enter the container and look up the initial password

Path: /nexus-data/admin.password

# The initial password is f91fe420-6c27-4968-a269-0aa9647b6146
[root@localhost ~]# docker exec -it nexus3 /bin/bash
bash-4.4$ cat /nexus-data/admin.password
f91fe420-6c27-4968-a269-0aa9647b6146bash-4.4$
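
Alternatively, the same file can be read without opening an interactive shell (a sketch of the equivalent lookup):

docker exec nexus3 cat /nexus-data/admin.password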

4.4 Log in and reset the password

URL: http://192.168.198.100:8081/

Click the Sign in button in the top-right corner and log in with the default account admin and the password from the admin.password file above. On first login you are asked to reset the password; here it is reset to 123456.


4.5 Create privileges / roles / users / repositories

Privilege:

Left menu: Security -> Privileges -> Create Privileges -> Repository Admin

The example below creates a privilege of type "Repository Admin" that grants browse access to the configuration (note: not the contents) of the maven2-format repository maven-central.

Name: the privilege name; must be unique

Format: the repository format, e.g. maven2, nuget

Repository: the repository it applies to; a single repository or all repositories

Actions: the allowed actions, e.g. read, browse, edit, delete, create; multiple values are comma-separated


Role:

Left menu: Security -> Roles -> Create role -> Nexus role

User:

Left menu: Security -> Users -> Create local user

Repository:

Left menu: Repository -> Repositories -> Create repository -> maven2 (hosted)

Two things to note:

(1) The repository type must be hosted, otherwise you cannot upload to it.

(2) The demo creates two repositories, one for snapshots and one for releases. Versions deployed to the Snapshot repository must end with SNAPSHOT, and versions deployed to the Release repository must not end with SNAPSHOT.

After creation, click into each repository again to see its URL; note these down, they are used below.

http://192.168.198.100:8081/repository/test_Release/

http://192.168.198.100:8081/repository/test_Snapshot/


4.6 Upload jars to the Nexus private repository

4.6.1 Upload the project jar from IDEA

Configure settings.xml:

<servers>
    <server>
      <!-- id is the repository name, username is the Nexus account, password is the Nexus password -->
      <id>test_Release</id>
      <username>admin</username>
      <password>123456</password>
    </server>
    <server>
      <id>test_Snapshot</id>
      <username>admin</username>
      <password>123456</password>
    </server>
</servers>

<mirrors>
    <mirror>
      <!-- id and name can be chosen freely -->
      <id>nexus release</id>
      <name>nexus release repository</name>
      <mirrorOf>*</mirrorOf>
      <url>http://192.168.198.100:8081/repository/test_Release/</url>
    </mirror>
    <mirror>
      <!-- id and name can be chosen freely -->
      <id>nexus snapshot</id>
      <name>nexus snapshot repository</name>
      <mirrorOf>*</mirrorOf>
      <url>http://192.168.198.100:8081/repository/test_Snapshot/</url>
    </mirror>
</mirrors>

Configure pom.xml:

<distributionManagement>
    <repository>
        <!-- repository id; must match the server id in settings.xml -->
        <id>test_Release</id>
        <!-- display name, free to choose -->
        <name>Release</name>
        <!-- URL of the repository -->
        <url>http://192.168.198.100:8081/repository/test_Release/</url>
    </repository>
    <snapshotRepository>
        <!-- repository id; must match the server id in settings.xml -->
        <id>test_Snapshot</id>
        <!-- display name, free to choose -->
        <name>Snapshot</name>
        <!-- URL of the repository -->
        <url>http://192.168.198.100:8081/repository/test_Snapshot/</url>
    </snapshotRepository>
</distributionManagement>

Open IDEA and run deploy; when BUILD SUCCESS appears in the console, the upload succeeded.
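
Clicking deploy in IDEA simply runs the Maven deploy lifecycle, so an equivalent sketch from the command line in the project root would be:

# Build and upload to the repositories configured in distributionManagement
mvn clean deploy -DskipTests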

Note: versions ending in RELEASE go into the test_Release repository, and versions ending in SNAPSHOT go into the test_Snapshot repository.

<!-- The two version suffixes go to different repositories -->
<groupId>com.boot.docker</groupId>
<artifactId>docker_boot</artifactId>
<version>0.0.1.RELEASE</version>

<groupId>com.boot.docker</groupId>
<artifactId>docker_boot</artifactId>
<version>0.0.1-SNAPSHOT</version>


4.6.2 Upload a jar from the command line (single jar)

settings.xml configuration:

<server>
  <!-- id is the repository name, username is the Nexus account, password is the Nexus password -->
  <id>test_Release</id>
  <username>admin</username>
  <password>123456</password>
</server>
<server>
  <id>test_Snapshot</id>
  <username>admin</username>
  <password>123456</password>
</server>

4.6.2.1 Install the jar into the local Maven repository

If you already have the jar file, this step can be skipped. Command:

mvn install:install-file -Dfile="" -DgroupId="" -DartifactId="" -Dversion="" -Dpackaging="jar"

Parameters:

-Dfile: full path of the local jar

-DgroupId: the groupId defined in the pom

-DartifactId: the artifactId defined in the pom

-Dversion: the version defined in the pom
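
For reference, a filled-in sketch using the jedis coordinates that appear later in this article (the D:\libs path is hypothetical; point -Dfile at wherever your jar actually lives):

mvn install:install-file -Dfile="D:\libs\jedis-3.1.0.jar" -DgroupId="redis.clients" -DartifactId="jedis" -Dversion="3.1.0" -Dpackaging="jar"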

4.6.2.2 Upload the jar to Nexus

Command:

mvn deploy:deploy-file -DgroupId="" -DartifactId="" -Dversion="" -Dpackaging="jar" -Dfile="" -DrepositoryId="" -Durl=""

Parameters:

-Dfile: full path of the local jar

-DgroupId: the groupId defined in the pom

-DartifactId: the artifactId defined in the pom

-Dversion: the version defined in the pom

-DrepositoryId: the id of the target Nexus repository

-Durl: the Nexus repository URL

<!-- Here the jedis jar is used as the demo -->
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>3.1.0</version>
</dependency>

mvn deploy:deploy-file -DgroupId="redis.clients" -DartifactId="jedis" -Dversion="3.1.0" -Dpackaging="jar" -Dfile="D:\soft\maven\apache-maven-3.6.1\repository\redis\clients\jedis\3.1.0\jedis-3.1.0.jar" -DrepositoryId="test_Release" -Durl="http://192.168.198.100:8081/repository/test_Release/"
C:\Users\tianmeng>mvn -v
Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Maven home: D:\soft\maven\apache-maven-3.8.6
Java version: 1.8.0_181, vendor: Oracle Corporation, runtime: D:\soft\jdk\jre
Default locale: zh_CN, platform encoding: GBK
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"

C:\Users\tianmeng>mvn deploy:deploy-file -DgroupId="redis.clients" -DartifactId="jedis" -Dversion="3.1.0" -Dpackaging="jar" -Dfile="D:\soft\maven\apache-maven-3.6.1\repository\redis\clients\jedis\3.1.0\jedis-3.1.0.jar" -DrepositoryId="test_Release" -Durl="http://192.168.198.100:8081/repository/test_Release/"
[INFO] Scanning for projects...
Downloading from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/apache/maven/plugins/maven-deploy-plugin/2.7/maven-deploy-plugin-2.7.pom
Downloaded from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/apache/maven/plugins/maven-deploy-plugin/2.7/maven-deploy-plugin-2.7.pom (0 B at 0 B/s)
Downloading from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/apache/maven/plugins/maven-deploy-plugin/2.7/maven-deploy-plugin-2.7.jar
Downloaded from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/apache/maven/plugins/maven-deploy-plugin/2.7/maven-deploy-plugin-2.7.jar (0 B at 0 B/s)
[INFO]
[INFO] ------------------< org.apache.maven:standalone-pom >-------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-deploy-plugin:2.7:deploy-file (default-cli) @ standalone-pom ---
Downloading from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/codehaus/plexus/plexus-utils/1.5.6/plexus-utils-1.5.6.pom
Downloaded from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/codehaus/plexus/plexus-utils/1.5.6/plexus-utils-1.5.6.pom (0 B at 0 B/s)
Downloading from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/codehaus/plexus/plexus/1.0.12/plexus-1.0.12.pom
Downloaded from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/codehaus/plexus/plexus/1.0.12/plexus-1.0.12.pom (0 B at 0 B/s)
Downloading from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/codehaus/plexus/plexus-utils/1.5.6/plexus-utils-1.5.6.jar
Downloaded from alimaven: http://maven.aliyun.com/nexus/content/repositories/central/org/codehaus/plexus/plexus-utils/1.5.6/plexus-utils-1.5.6.jar (0 B at 0 B/s)
Uploading to test_Release: http://192.168.198.100:8081/repository/test_Release/redis/clients/jedis/3.1.0/jedis-3.1.0.jarUploaded to test_Release: http://192.168.198.100:8081/repository/test_Release/redis/clients/jedis/3.1.0/jedis-3.1.0.jar (646 kB at 2.1 MB/s)
Uploading to test_Release: http://192.168.198.100:8081/repository/test_Release/redis/clients/jedis/3.1.0/jedis-3.1.0.pomUploaded to test_Release: http://192.168.198.100:8081/repository/test_Release/redis/clients/jedis/3.1.0/jedis-3.1.0.pom (392 B at 3.5 kB/s)
Downloading from test_Release: http://192.168.198.100:8081/repository/test_Release/redis/clients/jedis/maven-metadata.xml
Uploading to test_Release: http://192.168.198.100:8081/repository/test_Release/redis/clients/jedis/maven-metadata.xml
Uploaded to test_Release: http://192.168.198.100:8081/repository/test_Release/redis/clients/jedis/maven-metadata.xml (298 B at 3.9 kB/s)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  6.950 s
[INFO] Finished at: 2022-11-25T13:41:54+08:00
[INFO] ------------------------------------------------------------------------


4.6.3 Upload jars from the command line (batch)

First copy the entire local Maven repository to the Linux host, for example under /opt.

Then create the mavenimport.sh script in the /opt directory and make it executable; its contents follow the chmod command below:

chmod a+x mavenimport.sh

#!/bin/bash 
# copy and run this script to the root of the repository directory containing files 
# this script attempts to exclude uploading itself explicitly so the script name is important 
# Get command line params 

while getopts ":r:u:p:" opt; do
        case $opt in 
            r) REPO_URL="$OPTARG" 
            ;; 
            u) USERNAME="$OPTARG" 
            ;; 
            p) PASSWORD="$OPTARG" 
            ;; 
        esac 
done 

find . -type f -not -path './mavenimport\.sh*' -not -path '*/\.*' -not -path '*/\^archetype\-catalog\.xml*' -not -path '*/\^maven\-metadata\-local*\.xml' -not -path '*/\^maven\-metadata\-deployment*\.xml' | sed "s|^\./||" | xargs -I '{}' curl -u "$USERNAME:$PASSWORD" -X PUT -v -T {} ${REPO_URL}/{} ;

Run the import:

# -u is the Nexus account, -p is the Nexus password, -r is the target repository URL
./mavenimport.sh -u admin -p 123456 -r http://192.168.198.100:8081/repository/maven-releases/
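
To spot-check that the import worked, the Nexus REST API can list the components of the target repository (a sketch; adjust the host, credentials, and repository name to your setup):

curl -u admin:123456 "http://192.168.198.100:8081/service/rest/v1/components?repository=maven-releases"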

4.7 Pull jars from the Nexus private repository

To be added later.

5. Installing ELK

Reference: www.cnblogs.com/yiMro/p/159…

5.1 Write the docker-compose.yml file

Edit the docker-compose.yml file directly.

Its contents:

version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch
    environment:
      - "cluster.name=elasticsearch" # set the cluster name to elasticsearch
      - "discovery.type=single-node" # run as a single node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # JVM heap size
    volumes:
      - /usr/local/docker/dockercompose/elasticsearch/plugins:/usr/share/elasticsearch/plugins # plugin directory
      - /usr/local/docker/dockercompose/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: kibana:7.8.0
    container_name: kibana
    volumes:
      - /usr/local/docker/dockercompose/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml # mount the config file (host file:container file)
    links:
      - elasticsearch:es # elasticsearch is reachable under the hostname "es"
    depends_on:
      - elasticsearch # start kibana after elasticsearch
    environment:
      - "elasticsearch.hosts=http://es:9200" # address used to reach elasticsearch
    ports:
      - 5601:5601
  logstash:
    image: logstash:7.8.0
    container_name: logstash
    volumes:
      - /usr/local/docker/dockercompose/logstash/logstash.conf:/usr/share/logstash/config/logstash.conf # mount the logstash config file
    depends_on:
      - elasticsearch # start logstash after elasticsearch
    links:
      - elasticsearch:es # elasticsearch is reachable under the hostname "es"
    ports:
      - 4560:4560

5.2 Grant file permissions

Grant 777 permissions on /usr/local/docker/dockercompose, otherwise Elasticsearch will fail to start; a sketch of the commands is shown below.
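
A sketch of the directory preparation and startup, assuming docker-compose.yml sits in the current directory and that kibana.yml and logstash.conf already exist at the bind-mounted paths:

# Create the host directories mounted by the compose file
mkdir -p /usr/local/docker/dockercompose/elasticsearch/plugins \
         /usr/local/docker/dockercompose/elasticsearch/data \
         /usr/local/docker/dockercompose/kibana \
         /usr/local/docker/dockercompose/logstash
# Open up permissions so the elasticsearch container can write its data
chmod -R 777 /usr/local/docker/dockercompose
# Start the stack in the background
docker-compose up -d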

5.3 Test

Once everything is up, test each component:

(1) elasticsearch:

Open http://localhost:9200/ in a browser, or

run curl http://localhost:9200/ in a terminal, to check that ES is up.


(2) kibana:

Open http://localhost:5601/ in a browser.

6. Installing Nacos

6.1 Run the database script

SQL script:

github.com/alibaba/nac…

Note: the Nacos database script varies with the Nacos version. The script below is the one I used together with MySQL 8.0.21; if the script does not match your Nacos version, saving a configuration in the Nacos console fails with the error "发布失败。请检查参数是否正确。" (publish failed, please check the parameters).

The script:

Database name: nacos_config

CREATE TABLE `config_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) DEFAULT NULL,
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  `c_desc` varchar(256) DEFAULT NULL,
  `c_use` varchar(64) DEFAULT NULL,
  `effect` varchar(64) DEFAULT NULL,
  `type` varchar(64) DEFAULT NULL,
  `c_schema` text,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_aggr   */
/******************************************/
CREATE TABLE `config_info_aggr` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) NOT NULL COMMENT 'group_id',
  `datum_id` varchar(255) NOT NULL COMMENT 'datum_id',
  `content` longtext NOT NULL COMMENT '内容',
  `gmt_modified` datetime NOT NULL COMMENT '修改时间',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段';


/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_beta   */
/******************************************/
CREATE TABLE `config_info_beta` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_tag   */
/******************************************/
CREATE TABLE `config_info_tag` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `tag_id` varchar(128) NOT NULL COMMENT 'tag_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_tags_relation   */
/******************************************/
CREATE TABLE `config_tags_relation` (
  `id` bigint(20) NOT NULL COMMENT 'id',
  `tag_name` varchar(128) NOT NULL COMMENT 'tag_name',
  `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `nid` bigint(20) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`nid`),
  UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = group_capacity   */
/******************************************/
CREATE TABLE `group_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_group_id` (`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = his_config_info   */
/******************************************/
CREATE TABLE `his_config_info` (
  `id` bigint(64) unsigned NOT NULL,
  `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `data_id` varchar(255) NOT NULL,
  `group_id` varchar(128) NOT NULL,
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL,
  `md5` varchar(32) DEFAULT NULL,
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `src_user` text,
  `src_ip` varchar(50) DEFAULT NULL,
  `op_type` char(10) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`nid`),
  KEY `idx_gmt_create` (`gmt_create`),
  KEY `idx_gmt_modified` (`gmt_modified`),
  KEY `idx_did` (`data_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造';


/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = tenant_capacity   */
/******************************************/
CREATE TABLE `tenant_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表';


CREATE TABLE `tenant_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kp` varchar(128) NOT NULL COMMENT 'kp',
  `tenant_id` varchar(128) default '' COMMENT 'tenant_id',
  `tenant_name` varchar(128) default '' COMMENT 'tenant_name',
  `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
  `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
  `gmt_create` bigint(20) NOT NULL COMMENT '创建时间',
  `gmt_modified` bigint(20) NOT NULL COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';

CREATE TABLE `users` (
	`username` varchar(50) NOT NULL PRIMARY KEY,
	`password` varchar(500) NOT NULL,
	`enabled` boolean NOT NULL
);

CREATE TABLE `roles` (
	`username` varchar(50) NOT NULL,
	`role` varchar(50) NOT NULL,
	UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE
);

CREATE TABLE `permissions` (
    `role` varchar(50) NOT NULL,
    `resource` varchar(255) NOT NULL,
    `action` varchar(8) NOT NULL,
    UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE
);

INSERT INTO users (username, password, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE);

INSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');
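
A sketch of creating the database and importing the script, assuming the DDL above has been saved as nacos_config.sql (adjust host, user, and password):

# Create the database, then load the schema into it
mysql -h192.168.198.100 -uroot -p123456 -e "CREATE DATABASE IF NOT EXISTS nacos_config DEFAULT CHARACTER SET utf8 COLLATE utf8_bin;"
mysql -h192.168.198.100 -uroot -p123456 nacos_config < nacos_config.sql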

6.2 Search for and pull the image

[root@localhost ~]# docker search nacos
NAME                                 DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
nacos/nacos-server                   This project contains a Docker image meant t…   340                  [OK]
[root@localhost ~]# docker pull nacos/nacos-server

6.3 Run the image

Remember to change the database name, IP, port, account, and password to match your environment.

docker run -d -p 8848:8848 -p 9848:9848 \
--name nacos \
--env MODE=standalone \
--env SPRING_DATASOURCE_PLATFORM=mysql \
--env MYSQL_SERVICE_HOST=192.168.198.100 \
--env MYSQL_SERVICE_PORT=3306 \
--env MYSQL_SERVICE_DB_NAME=nacos_config \
--env MYSQL_SERVICE_USER=root \
--env MYSQL_SERVICE_PASSWORD=123456 \
nacos/nacos-server:latest

6.4 Test

Open http://ip:8848/nacos/#/login in a browser; the default account and password are nacos / nacos.

If anything fails, check the logs:

[root@localhost ~]# docker exec -it nacos bash
[root@121565357a8d nacos]# cd /home/nacos/logs/
[root@121565357a8d logs]# tail -f nacos.log


7. Installing RocketMQ

7.1 Basic approach

7.1.1 Deploy the NameServer

# Pull the image
docker pull rocketmqinc/rocketmq:4.4.0

# Start the NameServer and expose port 9876
docker run --name rmqnamesrv -d -p 9876:9876 rocketmqinc/rocketmq:4.4.0 sh mqnamesrv

# Test after it starts
[root@localhost ~]# curl localhost:9876
curl: (52) Empty reply from server

7.1.2 Deploy the Broker

# RocketMQ is written in Java; both the Broker and the NameServer live in the image above, only the startup command differs.
docker run --name rmqbroker -d -p 10911:10911 -p 10909:10909  --link rmqnamesrv:namesrv -e "NAMESRV_ADDR=namesrv:9876" rocketmqinc/rocketmq:4.4.0 sh mqbroker

# --link gives the NameServer container the alias namesrv; the Broker needs the NAMESRV_ADDR parameter pointing at the NameServer address.

# Test after it starts
[root@localhost ~]# curl localhost:10911
curl: (52) Empty reply from server
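
Besides the curl probe, the Broker registration can be checked with the mqadmin tool shipped in the image (a sketch; it assumes mqadmin sits next to mqnamesrv/mqbroker in the image, which is the case for this image):

# List the cluster as seen by the NameServer
docker exec -it rmqbroker sh mqadmin clusterList -n namesrv:9876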

7.1.3 Deploy the RocketMQ web console

# Pull the image
docker pull pangliang/rocketmq-console-ng

# Start the container
docker run --name rmqconsole -d -p 8080:8080 --link rmqnamesrv:namesrv -e "JAVA_OPTS=-Drocketmq.namesrv.addr=namesrv:9876"  pangliang/rocketmq-console-ng

# Test after it starts
[root@localhost ~]# curl localhost:8080
...
<head>
...
</head>
<body ng-controller="AppCtrl">
...
</body>
</html>

The console can also be opened in a browser: http://192.168.198.100:8080


7.2 docker-compose approach

7.2.1 broker.conf

Create the file /data/brokerconf/broker.conf with the following contents:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.


# Name of the cluster this broker belongs to
brokerClusterName=DefaultCluster

# Broker name. Note that different config files use different values:
# broker-a.properties uses broker-a, broker-b.properties uses broker-b
brokerName=broker-a

# 0 means Master, > 0 means Slave
brokerId=0

# NameServer addresses, separated by semicolons
# namesrvAddr=rocketmq-nameserver1:9876;rocketmq-nameserver2:9876

# Broker IP. If Docker reports com.alibaba.rocketmq.remoting.exception.RemotingConnectException: connect to <192.168.0.120:10909> failed,
# either call producer.setVipChannelEnabled(false) on the client, or set brokerIP1 to the host IP instead of the Docker-internal IP
# brokerIP1=192.168.0.253

# Number of queues created by default when a topic is auto-created on send
defaultTopicQueueNums=4

# Whether the Broker may create topics automatically; recommended on for dev/test, off in production
autoCreateTopicEnable=true

# Whether the Broker may create subscription groups automatically; recommended on for dev/test, off in production
autoCreateSubscriptionGroup=true

# Port the Broker listens on
listenPort=10911

# Time of day at which expired files are deleted, default 04:00
deleteWhen=04

# File retention time in hours (default 48)
fileReservedTime=120

# Size of each commitLog file, default 1 GB
mapedFileSizeCommitLog=1073741824

# Entries per ConsumeQueue file, default 300,000; tune to your workload
mapedFileSizeConsumeQueue=300000

# destroyMapedFileIntervalForcibly=120000
# redeleteHangedFileInterval=120000
# Disk usage threshold for the physical files
diskMaxUsedSpaceRatio=88
# Storage root path
# storePathRootDir=/home/ztztdata/rocketmq-all-4.1.0-incubating/store
# commitLog storage path
# storePathCommitLog=/home/ztztdata/rocketmq-all-4.1.0-incubating/store/commitlog
# Consume queue storage path
# storePathConsumeQueue=/home/ztztdata/rocketmq-all-4.1.0-incubating/store/consumequeue
# Message index storage path
# storePathIndex=/home/ztztdata/rocketmq-all-4.1.0-incubating/store/index
# checkpoint file path
# storeCheckpoint=/home/ztztdata/rocketmq-all-4.1.0-incubating/store/checkpoint
# abort file path
# abortFile=/home/ztztdata/rocketmq-all-4.1.0-incubating/store/abort
# Maximum message size
maxMessageSize=65536

# flushCommitLogLeastPages=4
# flushConsumeQueueLeastPages=2
# flushCommitLogThoroughInterval=10000
# flushConsumeQueueThoroughInterval=60000

# Broker role
# - ASYNC_MASTER: asynchronous-replication master
# - SYNC_MASTER: synchronous-double-write master
# - SLAVE
brokerRole=ASYNC_MASTER

# Flush mode
# - ASYNC_FLUSH: asynchronous flush
# - SYNC_FLUSH: synchronous flush
flushDiskType=ASYNC_FLUSH

# Send-message thread pool size
# sendMessageThreadPoolNums=128
# Pull-message thread pool size
# pullMessageThreadPoolNums=128

7.2.2 docker-compose.yml

Write the docker-compose.yml file:

version: '3.5'
services:
  rmqnamesrv:
    image: foxiswho/rocketmq:server
    container_name: rmqnamesrv
    ports:
      - 9876:9876
    volumes:
      - /data/logs:/opt/logs
      - /data/store:/opt/store
    networks:
        rmq:
          aliases:
            - rmqnamesrv

  rmqbroker:
    image: foxiswho/rocketmq:broker
    container_name: rmqbroker
    ports:
      - 10909:10909
      - 10911:10911
    volumes:
      - /data/logs:/opt/logs
      - /data/store:/opt/store
      - /data/brokerconf/broker.conf:/etc/rocketmq/broker.conf
    environment:
        NAMESRV_ADDR: "rmqnamesrv:9876"
        JAVA_OPTS: " -Duser.home=/opt"
        JAVA_OPT_EXT: "-server -Xms128m -Xmx128m -Xmn128m"
    command: mqbroker -c /etc/rocketmq/broker.conf
    depends_on:
      - rmqnamesrv
    networks:
      rmq:
        aliases:
          - rmqbroker

  rmqconsole:
    image: styletang/rocketmq-console-ng
    container_name: rmqconsole
    ports:
      - 8080:8080
    environment:
        JAVA_OPTS: "-Drocketmq.namesrv.addr=rmqnamesrv:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false"
    depends_on:
      - rmqnamesrv
    networks:
      rmq:
        aliases:
          - rmqconsole

networks:
  rmq:
    name: rmq
    driver: bridge
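
The compose file above mounts several host paths, and the start command is not shown; a sketch, assuming the file is saved as docker-compose.yml in the current directory:

# Create the mounted host directories (broker.conf was already created in 7.2.1)
mkdir -p /data/logs /data/store /data/brokerconf
# Start the stack and check that all three containers are Up
docker-compose up -d
docker-compose ps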

Once the containers are up, the console can be opened in a browser at http://192.168.198.100:8080.

8. Installing Zipkin

8.1 Pull the image

docker pull openzipkin/zipkin

[root@localhost ~]# docker pull openzipkin/zipkin
Using default tag: latest
latest: Pulling from openzipkin/zipkin
c0aa4990b596: Pull complete
f61f02e88f83: Pull complete
81a5bee85181: Pull complete
696d2efa2e2f: Pull complete
62513cb5ea01: Pull complete
09f47f38bd8a: Pull complete
c3a9517668cb: Pull complete
fe2e5d9110cb: Pull complete
5284867c7137: Pull complete
Digest: sha256:b3435e485f1e73266dba48ae56814c6731ffb76563a0b809456876f29a575f6b
Status: Downloaded newer image for openzipkin/zipkin:latest
docker.io/openzipkin/zipkin:latest

8.2 Create the database

By default Zipkin Server keeps its data in memory, which is not suitable for production: once Zipkin Server restarts, the data is gone. Zipkin can persist trace data to MySQL or Elasticsearch.

Here MySQL is used:

First create a database named zipkin_config, then run the script below.

CREATE TABLE `zipkin_annotations` (
  `trace_id_high` bigint(20) NOT NULL DEFAULT '0' COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
  `trace_id` bigint(20) NOT NULL COMMENT 'coincides with zipkin_spans.trace_id',
  `span_id` bigint(20) NOT NULL COMMENT 'coincides with zipkin_spans.id',
  `a_key` varchar(255) NOT NULL COMMENT 'BinaryAnnotation.key or Annotation.value if type == -1',
  `a_value` blob COMMENT 'BinaryAnnotation.value(), which must be smaller than 64KB',
  `a_type` int(11) NOT NULL COMMENT 'BinaryAnnotation.type() or -1 if Annotation',
  `a_timestamp` bigint(20) DEFAULT NULL COMMENT 'Used to implement TTL; Annotation.timestamp or zipkin_spans.timestamp',
  `endpoint_ipv4` int(11) DEFAULT NULL COMMENT 'Null when Binary/Annotation.endpoint is null',
  `endpoint_ipv6` binary(16) DEFAULT NULL COMMENT 'Null when Binary/Annotation.endpoint is null, or no IPv6 address',
  `endpoint_port` smallint(6) DEFAULT NULL COMMENT 'Null when Binary/Annotation.endpoint is null',
  `endpoint_service_name` varchar(255) DEFAULT NULL COMMENT 'Null when Binary/Annotation.endpoint is null',
  UNIQUE KEY `trace_id_high` (`trace_id_high`,`trace_id`,`span_id`,`a_key`,`a_timestamp`) COMMENT 'Ignore insert on duplicate',
  KEY `trace_id_high_2` (`trace_id_high`,`trace_id`,`span_id`) COMMENT 'for joining with zipkin_spans',
  KEY `trace_id_high_3` (`trace_id_high`,`trace_id`) COMMENT 'for getTraces/ByIds',
  KEY `endpoint_service_name` (`endpoint_service_name`) COMMENT 'for getTraces and getServiceNames',
  KEY `a_type` (`a_type`) COMMENT 'for getTraces and autocomplete values',
  KEY `a_key` (`a_key`) COMMENT 'for getTraces and autocomplete values',
  KEY `trace_id` (`trace_id`,`span_id`,`a_key`) COMMENT 'for dependencies job'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPRESSED;

CREATE TABLE `zipkin_dependencies` (
  `day` date NOT NULL,
  `parent` varchar(255) NOT NULL,
  `child` varchar(255) NOT NULL,
  `call_count` bigint(20) DEFAULT NULL,
  `error_count` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`day`,`parent`,`child`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPRESSED;

CREATE TABLE `zipkin_spans` (
  `trace_id_high` bigint(20) NOT NULL DEFAULT '0' COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
  `trace_id` bigint(20) NOT NULL,
  `id` bigint(20) NOT NULL,
  `name` varchar(255) NOT NULL,
  `remote_service_name` varchar(255) DEFAULT NULL,
  `parent_id` bigint(20) DEFAULT NULL,
  `debug` bit(1) DEFAULT NULL,
  `start_ts` bigint(20) DEFAULT NULL COMMENT 'Span.timestamp(): epoch micros used for endTs query and to implement TTL',
  `duration` bigint(20) DEFAULT NULL COMMENT 'Span.duration(): micros used for minDuration and maxDuration query',
  PRIMARY KEY (`trace_id_high`,`trace_id`,`id`),
  KEY `trace_id_high` (`trace_id_high`,`trace_id`) COMMENT 'for getTracesByIds',
  KEY `name` (`name`) COMMENT 'for getTraces and getSpanNames',
  KEY `remote_service_name` (`remote_service_name`) COMMENT 'for getTraces and getRemoteServiceNames',
  KEY `start_ts` (`start_ts`) COMMENT 'for getTraces ordering and range'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPRESSED;

8.3 Start the container

Remember to change the database name, account, password, IP, and port in the command below.

docker run -d --restart always -p 9411:9411 --name zipkin-mysql openzipkin/zipkin --STORAGE_TYPE=mysql --MYSQL_HOST=192.168.198.100 --MYSQL_TCP_PORT=3306 --MYSQL_DB=zipkin_config --MYSQL_USER=root --MYSQL_PASS=123456
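
Before opening the UI, the server can be probed from the command line (a sketch; Zipkin exposes a /health endpoint):

curl http://192.168.198.100:9411/health
# If the MySQL connection fails, the container logs will show it
docker logs zipkin-mysql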

8.4 Test

Once it is running, open http://192.168.198.100:9411 in a browser.

9. Installing Seata

9.1 Pull the image

docker pull seataio/seata-server:1.4.2 

9.2 Run the container

docker run -d --name seata-server  -e SEATA_IP=192.168.198.100  -e SEATA_PORT=8091 -p 8091:8091  seataio/seata-server:1.4.2

9.3 Modify the configuration files

Two configuration files need to be modified: registry.conf and file.conf.

9.3.1 Create a directory

This directory will hold registry.conf and file.conf while they are edited:

[root@localhost /]# mkdir -p /usr/local/seata
[root@localhost /]# chmod -R 777 /usr/local/seata/

9.3.2 Copy the files out of the container

docker cp aa543016e99c:/seata-server/resources/file.conf   /usr/local/seata/file.conf 

docker cp aa543016e99c:/seata-server/resources/registry.conf   /usr/local/seata/registry.conf

9.3.3 Modify the files

9.3.3.1 Modify registry.conf

# ------------------------- Part 1 -----------------------------------
# Before
registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "file"

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = ""
    cluster = "default"
    username = ""
    password = ""
  }
  ...

# After
registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos" # 这里修改成使用nacos的方式

  nacos {
    application = "seata-server" # 服务注册到nacos的名称
    serverAddr = "192.168.198.100:8848" # 这里填写nacos的ip和端口
    group = "DEFAULT_GROUP" # 这里使用默认的分组
    namespace = "0c8d097b-2aaa-45f4-889a-44545ae50d3c" # 这里填写nacos对应的命名空间 我这里因为环境是dev,所以填写的是dev的
    cluster = "default" # 默认不用修改
    username = "nacos" # nacos的账号
    password = "nacos" # nacos的密码
  }
  ...
  
# ------------------------- Part 2 -----------------------------------
# Before
config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "file"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = ""
    password = ""
    dataId = "seataServer.properties"
  }
  ...

# After (fill in the same values as above)
config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "nacos"

  nacos {
    serverAddr = "192.168.198.100:8848"
    namespace = "0c8d097b-2aaa-45f4-889a-44545ae50d3c"
    group = "DEFAULT_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seata-dev.properties"
  }
  ...

9.3.3.2 Modify file.conf

# Before
store {
  ## store mode: file、db、redis
  mode = "file"
  ## rsa decryption public key
  publicKey = ""
  
  ...
  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.jdbc.Driver"
    ## if using mysql to store the data, recommend add rewriteBatchedStatements=true in jdbc connection param
    url = "jdbc:mysql://127.0.0.1:3306/seata?rewriteBatchedStatements=true"
    user = "mysql"
    password = "mysql"
    ...
  }

# ------------------------------------------------------------------------

# After
store {
  ## store mode: file、db、redis
  mode = "db" # 修改为使用db模式
  ## rsa decryption public key
  publicKey = ""

  ...
  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver" # 这里我是使用的mysql8.0 如果要是想要使用5.x版本 需要改成 com.mysql.jdbc.Driver
    ## if using mysql to store the data, recommend add rewriteBatchedStatements=true in jdbc connection param
    url = "jdbc:mysql://192.168.198.100:3306/seata_config?rewriteBatchedStatements=true" # 这里要注意数据库名称为seata_config,一会需要根据这个去创建数据库
    user = "root"
    password = "123456"
    ...
  }

9.4 Copy the files back into the container

docker cp file.conf aa543016e99c:/seata-server/resources/file.conf
docker cp registry.conf aa543016e99c:/seata-server/resources/registry.conf

9.5 Create the matching configuration in Nacos

Because development happens in the dev environment, this configuration is created in the dev namespace, and its name matches config.nacos.dataId=seata-dev.properties from the registry.conf file above.

Contents of seata-dev.properties:

The main changes are the MySQL-related settings.

Note: if a call fails with io.seata.core.exception.RmTransactionException: Response[ TransactionException[Could not found global transaction xid = xxx.xxx.xxx.xxx:8091:36378942915387401, may be has finished.] ], the call simply took too long; the retry intervals below default to 1000 ms and can be increased.

# Retry interval for committing phase-two of global transactions left in the committing state; default 1000 ms
server.recovery.committingRetryPeriod=6000
# Retry interval for the phase-two async-committing state; default 1000 ms
server.recovery.asynCommittingRetryPeriod=6000
# Retry interval for the rollbacking state; default 1000 ms
server.recovery.rollbackingRetryPeriod=6000
# Interval of the timeout-detection thread; default 1000 ms. Timed-out global transactions are moved to the rollback session manager
server.recovery.timeoutRetryPeriod=6000
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
store.mode=db
store.lock.mode=db
store.session.mode=file
store.publicKey=
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://192.168.198.100:3306/seata_config?useSSL=true&serverTimezone=GMT
store.db.user=root
store.db.password=123456
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=true
service.vgroupMapping.default_tx_group=default
service.vgroupMapping.uat_tx_group=default
service.vgroupMapping.test_tx_group=default
service.vgroupMapping.prod_tx_group=default

9.6 Create the database and tables

Database name: seata_config

CREATE TABLE `lock_table`  (
  `row_key` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `xid` varchar(96) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `transaction_id` mediumtext CHARACTER SET utf8 COLLATE utf8_general_ci NULL,
  `branch_id` mediumtext CHARACTER SET utf8 COLLATE utf8_general_ci NULL,
  `resource_id` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `table_name` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `pk` varchar(36) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `gmt_create` datetime(0) NULL DEFAULT NULL,
  `gmt_modified` datetime(0) NULL DEFAULT NULL,
  PRIMARY KEY (`row_key`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '存储锁表' ROW_FORMAT = Dynamic;
 
CREATE TABLE `global_table`  (
  `xid` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `transaction_id` bigint(20) NULL DEFAULT NULL,
  `status` tinyint(4) NOT NULL,
  `application_id` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `transaction_service_group` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `transaction_name` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `timeout` int(11) NULL DEFAULT NULL,
  `begin_time` bigint(20) NULL DEFAULT NULL,
  `application_data` varchar(2000) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `gmt_create` datetime(0) NULL DEFAULT NULL,
  `gmt_modified` datetime(0) NULL DEFAULT NULL,
  PRIMARY KEY (`xid`) USING BTREE,
  INDEX `idx_gmt_modified_status`(`gmt_modified`, `status`) USING BTREE,
  INDEX `idx_transaction_id`(`transaction_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '全局事务表' ROW_FORMAT = Dynamic;

CREATE TABLE `branch_table`  (
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `transaction_id` bigint(20) NULL DEFAULT NULL,
  `resource_group_id` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `resource_id` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `lock_key` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `branch_type` varchar(8) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `status` tinyint(4) NULL DEFAULT NULL,
  `client_id` varchar(64) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `application_data` varchar(2000) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `gmt_create` datetime(0) NULL DEFAULT NULL,
  `gmt_modified` datetime(0) NULL DEFAULT NULL,
  PRIMARY KEY (`branch_id`) USING BTREE,
  INDEX `idx_xid`(`xid`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '分支事务表' ROW_FORMAT = Dynamic;

9.7 Restart the Seata container

To view the logs: docker logs seata-server

docker restart seata-server
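
After the restart, you can check that seata-server registered itself in Nacos through the Nacos Open API (a sketch; the namespaceId is the dev namespace used in registry.conf above, and it assumes Nacos auth is left at its defaults):

curl "http://192.168.198.100:8848/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=DEFAULT_GROUP&namespaceId=0c8d097b-2aaa-45f4-889a-44545ae50d3c"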

9.8 DDL to run later in the other (business) databases

CREATE TABLE `undo_log`  (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `context` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime(0) NOT NULL,
  `log_modified` datetime(0) NOT NULL,
  `ext` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE,
  UNIQUE INDEX `ux_undo_log`(`xid`, `branch_id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 5 CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;

10. Installing Sentinel Dashboard

10.1 Pull the image

docker pull bladex/sentinel-dashboard:1.7.0

10.2 Run the image

docker run --name sentinel -d -p 8858:8858 bladex/sentinel-dashboard:1.7.0

10.3 Access the console

Open http://ip:8858 in a browser; the default account is sentinel with password sentinel.
