2021-09-13 Integrating Seata with Nacos as the Configuration Center


Seata is a distributed-transaction framework open-sourced by Alibaba. This example uses Nacos 1.4.1 as the configuration center, Seata 1.3.0, and Spring Boot 2.3.7.

Nacos serves as the configuration center throughout this article; its installation was covered in an earlier post, or you can look it up yourself.

Integrating distributed transactions breaks down into two broad steps:
1 Set up the Seata server; seeing it registered in Nacos confirms the server side is done.
2 Add Seata to each microservice and configure it to connect to the Seata server; a success log appears once the connection is established.

1.1 Download and install Seata server 1.3.0
For reference, see this article:

https://www.cnblogs.com/qjwyss/p/14479242.html

Seata download address:

https://github.com/seata/seata/releases/download/v1.3.0/seata-server-1.3.0.zip

1.2 The table-creation SQL is included in the downloaded archive. Manually create a database named seata and run those scripts against it.
One of the tables, undo_log, does not belong in the seata database: it must be created in the business database that each of your microservices connects to.
If a service spans multiple databases, every one of them needs its own undo_log table.



1.3 Edit file.conf

## transaction log store, only used in seata-server
store {
  ## store mode: file、db、redis
  mode = "db"

  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size , if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size , if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.jdbc.Driver"
    ## change the url, user and password below to your own
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "root"
    password = "root"
    minConn = 5
    maxConn = 30
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }

  ## redis store property
  redis {
    host = "127.0.0.1"
    port = "6379"
    password = ""
    database = "0"
    minConn = 1
    maxConn = 10
    queryLimit = 100
  }

}

Next, open registry.conf. In the Nacos console, create a namespace named seata.

The namespace value below is the ID that Nacos generates for that namespace.

registry {
  # registry type: file, nacos, eureka, redis, zk, consul, etcd3, sofa. Set this to nacos and fill in the nacos username/password below.
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = "359d5f6a-92dd-47ec-b595-e8b82f0516ec"
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # config type: file, nacos, apollo, zk, consul, etcd3. Set this to nacos and fill in the nacos username/password below.
  type = "nacos"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = "359d5f6a-92dd-47ec-b595-e8b82f0516ec"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

Create config.txt at the same level as the two files above; it holds the Seata configuration entries that will be pushed into Nacos.

transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
service.vgroupMapping.my_test_tx_group=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
store.mode=db
store.publicKey=
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
# change the url, user and password to your own
store.db.url=jdbc:mysql://localhost:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
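Note that once config.type is nacos, the server reads its settings from Nacos, not from config.txt directly: each key=value line above must be pushed into Nacos as a separate config item (dataId = the key, content = the value, group = SEATA_GROUP). The Seata source repo provides a push script for this, script/config-center/nacos/nacos-config.sh. As a minimal, language-agnostic sketch of what that push amounts to (publish_fn here is a hypothetical stand-in for Nacos's publish-config API call):

```python
# Sketch: turn Seata's config.txt into the per-key items that
# nacos-config.sh publishes to Nacos (dataId=key, content=value).
def parse_config_txt(text):
    items = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        items[key.strip()] = value.strip()
    return items

def push_to_nacos(items, publish_fn, group="SEATA_GROUP"):
    # publish_fn stands in for Nacos's publish-config call.
    for data_id, content in items.items():
        publish_fn(data_id=data_id, group=group, content=content)

sample = """
store.mode=db
service.vgroupMapping.my_test_tx_group=default
# a comment
store.db.user=root
"""
published = []
push_to_nacos(parse_config_txt(sample),
              lambda **kw: published.append(kw))
```

With the 1.3.x script, the real push is roughly `sh nacos-config.sh -h 127.0.0.1 -p 8848 -g SEATA_GROUP -t <namespace-id> -u nacos -w nacos`; afterwards the entries show up as individual config items in the seata namespace.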

1.4 Start Seata
Run seata-server.bat under seata/bin (seata-server.sh on Linux). When a seata-server service shows up in the new namespace in the Nacos console, the server has started correctly.


1.5 Configure your own microservices. Keeping the existing configuration unchanged, add the following:

spring:
  cloud:
    alibaba:
      seata:
        tx-service-group: my_test_tx_group  # identical in every microservice

My complete yml configuration is below:

server:
  port: 8810
  version: V1.0.0
spring:
  main:
    allow-bean-definition-overriding: true
  application:
    name: vse-seata-order
  profiles:
    active: dev
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848
        username: nacos
        password: nacos
        prefix: ${spring.application.name}
        file-extension: yml
      discovery:
        server-addr: 127.0.0.1:8848
        username: nacos
        password: nacos
    alibaba:
      seata:
        tx-service-group: my_test_tx_group
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    type: com.alibaba.druid.pool.DruidDataSource
    druid:
      username: root
      password: root
      url: jdbc:mysql://127.0.0.1:3306/seata_order?allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai&characterEncoding=utf8&autoReconnect=true&useSSL=false&maxReconnects=100&failOverReadOnly=false&initialTimeout=10&zeroDateTimeBehavior=convertToNull
      # initial number of connections
      initialSize: 5
      # minimum pool size
      minIdle: 10
      # maximum pool size
      maxActive: 20
      # max wait time for obtaining a connection, in milliseconds
      maxWait: 60000
      # interval between eviction runs that close idle connections, in milliseconds
      timeBetweenEvictionRunsMillis: 60000
      # minimum time a connection may stay idle in the pool, in milliseconds
      minEvictableIdleTimeMillis: 300000
      # maximum time a connection may stay in the pool, in milliseconds
      maxEvictableIdleTimeMillis: 900000
      # query used to validate connections
      validationQuery: SELECT 1 FROM DUAL
      testWhileIdle: true
      testOnBorrow: true
      #      testOnReturn: false
      webStatFilter:
        enabled: true
      statViewServlet:
        # enable the built-in monitoring UI endpoint
        enabled: true
        # access whitelist; leaving it empty allows all
        allow:
        url-pattern: /monitor/druid/*
      filter:
        stat:
          enabled: true
          # slow-SQL logging
          log-slow-sql: true
          slow-sql-millis: 3000
          merge-sql: true
        wall:
          config:
            multi-statement-allow: true
      break-after-acquire-failure: false
      aop-patterns: com.jovision.vsaas.basic.service.mapper.*,com.jovision.vsaas.basic.service.mapper.*

mybatis-plus:
  global-config:
    # disable the startup logo
    banner: false
  configuration:
    map-underscore-to-camel-case: true
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl # print executed SQL to stdout
  type-aliases-package: com.jovision.vse.*.basic.model,com.jovision.vse.*.basic.dto
feign:
  hystrix:
    enabled: true
# hystrix timeout settings
hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: true
        isolation:
          strategy: SEMAPHORE
          semaphore:
            maxConcurrentRequests: 100
          thread:
            timeoutInMilliseconds: 6000
# ribbon timeout settings
ribbon:
  ReadTimeout: 6000
  ConnectTimeout: 6000

logging:
  level:
    com.alibaba.nacos.client.config.impl: WARN
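One non-obvious detail in the configuration above: tx-service-group is the glue between client and server. The client looks up the key service.vgroupMapping.my_test_tx_group in the config center to obtain a cluster name (default here, matching config.txt), then asks the registry for the seata-server instances registered under that cluster. A toy sketch of this two-step resolution, with the config center and registry replaced by plain dicts:

```python
# Sketch: how a Seata client resolves tx-service-group to TC addresses.
# Step 1: the config center maps the transaction group to a cluster name.
# Step 2: the registry maps (service, cluster) to instance addresses.
config_center = {
    "service.vgroupMapping.my_test_tx_group": "default",
}
registry = {
    ("seata-server", "default"): ["127.0.0.1:8091"],
}

def resolve_tc_addresses(tx_service_group):
    cluster = config_center["service.vgroupMapping." + tx_service_group]
    return registry[("seata-server", cluster)]
```

This is why the group name itself can be anything, as long as a matching service.vgroupMapping.&lt;group&gt; entry exists in Nacos and points at a cluster the server actually registered under.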

1.6 Start your microservice
Once the microservice starts, within about a minute it should print log lines like the following, confirming that it (the Seata client) has connected to the Seata server:

: register TM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x5455ee06, L:/127.0.0.1:65110 - R:/127.0.0.1:8091]
: register success, cost 4 ms, version:1.3.0,role:TMROLE,channel:[id: 0x5455ee06, L:/127.0.0.1:65110 - R:/127.0.0.1:8091]

These two lines are the key indicator.

Finally, the code for my project lives at:

https://gitee.com/flgitee/vse-seata

After startup, verify with:

http://localhost:8810/order/create?userId=1&productId=1&count=10&money=10

Key mechanisms
For more depth, see this excellent article:

https://blog.csdn.net/weixin_38308374/article/details/108329792

TM (Transaction Manager): lives where the global transaction begins; it asks the TC to open the global transaction, receives the globally unique transaction ID (XID), and ultimately triggers global commit or rollback.
RM (Resource Manager): each database operation executed under the global transaction is a local branch transaction, and there are usually several; the RM registers each branch with the TC, receives a branch ID, and reports the branch's execution status.
TC (Transaction Coordinator): keeps the global transaction's state; as the overall coordinator it decides whether the whole transaction commits or rolls back and drives the participants accordingly. The TC is simply the Seata server.
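The division of labor above can be sketched as a toy two-phase flow: the TM opens a global transaction at the TC and gets an XID, each RM registers a branch under that XID and reports its phase-one status, and the TC commits only if every branch succeeded. This is a conceptual simulation, not Seata's wire protocol:

```python
import itertools

class ToyTC:
    """Toy transaction coordinator: tracks branches per XID and
    decides commit vs. rollback from the phase-one reports."""
    def __init__(self):
        self._xids = itertools.count(1)
        self.branches = {}          # xid -> list of (branch_id, ok)

    def begin(self):                # called by the TM
        xid = next(self._xids)
        self.branches[xid] = []
        return xid

    def register_branch(self, xid, ok):   # called by each RM
        branch_id = len(self.branches[xid]) + 1
        self.branches[xid].append((branch_id, ok))
        return branch_id

    def resolve(self, xid):         # the phase-two decision
        ok = all(status for _, status in self.branches[xid])
        return "commit" if ok else "rollback"

tc = ToyTC()
xid = tc.begin()
tc.register_branch(xid, ok=True)    # e.g. the order service succeeded
tc.register_branch(xid, ok=False)   # e.g. the stock service failed
decision = tc.resolve(xid)
```

One failed branch is enough to flip the whole global transaction to rollback, which is exactly what the undo_log mechanism below makes possible.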
(1) How is the global rollback implemented?

Seata relies on one key mechanism: the rollback log.

Every database touched by a branch transaction needs an undo_log table. Before a record is actually modified, its current value (the before image) is written to undo_log so it can be restored later.

When a rollback request arrives, the RM generates compensating SQL from the undo_log entries and executes it.

When a commit request arrives, the corresponding undo_log records are simply deleted.
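The mechanism above can be illustrated with a toy in-memory "table": before an update the RM saves the before image keyed by (xid, branch_id); rollback restores it and commit just drops the log entry. This is conceptual only; the real client serializes the before/after images (Jackson by default, per client.undo.logSerialization above) into the undo_log.rollback_info column:

```python
table = {1: {"id": 1, "money": 100}}    # toy business table
undo_log = {}                           # (xid, branch_id) -> before image

def branch_update(xid, branch_id, row_id, changes):
    undo_log[(xid, branch_id)] = dict(table[row_id])  # save before image
    table[row_id].update(changes)                     # local commit

def branch_rollback(xid, branch_id):
    before = undo_log.pop((xid, branch_id))
    table[before["id"]] = before        # restore from the before image

def branch_commit(xid, branch_id):
    undo_log.pop((xid, branch_id))      # nothing to do but drop the log

branch_update("xid-1", 1, 1, {"money": 90})
branch_rollback("xid-1", 1)             # money restored to 100
```

Note that the branch's local transaction commits in phase one; phase-two rollback is a new, compensating write, which is what makes the AT mode non-blocking.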

(2) How does the RM talk to the TC automatically?

By intercepting JDBC: Seata proxies the data source, so when a local transaction starts it transparently registers the branch with the TC, writes the rollback log, and reports the execution result.
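That interception can be pictured as a wrapper around statement execution. In this sketch, execute, register_branch, record_undo and report are all hypothetical stand-ins for the real RM plumbing; the point is that the business code never talks to the TC itself:

```python
def seata_proxy(execute, register_branch, record_undo, report):
    """Toy stand-in for Seata's data-source proxy: wraps a raw
    execute function with the RM's bookkeeping steps."""
    def proxied(xid, sql):
        branch_id = register_branch(xid)    # register with the TC
        record_undo(xid, branch_id, sql)    # write the rollback log
        try:
            result = execute(sql)           # run the real statement
            report(xid, branch_id, "PhaseOne_Done")
            return result
        except Exception:
            report(xid, branch_id, "PhaseOne_Failed")
            raise
    return proxied

calls = []
wrapped = seata_proxy(
    execute=lambda sql: "ok",
    register_branch=lambda xid: calls.append("register") or 7,
    record_undo=lambda xid, b, sql: calls.append("undo"),
    report=lambda xid, b, s: calls.append(s),
)
result = wrapped("xid-1", "UPDATE t SET stock = stock - 1")
```

In the real client this is what DruidDataSource gets wrapped in, which is why only the data source configuration changes while the MyBatis mappers stay untouched.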

(3) What if a second-phase rollback fails?

Suppose one microservice is down when the TC instructs the RMs to roll back; the global rollback then cannot complete. The TC keeps the transaction in the rollbacking state and retries periodically, so once the failed service is back up the global rollback is re-executed.
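This retry behavior corresponds to the server.recovery.* settings in config.txt: the TC periodically re-attempts the outstanding branch rollbacks until they all succeed. A toy retry loop, assuming a hypothetical branch that fails twice (service down) and then succeeds after a "restart":

```python
def retry_global_rollback(branches, max_rounds=10):
    """Keep retrying failed branch rollbacks, as the TC's recovery
    timer does (cf. server.recovery.rollbackingRetryPeriod)."""
    pending = list(branches)                       # callables: True = rolled back
    for _ in range(max_rounds):
        pending = [b for b in pending if not b()]  # keep still-failing branches
        if not pending:
            return True         # global rollback finished
    return False                # still pending after max_rounds

attempts = {"n": 0}
def flaky_branch():
    # fails on the first two rounds (service down), then succeeds
    attempts["n"] += 1
    return attempts["n"] > 2

done = retry_global_rollback([lambda: True, flaky_branch])
```

Branches that already rolled back are not retried; only the failed one is re-driven until it reports success.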

Reposted from jimolvxing.blog.csdn.net/article/det… ; if this infringes, please contact me for removal.