🍠Super-Simple MySQL Syncing with Canal


Introduction

Sometimes when the database is updated, we write extra logic to update other tables, refresh caches, update Elasticsearch, and so on. Doing this inline makes the logic complex and tightly couples the code. This is where Alibaba's open-source middleware Canal comes in: Canal monitors database changes through the binlog and publishes them to a designated Kafka topic; our application then consumes that topic and performs the follow-up synchronization.

Preparation

Downloading and installing Canal

See the companion article: Canal Concepts and Getting Started

Canal configuration

  • Add the instance name Canal should monitor. After defining canal.destinations in canal.properties, create a directory with the same name under the directory that canal.conf.dir points to
# list of instances deployed on this server
canal.destinations = test
  • Update the Kafka settings in the Canal configuration
canal.mq.servers = localhost:9092,localhost:9093
canal.mq.retries = 0
canal.mq.batchSize = 16384
canal.mq.maxRequestSize = 52428800
canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
#canal.mq.properties. =
canal.mq.producerGroup = test
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
  • Create a test directory under conf
  • Inside the test directory, create the file instance.properties
#################################################
## mysql serverId , v1.0.26+ will autoGen
canal.instance.mysql.slaveId=1234

# position info
canal.instance.master.address= czd-rpdt.mysql.rds.aliyuncs.com:3306

canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
canal.instance.tsdb.enable=true

# username/password
canal.instance.dbUsername=binlog
canal.instance.dbPassword=Linlongb1nL0g
canal.instance.defaultDatabaseName =
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
# canal.instance.filter.regex=.*\\..*
canal.instance.filter.regex=app.user

# mq config
canal.mq.topic=test-canal
# dynamic topic route by schema or table regex
# hash partition config
canal.mq.partitionsNum=12
canal.mq.partitionHash=app.user.id
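For reference, canal.instance.filter.regex takes comma-separated Perl-style regular expressions. A few common patterns (app.order below is a hypothetical second table, shown only to illustrate the comma-separated form):

```properties
# every table in every schema
canal.instance.filter.regex=.*\\..*
# every table in the app schema
canal.instance.filter.regex=app\\..*
# several explicit tables
canal.instance.filter.regex=app.user,app.order
```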

Processing change data with Spring Boot

Kafka dependencies

  <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
  </dependency>

  <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
  </dependency>
  <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <optional>true</optional>
  </dependency>
  <!-- Spring's Kafka support -->
  <dependency>
      <groupId>org.springframework.kafka</groupId>
      <artifactId>spring-kafka</artifactId>
      <exclusions>
          <exclusion>
              <groupId>org.apache.kafka</groupId>
              <artifactId>kafka-clients</artifactId>
          </exclusion>
      </exclusions>
  </dependency>
  <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.1.0</version>
  </dependency>
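The listener below reads its topic from a kafka.consumer.topic property, so the application also needs Kafka consumer settings. A minimal application.properties sketch (the group id is an assumption; point spring.kafka.bootstrap-servers at the same brokers configured in canal.mq.servers):

```properties
spring.kafka.bootstrap-servers=localhost:9092,localhost:9093
spring.kafka.consumer.group-id=canal-sync
spring.kafka.consumer.auto-offset-reset=earliest
# custom property referenced by @KafkaListener; must match canal.mq.topic
kafka.consumer.topic=test-canal
```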

Listening for data changes

The data listener below consumes the Kafka topic that Canal publishes changes to; once the application receives a Kafka message, the business processing begins.

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Slf4j
@Component
public class DataListener {

    @KafkaListener(topics = {"${kafka.consumer.topic}"})
    public void processDataChange(ConsumerRecord<?, ?> record) {
        String message = record.value().toString();
        // dataHandle does the actual change processing
        dataHandle(message);
    }

    private void dataHandle(String json) {
        // parse the canal flat message (requires com.alibaba:fastjson on the classpath)
        Message message = JSON.parseObject(json, Message.class);
        log.info("database [{}], table [{}], change type [{}]",
                message.getDatabase(), message.getTable(), message.getType());
        // business logic goes here
    }
}
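Inside dataHandle, the business logic typically branches on the change type. A minimal self-contained sketch (ChangeDispatcher is a hypothetical helper; the returned action strings stand in for real cache or Elasticsearch updates):

```java
// Hypothetical helper: map a canal event type to the follow-up action.
public class ChangeDispatcher {

    // In real code each branch would update the other table, the cache, or the ES index.
    public static String dispatch(String type) {
        switch (type) {
            case "INSERT":
                return "index new row";
            case "UPDATE":
                return "refresh cached row";
            case "DELETE":
                return "evict cached row";
            default:
                // canal also emits DDL types such as CREATE/ALTER; ignore them here
                return "ignore";
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch("INSERT"));
        System.out.println(dispatch("DELETE"));
    }
}
```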

The message body is mapped as follows. The payload that Canal sends to Kafka sits in the data field, each entry being one changed database row.

@Data
public class Message {
    /** changed rows; one entry per row (column name -> value) */
    private List<Object> data;
    /** source database name */
    private String database;
    /** binlog event time in milliseconds */
    private long es;
    /** batch id */
    private int id;
    /** whether the statement was DDL */
    private boolean isDdl;
    /** column name -> MySQL column type */
    private Map mysqlType;
    /** pre-update values of the changed columns (UPDATE only) */
    private List<Object> old;
    /** primary key column names */
    private List<String> pkNames;
    /** SQL statement (populated for DDL) */
    private String sql;
    /** column name -> java.sql.Types code */
    private Map sqlType;
    /** source table name */
    private String table;
    /** time canal processed the event, in milliseconds */
    private long ts;
    /** change type: INSERT / UPDATE / DELETE, or a DDL keyword */
    private String type;
}
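With canal.mq.flatMessage = true, the JSON that Canal publishes looks roughly like this (column names and values below are illustrative, for an UPDATE on app.user; note that old carries only the columns that actually changed):

```json
{
  "data": [{"id": "1", "name": "alice", "age": "20"}],
  "database": "app",
  "es": 1633689000000,
  "id": 3,
  "isDdl": false,
  "mysqlType": {"id": "bigint(20)", "name": "varchar(64)", "age": "int(11)"},
  "old": [{"age": "19"}],
  "pkNames": ["id"],
  "sql": "",
  "sqlType": {"id": -5, "name": 12, "age": 4},
  "table": "user",
  "ts": 1633689000123,
  "type": "UPDATE"
}
```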
