ShardingSphere (2): Proxy Sharding, Read/Write Splitting, and Data Encryption


Introduction

The previous article implemented database/table sharding, read/write splitting, data encryption, and related features with the JDBC version of ShardingSphere. Today, using ShardingSphere's sample code, we explore the corresponding features of the Proxy version.

The advantage of ShardingSphere-Proxy is its support for heterogeneous languages, and the operational entry point it gives DBAs. ShardingSphere-JDBC only works with Java code, while ShardingSphere-Proxy works with any version of any language, because clients talk to it over the database protocol.

Download

Download the tar package

The ShardingSphere project provides an official tar download at www.apache.org/dyn/closer.… Download and extract it.

To use MySQL, copy a suitable MySQL driver JAR into the proxy's lib directory

(download from: repo1.maven.org/maven2/mysq…)

Sharding rule configuration

ShardingSphere-Proxy only supports YAML-based configuration.

First, configure server.yaml

authentication:
  users:
    root:
      password: root
    sharding:
      password: sharding
      authorizedSchemas: sharding_db

props:
  max-connections-size-per-query: 1
  executor-size: 16  # Infinite by default.
  proxy-frontend-flush-threshold: 128  # The default value is 128.
  # LOCAL: Proxy will run with LOCAL transaction.
  # XA: Proxy will run with XA transaction.
  # BASE: Proxy will run with B.A.S.E transaction.
  proxy-transaction-type: LOCAL
  proxy-opentracing-enabled: false
  proxy-hint-enabled: false
  query-with-cipher-column: true
  sql-show: false
  check-table-metadata-enabled: false


Next, configure config-sharding.yaml

schemaName: sharding_db

dataSources:
  ds_0:
    url: jdbc:mysql://127.0.0.1:3306/demo_ds_0?serverTimezone=UTC&useSSL=false
    username: root
    password:
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
    maintenanceIntervalMilliseconds: 30000
  ds_1:
    url: jdbc:mysql://127.0.0.1:3306/demo_ds_1?serverTimezone=UTC&useSSL=false
    username: root
    password:
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
    maintenanceIntervalMilliseconds: 30000

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_item_inline
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  broadcastTables:
    - t_address
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
  
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 2}
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${order_id % 2}
  
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
      props:
        worker-id: 123
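The SNOWFLAKE key generator builds IDs from a millisecond timestamp, the configured worker-id, and a per-millisecond sequence. As an illustration only, here is a simplified Python sketch of the usual snowflake bit layout (41/10/12 bits); the function name and epoch handling are mine, not ShardingSphere's actual implementation:

```python
# Simplified snowflake-style ID: 41 bits of timestamp, 10 bits of
# worker id, 12 bits of sequence. Illustrative approximation only.
def snowflake_id(timestamp_ms: int, worker_id: int, sequence: int) -> int:
    assert 0 <= worker_id < 1024 and 0 <= sequence < 4096
    return (timestamp_ms << 22) | (worker_id << 12) | sequence

# The worker-id (123 in the config above) is embedded in every ID,
# so two proxy instances with different worker ids never collide.
ident = snowflake_id(1_600_000_000_000, 123, 0)
print((ident >> 12) & 0x3FF)  # extract the worker id back: 123
```

This is also why the generated order_id values are huge (like the AUTO_INCREMENT value seen later): the timestamp occupies the high bits.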

Startup result

Check the startup log in stdout.log:

Sharding test table

Run the following in sharding_db:

CREATE TABLE `t_order` (
  `order_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) NOT NULL,
  `status` varchar(50) COLLATE utf8_bin DEFAULT NULL,
  PRIMARY KEY (`order_id`)
) ENGINE=InnoDB AUTO_INCREMENT=279205305122816001 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

This creates two tables, t_order_0 and t_order_1, in each of ds_0 and ds_1. Then execute an insert statement:

INSERT INTO `t_order` (`order_id`, `user_id`, `status`) VALUES (11 ,  0 ,  '2');

The proxy automatically routes the row to the correct shard.
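The routing the proxy performs for that insert can be reproduced by evaluating the inline expressions from the sharding rules by hand. A minimal Python sketch (the function name is mine, not a ShardingSphere API):

```python
# Mimics the INLINE sharding algorithms configured above:
#   database: ds_${user_id % 2}, table: t_order_${order_id % 2}
def route(user_id: int, order_id: int) -> str:
    database = f"ds_{user_id % 2}"
    table = f"t_order_{order_id % 2}"
    return f"{database}.{table}"

# The INSERT above (user_id=0, order_id=11) lands in ds_0.t_order_1.
print(route(user_id=0, order_id=11))  # ds_0.t_order_1
```

You can verify this by querying demo_ds_0 directly: the row appears in t_order_1 and nowhere else.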

Read/write splitting

First, configure config-replica-query.yaml

schemaName: replica_query_db

dataSources:
  primary_ds:
    url: jdbc:mysql://127.0.0.1:3306/demo_ds_master?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
    maintenanceIntervalMilliseconds: 30000
  replica_ds_0:
    url: jdbc:mysql://127.0.0.1:3306/demo_ds_slave_0?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
    maintenanceIntervalMilliseconds: 30000
  replica_ds_1:
    url: jdbc:mysql://127.0.0.1:3306/demo_ds_slave_1?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
    maintenanceIntervalMilliseconds: 30000

rules:
- !REPLICA_QUERY
  dataSources:
    pr_ds:
      name: pr_ds
      primaryDataSourceName: primary_ds
      replicaDataSourceNames:
        - replica_ds_0
        - replica_ds_1

Then create three databases in MySQL: demo_ds_master, demo_ds_slave_0, and demo_ds_slave_1, and change the authorized schema in server.yaml to replica_query_db:

  users:
    root:
      password: root
    test:
      password: test
      authorizedSchemas: replica_query_db

Test that the connection works:

mysql> show databases;
+------------------+
| Database         |
+------------------+
| replica_query_db |
+------------------+

Create the table and test data with SQL:

CREATE TABLE `t_order` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
INSERT INTO `t_order` (`id`, `name`) VALUES (1, 'jam');

One important point: ShardingSphere-Proxy's read/write splitting does not replicate data from primary to replicas by itself; you must configure the database's own primary-replica replication.

So here we manually inserted a row into each replica, with deliberately different values, to test the read/write splitting:

mysql> SELECT * FROM `t_order`;
+------+--------+
| id   | name   |
+------+--------+
|    1 | slave0 |
+------+--------+

mysql> SELECT * FROM `t_order`;
+------+--------+
| id   | name   |
+------+--------+
|    1 | slave1 |
+------+--------+
1 row in set (0.02 sec)

The queries were distributed evenly across the replicas and never touched the primary: read/write splitting works.
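Conceptually, the proxy sends writes to primary_ds and rotates reads over the configured replicas (round-robin load balancing). A rough Python sketch of that dispatch logic, with hypothetical class and method names of my own, not ShardingSphere code:

```python
import itertools

# Hypothetical sketch of read/write dispatch; names are illustrative.
class ReplicaQueryRouter:
    def __init__(self, primary: str, replicas: list):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin over replicas

    def route(self, sql: str) -> str:
        # Writes go to the primary; reads rotate over the replicas.
        if sql.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.primary
        return next(self._replicas)

router = ReplicaQueryRouter("primary_ds", ["replica_ds_0", "replica_ds_1"])
print(router.route("SELECT * FROM t_order"))  # replica_ds_0
print(router.route("SELECT * FROM t_order"))  # replica_ds_1
print(router.route("INSERT INTO t_order VALUES (1, 'jam')"))  # primary_ds
```

This is exactly why the two SELECTs above returned slave0 and slave1 in turn: consecutive reads alternate between the two replicas.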

Issues

Connecting to the proxy on port 3307 with Navicat throws an exception:

10002 - 2Unknown exception: [Can not route tables for `[ENGINES]`, please make sure the tables are in same schema.]

Connecting with the command-line client succeeds:

mysql -u sharding -psharding -h 127.0.0.1 -P 3307

mysql> show databases;
+-------------+
| Database    |
+-------------+
| sharding_db |
+-------------+
1 row in set (0.02 sec)

This exception is already tracked in a GitHub issue: github.com/apache/shar… My understanding is that when Navicat connects, it runs a batch of SQL statements to inspect the database schema, and ShardingSphere-Proxy does not yet fully support some of them. Hopefully later versions will fix this step by step.