The previous post walked through a basic MySQL master-slave replication setup, but replication alone is not enough: the application still has to split its read and write logic itself, and load balancing is still missing...
This post introduces a piece of middleware that addresses both problems: Mycat.
Basic environment
| Host | Port | Container name | Role |
|---|---|---|---|
| 192.168.1.18 | 3306 | percona-master01 | master |
| 192.168.1.18 | 3307 | percona-slave01 | slave |
Basic configuration
server.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
<system>
<property name="nonePasswordLogin">0</property>
<property name="useHandshakeV10">1</property>
<property name="useSqlStat">0</property>
<property name="useGlobleTableCheck">0</property>
<property name="sequnceHandlerType">2</property>
<property name="subqueryRelationshipCheck">false</property>
<property name="processorBufferPoolType">0</property>
<property name="handleDistributedTransactions">0</property>
<property name="useOffHeapForMerge">1</property>
<property name="memoryPageSize">64k</property>
<property name="spillsFileBufferSize">1k</property>
<property name="useStreamOutput">0</property>
<property name="systemReserveMemorySize">384m</property>
<property name="useZKSwitch">false</property>
</system>
<!-- Defines the itcast user and its virtual logical schema -->
<user name="itcast" defaultAccount="true">
<property name="password">yourpassword</property>
<property name="schemas">yourschemas</property>
</user>
</mycat:server>
schema.xml:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<!-- Table configuration -->
<schema name="itcast" checkSQLschema="false" sqlMaxLimit="100">
<table name="tb_ad" dataNode="dn1" rule="mod-long" />
</schema>
<!-- Shard (dataNode) configuration -->
<dataNode name="dn1" dataHost="cluster1" database="itcast" />
<!-- Connection configuration -->
<dataHost name="cluster1" maxCon="1000" minCon="10" balance="3"
writeType="1" dbType="mysql" dbDriver="native" switchType="1"
slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="W1" url="192.168.1.18:3306" user="root"
password="root">
<readHost host="W1R1" url="192.168.1.18:3307" user="root"
password="root" />
</writeHost>
</dataHost>
</mycat:schema>
Notes on the balance attribute:
- balance="0": read/write splitting is disabled; all reads are sent to the currently available writeHost.
- balance="1": all readHosts and all stand-by writeHosts take part in load-balancing SELECT statements. In short, in a dual-master, dual-slave setup (M1->S1, M2->S2, with M1 and M2 as mutual backups), M2, S1 and S2 all serve SELECTs under normal conditions.
- balance="2": all reads are distributed randomly across the writeHosts and readHosts.
- balance="3": all reads are dispatched randomly to the readHosts under the writeHost; the writeHost carries no read load. Note that balance=3 is only available in version 1.4 and later; 1.3 does not support it.
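The four modes above can be summarized with a minimal Python sketch. This is only an illustration of the routing rules as described, not MyCat's actual implementation; the host names mirror the schema.xml example above.

```python
import random

# Illustrative sketch of how a SELECT is routed under each balance mode.
# Not MyCat's real code; host names follow the schema.xml example.
def pick_read_host(balance, write_hosts, read_hosts):
    """Return the host that serves a read for the given balance mode."""
    if balance == 0:
        # No read/write splitting: reads go to the active writeHost.
        return write_hosts[0]
    if balance == 1:
        # Stand-by writeHosts and all readHosts share the SELECT load.
        return random.choice(write_hosts[1:] + read_hosts)
    if balance == 2:
        # Reads are spread randomly over every host, writers included.
        return random.choice(write_hosts + read_hosts)
    if balance == 3:
        # Reads go only to the readHosts; writeHosts carry no read load.
        return random.choice(read_hosts)
    raise ValueError("unknown balance mode: %r" % balance)

# With balance="3" and the hosts from schema.xml, every read lands on W1R1:
print(pick_read_host(3, ["W1"], ["W1R1"]))  # -> W1R1
```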
rule.xml:
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
<property name="count">1</property>
</function>
You can now connect to 192.168.1.18:8066 and test the setup.
Mycat configuration for multiple masters and slaves (data sharding):
Basic environment
MySQL cluster 1:
| Host | Port | Container name | Role |
|---|---|---|---|
| 192.168.1.18 | 3306 | percona-master01 | master |
| 192.168.1.18 | 3307 | percona-slave01 | slave |
MySQL cluster 2:
| Host | Port | Container name | Role |
|---|---|---|---|
| 192.168.1.18 | 3316 | percona-master02 | master |
| 192.168.1.18 | 3317 | percona-slave02 | slave |
Configure the master
# Set up the master
# Create the directories
mkdir /data/mysql/master02
cd /data/mysql/master02
mkdir conf data
chmod 777 * -R
# Create the configuration file
cd /data/mysql/master02/conf
vim my.cnf
# Add the following content
[mysqld]
log-bin=mysql-bin # enable binary logging
server-id=1 # server id, must be unique within a replication topology
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'
# Create the container
docker create --name percona-master02 -v /data/mysql/master02/data:/var/lib/mysql -v /data/mysql/master02/conf:/etc/my.cnf.d -p 3316:3306 -e MYSQL_ROOT_PASSWORD=root percona:5.7.23
# Start the container
docker start percona-master02 && docker logs -f percona-master02
# Create the replication account and grant privileges
create user 'itcast'@'%' identified by 'itcast';
grant replication slave on *.* to 'itcast'@'%';
flush privileges;
# Check the master status
show master status;
Configure the slave
# Set up the slave
# Create the directories
mkdir /data/mysql/slave02
cd /data/mysql/slave02
mkdir conf data
chmod 777 * -R
# Create the configuration file
cd /data/mysql/slave02/conf
vim my.cnf
# Add the following content
[mysqld]
server-id=2 # server id, must be unique within a replication topology
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'
# Create the container
docker create --name percona-slave02 -v /data/mysql/slave02/data:/var/lib/mysql -v /data/mysql/slave02/conf:/etc/my.cnf.d -p 3317:3306 -e MYSQL_ROOT_PASSWORD=root percona:5.7.23
# Start the container
docker start percona-slave02 && docker logs -f percona-slave02
# Point the slave at the master
CHANGE MASTER TO
master_host='192.168.1.18',
master_user='itcast',
master_password='itcast',
master_port=3316,
master_log_file='xxxxxx',
master_log_pos=xxxx;
# Start replication
start slave;
# Check the slave status
show slave status;
Configure MyCat
schema.xml:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<!-- Table configuration -->
<schema name="itcast" checkSQLschema="false" sqlMaxLimit="100">
<table name="tb_ad" dataNode="dn1,dn2" rule="mod-long" />
</schema>
<!-- Shard (dataNode) configuration -->
<dataNode name="dn1" dataHost="cluster1" database="itcast" />
<dataNode name="dn2" dataHost="cluster2" database="itcast" />
<!-- Connection configuration -->
<dataHost name="cluster1" maxCon="1000" minCon="10" balance="3" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="W1" url="192.168.1.18:3306" user="root" password="root">
<readHost host="W1R1" url="192.168.1.18:3307" user="root" password="root" />
</writeHost>
</dataHost>
<dataHost name="cluster2" maxCon="1000" minCon="10" balance="3" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="W2" url="192.168.1.18:3316" user="root" password="root">
<readHost host="W2R1" url="192.168.1.18:3317" user="root" password="root" />
</writeHost>
</dataHost>
</mycat:schema>
rule.xml:
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
<property name="count">2</property>
</function>
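With count set to 2, the mod-long rule routes each row by taking the sharding column modulo the number of dataNodes. A minimal Python sketch of the idea (an illustration only, not MyCat's PartitionByMod source):

```python
# Sketch of the mod-long rule with count=2: the sharding column value
# modulo the node count selects the dataNode. Illustration only.
DATA_NODES = ["dn1", "dn2"]  # matches the dataNode names in schema.xml

def route_mod_long(sharding_value, nodes=DATA_NODES):
    """Pick the dataNode for a row the way mod-long would."""
    return nodes[sharding_value % len(nodes)]

print(route_mod_long(10))  # even keys -> dn1
print(route_mod_long(11))  # odd keys  -> dn2
```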
Restart Mycat to test:
./startup_nowrap.sh && tail -f ../logs/mycat.log
MyCat cluster
# Copy the installation to create a second instance
cp mycat mycat2 -R
vim wrapper.conf
# Set the JMX port
wrapper.java.additional.7=-Dcom.sun.management.jmxremote.port=1985
vim server.xml
# Set the server port and the management port
<property name="serverPort">8067</property>
<property name="managerPort">9067</property>
# Restart the service
./startup_nowrap.sh
tail -f ../logs/mycat.log
HAProxy for load balancing
Installation
# Pull the image
docker pull haproxy:1.9.3
# Create a directory for the configuration file
mkdir /haoke/haproxy
# Create the container
docker create --name haproxy --net host -v /haoke/haproxy:/usr/local/etc/haproxy haproxy:1.9.3
Configuration file:
# Create the file
vim /haoke/haproxy/haproxy.cfg
# Add the following content
global
log 127.0.0.1 local2
maxconn 4000
daemon
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen admin_stats
bind 0.0.0.0:4001
mode http
stats uri /dbs
stats realm Global\ statistics
stats auth admin:admin123
listen proxy-mysql
bind 0.0.0.0:4002
mode tcp
balance roundrobin
# Proxy the two mycat instances
server mycat_1 192.168.1.18:8066 check port 8066 maxconn 2000
server mycat_2 192.168.1.18:8067 check port 8067 maxconn 2000
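The roundrobin balance algorithm above simply hands each new connection to the next backend in turn. A tiny Python sketch of the idea (illustrative only; the backend names mirror the cfg file above):

```python
from itertools import cycle

# Sketch of HAProxy's "balance roundrobin": successive connections are
# assigned to the backends in strict rotation. Illustration only.
def round_robin(backends):
    """Yield one backend per incoming connection, in rotation."""
    return cycle(backends)

picker = round_robin(["mycat_1", "mycat_2"])
print(next(picker))  # -> mycat_1
print(next(picker))  # -> mycat_2
print(next(picker))  # -> mycat_1
```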
Start the container
docker restart haproxy && docker logs -f haproxy
Testing
Check the stats web page: http://192.168.1.18:4001/dbs
Then connect to HAProxy (port 4002) with Navicat and verify the data.