Overview
Generally there are two main goals when building a cluster:
- Efficiency: improved through parallelism and concurrency.
- Stability:
  - Data replication solves the problem of the service becoming unavailable when a single data node goes down.
  - High availability of the master node solves the single-point-of-failure problem.
First of all, judging from the official documentation I have read so far and most of the ClickHouse cluster blog posts online, ClickHouse clusters do not really have an HA solution — or rather, there is HA in the sense that the cluster stays usable when a node dies, but there is no automatic failover.
ClickHouse has neither something like HDFS namespaces to switch the active master automatically on failure, nor an extra sentinel-style service like Redis uses for master failover. Once the ClickHouse node a client is connected to goes down, from that client's point of view (if it is connected to one specific server node) the whole ClickHouse service is effectively unavailable — even though the service still works if you manually point the client at another surviving ClickHouse node. ClickHouse clusters do not even have the concept of master and replica nodes; more on that later.
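As a trivial illustration of that manual switch (using this article's node names and the default native port, which are assumptions of the example, not anything ClickHouse does for you): if node1 dies, you simply re-point the client at a surviving node yourself:

```bash
# node1 is down, so connect to node2 instead -- nothing performs this switch automatically
clickhouse-client --host node2 --port 9000
```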
A ClickHouse cluster is built on two kinds of table engines:
- ReplicatedMergeTree:
  - The Data Replication page in the official docs covers the Replicated* engines of the MergeTree family, which provide data replication:
    - ReplicatedMergeTree
    - ReplicatedSummingMergeTree
    - ReplicatedReplacingMergeTree
    - ReplicatedAggregatingMergeTree
    - ReplicatedCollapsingMergeTree
    - ReplicatedVersionedCollapsingMergeTree
    - ReplicatedGraphiteMergeTree
  - Replication with these engines works per table. Replica information is stored in ZooKeeper, and every replica node holds a full copy of the table's data. Since every node can serve requests on its own, spreading client requests across the nodes with something like nginx can indeed also increase the overall serving capacity.
- Distributed:
  - A Distributed table is more like a view: it does not really store data itself. The data lives in the local tables on each node, and each local table holds only part of the distributed table's data (controlled by the number of shards and the node weights). On weights, the official documentation explains (a config sketch follows the quote): "Each shard can have a weight defined in the config file. By default, the weight is equal to one. Data is distributed across shards in the amount proportional to the shard weight. For example, if there are two shards and the first has a weight of 9 while the second has a weight of 10, the first will be sent 9 / 19 parts of the rows, and the second will be sent 10 / 19."
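To make the quoted rule concrete, here is a minimal sketch of weighted shards in the remote_servers section; the cluster name, hosts and the 9/10 weights are purely illustrative (taken from the quoted example), not part of the cluster built later in this article:

```xml
<remote_servers>
    <weighted_cluster>
        <shard>
            <weight>9</weight>
            <replica><host>node1</host><port>9000</port></replica>
        </shard>
        <shard>
            <weight>10</weight>
            <replica><host>node2</host><port>9000</port></replica>
        </shard>
    </weighted_cluster>
</remote_servers>
```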
  - By default, data is pushed to the relevant replica nodes asynchronously, without blocking the insert, and the cluster topology is defined entirely through configuration files.
  - The load across cluster nodes can be controlled with weights and a load-balancing algorithm. Every node's weight defaults to 1, and the default algorithm is random; see the load_balancing setting for the available algorithms (a config sketch follows).
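As a hedged illustration (not part of the cluster configuration used later in this article), load_balancing is a per-query setting, so it can be placed in a settings profile, for example in users.xml:

```xml
<yandex>
    <profiles>
        <default>
            <!-- possible values include random (the default), nearest_hostname and in_order;
                 see the load_balancing docs for the full list in your version -->
            <load_balancing>nearest_hostname</load_balancing>
        </default>
    </profiles>
</yandex>
```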
  - Besides each node holding one shard of the table's data, replicas can also be configured for each shard.
  - In theory, Distributed plus local tables (with any engine) is already enough to build a ClickHouse cluster, but data synchronization through the Distributed engine alone has some issues (shown later), so using a Replicated* engine for the local tables is the safer choice.
ClickHouse single-node deployment
The cluster is put together purely through configuration: first deploy however many nodes you need as standalone instances, then add the cluster configuration. For single-node installation, see my other article on installing ClickHouse on a single machine.
ReplicatedMergeTree
As mentioned above, the Replicated* engines of the MergeTree family rely on ZooKeeper for data replication. The example here uses ReplicatedMergeTree.
ZooKeeper cluster setup
You can reuse an existing ZooKeeper cluster; if you don't have one, set one up as follows.
- Download ZooKeeper: Apache ZooKeeper™ Releases
- Upload the tarball to the server and extract it, using zookeeper-3.4.14 as an example:

```bash
tar -zxf ~/softs/zookeeper-3.4.14.tar.gz -C /opt/
```

- Copy the sample config file:

```bash
cd /opt/zookeeper-3.4.14/conf
cp zoo_sample.cfg zoo.cfg
```

- Add or modify the following in zoo.cfg:

```
dataDir=/data/zookeeper
# maximum number of connections allowed per client
#maxClientCnxns=60
# cluster membership; once this is configured, create a myid file under dataDir
# on each node containing the matching id: 1, 2, 3
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```

- Create the myid file under dataDir and fill in the node's server id, e.g. on node1:

```bash
mkdir -p /data/zookeeper
echo 1 > /data/zookeeper/myid
```

- Configure environment variables by adding the following to /etc/profile:

```bash
# ZOOKEEPER_HOME
export ZOOKEEPER_HOME=/opt/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_HOME/bin
```

- Run source /etc/profile to apply the changes.
- Start the service and check its status:

```bash
# run on all three nodes
zkServer.sh start
zkServer.sh status
# output like the following means it is running
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
```
Configuring the ZooKeeper cluster in ClickHouse
/etc/clickhouse-server/config.xml contains the following line:

```xml
<zookeeper incl="zookeeper-servers" optional="true" />
```

incl means the content of the element with that name can be pulled in from an external file. The zookeeper section can be configured either directly in /etc/clickhouse-server/config.xml or in an external config file; I use the external-file approach here, so first point /etc/clickhouse-server/config.xml at the external file. Note that if include_from is not configured, /etc/metrika.xml is used by default.
- Add the following to /etc/clickhouse-server/config.xml:

```xml
<include_from>/etc/clickhouse-server/metrika.xml</include_from>
```

- Then put the following into /etc/clickhouse-server/metrika.xml, adjusting the ZooKeeper hosts to your environment; the element name must match the incl value:

```xml
<yandex>
    <zookeeper-servers>
        <node index="1">
            <host>node2</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>node3</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>node4</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
</yandex>
```

- Distribute the config files to all nodes (a loop sketch for all nodes follows below):

```bash
scp ./config.xml ./metrika.xml root@node2:$PWD
```
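The command above only copies to node2; a minimal sketch to push the files to every node, assuming node2–node4 are the remaining nodes (adjust the host list to your environment):

```bash
for h in node2 node3 node4; do
  scp /etc/clickhouse-server/config.xml /etc/clickhouse-server/metrika.xml root@${h}:/etc/clickhouse-server/
done
```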
- Restart the ClickHouse server:

```bash
service clickhouse-server restart
```
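Optionally, verify that ClickHouse can actually reach ZooKeeper after the restart by querying the system.zookeeper system table (it requires a path condition in the WHERE clause); if the query returns the ZooKeeper root's children, the connection is working:

```sql
SELECT name
FROM system.zookeeper
WHERE path = '/';
```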
Testing a ReplicatedMergeTree table
First, on one node, create the table and import the data following the official OnTime dataset example:
- Download the data:

```bash
for s in `seq 1987 2018`
do
for m in `seq 1 12`
do
wget https://transtats.bts.gov/PREZIP/On_Time_Reporting_Carrier_On_Time_Performance_1987_present_${s}_${m}.zip
done
done
```

- Create the table:
CREATE TABLE `ontime` ( `Year` UInt16, `Quarter` UInt8, `Month` UInt8, `DayofMonth` UInt8, `DayOfWeek` UInt8, `FlightDate` Date, `UniqueCarrier` FixedString(7), `AirlineID` Int32, `Carrier` FixedString(2), `TailNum` String, `FlightNum` String, `OriginAirportID` Int32, `OriginAirportSeqID` Int32, `OriginCityMarketID` Int32, `Origin` FixedString(5), `OriginCityName` String, `OriginState` FixedString(2), `OriginStateFips` String, `OriginStateName` String, `OriginWac` Int32, `DestAirportID` Int32, `DestAirportSeqID` Int32, `DestCityMarketID` Int32, `Dest` FixedString(5), `DestCityName` String, `DestState` FixedString(2), `DestStateFips` String, `DestStateName` String, `DestWac` Int32, `CRSDepTime` Int32, `DepTime` Int32, `DepDelay` Int32, `DepDelayMinutes` Int32, `DepDel15` Int32, `DepartureDelayGroups` String, `DepTimeBlk` String, `TaxiOut` Int32, `WheelsOff` Int32, `WheelsOn` Int32, `TaxiIn` Int32, `CRSArrTime` Int32, `ArrTime` Int32, `ArrDelay` Int32, `ArrDelayMinutes` Int32, `ArrDel15` Int32, `ArrivalDelayGroups` Int32, `ArrTimeBlk` String, `Cancelled` UInt8, `CancellationCode` FixedString(1), `Diverted` UInt8, `CRSElapsedTime` Int32, `ActualElapsedTime` Int32, `AirTime` Int32, `Flights` Int32, `Distance` Int32, `DistanceGroup` UInt8, `CarrierDelay` Int32, `WeatherDelay` Int32, `NASDelay` Int32, `SecurityDelay` Int32, `LateAircraftDelay` Int32, `FirstDepTime` String, `TotalAddGTime` String, `LongestAddGTime` String, `DivAirportLandings` String, `DivReachedDest` String, `DivActualElapsedTime` String, `DivArrDelay` String, `DivDistance` String, `Div1Airport` String, `Div1AirportID` Int32, `Div1AirportSeqID` Int32, `Div1WheelsOn` String, `Div1TotalGTime` String, `Div1LongestGTime` String, `Div1WheelsOff` String, `Div1TailNum` String, `Div2Airport` String, `Div2AirportID` Int32, `Div2AirportSeqID` Int32, `Div2WheelsOn` String, `Div2TotalGTime` String, `Div2LongestGTime` String, `Div2WheelsOff` String, `Div2TailNum` String, `Div3Airport` String, `Div3AirportID` Int32, `Div3AirportSeqID` Int32, `Div3WheelsOn` String, `Div3TotalGTime` String, `Div3LongestGTime` String, `Div3WheelsOff` String, `Div3TailNum` String, `Div4Airport` String, `Div4AirportID` Int32, `Div4AirportSeqID` Int32, `Div4WheelsOn` String, `Div4TotalGTime` String, `Div4LongestGTime` String, `Div4WheelsOff` String, `Div4TailNum` String, `Div5Airport` String, `Div5AirportID` Int32, `Div5AirportSeqID` Int32, `Div5WheelsOn` String, `Div5TotalGTime` String, `Div5LongestGTime` String, `Div5WheelsOff` String, `Div5TailNum` String ) ENGINE = MergeTree PARTITION BY Year ORDER BY (Carrier, FlightDate) SETTINGS index_granularity = 8192; -
- Import the data:

```bash
for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
```

- Check the imported row count; the download is slow, so I only pulled one year of data:

```sql
select count(1) from ontime;

SELECT count(1)
FROM ontime

┌─count(1)─┐
│  3900561 │
└──────────┘

1 rows in set. Elapsed: 0.019 sec.
```
Next, create the ReplicatedMergeTree table:
- I only use two nodes for this test. Create the ontime_zk table on both of them, changing replica1 to replica2 in the ReplicatedMergeTree arguments on node2:
ontime_zk表:CREATE TABLE ontime_zk ( `Year` UInt16, `Quarter` UInt8, `Month` UInt8, `DayofMonth` UInt8, `DayOfWeek` UInt8, `FlightDate` Date, `UniqueCarrier` FixedString(7), `AirlineID` Int32, `Carrier` FixedString(2), `TailNum` String, `FlightNum` String, `OriginAirportID` Int32, `OriginAirportSeqID` Int32, `OriginCityMarketID` Int32, `Origin` FixedString(5), `OriginCityName` String, `OriginState` FixedString(2), `OriginStateFips` String, `OriginStateName` String, `OriginWac` Int32, `DestAirportID` Int32, `DestAirportSeqID` Int32, `DestCityMarketID` Int32, `Dest` FixedString(5), `DestCityName` String, `DestState` FixedString(2), `DestStateFips` String, `DestStateName` String, `DestWac` Int32, `CRSDepTime` Int32, `DepTime` Int32, `DepDelay` Int32, `DepDelayMinutes` Int32, `DepDel15` Int32, `DepartureDelayGroups` String, `DepTimeBlk` String, `TaxiOut` Int32, `WheelsOff` Int32, `WheelsOn` Int32, `TaxiIn` Int32, `CRSArrTime` Int32, `ArrTime` Int32, `ArrDelay` Int32, `ArrDelayMinutes` Int32, `ArrDel15` Int32, `ArrivalDelayGroups` Int32, `ArrTimeBlk` String, `Cancelled` UInt8, `CancellationCode` FixedString(1), `Diverted` UInt8, `CRSElapsedTime` Int32, `ActualElapsedTime` Int32, `AirTime` Int32, `Flights` Int32, `Distance` Int32, `DistanceGroup` UInt8, `CarrierDelay` Int32, `WeatherDelay` Int32, `NASDelay` Int32, `SecurityDelay` Int32, `LateAircraftDelay` Int32, `FirstDepTime` String, `TotalAddGTime` String, `LongestAddGTime` String, `DivAirportLandings` String, `DivReachedDest` String, `DivActualElapsedTime` String, `DivArrDelay` String, `DivDistance` String, `Div1Airport` String, `Div1AirportID` Int32, `Div1AirportSeqID` Int32, `Div1WheelsOn` String, `Div1TotalGTime` String, `Div1LongestGTime` String, `Div1WheelsOff` String, `Div1TailNum` String, `Div2Airport` String, `Div2AirportID` Int32, `Div2AirportSeqID` Int32, `Div2WheelsOn` String, `Div2TotalGTime` String, `Div2LongestGTime` String, `Div2WheelsOff` String, `Div2TailNum` String, `Div3Airport` String, `Div3AirportID` Int32, `Div3AirportSeqID` Int32, `Div3WheelsOn` String, `Div3TotalGTime` String, `Div3LongestGTime` String, `Div3WheelsOff` String, `Div3TailNum` String, `Div4Airport` String, `Div4AirportID` Int32, `Div4AirportSeqID` Int32, `Div4WheelsOn` String, `Div4TotalGTime` String, `Div4LongestGTime` String, `Div4WheelsOff` String, `Div4TailNum` String, `Div5Airport` String, `Div5AirportID` Int32, `Div5AirportSeqID` Int32, `Div5WheelsOn` String, `Div5TotalGTime` String, `Div5LongestGTime` String, `Div5WheelsOff` String, `Div5TailNum` String ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/ontime_zk', 'replica1', FlightDate, (Year, FlightDate), 8192); --node2 中将replica1改为replica2 -
- Once the table exists on both nodes, the corresponding replica entries are visible in ZooKeeper:

```
ls /clickhouse/tables/ontime_zk/replicas
[replica1, replica2]
```
- Insert data into ontime_zk on node1:

```sql
-- run the INSERT on node1
INSERT into ontime_zk select * from ontime limit 100000;

-- run this on both node1 and node2: the results match, the data is fully replicated
-- automatically, and replication is bidirectional, i.e. inserting on node2 would be
-- replicated back to node1 as well
SELECT COUNT(1) from ontime_zk oz;

SELECT COUNT(1)
FROM ontime_zk AS oz

┌─COUNT(1)─┐
│   100000 │
└──────────┘

1 rows in set. Elapsed: 0.002 sec.
```
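To inspect the replication state from ClickHouse's side, the system.replicas table can be queried on either node (a quick sanity check; the exact set of columns may vary between versions):

```sql
SELECT database, table, is_leader, total_replicas, active_replicas
FROM system.replicas
WHERE table = 'ontime_zk';
```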
Distributed
The Distributed table engine determines a table's shard and replica layout from configuration. I use four nodes, node1 through node4, for a distributed-table test with two shards and two replicas:
shard1: node1, node2
shard2: node3, node4
/etc/clickhouse-server/config.xml has a section for cluster configuration, and it already contains some sample clusters, which can be removed:

```xml
<remote_servers incl="clickhouse_remote_servers" >
...
</remote_servers>
```

As with the ZooKeeper settings, this can live either in /etc/clickhouse-server/config.xml or in the external config file. Since the external file is already wired in above, I again take the external approach and put the following into /etc/clickhouse-server/metrika.xml:
```xml
<yandex>
    <!-- cluster configuration -->
    <clickhouse_remote_servers>
        <!-- 2 shards, 2 replicas -->
        <ck_cluster>
            <!-- shard 1 -->
            <shard>
                <replica>
                    <host>node1</host>
                    <port>9000</port>
                    <!-- user and password must be set, otherwise you get:
                         Authentication failed: password is incorrect or there is no user with such name
                    -->
                    <user>default</user>
                    <password>default</password>
                </replica>
                <replica>
                    <host>node2</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>default</password>
                </replica>
            </shard>
            <!-- shard 2 -->
            <shard>
                <replica>
                    <host>node3</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>default</password>
                </replica>
                <replica>
                    <host>node4</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>default</password>
                </replica>
            </shard>
        </ck_cluster>
    </clickhouse_remote_servers>
</yandex>
```
Then distribute metrika.xml to all nodes. After that, the cluster information can be queried from the database:

```sql
select * from system.clusters;

SELECT *
FROM system.clusters

┌─cluster────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ ck_cluster │         1 │            1 │           1 │ node1     │ 192.168.0.85 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ ck_cluster │         1 │            1 │           2 │ node2     │ 192.168.0.86 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ ck_cluster │         2 │            1 │           1 │ node3     │ 192.168.0.87 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ ck_cluster │         2 │            1 │           2 │ node4     │ 192.168.0.88 │ 9000 │        0 │ default │                  │            0 │                       0 │
└────────────┴───────────┴──────────────┴─────────────┴───────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

4 rows in set. Elapsed: 0.003 sec.
```
Now for the distributed-table test:
- First, create the local table on all four nodes with the MergeTree engine:
CREATE TABLE ontime_local ( `Year` UInt16, `Quarter` UInt8, `Month` UInt8, `DayofMonth` UInt8, `DayOfWeek` UInt8, `FlightDate` Date, `UniqueCarrier` FixedString(7), `AirlineID` Int32, `Carrier` FixedString(2), `TailNum` String, `FlightNum` String, `OriginAirportID` Int32, `OriginAirportSeqID` Int32, `OriginCityMarketID` Int32, `Origin` FixedString(5), `OriginCityName` String, `OriginState` FixedString(2), `OriginStateFips` String, `OriginStateName` String, `OriginWac` Int32, `DestAirportID` Int32, `DestAirportSeqID` Int32, `DestCityMarketID` Int32, `Dest` FixedString(5), `DestCityName` String, `DestState` FixedString(2), `DestStateFips` String, `DestStateName` String, `DestWac` Int32, `CRSDepTime` Int32, `DepTime` Int32, `DepDelay` Int32, `DepDelayMinutes` Int32, `DepDel15` Int32, `DepartureDelayGroups` String, `DepTimeBlk` String, `TaxiOut` Int32, `WheelsOff` Int32, `WheelsOn` Int32, `TaxiIn` Int32, `CRSArrTime` Int32, `ArrTime` Int32, `ArrDelay` Int32, `ArrDelayMinutes` Int32, `ArrDel15` Int32, `ArrivalDelayGroups` Int32, `ArrTimeBlk` String, `Cancelled` UInt8, `CancellationCode` FixedString(1), `Diverted` UInt8, `CRSElapsedTime` Int32, `ActualElapsedTime` Int32, `AirTime` Int32, `Flights` Int32, `Distance` Int32, `DistanceGroup` UInt8, `CarrierDelay` Int32, `WeatherDelay` Int32, `NASDelay` Int32, `SecurityDelay` Int32, `LateAircraftDelay` Int32, `FirstDepTime` String, `TotalAddGTime` String, `LongestAddGTime` String, `DivAirportLandings` String, `DivReachedDest` String, `DivActualElapsedTime` String, `DivArrDelay` String, `DivDistance` String, `Div1Airport` String, `Div1AirportID` Int32, `Div1AirportSeqID` Int32, `Div1WheelsOn` String, `Div1TotalGTime` String, `Div1LongestGTime` String, `Div1WheelsOff` String, `Div1TailNum` String, `Div2Airport` String, `Div2AirportID` Int32, `Div2AirportSeqID` Int32, `Div2WheelsOn` String, `Div2TotalGTime` String, `Div2LongestGTime` String, `Div2WheelsOff` String, `Div2TailNum` String, `Div3Airport` String, `Div3AirportID` Int32, `Div3AirportSeqID` Int32, `Div3WheelsOn` String, `Div3TotalGTime` String, `Div3LongestGTime` String, `Div3WheelsOff` String, `Div3TailNum` String, `Div4Airport` String, `Div4AirportID` Int32, `Div4AirportSeqID` Int32, `Div4WheelsOn` String, `Div4TotalGTime` String, `Div4LongestGTime` String, `Div4WheelsOff` String, `Div4TailNum` String, `Div5Airport` String, `Div5AirportID` Int32, `Div5AirportSeqID` Int32, `Div5WheelsOn` String, `Div5TotalGTime` String, `Div5LongestGTime` String, `Div5WheelsOff` String, `Div5TailNum` String ) ENGINE = MergeTree PARTITION BY Year ORDER BY (Carrier, FlightDate) SETTINGS index_granularity = 8192; -
- Then create the Distributed table on one of the nodes. The ON CLUSTER clause below matters: without it, ontime_cluster would only be created on the node where the statement runs. In ENGINE = Distributed(ck_cluster, test, ontime_local, rand()), ck_cluster is the cluster to distribute over, test is the database name, ontime_local is the local table name, and rand() is the sharding expression (sharding key); per the official docs, the sharding expression can be any expression returning an integer, e.g. rand() for random distribution or UserID to distribute by the remainder of dividing the user's ID:
-- 这个ON CLUSTER要写,不然只会在执行所在节点创建ontime_cluster表 CREATE TABLE ontime_cluster ON CLUSTER ck_cluster ( `Year` UInt16, `Quarter` UInt8, `Month` UInt8, `DayofMonth` UInt8, `DayOfWeek` UInt8, `FlightDate` Date, `UniqueCarrier` FixedString(7), `AirlineID` Int32, `Carrier` FixedString(2), `TailNum` String, `FlightNum` String, `OriginAirportID` Int32, `OriginAirportSeqID` Int32, `OriginCityMarketID` Int32, `Origin` FixedString(5), `OriginCityName` String, `OriginState` FixedString(2), `OriginStateFips` String, `OriginStateName` String, `OriginWac` Int32, `DestAirportID` Int32, `DestAirportSeqID` Int32, `DestCityMarketID` Int32, `Dest` FixedString(5), `DestCityName` String, `DestState` FixedString(2), `DestStateFips` String, `DestStateName` String, `DestWac` Int32, `CRSDepTime` Int32, `DepTime` Int32, `DepDelay` Int32, `DepDelayMinutes` Int32, `DepDel15` Int32, `DepartureDelayGroups` String, `DepTimeBlk` String, `TaxiOut` Int32, `WheelsOff` Int32, `WheelsOn` Int32, `TaxiIn` Int32, `CRSArrTime` Int32, `ArrTime` Int32, `ArrDelay` Int32, `ArrDelayMinutes` Int32, `ArrDel15` Int32, `ArrivalDelayGroups` Int32, `ArrTimeBlk` String, `Cancelled` UInt8, `CancellationCode` FixedString(1), `Diverted` UInt8, `CRSElapsedTime` Int32, `ActualElapsedTime` Int32, `AirTime` Int32, `Flights` Int32, `Distance` Int32, `DistanceGroup` UInt8, `CarrierDelay` Int32, `WeatherDelay` Int32, `NASDelay` Int32, `SecurityDelay` Int32, `LateAircraftDelay` Int32, `FirstDepTime` String, `TotalAddGTime` String, `LongestAddGTime` String, `DivAirportLandings` String, `DivReachedDest` String, `DivActualElapsedTime` String, `DivArrDelay` String, `DivDistance` String, `Div1Airport` String, `Div1AirportID` Int32, `Div1AirportSeqID` Int32, `Div1WheelsOn` String, `Div1TotalGTime` String, `Div1LongestGTime` String, `Div1WheelsOff` String, `Div1TailNum` String, `Div2Airport` String, `Div2AirportID` Int32, `Div2AirportSeqID` Int32, `Div2WheelsOn` String, `Div2TotalGTime` String, `Div2LongestGTime` String, `Div2WheelsOff` String, `Div2TailNum` String, `Div3Airport` String, `Div3AirportID` Int32, `Div3AirportSeqID` Int32, `Div3WheelsOn` String, `Div3TotalGTime` String, `Div3LongestGTime` String, `Div3WheelsOff` String, `Div3TailNum` String, `Div4Airport` String, `Div4AirportID` Int32, `Div4AirportSeqID` Int32, `Div4WheelsOn` String, `Div4TotalGTime` String, `Div4LongestGTime` String, `Div4WheelsOff` String, `Div4TailNum` String, `Div5Airport` String, `Div5AirportID` Int32, `Div5AirportSeqID` Int32, `Div5WheelsOn` String, `Div5TotalGTime` String, `Div5LongestGTime` String, `Div5WheelsOff` String, `Div5TailNum` String ) ENGINE = Distributed(ck_cluster, test, ontime_local, rand()); -- ck_cluster为在哪个集群分布,test为数据库名称,ontime_local为本地表名称,rand()为数据分布方法,或者叫做sharding key /** 关于rand()这个参数,官网有个解释:The sharding expression can be any expression from constants and table columns that returns an integer. For example, you can use the expression rand() for random distribution of data, or UserID for distribution by the remainder from dividing the user’s ID */ -
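As a small, hedged illustration of a non-random sharding key (not used elsewhere in this walkthrough), a hash of a column keeps related rows on the same shard; the choice of Carrier here is purely hypothetical:

```sql
-- hypothetical variant: rows with the same Carrier always land on the same shard
ENGINE = Distributed(ck_cluster, test, ontime_local, cityHash64(Carrier))
```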
- Write data into ontime_cluster:

```sql
-- first, check how many rows ontime has
select count(1) from ontime;

SELECT count(1)
FROM ontime

┌─count(1)─┐
│  3900561 │
└──────────┘

1 rows in set. Elapsed: 0.001 sec.

-- write the data into ontime_cluster
INSERT into ontime_cluster select * from ontime;

-- ontime_cluster now has the same count as ontime
select count(1) from ontime_cluster;

SELECT count(1)
FROM ontime_cluster

┌─count(1)─┐
│  3900561 │
└──────────┘

1 rows in set. Elapsed: 0.010 sec.

-- ontime_local holds roughly half the data, since there are two shards
select count(1) from ontime_local;

SELECT count(1)
FROM ontime_local

┌─count(1)─┐
│  1950884 │
└──────────┘

1 rows in set. Elapsed: 0.004 sec.
```
- When you query a Distributed table, the query is automatically fanned out to every shard, and for each shard one of its replicas is picked. In other words, as long as every shard has at least one live replica, the cluster as a whole stays usable. The reason I still say there is no automatic failover is that there is no unified entry point: a client connects to one concrete node, say node1, and if node1 dies the cluster is still fine, but for that client the service is gone. In production, something like nginx could presumably act as a connection proxy (a sketch follows below).
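A rough sketch of that idea, assuming nginx is built with the stream module and runs on a separate proxy host; this is an illustration I have not tested as part of this setup, and the hosts/ports are just this article's test nodes:

```nginx
stream {
    upstream clickhouse_native {
        server node1:9000;
        server node2:9000;
        server node3:9000;
        server node4:9000;
    }
    server {
        listen 9000;                      # clients connect to the proxy host on 9000
        proxy_pass clickhouse_native;     # nginx forwards the native TCP protocol
    }
}
```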
- There is no master/replica distinction in the cluster; every node provides the full set of insert, delete, query and update operations. Here I test inserting data through node2:

```bash
# the table structure is the same, so I load the downloaded files directly into ontime_cluster
for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host node2 --password default --query="INSERT INTO test.ontime_cluster FORMAT CSVWithNames"; done
```

```sql
-- then on node1, the count of ontime_cluster has doubled
select count(1) from ontime_cluster;

SELECT count(1)
FROM ontime_cluster

┌─count(1)─┐
│  7801122 │
└──────────┘

1 rows in set. Elapsed: 0.023 sec.

-- the local table has also roughly doubled
select count(1) from ontime_local;

SELECT count(1)
FROM ontime_local

┌─count(1)─┐
│  3901207 │
└──────────┘

1 rows in set. Elapsed: 0.004 sec.
```
- truncate table ontime_cluster; does not clear any data, because the Distributed table does not actually store data. As far as I can tell, the data has to be removed through the local tables; I don't know whether there is a better way (one option is sketched below).
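One option, assuming your ClickHouse version supports TRUNCATE together with ON CLUSTER (worth verifying on your version first), is to truncate the local table on every node with a single statement:

```sql
-- run on any one node; truncates ontime_local on every node of ck_cluster
TRUNCATE TABLE ontime_local ON CLUSTER ck_cluster;
```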
HA options
MergeTree + Distributed
In this scheme the local tables use MergeTree (almost any engine would do; the point is to contrast the data synchronization done by Distributed with that done by the Replicated* engines), with a Distributed table created on top — essentially what the Distributed section above already built. It relies on the cluster's own data-forwarding mechanism (which I have not yet studied in detail), but according to what I have read online it has problems:
- If node1 goes down temporarily, the incremental data written between the outage and the recovery can be backfilled; this relies on the push mechanism on node2, which keeps the pending data in a temporary directory.
- But if node1 is gone for good — the disk is dead, nothing can be recovered, and it has to be rebuilt by bringing in a new node5 to replace it — then the problem appears: the existing data cannot be restored.
I also noticed another problem. First, here is what the official documentation says about writing data into a cluster:
First, you can define which servers to write which data to and perform the write directly on each shard. In other words, perform INSERT in the tables that the distributed table “looks at”. This is the most flexible solution as you can use any sharding scheme, which could be non-trivial due to the requirements of the subject area. This is also the most optimal solution since data can be written to different shards completely independently.
Second, you can perform INSERT in a Distributed table. In this case, the table will distribute the inserted data across the servers itself. In order to write to a Distributed table, it must have a sharding key set (the last parameter). In addition, if there is only one shard, the write operation works without specifying the sharding key, since it doesn’t mean anything in this case.
In short, there are two approaches:
- Decide yourself, based on the business, which shard's local table each piece of data is written to.
- Write into the Distributed table, which distributes the inserted rows across the shards according to the sharding expression given when the table was created.
Recall the cluster layout: four nodes, two shards, two replicas:
shard1: node1, node2
shard2: node3, node4
My test went as follows:
- First check the row counts of the distributed table and the local table on node1:

```sql
SELECT COUNT(1) FROM ontime_cluster oc;

SELECT COUNT(1)
FROM ontime_cluster AS oc

┌─COUNT(1)─┐
│  7801232 │
└──────────┘

1 rows in set. Elapsed: 0.010 sec.

SELECT COUNT(1) FROM ontime_local ol;

SELECT COUNT(1)
FROM ontime_local AS ol

┌─COUNT(1)─┐
│  3901256 │
└──────────┘

1 rows in set. Elapsed: 0.006 sec.
```
往node1本地表中插入200条记录:
INSERT into table ontime_local select * from ontime limit 200; -
- Check the counts on node1 after the insert; both the distributed table and the local table show 200 more rows:

```sql
SELECT COUNT(1) FROM ontime_cluster oc;

SELECT COUNT(1)
FROM ontime_cluster AS oc

┌─COUNT(1)─┐
│  7801432 │
└──────────┘

1 rows in set. Elapsed: 0.015 sec.

SELECT COUNT(1) FROM ontime_local ol;

SELECT COUNT(1)
FROM ontime_local AS ol

┌─COUNT(1)─┐
│  3901456 │
└──────────┘

1 rows in set. Elapsed: 0.019 sec.
```
- Now check the counts on node2 (node2 is in the same shard as node1, i.e. they are replicas of each other). Here the problem shows up: the counts have not changed, and restarting the service does not help:

```sql
SELECT COUNT(1) FROM ontime_cluster oc;

SELECT COUNT(1)
FROM ontime_cluster AS oc

┌─COUNT(1)─┐
│  7801232 │
└──────────┘

1 rows in set. Elapsed: 0.013 sec.

SELECT COUNT(1) FROM ontime_local ol;

SELECT COUNT(1)
FROM ontime_local AS ol

┌─COUNT(1)─┐
│  3901256 │
└──────────┘

1 rows in set. Elapsed: 0.005 sec.
```
- On node3 (a different shard from node1), checking the distributed table's count exposes a new problem: the result flips back and forth, sometimes including the 200 new rows and sometimes showing the pre-insert count:

```sql
SELECT COUNT(1) FROM ontime_cluster oc;

SELECT COUNT(1)
FROM ontime_cluster AS oc

┌─COUNT(1)─┐
│  7801432 │
└──────────┘

1 rows in set. Elapsed: 0.010 sec.

SELECT COUNT(1) FROM ontime_cluster oc;

SELECT COUNT(1)
FROM ontime_cluster AS oc

┌─COUNT(1)─┐
│  7801232 │
└──────────┘

1 rows in set. Elapsed: 0.005 sec.
```
- The problem seems to be that data written directly into the local table is not synchronized to its replica. I still need to spend more time on the exact cause and the fix (a likely related setting is sketched below); writing through the Distributed table does not have this problem.
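My current, unverified understanding is that the per-shard internal_replication flag is the relevant setting: with the default of false (which the config above uses implicitly), the Distributed table itself writes every insert to all replicas of a shard, while plain MergeTree local tables never replicate anything on their own, so rows inserted directly into one local table stay on that node only. A sketch of where the flag sits:

```xml
<shard>
    <!-- false (default): the Distributed table writes to every replica itself.
         true: it writes to a single replica and relies on Replicated* local
         tables to propagate the data. -->
    <internal_replication>false</internal_replication>
    <replica>
        <host>node1</host>
        <port>9000</port>
    </replica>
    <replica>
        <host>node2</host>
        <port>9000</port>
    </replica>
</shard>
```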
ReplicatedMergeTree + Distributed
In this scheme the local tables use ReplicatedMergeTree, again with a Distributed table on top. The only change from the previous scheme is switching the local tables from MergeTree to ReplicatedMergeTree, so data synchronization no longer relies on the Distributed layer but on ZooKeeper.
- First drop the local tables and the distributed table:

```sql
-- run on every node
DROP table ontime_local;
-- running this on one node is enough
DROP table ontime_cluster on cluster ck_cluster;
```
- Add the following macros to /etc/clickhouse-server/metrika.xml, with the values matching each node, to make the table definitions below simpler:

```xml
<!-- node1 -->
<macros>
    <shard>01</shard>
    <replica>01</replica>
</macros>
<!-- node2 -->
<macros>
    <shard>01</shard>
    <replica>02</replica>
</macros>
<!-- node3 -->
<macros>
    <shard>02</shard>
    <replica>01</replica>
</macros>
<!-- node4 -->
<macros>
    <shard>02</shard>
    <replica>02</replica>
</macros>
```
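After the config has been reloaded (or the server restarted), the substitutions can be sanity-checked on each node; as far as I know the system.macros table exposes them, though it is worth confirming it exists in your version:

```sql
SELECT * FROM system.macros;
```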
- On every node, create the local table with the ReplicatedMergeTree engine; note the {shard} and {replica} macros in the engine arguments:
CREATE TABLE ontime_zk_local ( `Year` UInt16, `Quarter` UInt8, `Month` UInt8, `DayofMonth` UInt8, `DayOfWeek` UInt8, `FlightDate` Date, `UniqueCarrier` FixedString(7), `AirlineID` Int32, `Carrier` FixedString(2), `TailNum` String, `FlightNum` String, `OriginAirportID` Int32, `OriginAirportSeqID` Int32, `OriginCityMarketID` Int32, `Origin` FixedString(5), `OriginCityName` String, `OriginState` FixedString(2), `OriginStateFips` String, `OriginStateName` String, `OriginWac` Int32, `DestAirportID` Int32, `DestAirportSeqID` Int32, `DestCityMarketID` Int32, `Dest` FixedString(5), `DestCityName` String, `DestState` FixedString(2), `DestStateFips` String, `DestStateName` String, `DestWac` Int32, `CRSDepTime` Int32, `DepTime` Int32, `DepDelay` Int32, `DepDelayMinutes` Int32, `DepDel15` Int32, `DepartureDelayGroups` String, `DepTimeBlk` String, `TaxiOut` Int32, `WheelsOff` Int32, `WheelsOn` Int32, `TaxiIn` Int32, `CRSArrTime` Int32, `ArrTime` Int32, `ArrDelay` Int32, `ArrDelayMinutes` Int32, `ArrDel15` Int32, `ArrivalDelayGroups` Int32, `ArrTimeBlk` String, `Cancelled` UInt8, `CancellationCode` FixedString(1), `Diverted` UInt8, `CRSElapsedTime` Int32, `ActualElapsedTime` Int32, `AirTime` Int32, `Flights` Int32, `Distance` Int32, `DistanceGroup` UInt8, `CarrierDelay` Int32, `WeatherDelay` Int32, `NASDelay` Int32, `SecurityDelay` Int32, `LateAircraftDelay` Int32, `FirstDepTime` String, `TotalAddGTime` String, `LongestAddGTime` String, `DivAirportLandings` String, `DivReachedDest` String, `DivActualElapsedTime` String, `DivArrDelay` String, `DivDistance` String, `Div1Airport` String, `Div1AirportID` Int32, `Div1AirportSeqID` Int32, `Div1WheelsOn` String, `Div1TotalGTime` String, `Div1LongestGTime` String, `Div1WheelsOff` String, `Div1TailNum` String, `Div2Airport` String, `Div2AirportID` Int32, `Div2AirportSeqID` Int32, `Div2WheelsOn` String, `Div2TotalGTime` String, `Div2LongestGTime` String, `Div2WheelsOff` String, `Div2TailNum` String, `Div3Airport` String, `Div3AirportID` Int32, `Div3AirportSeqID` Int32, `Div3WheelsOn` String, `Div3TotalGTime` String, `Div3LongestGTime` String, `Div3WheelsOff` String, `Div3TailNum` String, `Div4Airport` String, `Div4AirportID` Int32, `Div4AirportSeqID` Int32, `Div4WheelsOn` String, `Div4TotalGTime` String, `Div4LongestGTime` String, `Div4WheelsOff` String, `Div4TailNum` String, `Div5Airport` String, `Div5AirportID` Int32, `Div5AirportSeqID` Int32, `Div5WheelsOn` String, `Div5TotalGTime` String, `Div5LongestGTime` String, `Div5WheelsOff` String, `Div5TailNum` String ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/ontime_zk_local', '{replica}', FlightDate, (Year, FlightDate), 8192); -
- Then create the Distributed table on top of the local table
ontime_zk_local问本地表的分布式表:CREATE TABLE ontime_cluster ON CLUSTER ck_cluster ( `Year` UInt16, `Quarter` UInt8, `Month` UInt8, `DayofMonth` UInt8, `DayOfWeek` UInt8, `FlightDate` Date, `UniqueCarrier` FixedString(7), `AirlineID` Int32, `Carrier` FixedString(2), `TailNum` String, `FlightNum` String, `OriginAirportID` Int32, `OriginAirportSeqID` Int32, `OriginCityMarketID` Int32, `Origin` FixedString(5), `OriginCityName` String, `OriginState` FixedString(2), `OriginStateFips` String, `OriginStateName` String, `OriginWac` Int32, `DestAirportID` Int32, `DestAirportSeqID` Int32, `DestCityMarketID` Int32, `Dest` FixedString(5), `DestCityName` String, `DestState` FixedString(2), `DestStateFips` String, `DestStateName` String, `DestWac` Int32, `CRSDepTime` Int32, `DepTime` Int32, `DepDelay` Int32, `DepDelayMinutes` Int32, `DepDel15` Int32, `DepartureDelayGroups` String, `DepTimeBlk` String, `TaxiOut` Int32, `WheelsOff` Int32, `WheelsOn` Int32, `TaxiIn` Int32, `CRSArrTime` Int32, `ArrTime` Int32, `ArrDelay` Int32, `ArrDelayMinutes` Int32, `ArrDel15` Int32, `ArrivalDelayGroups` Int32, `ArrTimeBlk` String, `Cancelled` UInt8, `CancellationCode` FixedString(1), `Diverted` UInt8, `CRSElapsedTime` Int32, `ActualElapsedTime` Int32, `AirTime` Int32, `Flights` Int32, `Distance` Int32, `DistanceGroup` UInt8, `CarrierDelay` Int32, `WeatherDelay` Int32, `NASDelay` Int32, `SecurityDelay` Int32, `LateAircraftDelay` Int32, `FirstDepTime` String, `TotalAddGTime` String, `LongestAddGTime` String, `DivAirportLandings` String, `DivReachedDest` String, `DivActualElapsedTime` String, `DivArrDelay` String, `DivDistance` String, `Div1Airport` String, `Div1AirportID` Int32, `Div1AirportSeqID` Int32, `Div1WheelsOn` String, `Div1TotalGTime` String, `Div1LongestGTime` String, `Div1WheelsOff` String, `Div1TailNum` String, `Div2Airport` String, `Div2AirportID` Int32, `Div2AirportSeqID` Int32, `Div2WheelsOn` String, `Div2TotalGTime` String, `Div2LongestGTime` String, `Div2WheelsOff` String, `Div2TailNum` String, `Div3Airport` String, `Div3AirportID` Int32, `Div3AirportSeqID` Int32, `Div3WheelsOn` String, `Div3TotalGTime` String, `Div3LongestGTime` String, `Div3WheelsOff` String, `Div3TailNum` String, `Div4Airport` String, `Div4AirportID` Int32, `Div4AirportSeqID` Int32, `Div4WheelsOn` String, `Div4TotalGTime` String, `Div4LongestGTime` String, `Div4WheelsOff` String, `Div4TailNum` String, `Div5Airport` String, `Div5AirportID` Int32, `Div5AirportSeqID` Int32, `Div5WheelsOn` String, `Div5TotalGTime` String, `Div5LongestGTime` String, `Div5WheelsOff` String, `Div5TailNum` String ) ENGINE = Distributed(ck_cluster, test, ontime_zk_local, rand()); -
- Write data into the cluster, then check the distributed table and the local table:

```sql
INSERT into ontime_cluster select * from ontime;

SELECT COUNT(1) FROM ontime_cluster;

SELECT COUNT(1)
FROM ontime_cluster

┌─COUNT(1)─┐
│  3900561 │
└──────────┘

1 rows in set. Elapsed: 0.013 sec.

-- the local table holds roughly half of the rows
SELECT COUNT(1) FROM ontime_zk_local ozl;

SELECT COUNT(1)
FROM ontime_zk_local AS ozl

┌─COUNT(1)─┐
│  1951166 │
└──────────┘

1 rows in set. Elapsed: 0.006 sec.
```
- On node1, insert 200 rows into the local table ontime_zk_local, then check the counts of the distributed table and the local table on node1:

```sql
INSERT into ontime_zk_local select * from ontime limit 200;

-- distributed table: +200 rows
SELECT COUNT(1) FROM ontime_cluster;

SELECT COUNT(1)
FROM ontime_cluster

┌─COUNT(1)─┐
│  3900761 │
└──────────┘

1 rows in set. Elapsed: 0.013 sec.

-- local table: +200 rows
SELECT COUNT(1) FROM ontime_zk_local ozl;

SELECT COUNT(1)
FROM ontime_zk_local AS ozl

┌─COUNT(1)─┐
│  1951366 │
└──────────┘

1 rows in set. Elapsed: 0.005 sec.
```
- Check the counts on node2: node1's local-table data has been replicated to node2:

```sql
-- distributed table: +200 rows
node2 :) SELECT COUNT(1) FROM ontime_cluster;

SELECT COUNT(1)
FROM ontime_cluster

┌─COUNT(1)─┐
│  3900761 │
└──────────┘

1 rows in set. Elapsed: 0.013 sec.

-- local table: +200 rows
node2 :) SELECT COUNT(1) FROM ontime_zk_local ozl;

SELECT COUNT(1)
FROM ontime_zk_local AS ozl

┌─COUNT(1)─┐
│  1951366 │
└──────────┘

1 rows in set. Elapsed: 0.005 sec.
```
- Check the distributed table's count on node3: no matter how many times you query, the result is now consistent:

```sql
-- distributed table: +200 rows
node3 :) SELECT COUNT(1) FROM ontime_cluster;

SELECT COUNT(1)
FROM ontime_cluster

┌─COUNT(1)─┐
│  3900761 │
└──────────┘

1 rows in set. Elapsed: 0.006 sec.
```
To sum up, ReplicatedMergeTree + Distributed is the more reliable HA scheme.