Deploying a TiDB Cluster with TiUP

Installing TiUP on the Control Machine

Log in to the control machine as a regular user (tidb).

  1. Install the TiUP tool

    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
    
  2. Set up the TiUP environment variables

    # Reload the shell profile to pick up the new environment variables
    [tidb@test ~]$ source .bash_profile
    # Confirm that the TiUP tool is installed
    [tidb@test ~]$ which tiup
    ~/.tiup/bin/tiup
    # Install the TiUP cluster component
    [tidb@test ~]$ tiup cluster
    The component `cluster` is not installed; downloading from repository.
    download https://tiup-mirrors.pingcap.com/cluster-v1.2.5-linux-amd64.tar.gz 10.00 MiB / 10.00 MiB 100.00% ? p/s
    Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.2.5/tiup-cluster
    Deploy a TiDB cluster for production
    
    Usage:
      tiup cluster [command]
    ······
    
  3. Update the TiUP cluster component to the latest version and verify the version

    # Expected output: `Updated successfully!`
    [tidb@test ~]$ tiup update --self && tiup update cluster
    download https://tiup-mirrors.pingcap.com/tiup-v1.2.5-linux-amd64.tar.gz 8.41 MiB / 8.41 MiB 100.00% ? p/s
    Updated successfully!
    component cluster version v1.2.5 is already installed
    Updated successfully!
    
    # Verify the current TiUP cluster version
    [tidb@test ~]$ tiup --binary cluster
    /home/tidb/.tiup/components/cluster/v1.2.5/tiup-cluster
    
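As an extra sanity check after sourcing the profile, a plain-shell sketch (no TiUP invocation needed) can confirm that the installer's bin directory actually landed on PATH:

```shell
# install.sh places the tiup binary under ~/.tiup/bin and appends that
# directory to PATH in .bash_profile; verify the PATH entry took effect
case ":$PATH:" in
  *":$HOME/.tiup/bin:"*) echo "tiup bin dir is on PATH" ;;
  *)                     echo "not on PATH yet; run: source ~/.bash_profile" ;;
esac
```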

Editing the Initialization Configuration File

The topology.yaml file

The components include:

  • TiDB-server
  • TiKV-server
  • PD-server
  • TiFlash, the columnar storage engine
  • TiCDC, the incremental data replication tool, which supports multiple downstream targets (TiDB/MySQL/MQ)
  • TiDB Binlog, the incremental replication component, which provides near-real-time backup and replication
  • TiSpark, which addresses OLAP needs; TiUP cluster support for TiSpark is currently experimental

The minimal cluster topology includes TiDB-server, TiKV-server, and PD-server, and suits OLTP workloads.

Compared with TiDB Binlog, TiCDC offers lower latency and is natively highly available.

# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  group: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
 
# # Monitored variables are applied to all the machines.
monitored:
  # node_exporter_port: 9100
  # blackbox_exporter_port: 9115
  # deploy_dir: "/tidb-deploy/monitored-9100"
  # data_dir: "/tidb-data/monitored-9100"
  # log_dir: "/tidb-deploy/monitored-9100/log"
 
# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # All configuration items use points to represent the hierarchy, e.g:
# #   readpool.storage.use-unified-pool
# #     
# # You can overwrite this configuration via the instance-level `config` field.
 
server_configs:
  tidb:
    log.slow-threshold: 300
    binlog.enable: false
    binlog.ignore-error: false
  tikv:
    # server.grpc-concurrency: 4
    # raftstore.apply-pool-size: 2
    # raftstore.store-pool-size: 2
    # rocksdb.max-sub-compactions: 1
    # storage.block-cache.capacity: "16GB"
    # readpool.unified.max-thread-count: 12
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64
    replication.enable-placement-rules: true
  tiflash:
    # Maximum memory usage for processing a single query. Zero means unlimited.
    profiles.default.max_memory_usage: 10000000000
    # Maximum memory usage for processing all concurrently running queries on the server. Zero means unlimited.
    profiles.default.max_memory_usage_for_all_queries: 0
 
pd_servers:
  - host: 172.20.22.101
    # ssh_port: 22
    # name: "pd-1"
    # client_port: 2379
    # peer_port: 2380
    # deploy_dir: "/tidb-deploy/pd-2379"
    # data_dir: "/tidb-data/pd-2379"
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  # - host: 10.0.1.5
  # - host: 10.0.1.6
 
tidb_servers:
  - host: 172.20.22.101
    # ssh_port: 22
    # port: 4000
    # status_port: 10080
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # log_dir: "/tidb-deploy/tidb-4000/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tidb` values.
    # config:
    #   log.slow-query-file: tidb-slow-overwrited.log
  # - host: 10.0.1.2
  # - host: 10.0.1.3
 
tikv_servers:
  - host: 172.20.22.101
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   server.grpc-concurrency: 4
    #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
  # - host: 10.0.1.8
  # - host: 10.0.1.9
 
tiflash_servers:
  - host: 172.20.22.101
    data_dir: /tidb-data/tiflash-9000
    deploy_dir: /tidb-deploy/tiflash-9000
    # ssh_port: 22
    # tcp_port: 9000
    # http_port: 8123
    # flash_service_port: 3930
    # flash_proxy_port: 20170
    # flash_proxy_status_port: 20292
    # metrics_port: 8234
    # deploy_dir: /tidb-deploy/tiflash-9000
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tiflash` values.
    # config:
    #   logger.level: "info"
    # learner_config:
    #   log-level: "info"
  # - host: 10.0.1.12
  # - host: 10.0.1.13
 
cdc_servers:
  - host: 172.20.22.101
    port: 8300
    deploy_dir: "/tidb-deploy/cdc-8300"
    log_dir: "/tidb-deploy/cdc-8300/log"
  # - host: 10.0.1.2
  #   port: 8300
  #   deploy_dir: "/tidb-deploy/cdc-8300"
  #   log_dir: "/tidb-deploy/cdc-8300/log"
  # - host: 10.0.1.3
  #   port: 8300
  #   deploy_dir: "/tidb-deploy/cdc-8300"
  #   log_dir: "/tidb-deploy/cdc-8300/log"
 
# pump_servers:
#   - host: 10.0.1.1
#     ssh_port: 22
#     port: 8250
#     deploy_dir: "/tidb-deploy/pump-8249"
#     data_dir: "/tidb-data/pump-8249"
#     # The following configs are used to overwrite the `server_configs.pump` values.
#     config:
#       gc: 7
#   - host: 10.0.1.2
#     ssh_port: 22
#     port: 8250
#     deploy_dir: "/tidb-deploy/pump-8249"
#     data_dir: "/tidb-data/pump-8249"
#     # The following configs are used to overwrite the `server_configs.pump` values.
#     config:
#       gc: 7
#   - host: 10.0.1.3
#     ssh_port: 22
#     port: 8250
#     deploy_dir: "/tidb-deploy/pump-8249"
#     data_dir: "/tidb-data/pump-8249"
#     # The following configs are used to overwrite the `server_configs.pump` values.
#     config:
#       gc: 7
# drainer_servers:
#   - host: 10.0.1.12
#     port: 8249
#     data_dir: "/tidb-data/drainer-8249"
#     # If drainer doesn't have a checkpoint, use initial commitTS as the initial checkpoint.
#     # Will get a latest timestamp from pd if commit_ts is set to -1 (the default value).
#     commit_ts: -1
#     deploy_dir: "/tidb-deploy/drainer-8249"
#     # The following configs are used to overwrite the `server_configs.drainer` values.
#     config:
#       syncer.db-type: "tidb"
#       syncer.to.host: "10.0.1.12"
#       syncer.to.user: "root"
#       syncer.to.password: ""
#       syncer.to.port: 4000
 
monitoring_servers:
  # - host: 10.0.1.10
    # ssh_port: 22
    # port: 9090
    # deploy_dir: "/tidb-deploy/prometheus-8249"
    # data_dir: "/tidb-data/prometheus-8249"
    # log_dir: "/tidb-deploy/prometheus-8249/log"
 
grafana_servers:
  # - host: 10.0.1.10
    # port: 3000
    # deploy_dir: /tidb-deploy/grafana-3000
 
alertmanager_servers:
  # - host: 10.0.1.10
    # ssh_port: 22
    # web_port: 9093
    # cluster_port: 9094
    # deploy_dir: "/tidb-deploy/alertmanager-9093"
    # data_dir: "/tidb-data/alertmanager-9093"
    # log_dir: "/tidb-deploy/alertmanager-9093/log"

Running the Deployment Command

If the tidb user has already been configured on the target machines, you can pass --skip-create-user on the command line to explicitly skip the user-creation step.

tiup cluster deploy tidb-test v4.0.0 ./topology.yaml --skip-create-user --user root -p

Explanation of the command in the example:

  • Deploys a cluster named tidb-test.
  • The deployed version is v4.0.0; run tiup list tidb to see the TiDB versions available.
  • topology.yaml is the initialization configuration file.
  • --user root: log in to the target hosts as root to complete the deployment; this user needs SSH access to the target machines and sudo privileges on them.
  • -i and -p: optional; neither is needed if passwordless login to the target machines is configured. -i specifies the private key used to log in; -p prompts interactively for that user's password.
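For completeness, a key-based variant of the deploy command might look like the sketch below. The private-key path is an example, and the `command -v` guard is only there so the snippet does nothing on a machine without TiUP:

```shell
# Deploy with a login private key (-i) instead of an interactive password (-p)
if command -v tiup >/dev/null 2>&1; then
  tiup cluster deploy tidb-test v4.0.0 ./topology.yaml \
    --skip-create-user --user root -i /home/tidb/.ssh/id_rsa
else
  echo "tiup not found on PATH; install it first"
fi
```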
[tidb@test ~]$ tiup cluster deploy tidb-test v4.0.0 ./topology.yaml --skip-create-user --user root -p
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.2.5/tiup-cluster deploy tidb-test v4.0.0 ./topology.yaml --skip-create-user --user root -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-test
Cluster version: v4.0.0
Type     Host           Ports                            OS/Arch       Directories
----     ----           -----                            -------       -----------
pd       172.20.22.101  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv     172.20.22.101  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb     172.20.22.101  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash  172.20.22.101  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
cdc      172.20.22.101  8300                             linux/x86_64  /tidb-deploy/cdc-8300
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.0 (linux/amd64) ... Done
  - Download tikv:v4.0.0 (linux/amd64) ... Done
  - Download tidb:v4.0.0 (linux/amd64) ... Done
  - Download tiflash:v4.0.0 (linux/amd64) ... Done
  - Download cdc:v4.0.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.20.22.101:22 ... Done
+ Copy files
  - Copy pd -> 172.20.22.101 ... Done
  - Copy tikv -> 172.20.22.101 ... Done
  - Copy tidb -> 172.20.22.101 ... Done
  - Copy tiflash -> 172.20.22.101 ... Done
  - Copy cdc -> 172.20.22.101 ... Done
  - Copy node_exporter -> 172.20.22.101 ... Done
  - Copy blackbox_exporter -> 172.20.22.101 ... Done
+ Check status
 
Enabling component pd
+ Enable cluster
    Enable pd 172.20.22.101:2379 success
+ Enable cluster
+ Enable cluster
Enabling component tikv
+ Enable cluster
    Enable tikv 172.20.22.101:20160 success
Enabling component tidb
+ Enable cluster
    Enable tidb 172.20.22.101:4000 success
Enabling component tiflash
+ Enable cluster
    Enable tiflash 172.20.22.101:9000 success
Enabling component cdc
+ Enable cluster
+ Enable cluster
Deployed cluster `tidb-test` successfully, you can start the cluster via `tiup cluster start tidb-test`

The log is expected to print the following line, indicating a successful deployment:

Deployed cluster `tidb-test` successfully, you can start the cluster via `tiup cluster start tidb-test`

Viewing the Clusters Managed by TiUP

tiup cluster list
[tidb@test ~]$ tiup cluster list
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.2.5/tiup-cluster list
Name       User  Version  Path                                                 PrivateKey
----       ----  -------  ----                                                 ----------
tidb-test  tidb  v4.0.0   /home/tidb/.tiup/storage/cluster/clusters/tidb-test  /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa

Checking the Deployed Cluster

tiup cluster display tidb-test

The expected output includes the instance IDs, roles, hosts, listening ports, status (Down/inactive, since the cluster has not been started yet), and directory information of the tidb-test cluster.

[tidb@test ~]$ tiup cluster display tidb-test
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.2.5/tiup-cluster display tidb-test
Cluster type:    tidb
Cluster name:    tidb-test
Cluster version: v4.0.0
SSH type:        builtin
ID                   Role     Host           Ports                            OS/Arch       Status  Data Dir                 Deploy Dir
--                   ----     ----           -----                            -------       ------  --------                 ----------
172.20.22.101:8300   cdc      172.20.22.101  8300                             linux/x86_64  Down    -                        /tidb-deploy/cdc-8300
172.20.22.101:2379   pd       172.20.22.101  2379/2380                        linux/x86_64  Down    /tidb-data/pd-2379       /tidb-deploy/pd-2379
172.20.22.101:4000   tidb     172.20.22.101  4000/10080                       linux/x86_64  Down    -                        /tidb-deploy/tidb-4000
172.20.22.101:9000   tiflash  172.20.22.101  9000/8123/3930/20170/20292/8234  linux/x86_64  Down    /tidb-data/tiflash-9000  /tidb-deploy/tiflash-9000
172.20.22.101:20160  tikv     172.20.22.101  20160/20180                      linux/x86_64  Down    /tidb-data/tikv-20160    /tidb-deploy/tikv-20160
Total nodes: 5

Starting the Cluster

tiup cluster start tidb-test

The following output indicates a successful start:

Started cluster `tidb-test` successfully

Verifying the Cluster's Running Status

tiup cluster display tidb-test

If the Status field in the output shows Up, the cluster is running normally.
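If you want to script this check rather than eyeball it, the sketch below counts non-Up entries in a status column; the sample data is the pre-start state captured earlier (all five instances still read Down), and the column position is an assumption about the `display` output format:

```shell
# Role/Status pairs captured from `tiup cluster display` before startup
sample='cdc Down
pd Down
tidb Down
tiflash Down
tikv Down'
# Count instances whose status is anything other than Up; 0 means healthy
not_up=$(printf '%s\n' "$sample" | awk '$2 != "Up"' | wc -l | tr -d ' ')
echo "instances not Up: $not_up"   # → instances not Up: 5
```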

Connecting to the Database with the MySQL Client

mysql -uroot -h 172.20.22.101 -P 4000

References

Official deployment guide: docs.pingcap.com/zh/tidb/dev…