The previous post covered using gravity for table sharding: http://www.javashuo.com/article/p-zwiijkav-b.html

Another tool that works well here is DTLE, from ActionTech: https://github.com/actiontech/dtle

For the differences between the two tools, you can compare the official documentation yourself; each has its strengths. I have tested both:

1. gravity supports more data sources; DTLE supports only MySQL, but offers richer replication options.

2. When src and dest are the same instance, only gravity can do the sharding; DTLE does not support that setup.
###############################################
DTLE documentation:

https://actiontech.github.io/dtle-docs-cn/3/3.0_function_scenario_mapping.html

https://actiontech.github.io/dtle-docs-cn/1/1.0_mysql_replication.html
What we demonstrate here: using DTLE to split one large table in a single instance across two independent instances, i.e. sharding. (After sharding, you can combine it with ActionTech's DBLE for even more flexibility, but that is beyond the scope of this post.)
Source instance:

# The demo username and password are both dts
192.168.2.4:3306
mysql -udts -pdts -h 192.168.2.4 --port 3306 testdb

The two shard instances:

# The demo username and password are both dts
192.168.2.4:5725
192.168.2.4:19226
mysql -udts -pdts -h 192.168.2.4 --port 5725
mysql -udts -pdts -h 192.168.2.4 --port 19226
Source table:

create database testdb;
use testdb;
CREATE TABLE `dtle_t1` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `user_id` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'user id',
  `s_status` tinyint(1) unsigned NOT NULL DEFAULT '0' COMMENT 'status',
  PRIMARY KEY (`id`),
  KEY `idx_uid` (`user_id`) USING BTREE
) COMMENT = 'test table' ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;

The table needs to be hash-split on user_id across the two shard databases.
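The routing rule is just user_id modulo the shard count. A minimal shell sketch of that rule (the function name pick_shard is illustrative, not part of DTLE; the port mapping is from this demo):

```shell
#!/bin/sh
# Map a user_id to its shard, mirroring the DTLE Where filters below:
# user_id%2=0 -> shard1 (port 5725), user_id%2=1 -> shard2 (port 19226)
pick_shard() {
    user_id=$1
    if [ $((user_id % 2)) -eq 0 ]; then
        echo "shard1:5725"
    else
        echo "shard2:19226"
    fi
}

pick_shard 42     # even user_id -> shard1:5725
pick_shard 1001   # odd user_id  -> shard2:19226
```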
Generate some test data:

for i in {1..10000}; do
  mysql -udts -pdts -h 192.168.2.4 --port 3306 -e "insert into testdb.dtle_t1 (user_id,s_status) values (\"$RANDOM\",'0');"
done

The result looks roughly like this:

mysql -udts -pdts -h 192.168.2.4 --port 3306 testdb

[testdb] > select count(*) from dtle_t1;
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (0.007 sec)

[testdb] > select (user_id%2) as hash_id,count(*) FROM dtle_t1 group by (user_id%2);
+---------+----------+
| hash_id | count(*) |
+---------+----------+
|       0 |     5008 |
|       1 |     4992 |
+---------+----------+
2 rows in set (0.009 sec)
Run the same create-table statements on both shards (DTLE can apparently create the table automatically, but we create it manually here anyway):

create database testdb;
use testdb;
CREATE TABLE `dtle_t1` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `user_id` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'user id',
  `s_status` tinyint(1) unsigned NOT NULL DEFAULT '0' COMMENT 'status',
  PRIMARY KEY (`id`),
  KEY `idx_uid` (`user_id`) USING BTREE
) COMMENT = 'test table' ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
Start the DTLE process:
cd /opt/dtle/
mkdir data/
./bin/dtle server -config=./etc/dtle/dtle.conf
In another window, run the command to check DTLE's node status; the effect is as follows:
curl -XGET "127.0.0.1:8190/v1/nodes" -s | jq .
[
  {
    "CreateIndex": 4,
    "Datacenter": "dc1",
    "HTTPAddr": "127.0.0.1:8190",
    "ID": "821e9f34-a3c3-f981-df41-8fbdc580708a",
    "ModifyIndex": 22,
    "Name": "dba-test-node-01",
    "Status": "ready",
    "StatusDescription": ""
  }
]
Prepare shard1.json with the following content:
{
  "Name": "dtle-shard1",
  "Tasks": [
    {
      "Type": "Src",
      "Config": {
        "ReplicateDoDb": [
          {
            "TableSchema": "testdb",
            "Tables": [
              {
                "TableName": "dtle_t1",
                "Where": "user_id%2=0"
              }
            ]
          }
        ],
        "ConnectionConfig": {
          "Host": "192.168.2.4",
          "Port": "3306",
          "User": "dts",
          "Password": "dts"
        }
      }
    },
    {
      "Type": "Dest",
      "Config": {
        "ConnectionConfig": {
          "Host": "192.168.2.4",
          "Port": "5725",
          "User": "dts",
          "Password": "dts"
        }
      }
    }
  ]
}
Prepare shard2.json with the following content:
{
  "Name": "dtle-shard2",
  "Tasks": [
    {
      "Type": "Src",
      "Config": {
        "ReplicateDoDb": [
          {
            "TableSchema": "testdb",
            "Tables": [
              {
                "TableName": "dtle_t1",
                "Where": "user_id%2=1"
              }
            ]
          }
        ],
        "ConnectionConfig": {
          "Host": "192.168.2.4",
          "Port": "3306",
          "User": "dts",
          "Password": "dts"
        }
      }
    },
    {
      "Type": "Dest",
      "Config": {
        "ConnectionConfig": {
          "Host": "192.168.2.4",
          "Port": "19226",
          "User": "dts",
          "Password": "dts"
        }
      }
    }
  ]
}
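The two job files differ only in the Where remainder and the Dest port, so they can be generated from one template rather than maintained by hand. A sketch under this demo's assumptions (hosts, ports, and credentials are the demo values above):

```shell
#!/bin/sh
# Generate the two DTLE job files from one template; only the
# Where filter (user_id%2=<remainder>) and the Dest port differ.
ports="5725 19226"   # dest ports for shard1 and shard2 (demo values)
i=0
for port in $ports; do
    n=$((i + 1))
    cat > "shard${n}.json" <<EOF
{
  "Name": "dtle-shard${n}",
  "Tasks": [
    {
      "Type": "Src",
      "Config": {
        "ReplicateDoDb": [
          {
            "TableSchema": "testdb",
            "Tables": [
              { "TableName": "dtle_t1", "Where": "user_id%2=${i}" }
            ]
          }
        ],
        "ConnectionConfig": { "Host": "192.168.2.4", "Port": "3306", "User": "dts", "Password": "dts" }
      }
    },
    {
      "Type": "Dest",
      "Config": {
        "ConnectionConfig": { "Host": "192.168.2.4", "Port": "${port}", "User": "dts", "Password": "dts" }
      }
    }
  ]
}
EOF
    i=$n
done

# Show that the two filters cover both remainders
grep '"Where"' shard1.json shard2.json
```

One detail worth noting: the remainders 0 and 1 must partition all rows exactly once, otherwise rows are dropped or duplicated across shards.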
Submit the jobs to DTLE:
curl -H "Accept:application/json" -XPOST "http://127.0.0.1:8190/v1/jobs" -d @shard1.json -s | jq .
curl -H "Accept:application/json" -XPOST "http://127.0.0.1:8190/v1/jobs" -d @shard2.json -s | jq .
The result looks something like this:
{
"Index": 56,
"KnownLeader": false,
"LastContact": 0,
"Success": true
}
Wait a moment for the data to sync. At that point you can look at the connections on the old source instance and see two GTID dump threads running:
[testdb] > show full processlist;
+-------+------+-------------------+--------+------------------+------+---------------------------------------------------------------+-----------------------+
| Id    | User | Host              | db     | Command          | Time | State                                                         | Info                  |
+-------+------+-------------------+--------+------------------+------+---------------------------------------------------------------+-----------------------+
| 43810 | root | localhost         | testdb | Query            |    0 | starting                                                      | show full processlist |
| 43829 | root | localhost         | testdb | Sleep            |   25 |                                                               | NULL                  |
| 43830 | dts  | 192.168.2.4:38040 | NULL   | Sleep            |  293 |                                                               | NULL                  |
| 43831 | dts  | 192.168.2.4:38048 | NULL   | Sleep            |  293 |                                                               | NULL                  |
| 43834 | dts  | 192.168.2.4:38056 | NULL   | Binlog Dump GTID |  292 | Master has sent all binlog to slave; waiting for more updates | NULL                  |
| 43835 | dts  | 192.168.2.4:38060 | NULL   | Sleep            |  290 |                                                               | NULL                  |
| 43836 | dts  | 192.168.2.4:38068 | NULL   | Sleep            |  290 |                                                               | NULL                  |
| 43839 | dts  | 192.168.2.4:38076 | NULL   | Binlog Dump GTID |  289 | Master has sent all binlog to slave; waiting for more updates | NULL                  |
+-------+------+-------------------+--------+------------------+------+---------------------------------------------------------------+-----------------------+
8 rows in set (0.000 sec)
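Rather than eyeballing the list, you can count the Binlog Dump GTID threads to confirm one per job. A sketch, with a trimmed heredoc standing in for live processlist output (live, you would pipe from mysql -e "show processlist" instead):

```shell
#!/bin/sh
# Count binlog dump threads in processlist output. In a live setup
# the heredoc would be replaced by something like:
#   mysql -udts -pdts -h 192.168.2.4 --port 3306 -e "show processlist"
processlist=$(cat <<'EOF'
43834  dts  192.168.2.4:38056  NULL  Binlog Dump GTID  292
43835  dts  192.168.2.4:38060  NULL  Sleep             290
43839  dts  192.168.2.4:38076  NULL  Binlog Dump GTID  289
EOF
)
dump_threads=$(printf '%s\n' "$processlist" | grep -c 'Binlog Dump GTID')
echo "$dump_threads"   # prints 2: one dump thread per DTLE job
```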
Then check the data on the two shards:
[root@dba-test-node-01 /opt/dtle ] # mysql -udts -pdts -h 192.168.2.4 --port 5725 testdb -e "select count(*) from dtle_t1;"
+----------+
| count(*) |
+----------+
|     5008 |
+----------+
[root@dba-test-node-01 /opt/dtle ] # mysql -udts -pdts -h 192.168.2.4 --port 19226 testdb -e "select count(*) from dtle_t1;"
+----------+
| count(*) |
+----------+
|     4992 |
+----------+
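A quick sanity check is that the shard counts add up to the source count (5008 + 4992 = 10000 here). Sketched in shell with the counts hard-coded from this demo's output; live, they would be captured from the mysql -e "select count(*) ..." calls:

```shell
#!/bin/sh
# Verify the shards together hold every source row.
# Hard-coded demo values; live, capture each with e.g.:
#   src=$(mysql -udts -pdts -h 192.168.2.4 --port 3306 -N -e "select count(*) from testdb.dtle_t1")
src=10000
shard1=5008
shard2=4992
if [ $((shard1 + shard2)) -eq "$src" ]; then
    echo "row counts match"
else
    echo "row counts MISMATCH" >&2
    exit 1
fi
```

Note this only checks totals; a stricter check would compare per-remainder counts against the group-by query shown earlier.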
On the old source instance you could also insert a few more test rows by hand and watch them get routed automatically to the correct shard; I will not demonstrate that here.
List the current jobs:
curl -XGET "127.0.0.1:8190/v1/jobs" | jq .
List all allocations of a given job:
curl -XGET "127.0.0.1:8190/v1/job/8e928a6b-e2be-c7b7-0d4a-745163c87282/allocations" | jq .
curl -XGET "127.0.0.1:8190/v1/job/e8780526-9464-9df9-61f2-48c70a991024/allocations" | jq .
Delete a job:
curl -H "Accept:application/json" -XDELETE "127.0.0.1:8190/v1/job/4079eaba-bcec-08b1-5ac2-78aa4a21a49b"
Related documentation: https://actiontech.github.io/dtle-docs-cn/4/4.4_http_api.html