[toc]
MyCat is an open-source distributed database system: a server that implements the MySQL protocol. To front-end users it looks like a database proxy that can be accessed with ordinary MySQL client tools and the command line, while its back end speaks the native MySQL protocol to multiple MySQL servers and can also use JDBC to talk to most mainstream databases. Its core feature is sharding: splitting one large table horizontally into N smaller tables stored across back-end MySQL servers or other databases.

MyCat has grown beyond a plain MySQL proxy. Its back end now supports mainstream databases such as MySQL, SQL Server, Oracle, DB2 and PostgreSQL, as well as NoSQL stores like MongoDB, with more storage types planned. To the end user, whatever the underlying storage, everything in MyCat appears as a traditional database table operated on with standard SQL, which greatly lowers development effort and speeds up delivery for front-end business systems.
For a detailed introduction, see the MyCat official website.
OS: CentOS Linux release 7.2.1511 (Core)
| No. | Host | IP | Role |
|---|---|---|---|
| 1 | mycat1 | 10.9.37.226 | MyCat node; xinetd probes MyCat's running state |
| 2 | mycat2 | 10.9.18.108 | MyCat node; xinetd probes MyCat's running state |
| 3 | keep1 | 10.9.71.4 | keepalived + haproxy, master node |
| 4 | keep2 | 10.9.3.180 | keepalived + haproxy, backup node |
| 5 | VIP | 10.9.78.178 | An unused IP address on the same subnet, reserved as the VIP |
| 6 | mysql | 10.9.54.71 | An available MySQL instance with two databases created (db1, db2) |
```mermaid
graph LR
  app[Application]
  console[Ops]
  VIP[VIP]
  keep1[keep1+ha1 master]
  keep2[keep2+ha2 backup]
  mycat1[mycat1]
  mycat2[mycat2]
  mycat3[mycat ops node]
  mysql1((master DB))
  mysql1-slave1((slave 1))
  mysql1-slave2((slave 2))
  app-->VIP
  console--script deploy-->mycat3
  mycat3-->mysql1
  subgraph ha
    VIP-->keep1
    VIP-.->keep2
  end
  subgraph mycat
    keep1-->mycat1
    keep2-->mycat2
    keep1-->mycat2
    keep2-->mycat1
  end
  subgraph mysql
    mycat1--write-->mysql1
    mycat2--write-->mysql1
    mycat1-.read.->mysql1-slave1
    mycat2-.read.->mysql1-slave2
    mysql1-.semi-sync.->mysql1-slave1
    mysql1-.semi-sync.->mysql1-slave2
  end
```
```shell
# Install JDK 1.8
yum install -y java-1.8.0-openjdk.x86_64
# Verify that Java installed successfully
java -version
```
```shell
# Create the software directory
mkdir /var/workspace/
cd /var/workspace/
# Download mycat
wget http://dl.mycat.io/1.6.7.4/Mycat-server-1.6.7.4-release/Mycat-server-1.6.7.4-release-20200105164103-linux.tar.gz
# Extract
tar -zxvf Mycat-server-1.6.7.4-release-20200105164103-linux.tar.gz
# Set the mycat environment variable
echo 'export MYCAT_HOME=/var/workspace/mycat' >> /etc/profile
source /etc/profile
```
`/var/workspace/mycat/conf/schema.xml` defines the back-end data sources, logical schemas/tables, and so on.

dataHost: a database MyCat connects to on the back end (a physical, real database), i.e. a back-end data source; it can be MySQL, Oracle, etc.

schema: a logical schema, which can be viewed as a logical database composed of one or more back-end database clusters.

dataNode: a data shard. A logical schema is built from dataNodes, i.e. it may be split into multiple shards, and each shard ultimately lands in one or more physical databases.

In the configuration below, one dataHost maps to a single back-end MySQL instance on which two databases, db1 and db2, have been created. The logical schema sbux contains one logical table, sbux_users, whose data is split into two shards stored in db1 and db2.
```xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="sbux" checkSQLschema="true" sqlMaxLimit="100" randomDataNode="dn1">
        <!-- auto sharding by id (long) -->
        <!-- splitTableNames allows the name attribute to hold several
             comma-separated tables that all share this configuration -->
        <table name="sbux_users" dataNode="dn1,dn2" rule="auto-sharding-long" splitTableNames="true"/>
    </schema>
    <!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743"/> -->
    <dataNode name="dn1" dataHost="mysql1" database="db1" />
    <dataNode name="dn2" dataHost="mysql1" database="db2" />
    <!-- <dataNode name="dn3" dataHost="mysql1" database="sbux" /> -->
    <dataHost name="mysql1" maxCon="1000" minCon="10" balance="0" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="hostM1" url="10.9.54.71:3306" user="sbux_app" password="Abc@12345"></writeHost>
        <!-- <writeHost host="hostM2" url="localhost:3316" user="root" password="123456"/> -->
    </dataHost>
</mycat:schema>
```
In the configuration above, the schema sbux uses the auto-sharding-long sharding rule. Sharding rules are defined in `/var/workspace/mycat/conf/rule.xml` as follows:
```xml
<tableRule name="auto-sharding-long">
    <rule>
        <columns>id</columns>
        <algorithm>rang-long</algorithm>
    </rule>
</tableRule>
<function name="rang-long" class="io.mycat.route.function.AutoPartitionByLong">
    <!-- Range-based sharding: rows are routed by the numeric value of the
         sharding key, e.g. 0-500 to the first shard, 500-1000 to the second -->
    <property name="mapFile">autopartition-long.txt</property>
</function>
```
This rule designates autopartition-long.txt as the range map. The chosen strategy routes rows by the numeric range of the sharding key: ids in 0-500 go to the first shard, and 500-1000 to the second.
```
# range start-end ,data node index
# K=1000,M=10000.
0-500=0
500-1000=1
```
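For intuition, the lookup that AutoPartitionByLong performs over this map file can be sketched in shell. This is a simplified model, not MyCat's actual Java implementation; `route_id` is a made-up helper name, and both range bounds are treated as inclusive, with a value on a boundary matching the first range that contains it:

```shell
# route_id <id>: print the dataNode index for a sharding-key value,
# using the ranges from autopartition-long.txt (0-500=0, 500-1000=1).
route_id() {
    id=$1
    if [ "$id" -ge 0 ] && [ "$id" -le 500 ]; then
        echo 0          # rows go to dn1 (database db1)
    elif [ "$id" -le 1000 ]; then
        echo 1          # rows go to dn2 (database db2)
    else
        echo "id $id matches no range" >&2
        return 1
    fi
}

route_id 1     # -> 0 (dn1)
route_id 501   # -> 1 (dn2)
```

An id outside every configured range would make a real insert fail with a routing error, so the map file must cover the whole key space you expect to use.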
`/var/workspace/mycat/conf/server.xml` defines MyCat's system settings, users, and so on. MyCat users can be added and modified through this XML file.
The following adds two users, root and sbux_app, for the schema sbux; applications connect to MyCat with these accounts to operate on the data.
```xml
<user name="root" defaultAccount="true">
    <property name="password">123456</property>
    <property name="schemas">sbux</property>
    <!-- Used as the schema before one is selected; if unset it is null and
         a "No MyCAT Database selected" error is raised -->
    <property name="defaultSchema">sbux</property>
    <!-- Table-level DML privileges -->
    <!--
    <privileges check="false">
        <schema name="TESTDB" dml="0110" >
            <table name="tb01" dml="0000"></table>
            <table name="tb02" dml="1111"></table>
        </schema>
    </privileges>
    -->
</user>
<user name="sbux_app">
    <property name="password">Abc@12345</property>
    <property name="schemas">sbux</property>
    <property name="readOnly">true</property>
    <property name="defaultSchema">sbux</property>
</user>
```
```shell
# Start the mycat process
[root@mycat1 ~]# /var/workspace/mycat/bin/mycat start
Mycat-server is running (20658).
# Check the mycat service port
[root@mycat1 ~]# netstat -an | grep 8066
tcp        0      0 0.0.0.0:8066        0.0.0.0:*        LISTEN
# Check the mycat management port
[root@mycat1 ~]# netstat -an | grep 9066
tcp        0      0 0.0.0.0:9066        0.0.0.0:*        LISTEN
```
Connect to MyCat with a MySQL client:
```shell
[root@mycat1 ~]# /var/workspace/mysql-5.7.29-linux-glibc2.12-x86_64/bin/mysql -P8066 -uroot --protocol TCP -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 187
Server version: 5.6.29-mycat-1.6.7.4-release-20200105164103 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+----------+
| DATABASE |
+----------+
| sbux     |
+----------+
1 row in set (0.00 sec)
```
Management commands require connecting to MyCat's management port, 9066 by default:
```shell
mysql> show @@help;
+------------------------------------------------+--------------------------------------------+
| STATEMENT                                      | DESCRIPTION                                |
+------------------------------------------------+--------------------------------------------+
| show @@time.current | Report current timestamp |
| show @@time.startup | Report startup timestamp |
| show @@version | Report Mycat Server version |
| show @@server | Report server status |
| show @@threadpool | Report threadPool status |
| show @@database | Report databases |
| show @@datanode | Report dataNodes |
| show @@datanode where schema = ? | Report dataNodes |
| show @@datasource | Report dataSources |
| show @@datasource where dataNode = ? | Report dataSources |
| show @@datasource.synstatus | Report datasource data synchronous |
| show @@datasource.syndetail where name=? | Report datasource data synchronous detail |
| show @@datasource.cluster | Report datasource galary cluster variables |
| show @@processor | Report processor status |
| show @@command | Report commands status |
| show @@connection | Report connection status |
| show @@cache | Report system cache usage |
| show @@backend | Report backend connection status |
| show @@session | Report front session details |
| show @@connection.sql | Report connection sql |
| show @@sql.execute | Report execute status |
| show @@sql.detail where id = ? | Report execute detail status |
| show @@sql | Report SQL list |
| show @@sql.high | Report Hight Frequency SQL |
| show @@sql.slow | Report slow SQL |
| show @@sql.resultset | Report BIG RESULTSET SQL |
| show @@sql.sum | Report User RW Stat |
| show @@sql.sum.user | Report User RW Stat |
| show @@sql.sum.table | Report Table RW Stat |
| show @@parser | Report parser status |
| show @@router | Report router status |
| show @@heartbeat | Report heartbeat status |
| show @@heartbeat.detail where name=? | Report heartbeat current detail |
| show @@slow where schema = ? | Report schema slow sql |
| show @@slow where datanode = ? | Report datanode slow sql |
| show @@sysparam | Report system param |
| show @@syslog limit=? | Report system mycat.log |
| show @@white | show mycat white host |
| show @@white.set=?,? | set mycat white host,[ip,user] |
| show @@directmemory=1 or 2 | show mycat direct memory usage |
| show @@check_global -SCHEMA= ? -TABLE=? -retry=? -interval=? | check mycat global table consistency |
| switch @@datasource name:index | Switch dataSource |
| kill @@connection id1,id2,... | Kill the specified connections |
| stop @@heartbeat name:time | Pause dataNode heartbeat |
| reload @@config | Reload basic config from file |
| reload @@config_all | Reload all config from file |
| reload @@route | Reload route config from file |
| reload @@user | Reload user config from file |
| reload @@sqlslow= | Set Slow SQL Time(ms) |
| reload @@user_stat | Reset show @@sql @@sql.sum @@sql.slow |
| rollback @@config | Rollback all config from memory |
| rollback @@route | Rollback route config from memory |
| rollback @@user | Rollback user config from memory |
| reload @@sqlstat=open | Open real-time sql stat analyzer |
| reload @@sqlstat=close | Close real-time sql stat analyzer |
| offline | Change MyCat status to OFF |
| online | Change MyCat status to ON |
| clear @@slow where schema = ? | Clear slow sql by schema |
| clear @@slow where datanode = ? | Clear slow sql by datanode |
+------------------------------------------------+--------------------------------------------+
59 rows in set (0.00 sec)
```
JVM options used for MyCat in this deployment (typically set in MyCat's `conf/wrapper.conf`):

```
-Xmx1024m -Xmn512m -XX:MaxDirectMemorySize=2048m -Xss256K -XX:+UseParallelGC
```
Use xinetd to check whether the mycat process is healthy, exposing an HTTP endpoint that haproxy can probe.
```shell
yum install -y xinetd
```
Create the file /var/workspace/mycat_status with the following content:
```shell
#!/bin/bash
# /var/workspace/mycat_status
# This script checks if a mycat server is healthy running on localhost. It will
# return:
#   "HTTP/1.x 200 OK\r"                    (if mycat is running smoothly)
#   "HTTP/1.x 503 Internal Server Error\r" (else)
mycat=$(/var/workspace/mycat/bin/mycat status | grep 'not running' | wc -l)
if [ "$mycat" = "0" ]; then
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
else
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
fi
```
Make the script executable: `chmod 755 /var/workspace/mycat_status`
Register mycat with xinetd by creating /etc/xinetd.d/mycat_status. This starts a service on port 48700 that checks whether the mycat process exists and returns the result:
```
service mycat_status
{
    flags           = REUSE
    socket_type     = stream
    port            = 48700
    wait            = no
    user            = root
    server          = /var/workspace/mycat_status
    log_on_failure  += USERID
    disable         = no
}
```
```shell
[root@mycat1 ~]# systemctl restart xinetd
[root@mycat1 ~]# systemctl status xinetd
● xinetd.service - Xinetd A Powerful Replacement For Inetd
   Loaded: loaded (/usr/lib/systemd/system/xinetd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-02-23 14:31:43 CST; 4 days ago
  Process: 27014 ExecStart=/usr/sbin/xinetd -stayalive -pidfile /var/run/xinetd.pid $EXTRAOPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 27015 (xinetd)
   CGroup: /system.slice/xinetd.service
           └─27015 /usr/sbin/xinetd -stayalive -pidfile /var/run/xinetd.pid
```
```shell
[root@mycat1 ~]# curl -i http://localhost:48700
HTTP/1.1 200 OK
```
A 200 response means the mycat service is healthy; a 503 means the mycat process has stopped and the service is down.
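When scripting against this endpoint yourself (for alerting, say, rather than via haproxy), the decision reduces to parsing the status line that curl returns. A small sketch under that assumption — `mycat_healthy` is a hypothetical helper, not part of mycat or haproxy:

```shell
# mycat_healthy <status-line>: return 0 when the health endpoint answered
# "HTTP/1.1 200 OK", 1 for anything else (e.g. the 503 emitted when mycat
# is down, or an empty string when the port is unreachable).
mycat_healthy() {
    case "$1" in
        "HTTP/1.1 200 OK"*) return 0 ;;
        *)                  return 1 ;;
    esac
}

# Typical use against a node (requires network access to the node):
#   status=$(curl -s -i http://10.9.37.226:48700 | head -n 1 | tr -d '\r')
#   mycat_healthy "$status" && echo up || echo down
```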
Set up the backup mycat node with exactly the same steps as above; they are not repeated here.
```shell
# Install keepalived via yum
yum install -y keepalived
# Create the keepalived log directory
mkdir -p /var/workspace/log/keepalived
```
```shell
vim /etc/keepalived/keepalived.conf
```
```
! Configuration File for keepalived

global_defs {
   notification_email {
       # Address to notify
       sysadmin@firewall.loc
   }
   # Sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   # router_id must differ between master and backup
   router_id keep1
   #vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

# Periodically run check_haproxy.sh to check the haproxy/mycat state
vrrp_script chk_http_port {
    script "/etc/keepalived/scripts/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    # Use BACKUP on the backup server
    state MASTER
    # NIC the VIP binds to
    interface eth0
    # Must be identical on master and backup
    virtual_router_id 168
    # The backup server's priority must be lower than the master's
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # This is the VIP; change it to your own
        10.9.78.178/16 dev eth0 scope global
    }
    track_script {
        # Run check_haproxy.sh to verify haproxy is alive
        # (it probes the xinetd service added on the mycat nodes)
        chk_http_port
    }
    # Scripts run on keepalived state transitions:
    # when this node becomes master
    notify_master /etc/keepalived/scripts/haproxy_master.sh
    # when this node becomes backup
    notify_backup /etc/keepalived/scripts/haproxy_backup.sh
    # when this node enters the fault state
    notify_fault /etc/keepalived/scripts/haproxy_fault.sh
    # when keepalived on this node stops
    notify_stop /etc/keepalived/scripts/haproxy_stop.sh
}
```
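How keepalived decides which node holds the VIP can be modelled with its priority arithmetic: while a tracked script with a positive `weight` succeeds, that weight is added to the node's `priority`, and the node advertising the highest effective priority wins the election. A minimal sketch of that rule (ignoring VRRP timers and preemption settings):

```shell
# effective_priority <base> <weight> <check_ok(0|1)>: keepalived adds the
# track_script weight to the base priority while the check succeeds.
effective_priority() {
    if [ "$3" = "1" ]; then
        echo $(( $1 + $2 ))
    else
        echo "$1"
    fi
}
```

Note that with the priorities used here (200 vs 150) and a weight of only 2, a failing check alone can never demote the master (200 still beats 152); the VIP only moves when keepalived on the master stops or the host goes down, which is why the notify scripts restart haproxy instead of relying on the check for failover.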
/etc/keepalived/scripts/haproxy_master.sh, run when this node becomes master:
```shell
#!/bin/bash
LOGFILE="/var/workspace/log/keepalived/keepalived-haproxy-state.log"
echo "[master]" >> $LOGFILE
date >> $LOGFILE
echo "Becoming master..." >> $LOGFILE 2>&1
echo "stop haproxy..." >> $LOGFILE 2>&1
# Kill any existing haproxy process before restarting
ps -ef | grep /usr/sbin/haproxy | grep -v grep | awk '{print $2}' | xargs -r kill -9 >> $LOGFILE 2>&1
echo "start haproxy..." >> $LOGFILE 2>&1
/bin/systemctl restart haproxy >> $LOGFILE 2>&1
echo "haproxy started..." >> $LOGFILE
```
/etc/keepalived/scripts/haproxy_backup.sh, run when this node becomes backup:
```shell
#!/bin/bash
LOGFILE="/var/workspace/log/keepalived/keepalived-haproxy-state.log"
echo "[backup]" >> $LOGFILE
date >> $LOGFILE
echo "Becoming backup..." >> $LOGFILE 2>&1
echo "stop haproxy..." >> $LOGFILE 2>&1
# Kill any existing haproxy process before restarting
ps -ef | grep /usr/sbin/haproxy | grep -v grep | awk '{print $2}' | xargs -r kill -9 >> $LOGFILE 2>&1
echo "start haproxy..." >> $LOGFILE 2>&1
/bin/systemctl restart haproxy >> $LOGFILE 2>&1
echo "haproxy started..." >> $LOGFILE
```
/etc/keepalived/scripts/haproxy_fault.sh, run when this node enters the fault state:
```shell
#!/bin/bash
LOGFILE=/var/workspace/log/keepalived/keepalived-haproxy-state.log
echo "[fault]" >> $LOGFILE
date >> $LOGFILE
```
/etc/keepalived/scripts/haproxy_stop.sh, run when keepalived on this node stops:
```shell
#!/bin/bash
LOGFILE=/var/workspace/log/keepalived/keepalived-haproxy-state.log
echo "[stop]" >> $LOGFILE
date >> $LOGFILE
```
```shell
systemctl restart keepalived
```
```shell
# Check the VIP bound on the master node's NIC
[root@keep1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1454 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:f2:8c:e1 brd ff:ff:ff:ff:ff:ff
    inet 10.9.71.4/16 brd 10.9.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.9.78.178/16 scope global secondary eth0
       valid_lft forever preferred_lft forever
# Watch the master node's VRRP multicast advertisements
[root@keep1 ~]# tcpdump -i eth0 -n -p vrrp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:08:54.298521 IP 10.9.71.4 > 224.0.0.18: VRRPv2, Advertisement, vrid 168, prio 200, authtype simple, intvl 1s, length 20
```
The line `inet 10.9.78.178/16 scope global secondary eth0` is the virtual IP added by keepalived.
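A quick way to ask "does this node currently hold the VIP" is to grep the address listing. `has_vip` below is a hypothetical helper that reads `ip` output from stdin, so it can be exercised without a live interface:

```shell
# has_vip <vip>: read `ip -4 addr show` style output on stdin and return 0
# if the given address is bound to any interface.
has_vip() {
    grep -q "inet $1/"
}

# On a node: ip -4 addr show eth0 | has_vip 10.9.78.178 && echo "VIP is here"
```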
The steps for the backup node are the same as above; its configuration file is as follows:
```
! Configuration File for keepalived

global_defs {
   notification_email {
       # Address to notify
       sysadmin@firewall.loc
   }
   # Sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   # router_id must differ between master and backup
   router_id keep2
   #vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

# Periodically run check_haproxy.sh to check the haproxy/mycat state
vrrp_script chk_http_port {
    script "/etc/keepalived/scripts/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    # BACKUP on the backup server
    state BACKUP
    # NIC the VIP binds to
    interface eth0
    # Must be identical on master and backup
    virtual_router_id 168
    # The backup server's priority must be lower than the master's
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # This is the VIP; change it to your own
        10.9.78.178/16 dev eth0 scope global
    }
    track_script {
        # Run check_haproxy.sh to verify haproxy is alive
        # (it probes the xinetd service added on the mycat nodes)
        chk_http_port
    }
    # Scripts run on keepalived state transitions:
    # when this node becomes master
    notify_master /etc/keepalived/scripts/haproxy_master.sh
    # when this node becomes backup
    notify_backup /etc/keepalived/scripts/haproxy_backup.sh
    # when this node enters the fault state
    notify_fault /etc/keepalived/scripts/haproxy_fault.sh
    # when keepalived on this node stops
    notify_stop /etc/keepalived/scripts/haproxy_stop.sh
}
```
```shell
yum install -y haproxy
```
Edit /etc/haproxy/haproxy.cfg:
```
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local0
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          60s
    timeout server          60s
    timeout http-keep-alive 10s
    timeout check           10s
    timeout tunnel          1h
    timeout client-fin      30s
    maxconn                 3000

# haproxy statistics page (request forwarding status etc.),
# bound to the local NIC of the keep host
listen admin_status 0.0.0.0:48800
    stats uri /admin-status      ## statistics page
    stats auth admin:admin
    mode http
    option httplog

# Forward to the mycat 8066 service port; listener bound to the VIP
listen allmycat_service 10.9.78.178:8066
    mode tcp
    option tcplog
    option tcpka
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    # 48700 is the xinetd-based status check on the mycat hosts, probed every 5s
    server mycat_1 10.9.37.226:8066 check port 48700 inter 5s rise 2 fall 3
    server mycat_2 10.9.18.108:8066 check port 48700 inter 5s rise 2 fall 3
    timeout server 20000

# Forward to the mycat 9066 management port; listener bound to the VIP
listen allmycat_admin 10.9.78.178:9066
    mode tcp
    option tcplog
    option tcpka
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_1 10.9.37.226:9066 check port 48700 inter 5s rise 2 fall 3
    server mycat_2 10.9.18.108:9066 check port 48700 inter 5s rise 2 fall 3
    timeout server 20000
```
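The `check port 48700 inter 5s rise 2 fall 3` parameters give each backend hysteresis: a DOWN server must pass 2 consecutive checks to be marked UP again, and an UP server must fail 3 consecutive checks to be marked DOWN. A simplified model of that state machine (an illustration of the rise/fall behaviour, not HAProxy's actual code):

```shell
RISE=2
FALL=3

# track_state <state(up|down)> <streak> <check_result(ok|fail)>
# Prints "state streak" after one health-check round, applying
# rise/fall hysteresis like the server lines above.
track_state() {
    state=$1; streak=$2; result=$3
    if [ "$result" = "ok" ]; then
        if [ "$state" = "down" ]; then
            streak=$((streak + 1))
            # enough consecutive successes: mark UP
            [ "$streak" -ge "$RISE" ] && { state=up; streak=0; }
        else
            streak=0    # success while UP resets any failure streak
        fi
    else
        if [ "$state" = "up" ]; then
            streak=$((streak + 1))
            # enough consecutive failures: mark DOWN
            [ "$streak" -ge "$FALL" ] && { state=down; streak=0; }
        else
            streak=0    # failure while DOWN resets any success streak
        fi
    fi
    echo "$state $streak"
}
```

With `inter 5s`, this means a dead mycat node can keep receiving connections for up to roughly 15 seconds before haproxy stops routing to it.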
```shell
systemctl start haproxy
```
By default haproxy does not write logs; to capture them, the syslog module must be configured, which on Linux is provided by the rsyslogd service.
```shell
yum install -y rsyslog
```
Append to the end of /etc/rsyslog.conf:
```
local0.*    /var/log/haproxy.log
```
Create a new file /etc/rsyslog.d/haproxy.conf with the following content:
```
$ModLoad imudp
$UDPServerRun 514
local0.*    /var/log/haproxy.log
```
Restart the services:
```shell
systemctl restart rsyslog
systemctl restart haproxy
```
Logs can then be viewed in /var/log/haproxy.log.
The backup node is set up the same way as the master.
Log in to any host on the same LAN (one with a MySQL client installed):
```shell
# The VIP is reachable
[root@tools ~]# ping 10.9.78.178
PING 10.9.78.178 (10.9.78.178) 56(84) bytes of data.
64 bytes from 10.9.78.178: icmp_seq=1 ttl=63 time=91.3 ms
# Check the host's ARP table
[root@tools ~]# arp
Address                  HWtype  HWaddress           Flags Mask            Iface
10.9.78.178              ether   52:54:00:82:5e:87   C                     eth0
# Connect through the VIP to the mycat cluster's management port
[root@tools ~]# /var/workspace/mysql-5.7.29-linux-glibc2.12-x86_64/bin/mysql -h 10.9.78.178 -P9066 -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 196
Server version: 5.6.29-mycat-1.6.7.4-release-20200105164103 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

# Inspect mycat connections
mysql> show @@connection;
+------------+------+------------+------+------------+------+--------+---------+--------+---------+---------------+-------------+------------+---------+------------+
| PROCESSOR  | ID   | HOST       | PORT | LOCAL_PORT | USER | SCHEMA | CHARSET | NET_IN | NET_OUT | ALIVE_TIME(S) | RECV_BUFFER | SEND_QUEUE | txlevel | autocommit |
+------------+------+------------+------+------------+------+--------+---------+--------+---------+---------------+-------------+------------+---------+------------+
| Processor1 | 200  | 10.9.3.180 | 9066 | 45996      | root | NULL   | utf8:33 | 165    | 892     | 19            | 4096        | 0          |         |            |
+------------+------+------------+------+------------+------+--------+---------+--------+---------+---------------+-------------+------------+---------+------------+
1 row in set (0.00 sec)
```
```shell
[root@tools ~]# /var/workspace/mysql-5.7.29-linux-glibc2.12-x86_64/bin/mysql -h 10.9.78.178 -P8066 -uroot -p
Enter password:
mysql> use sbux
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> CREATE TABLE sbux_users (
    ->   id BIGINT ( 20 ) NOT NULL,
    ->   `name` VARCHAR ( 20 ) NOT NULL,
    ->   `desc` VARCHAR ( 100 ),
    ->   PRIMARY KEY ( id )
    -> ) ENGINE = INNODB DEFAULT CHARSET = utf8mb4;
Query OK, 0 rows affected (0.01 sec)

mysql> INSERT INTO sbux_users(id,`name`,`desc`) VALUES(1,'zhangs1','goods1');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO sbux_users(id,`name`,`desc`) VALUES(501,'zhangs501','goods501');
Query OK, 1 row affected (0.01 sec)

mysql> select * from sbux_users;
+-----+-----------+----------+
| ID  | NAME      | DESC     |
+-----+-----------+----------+
|   1 | zhangs1   | goods1   |
| 501 | zhangs501 | goods501 |
+-----+-----------+----------+
2 rows in set (0.00 sec)

# The following statement is fanned out to both dataNodes
mysql> explain select * from sbux_users;
+-----------+------------------------------------+
| DATA_NODE | SQL                                |
+-----------+------------------------------------+
| dn1       | SELECT * FROM sbux_users LIMIT 100 |
| dn2       | SELECT * FROM sbux_users LIMIT 100 |
+-----------+------------------------------------+
2 rows in set (0.00 sec)

# Routed precisely to dn1 via the sharding key
mysql> explain select * from sbux_users where id =1;
+-----------+-------------------------------------------------+
| DATA_NODE | SQL                                             |
+-----------+-------------------------------------------------+
| dn1       | SELECT * FROM sbux_users WHERE id = 1 LIMIT 100 |
+-----------+-------------------------------------------------+
1 row in set (0.00 sec)

# Routed precisely to dn2 via the sharding key
mysql> explain select * from sbux_users where id =501;
+-----------+---------------------------------------------------+
| DATA_NODE | SQL                                               |
+-----------+---------------------------------------------------+
| dn2       | SELECT * FROM sbux_users WHERE id = 501 LIMIT 100 |
+-----------+---------------------------------------------------+
1 row in set (0.00 sec)
```