MySQL Middleware ProxySQL (15): Proxying MySQL Group Replication with ProxySQL

Back to the ProxySQL series index: http://www.cnblogs.com/f-ck-need-u/p/7586194.html

 

1. Introduction to ProxySQL + Group Replication

In earlier ProxySQL versions, supporting MySQL Group Replication (MGR) required third-party scripts to health-check the group and adjust the configuration automatically. Starting with ProxySQL v1.4.0, MySQL Group Replication is supported natively, and the main database provides the mysql_group_replication_hostgroups table to control the reader and writer hostgroups of an MGR cluster.

Admin> show tables ;
+--------------------------------------------+
| tables                                     |
+--------------------------------------------+
| global_variables                           |
| mysql_collations                           |
| mysql_group_replication_hostgroups         |
| mysql_query_rules                          |
...
| runtime_mysql_group_replication_hostgroups |
...
| scheduler                                  |
+--------------------------------------------+

admin> show tables from monitor;
+------------------------------------+
| tables                             |
+------------------------------------+
| mysql_server_connect_log           |
| mysql_server_group_replication_log |
| mysql_server_ping_log              |
| mysql_server_read_only_log         |
| mysql_server_replication_lag_log   |
+------------------------------------+

Although MGR is now supported natively, you still need to create an extra system view, sys.gr_member_routing_candidate_status, on the MGR nodes to provide ProxySQL with monitoring metrics. I have uploaded the script addition_to_sys.zip that creates this view, and I will paste its contents later in this article where the view needs to be created.

This article first explains the meaning of each field in the mysql_group_replication_hostgroups table, then assigns the various hostgroups according to the lab environment. Finally, it quickly builds a single-primary group replication cluster for that environment and walks through the steps to configure ProxySQL as its proxy. Since the article describes how ProxySQL handles both single-primary and multi-primary MGR, setting up ProxySQL in front of a multi-primary MGR follows the same lines without any problem.

Lab environment for this article:

role      IP_address
proxysql  192.168.100.21
node1     192.168.100.22
node2     192.168.100.23
node3     192.168.100.24

1.1 The mysql_group_replication_hostgroups table

The table's definition statement:

Admin> show create table mysql_group_replication_hostgroups\G
*************************** 1. row ***************************
       table: mysql_group_replication_hostgroups
Create Table: CREATE TABLE mysql_group_replication_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
    offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
    writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1)) NOT NULL DEFAULT 0,
    max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
    comment VARCHAR,
    UNIQUE (reader_hostgroup),
    UNIQUE (offline_hostgroup),
    UNIQUE (backup_writer_hostgroup))

The meaning of each field is as follows:

  • writer_hostgroup: the default writer hostgroup. Backend nodes with read_only=0 are automatically assigned to this hostgroup.
  • backup_writer_hostgroup: if the backend MySQL cluster has multiple writable nodes and max_writers is set, ProxySQL places all remaining writable nodes (those beyond max_writers) into the backup writer hostgroup backup_writer_hostgroup as standby writers.
  • reader_hostgroup: the hostgroup responsible for reads. Read requests matched by query rules, or issued by users who only have read privileges, are routed to nodes in this hostgroup. Backend nodes with read_only=1 are automatically assigned to this hostgroup.
  • offline_hostgroup: when ProxySQL's monitoring decides a node is OFFLINE, it moves the node into offline_hostgroup.
  • active: when enabled, ProxySQL monitors these hostgroups and moves nodes between them as appropriate.
  • max_writers: the maximum number of nodes allowed in writer_hostgroup; writable nodes beyond this number are placed into backup_writer_hostgroup.
  • writer_is_also_reader: determines whether a node promoted to writer (moved into writer_hostgroup) also stays in reader_hostgroup to keep serving reads.
  • max_transactions_behind: when a node lags behind the writer, ProxySQL may shun it to avoid serving stale reads. This field sets the maximum number of transactions the node may lag (the current lag can be read from the transactions_behind field of MySQL's sys.gr_member_routing_candidate_status view); once the lag exceeds this value, ProxySQL shuns the node.
  • comment: free-form description or notes.

Note that writer_hostgroup is the primary key, and reader_hostgroup, offline_hostgroup, and backup_writer_hostgroup are each unique; all of them take INT values.

Therefore, for every backend MGR cluster it proxies, ProxySQL must define a writer, backup writer, reader, and offline hostgroup, and these four values must be distinct, non-NULL, and unique. In addition, each username has a default hostgroup; it is usually set to the writer hostgroup, which makes SELECT rules easier to define.
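
For illustration, a minimal record for one MGR cluster might look like this, assuming the hostgroup IDs 10/20/30/40 used later in this article; the four values only have to satisfy the uniqueness checks above:

-- Illustrative only: define the four hostgroups for one MGR cluster.
-- 10=writer, 20=backup writer, 30=reader, 40=offline; one active writer,
-- writers do not also serve reads, no lag limit.
INSERT INTO mysql_group_replication_hostgroups
    (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
     offline_hostgroup, active, max_writers, writer_is_also_reader,
     max_transactions_behind, comment)
VALUES (10, 20, 30, 40, 1, 1, 0, 0, 'single-primary MGR');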

1.2 Issues to consider when ProxySQL proxies MGR

There are several distinct cases when ProxySQL proxies MGR: whether the MGR cluster runs in single-primary or multi-primary mode, and whether ProxySQL proxies one MGR cluster or several. Each case must be considered separately, since they affect how hostgroups are assigned and even change the configuration steps.

1.2.1 ProxySQL proxying a single-primary MGR

When MGR runs in single-primary mode, two related behaviors matter:

  • 1. Non-master nodes automatically set read_only=1.
  • 2. When the master fails, a new master is elected automatically, based on weight (group_replication_member_weight; older group replication versions sort server_uuid lexicographically and pick the smallest value as the new master). A quick way to check the current primary is sketched below.
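
As a convenience check, on MySQL 5.7 (where this status variable is populated in single-primary mode) the current primary can be queried on any member:

-- On any MGR member: the server_uuid of the current primary.
SELECT VARIABLE_VALUE AS primary_member_uuid
FROM performance_schema.global_status
WHERE VARIABLE_NAME = 'group_replication_primary_member';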

Therefore, when ProxySQL proxies a single-primary MGR, read_only monitoring of the backend nodes must be configured in ProxySQL. Because ProxySQL automatically adjusts which nodes belong to the reader and writer hostgroups based on the read_only value, proxying a single-primary cluster is very convenient. Of course, if you do not want ProxySQL to manage the hostgroup membership of the MGR nodes, there is no need to set up read_only monitoring; see the description in "ProxySQL proxying a single MGR cluster" below.
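
Once the monitor user is configured (done in section 1.4 below), the read_only probes can be verified from the Admin interface; a quick sanity query against the monitor log, for illustration:

-- Admin interface: most recent read_only probes of the backends.
SELECT hostname, port, read_only, error
FROM monitor.mysql_server_read_only_log
ORDER BY time_start_us DESC
LIMIT 3;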

Since there is only one writer node, the backup writer hostgroup is never actually used, but it must still be defined. For example:
writer hostgroup        --> hg=10
backup writer hostgroup --> hg=20
reader hostgroup        --> hg=30
offline hostgroup       --> hg=40

1.2.2 ProxySQL proxying a multi-primary MGR

A multi-primary MGR can have several writer nodes at the same time and tolerates the failure of a minority of nodes.

Assume the same hostgroup assignment:

writer hostgroup        --> hg=10
backup writer hostgroup --> hg=20
reader hostgroup        --> hg=30
offline hostgroup       --> hg=40

192.168.100.{22,23,24} are named node1, node2, and node3 respectively.

Assume max_writers=2; then two of node1/node2/node3 (say node1 and node2) are in the writer hostgroup hg=10, and node3 is in the backup writer hostgroup hg=20. In this case writer_is_also_reader=1 must be set, otherwise no node would serve reads; with it set, hg=30 contains all three nodes node1, node2, and node3. If node2 fails, node3 moves from hg=20 to hg=10, the reader hostgroup hg=30 is left with only node1 and node3, and node2 moves to hg=40, where ProxySQL keeps monitoring whether it comes back online.

So when ProxySQL proxies a multi-primary MGR, writer_is_also_reader=1 must be set.
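
For illustration, the mysql_group_replication_hostgroups record for the multi-primary scenario above might look like this (same hypothetical hostgroup IDs; note max_writers=2 and writer_is_also_reader=1):

-- Illustrative multi-primary layout: two active writers, writers also read.
INSERT INTO mysql_group_replication_hostgroups
    (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
     offline_hostgroup, active, max_writers, writer_is_also_reader,
     max_transactions_behind)
VALUES (10, 20, 30, 40, 1, 2, 1, 0);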

1.2.3 ProxySQL proxying a single MGR cluster

When ProxySQL proxies a single MGR cluster and you do not need complex custom routing rules, letting ProxySQL fully control which nodes belong to the reader and writer hostgroups, then mysql_group_replication_hostgroups may contain only one record.

However, if you need more complex behavior, for example routing expensive SELECT statements to one fixed slave or to a hostgroup you define yourself, then ProxySQL must no longer manage the MGR: do not insert a record for this MGR cluster into mysql_group_replication_hostgroups (and preferably do not monitor read_only either); instead, define the target hostgroups directly in mysql_servers. In this case, ProxySQL does not care whether the backend is an MGR cluster or ordinary MySQL instances.
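
As a sketch of that manual approach (the hostgroup IDs, rule_id, and the big_table pattern are hypothetical): pin each node to a fixed hostgroup in mysql_servers and route the expensive statement pattern there with a query rule:

-- Manual layout: hostgroups are fixed, ProxySQL does not move nodes around.
INSERT INTO mysql_servers(hostgroup_id, hostname, port)
VALUES (10, '192.168.100.22', 3306),   -- writer
       (30, '192.168.100.23', 3306),   -- reader
       (31, '192.168.100.24', 3306);   -- dedicated reader for heavy queries

-- Route a known-expensive query pattern to the dedicated hostgroup 31.
INSERT INTO mysql_query_rules(rule_id, active, match_digest,
                              destination_hostgroup, apply)
VALUES (5, 1, '^SELECT .* FROM big_table', 31, 1);

LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;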

1.2.4 ProxySQL proxying multiple MGR clusters

Unfortunately, ProxySQL's mysql_group_replication_hostgroups table is not friendly to multiple MGR clusters, because ProxySQL adjusts hostgroup membership automatically based on the monitored read_only value. If ProxySQL proxies two MGR clusters X and Y and one record is added to mysql_group_replication_hostgroups, the nodes of both X and Y will be placed into the hostgroups defined by that record, and the two clusters get mixed together. Adding several records does not help either, since the table has no way to identify which cluster a node belongs to. Incidentally, mysql_replication_hostgroups suffers from the same problem.

In that case, the only option is to assign the nodes of the different MGR clusters to their hostgroups in mysql_servers and point the routing rules at those hostgroups. In other words, the mysql_group_replication_hostgroups table is not used, and read_only does not need to be monitored.
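
For illustration, with two hypothetical clusters X and Y, one might reserve separate hostgroup ranges and populate mysql_servers statically:

-- Cluster X: writer hg=10, readers hg=30.
INSERT INTO mysql_servers(hostgroup_id, hostname, port)
VALUES (10, '192.168.100.22', 3306),
       (30, '192.168.100.23', 3306),
       (30, '192.168.100.24', 3306);

-- Cluster Y (hypothetical addresses): writer hg=110, readers hg=130.
INSERT INTO mysql_servers(hostgroup_id, hostname, port)
VALUES (110, '192.168.100.32', 3306),
       (130, '192.168.100.33', 3306),
       (130, '192.168.100.34', 3306);

LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;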

1.3 Configuring group replication

This article sets up a single-primary group replication cluster.

1. Set hostnames and name resolution (each MySQL instance must have a distinct hostname, and every member must be resolvable by hostname).

# On node1:
hostnamectl set-hostname --static node1.longshuai.com
hostnamectl -H root@192.168.100.23 set-hostname node2.longshuai.com
hostnamectl -H root@192.168.100.24 set-hostname node3.longshuai.com

# Write /etc/hosts
# On node1:
cat >>/etc/hosts<<eof
    192.168.100.22 node1.longshuai.com
    192.168.100.23 node2.longshuai.com
    192.168.100.24 node3.longshuai.com
eof
scp /etc/hosts 192.168.100.23:/etc
scp /etc/hosts 192.168.100.24:/etc

2. Provide the configuration files for node1, node2, and node3.

node1's /etc/my.cnf:

[mysqld]
datadir=/data
socket=/data/mysql.sock

server-id=100                      # must be unique on each node
gtid_mode=on
enforce_gtid_consistency=on
log-bin=/data/master-bin
binlog_format=row
binlog_checksum=none
master_info_repository=TABLE
relay_log_info_repository=TABLE
relay_log=/data/relay-log
log_slave_updates=ON
sync-binlog=1
log-error=/data/error.log
pid-file=/data/mysqld.pid

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
loose-group_replication_member_weight = 40     # a different value is recommended on each node
loose-group_replication_local_address="192.168.100.22:20002"  # must differ on each node
loose-group_replication_group_seeds="192.168.100.22:20002,192.168.100.23:20003,192.168.100.24:20004"

For node2 and node3, only the few values that must differ need to change:

Relevant parts of node2's /etc/my.cnf:

server-id=110
loose-group_replication_member_weight = 30
loose-group_replication_local_address="192.168.100.23:20003"

Relevant parts of node3's /etc/my.cnf:

server-id=120
loose-group_replication_member_weight = 20
loose-group_replication_local_address="192.168.100.24:20004"

3. Start node1 and bootstrap the group.

First, start the MySQL service on node1.

systemctl start mysqld

Connect to node1 and create the user for replication. Here I create the user repl with password P@ssword1!.

create user repl@'192.168.100.%' identified by 'P@ssword1!';
grant replication slave on *.* to repl@'192.168.100.%';

Configure the recovery channel on node1.

change master to 
            master_user='repl',
            master_password='P@ssword1!'
            for channel 'group_replication_recovery';

Install the group replication plugin.

install plugin group_replication soname 'group_replication.so';

Bootstrap and start group replication.

set @@global.group_replication_bootstrap_group=on;
start group_replication;
set @@global.group_replication_bootstrap_group=off;

Check whether node1 is ONLINE.

select * from performance_schema.replication_group_members\G

4. Add node2 and node3 to the group.

First, start the MySQL service on node2 and node3:

systemctl start mysqld

Then, on node2 and node3, configure the recovery channel used to recover data from a donor.

change master to 
            master_user='repl',
            master_password='P@ssword1!'
            for channel 'group_replication_recovery';

Finally, install the group replication plugin on node2 and node3 and start group replication.

install plugin group_replication soname 'group_replication.so';
start group_replication;

On any node, check whether node1, node2, and node3 are all ONLINE.

select * from performance_schema.replication_group_members\G

*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: a5165443-6aec-11e8-a8f6-000c29827955
 MEMBER_HOST: node1.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: ba505889-6aec-11e8-a864-000c29b0bec4
 MEMBER_HOST: node2.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: bf12fe97-6aec-11e8-a909-000c29e55287
 MEMBER_HOST: node3.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE

At this point, the 3-node single-primary group replication cluster of node1, node2, and node3 is complete. Next, configure ProxySQL.

1.4 Configuring ProxySQL

As analyzed earlier, when ProxySQL proxies a single-primary MGR and you want it to move nodes between the reader and writer hostgroups automatically, you need to enable read_only monitoring and insert one record into mysql_group_replication_hostgroups.

Assume the hostgroup_id values of the four hostgroups:
writer hostgroup        --> hg=10
backup writer hostgroup --> hg=20
reader hostgroup        --> hg=30
offline hostgroup       --> hg=40

Installing ProxySQL is omitted here. The following configures ProxySQL.

1. Connect to ProxySQL's Admin interface.

mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt 'Admin> '

2. Add the backend nodes node1, node2, and node3 to the mysql_servers table.

delete from mysql_servers;

insert into mysql_servers(hostgroup_id,hostname,port) 
values(10,'192.168.100.22',3306),
      (10,'192.168.100.23',3306),
      (10,'192.168.100.24',3306);

load mysql servers to runtime;
save mysql servers to disk;

Check whether all three nodes are ONLINE:

admin> select hostgroup_id,hostname,port,status,weight from mysql_servers;
+--------------+----------------+------+--------+--------+
| hostgroup_id | hostname       | port | status | weight |
+--------------+----------------+------+--------+--------+
| 10           | 192.168.100.22 | 3306 | ONLINE | 1      |
| 10           | 192.168.100.23 | 3306 | ONLINE | 1      |
| 10           | 192.168.100.24 | 3306 | ONLINE | 1      |
+--------------+----------------+------+--------+--------+

3. Set up monitoring of the backend nodes.

First, create ProxySQL's monitoring user on node1. Note that the privileges of the monitoring user differ from those used when ProxySQL proxies ordinary MySQL instances: when proxying group replication, ProxySQL collects its metrics from the MGR system view sys.gr_member_routing_candidate_status, so grant the monitoring user SELECT on that view. Since there is no need to read Seconds_Behind_Master from show slave status, the replication client privilege is not required.

# On node1:
mysql> create user monitor@'192.168.100.%' identified by 'P@ssword1!';
mysql> grant select on sys.* to monitor@'192.168.100.%';

Then, back on ProxySQL, configure the monitoring credentials.

set mysql-monitor_username='monitor';
set mysql-monitor_password='P@ssword1!';

load mysql variables to runtime;
save mysql variables to disk;
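
Optionally, once the variables are loaded, you can confirm that the monitor module reaches the backends by checking its connect and ping logs, for example:

-- Admin interface: recent monitor probes (an empty error column means healthy).
SELECT hostname, port, connect_error
FROM monitor.mysql_server_connect_log
ORDER BY time_start_us DESC LIMIT 3;

SELECT hostname, port, ping_error
FROM monitor.mysql_server_ping_log
ORDER BY time_start_us DESC LIMIT 3;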

4. Create the system view sys.gr_member_routing_candidate_status.

On node1, create the system view sys.gr_member_routing_candidate_status, which provides ProxySQL with the group-replication monitoring metrics.

If you downloaded the addition_to_sys.sql script mentioned earlier, import it into MySQL like this:

mysql -uroot -pP@ssword1! < addition_to_sys.sql

Alternatively, create the view by executing the following statements:

USE sys;

DELIMITER $$

CREATE FUNCTION IFZERO(a INT, b INT)
RETURNS INT
DETERMINISTIC
RETURN IF(a = 0, b, a)$$

CREATE FUNCTION LOCATE2(needle TEXT(10000), haystack TEXT(10000), offset INT)
RETURNS INT
DETERMINISTIC
RETURN IFZERO(LOCATE(needle, haystack, offset), LENGTH(haystack) + 1)$$

CREATE FUNCTION GTID_NORMALIZE(g TEXT(10000))
RETURNS TEXT(10000)
DETERMINISTIC
RETURN GTID_SUBTRACT(g, '')$$

CREATE FUNCTION GTID_COUNT(gtid_set TEXT(10000))
RETURNS INT
DETERMINISTIC
BEGIN
  DECLARE result BIGINT DEFAULT 0;
  DECLARE colon_pos INT;
  DECLARE next_dash_pos INT;
  DECLARE next_colon_pos INT;
  DECLARE next_comma_pos INT;
  SET gtid_set = GTID_NORMALIZE(gtid_set);
  SET colon_pos = LOCATE2(':', gtid_set, 1);
  WHILE colon_pos != LENGTH(gtid_set) + 1 DO
     SET next_dash_pos = LOCATE2('-', gtid_set, colon_pos + 1);
     SET next_colon_pos = LOCATE2(':', gtid_set, colon_pos + 1);
     SET next_comma_pos = LOCATE2(',', gtid_set, colon_pos + 1);
     IF next_dash_pos < next_colon_pos AND next_dash_pos < next_comma_pos THEN
       SET result = result +
         SUBSTR(gtid_set, next_dash_pos + 1,
                LEAST(next_colon_pos, next_comma_pos) - (next_dash_pos + 1)) -
         SUBSTR(gtid_set, colon_pos + 1, next_dash_pos - (colon_pos + 1)) + 1;
     ELSE
       SET result = result + 1;
     END IF;
     SET colon_pos = next_colon_pos;
  END WHILE;
  RETURN result;
END$$

CREATE FUNCTION gr_applier_queue_length()
RETURNS INT
DETERMINISTIC
BEGIN
  RETURN (SELECT sys.gtid_count( GTID_SUBTRACT( (SELECT
Received_transaction_set FROM performance_schema.replication_connection_status
WHERE Channel_name = 'group_replication_applier' ), (SELECT
@@global.GTID_EXECUTED) )));
END$$

CREATE FUNCTION gr_member_in_primary_partition()
RETURNS VARCHAR(3)
DETERMINISTIC
BEGIN
  RETURN (SELECT IF( MEMBER_STATE='ONLINE' AND ((SELECT COUNT(*) FROM
performance_schema.replication_group_members WHERE MEMBER_STATE != 'ONLINE') >=
((SELECT COUNT(*) FROM performance_schema.replication_group_members)/2) = 0),
'YES', 'NO' ) FROM performance_schema.replication_group_members JOIN
performance_schema.replication_group_member_stats USING(member_id));
END$$

CREATE VIEW gr_member_routing_candidate_status AS SELECT
sys.gr_member_in_primary_partition() as viable_candidate,
IF( (SELECT (SELECT GROUP_CONCAT(variable_value) FROM
performance_schema.global_variables WHERE variable_name IN ('read_only',
'super_read_only')) != 'OFF,OFF'), 'YES', 'NO') as read_only,
sys.gr_applier_queue_length() as transactions_behind, Count_Transactions_in_queue as 'transactions_to_cert' from performance_schema.replication_group_member_stats$$

DELIMITER ;

Once the view is created, you can query it:

On node1:

mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+

On node2:

mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+

5. Insert a record into mysql_group_replication_hostgroups.

delete from mysql_group_replication_hostgroups;

insert into mysql_group_replication_hostgroups(writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) 
values(10,20,30,40,1,1,0,0);

load mysql servers to runtime;
save mysql servers to disk;

In the configuration above, I set writer_is_also_reader to 0 (false) so that the master handles only writes.

admin> select * from mysql_group_replication_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 10
backup_writer_hostgroup: 20
       reader_hostgroup: 30
      offline_hostgroup: 40
                 active: 1
            max_writers: 1
  writer_is_also_reader: 0
max_transactions_behind: 0
                comment: NULL

Now check how the nodes were redistributed among the hostgroups:

admin> select hostgroup_id, hostname, port,status from runtime_mysql_servers;
+--------------+----------------+------+--------+
| hostgroup_id | hostname       | port | status |
+--------------+----------------+------+--------+
| 10           | 192.168.100.22 | 3306 | ONLINE |
| 30           | 192.168.100.24 | 3306 | ONLINE |
| 30           | 192.168.100.23 | 3306 | ONLINE |
+--------------+----------------+------+--------+

Check the MGR monitoring metrics:

Admin> select  hostname,
               port,
               viable_candidate,
               read_only,
               transactions_behind,
               error 
       from mysql_server_group_replication_log 
       order by time_start_us desc 
       limit 6;
+----------------+------+------------------+-----------+---------------------+-------+
| hostname       | port | viable_candidate | read_only | transactions_behind | error |
+----------------+------+------------------+-----------+---------------------+-------+
| 192.168.100.24 | 3306 | YES              | YES       | 0                   | NULL  |
| 192.168.100.23 | 3306 | YES              | YES       | 0                   | NULL  |
| 192.168.100.22 | 3306 | YES              | NO        | 0                   | NULL  |
| 192.168.100.24 | 3306 | YES              | YES       | 0                   | NULL  |
| 192.168.100.23 | 3306 | YES              | YES       | 0                   | NULL  |
| 192.168.100.22 | 3306 | YES              | NO        | 0                   | NULL  |
+----------------+------+------------------+-----------+---------------------+-------+

6. Configure mysql_users.

On node1, execute:

grant all on *.* to root@'192.168.100.%' identified by 'P@ssword1!';

Back on ProxySQL, insert a record into the mysql_users table.

delete from mysql_users;

insert into mysql_users(username,password,default_hostgroup,transaction_persistent) 
values('root','P@ssword1!',10,1);

load mysql users to runtime;
save mysql users to disk;

7. Configure read/write splitting rules for testing.

delete from mysql_query_rules;

insert into mysql_query_rules(rule_id,active,match_digest,destination_hostgroup,apply)
VALUES (1,1,'^SELECT.*FOR UPDATE$',10,1),
       (2,1,'^SELECT',30,1);

load mysql query rules to runtime;
save mysql query rules to disk;

Test whether read/write splitting behaves as expected.

mysql -uroot -pP@ssword1! -h192.168.100.21 -P6033 -e 'create database gr_test;'
mysql -uroot -pP@ssword1! -h192.168.100.21 -P6033 -e 'select user,host from mysql.user;' 
mysql -uroot -pP@ssword1! -h192.168.100.21 -P6033 -e 'show databases;'

Check how the statements were routed:

admin> select hostgroup,digest_text from stats_mysql_query_digest;  
+-----------+----------------------------------+
| hostgroup | digest_text                      |
+-----------+----------------------------------+
| 10        | show databases                   |
| 30        | select user,host from mysql.user |
| 10        | create database gr_test          |
| 10        | select @@version_comment limit ? |
+-----------+----------------------------------+

The SELECT statement was routed to the reader hostgroup hg=30, the show statement went to the default hostgroup hg=10, and the create statement went to the writer hostgroup hg=10.
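
As a further illustration of the transaction_persistent=1 setting configured earlier: once a transaction starts, its statements should stay on the hostgroup where it began, so even a plain SELECT inside an explicit transaction lands on the writer hostgroup. A quick sketch, run through the 6033 traffic port:

-- Run through the proxy (port 6033): the SELECT should stay on hg=10
-- because the transaction began on the default (writer) hostgroup,
-- despite the ^SELECT rule pointing at hg=30.
BEGIN;
SELECT user, host FROM mysql.user;
COMMIT;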

8. Test MGR failover.

Stop one of the MGR nodes, for example by shutting down the MySQL service on the current master, node1.

On node1:

systemctl stop mysqld

Then check the node status in ProxySQL.

admin> select hostgroup_id, hostname, port,status from runtime_mysql_servers;
+--------------+----------------+------+---------+
| hostgroup_id | hostname       | port | status  |
+--------------+----------------+------+---------+
| 10           | 192.168.100.23 | 3306 | ONLINE  |
| 40           | 192.168.100.22 | 3306 | SHUNNED |
| 30           | 192.168.100.24 | 3306 | ONLINE  |
+--------------+----------------+------+---------+

The result shows node1 with status SHUNNED, meaning ProxySQL is avoiding it. Node2 has moved into the hg=10 hostgroup, meaning it was elected as the new master.

Now bring node1 back into the group. On node1:

shell> systemctl start mysqld

mysql> start group_replication;

Then check the node status in ProxySQL again.

admin> select hostgroup_id, hostname, port,status from runtime_mysql_servers;
+--------------+----------------+------+--------+
| hostgroup_id | hostname       | port | status |
+--------------+----------------+------+--------+
| 10           | 192.168.100.23 | 3306 | ONLINE |
| 30           | 192.168.100.22 | 3306 | ONLINE |
| 30           | 192.168.100.24 | 3306 | ONLINE |
+--------------+----------------+------+--------+

As you can see, node1 is ONLINE again, now serving reads in hg=30.
