In early versions of ProxySQL, achieving high availability meant deploying two redundant instances. Those two instances could not share data with each other: after configuring the primary instance, you still had to repeat the configuration on the standby, which made administration very inconvenient. Starting with version 1.4.2, however, ProxySQL supports native Cluster deployments: instances can exchange certain configuration data with one another, greatly simplifying management and maintenance.
ProxySQL is a decentralized proxy, and the usual recommendation is to deploy it close to the application servers. A ProxySQL deployment scales easily to hundreds of nodes because it supports runtime reconfiguration that takes effect immediately. This means a fleet of ProxySQL instances can be coordinated and reconfigured with configuration-management tools such as Ansible/Chef/Puppet/Salt, or with service-discovery software such as Etcd/Consul/ZooKeeper, allowing a highly customized ProxySQL cluster. Nevertheless, this style of management has some drawbacks:
- it requires and depends on external software (configuration-management software);
- precisely because of that, it is not natively supported;
- the time needed to converge is unpredictable;
- it cannot resolve problems caused by network partitions.
For these reasons, ProxySQL 1.4.x introduced experimental native clustering. A cluster can be built in several ways, for example 1+1+1 or (1+1)+1. The (1+1)+1 approach is the simplest: start two nodes as a cluster first, then let additional nodes join selectively.
1. ProxySQL Cluster configuration notes
ProxySQL Cluster is implemented by two main components:
- monitoring (cluster monitoring)
- re-configuration (remote configuration)
Both components operate on the same four tables:
- mysql_query_rules
- mysql_servers
- mysql_users
- proxysql_servers
I. Monitoring
To monitor the cluster, ProxySQL introduces several new tables, commands, and variables.
Admin variables
Several variables relate to the cluster. They are admin variables, which means that after changing them you must run LOAD ADMIN VARIABLES TO RUNTIME for the change to take effect.
1) Variables that control synchronization
- admin-checksum_mysql_query_rules
- admin-checksum_mysql_servers
- admin-checksum_mysql_users: if you have millions of users, it is recommended to disable this feature and not rely on it, because it can be very slow.
All of these variables are booleans:
- When admin-checksum_mysql_query_rules is true, ProxySQL generates a new configuration checksum every time LOAD MYSQL QUERY RULES TO RUNTIME is executed. When it is false, the new configuration is neither broadcast to other nodes nor synchronized from them.
- When admin-checksum_mysql_servers is true, ProxySQL generates a new configuration checksum every time LOAD MYSQL SERVERS TO RUNTIME is executed. When it is false, the new configuration is neither broadcast nor synchronized.
- When admin-checksum_mysql_users is true, ProxySQL generates a new configuration checksum every time LOAD MYSQL USERS TO RUNTIME is executed. When it is false, the new configuration is neither broadcast nor synchronized.
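As an example, a minimal sketch of turning all three checksum variables on from the admin interface (true is already the default for each; the LOAD/SAVE statements are the usual admin commands for applying and persisting admin variables):

UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-checksum_mysql_query_rules';
UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-checksum_mysql_servers';
UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-checksum_mysql_users';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;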
2) Variables for cluster authentication credentials
- admin-cluster_username and admin-cluster_password: the credentials used to monitor the other ProxySQL instances. Note that this username/password pair must also be present in admin-admin_credentials, otherwise connections to the other ProxySQL instances will fail. If no cluster credentials are defined, ProxySQL Cluster performs no checks at all. For example:
admin_credentials="admin:admin;cluster1:secret1pass"
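The same credentials can also be applied at runtime through the admin interface; a sketch reusing the cluster1/secret1pass pair from the snippet above:

UPDATE global_variables SET variable_value='admin:admin;cluster1:secret1pass' WHERE variable_name='admin-admin_credentials';
UPDATE global_variables SET variable_value='cluster1' WHERE variable_name='admin-cluster_username';
UPDATE global_variables SET variable_value='secret1pass' WHERE variable_name='admin-cluster_password';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;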
3) Variables for check interval and frequency
- admin-cluster_check_interval_ms: the interval between checksum checks. Default 1000 (i.e. 1 second), minimum 10, maximum 300000.
- admin-cluster_check_status_frequency: how many checksum checks are performed before one status check. Default 10, minimum 0, maximum 10000.
4) Variables for saving to disk
After synchronizing configuration from a remote node, it is usually best to save the new changes to disk immediately, so that they survive a restart.
- admin-cluster_mysql_query_rules_save_to_disk
- admin-cluster_mysql_servers_save_to_disk
- admin-cluster_mysql_users_save_to_disk
- admin-cluster_proxysql_servers_save_to_disk
These are all booleans. When set to true (the default), after a remote synchronization and load to runtime, the new mysql_query_rules, mysql_servers, mysql_users, or proxysql_servers configuration is also persisted to disk.
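To review the current values of all cluster-related admin variables in one place, a simple query against the admin interface's global_variables table works:

SELECT variable_name, variable_value FROM global_variables WHERE variable_name LIKE 'admin-cluster%';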
5) Variables that control whether to synchronize from a remote node
For various reasons, multiple ProxySQL instances may reconfigure themselves at the same time.
For example, every ProxySQL instance monitors MySQL replication and detects a MySQL failover on its own; within a very short window (possibly under a second), each instance may converge on the same new configuration without needing to synchronize it from any peer.
Similarly, when all ProxySQL instances detect a transient network problem with some server, or notice that a MySQL node is slow (replication lag), they all shun that node automatically. Here too, each instance arrives at the new configuration on its own, without synchronizing from peers.
Based on this, a ProxySQL cluster can be configured so that an instance does not immediately synchronize a given piece of configuration from its peers, but instead waits for a certain number of checks before triggering a remote synchronization. However, if these threshold variables differ between the local and remote nodes, a remote synchronization may still be triggered.
- admin-cluster_mysql_query_rules_diffs_before_sync:
- admin-cluster_mysql_servers_diffs_before_sync:
- admin-cluster_mysql_users_diffs_before_sync:
- admin-cluster_proxysql_servers_diffs_before_sync:
These define, respectively, how many mismatch checks must occur before a remote synchronization of mysql_query_rules, mysql_servers, mysql_users, or proxysql_servers is triggered. The default is 3; the minimum is 0, which means never synchronize remotely; the maximum is 1000.
For example, each instance monitors the mysql_servers configuration and checks its checksum. If an instance repeatedly finds a peer's checksum differing from its own, it decides, based on the timestamp of that peer's load-to-runtime, whether to pull mysql_servers from the peer.
6) Delayed synchronization
A ProxySQL cluster can thus be told how many checksum mismatches to tolerate before configuration is synchronized within the cluster.
Query rules, servers, users, and proxysql servers each have their own admin-cluster_XXX_diffs_before_sync parameter, with a range of 0 to 1000 (0 means never synchronize). The default is 3.
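As a sketch, raising the mysql_servers threshold so transient differences are tolerated a little longer (the value 5 here is purely illustrative):

UPDATE global_variables SET variable_value='5' WHERE variable_name='admin-cluster_mysql_servers_diffs_before_sync';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;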
Configuration tables
1) The proxysql_servers table
The proxysql_servers table defines the list of ProxySQL instances that belong to the cluster. To see which instances make up the cluster, query this table. When adding a new ProxySQL instance, either INSERT into this table or edit the proxysql_servers section of the cnf file. The table is defined as follows:
CREATE TABLE proxysql_servers (
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 6032,
    weight INT CHECK (weight >= 0) NOT NULL DEFAULT 0,
    comment VARCHAR NOT NULL DEFAULT '',
    PRIMARY KEY (hostname, port)
)
The fields have the following meanings:
- hostname: hostname or IP address of the ProxySQL instance;
- port: port of the ProxySQL instance (translator's note: this is the instance's admin port);
- weight: currently unused; intended to define the relative weight of each ProxySQL instance in the cluster;
- comment: free-form comment field.
proxysql_servers entries can also be loaded from the traditional configuration file. An example of defining proxysql_servers in the configuration file:
proxysql_servers =
(
    { hostname="172.16.0.101" port=6032 weight=0 comment="proxysql1" },
    { hostname="172.16.0.102" port=6032 weight=0 comment="proxysql2" }
)
Special notes:
- ProxySQL reads the traditional configuration file only when the on-disk database file does not exist, or when the --initial option is used.
- The configuration file does not yet fully support this table: because ProxySQL Cluster is still experimental, the table is not automatically read from the on-disk configuration file. In other words, at this stage, do not rely on configuring proxysql_servers through the configuration file.
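For reference, a sketch of adding a third node to a running cluster through the admin interface instead of the configuration file (the 172.16.0.103 address is hypothetical, extending the two-node example above):

INSERT INTO proxysql_servers (hostname, port, weight, comment) VALUES ('172.16.0.103', 6032, 0, 'proxysql3');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;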
2) The runtime_proxysql_servers table
Like the other runtime_ tables, runtime_proxysql_servers has exactly the same structure as proxysql_servers, except that it reflects the runtime data structures, i.e. the configuration currently in effect. The table is defined as follows:
CREATE TABLE runtime_proxysql_servers (
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 6032,
    weight INT CHECK (weight >= 0) NOT NULL DEFAULT 0,
    comment VARCHAR NOT NULL DEFAULT '',
    PRIMARY KEY (hostname, port)
)
3) The runtime_checksums_values table
runtime_checksums_values is the first runtime_ table that is not backed by a table in the in-memory database (translator's note: in other words, there is no checksums_values table). It is defined as follows:
CREATE TABLE runtime_checksums_values (
    name VARCHAR NOT NULL,
    version INT NOT NULL,
    epoch INT NOT NULL,
    checksum VARCHAR NOT NULL,
    PRIMARY KEY (name)
)
The table reports information about the load-to-runtime commands that have been executed:
- name: the module name
- version: how many times load-to-runtime has been executed, including both explicit executions and implicit ones (some events cause ProxySQL to run load-to-runtime internally)
- epoch: the timestamp of the most recent load-to-runtime
- checksum: the configuration checksum generated by that load-to-runtime
Sample contents of this table:
Admin> SELECT * FROM runtime_checksums_values;
+-------------------+---------+------------+--------------------+
| name              | version | epoch      | checksum           |
+-------------------+---------+------------+--------------------+
| admin_variables   | 0       | 0          |                    |
| mysql_query_rules | 5       | 1503442167 | 0xD3BD702F8E759B1E |
| mysql_servers     | 1       | 1503440533 | 0x6F8CEF0F4BD6456E |
| mysql_users       | 1       | 1503440533 | 0xF8BDF26C65A70AC5 |
| mysql_variables   | 0       | 0          |                    |
| proxysql_servers  | 2       | 1503442214 | 0x89768E27E4931C87 |
+-------------------+---------+------------+--------------------+
6 rows in set (0,00 sec)
Special note: of the six modules above, only four currently generate a configuration checksum; admin_variables and mysql_variables do not. A checksum is generated only when load-to-runtime is executed and the corresponding admin-checksum_XXX variable is true. Specifically (a verification example follows this list):
- LOAD MYSQL QUERY RULES TO RUNTIME: generates a new mysql_query_rules checksum if admin-checksum_mysql_query_rules=true
- LOAD MYSQL SERVERS TO RUNTIME: generates a new mysql_servers checksum if admin-checksum_mysql_servers=true
- LOAD MYSQL USERS TO RUNTIME: generates a new mysql_users checksum if admin-checksum_mysql_users=true
- LOAD PROXYSQL SERVERS TO RUNTIME: always generates a new proxysql_servers checksum
- LOAD ADMIN VARIABLES TO RUNTIME: no checksum is generated
- LOAD MYSQL VARIABLES TO RUNTIME: no checksum is generated
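A quick way to see this in action: reload one of the checksum-generating modules, then query runtime_checksums_values again; the version counter should increase and a fresh epoch/checksum should appear. A sketch:

LOAD MYSQL SERVERS TO RUNTIME;
SELECT name, version, epoch, checksum FROM runtime_checksums_values WHERE name='mysql_servers';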
New commands:
- LOAD PROXYSQL SERVERS FROM MEMORY / LOAD PROXYSQL SERVERS TO RUNTIME
  loads the proxysql servers configuration from the in-memory database into the runtime data structures
- SAVE PROXYSQL SERVERS TO MEMORY / SAVE PROXYSQL SERVERS FROM RUNTIME
  persists the proxysql servers configuration from the runtime data structures into the in-memory database
- LOAD PROXYSQL SERVERS TO MEMORY / LOAD PROXYSQL SERVERS FROM DISK
  loads the proxysql servers configuration from the on-disk database into the in-memory database
- LOAD PROXYSQL SERVERS FROM CONFIG
  loads the proxysql servers configuration from the traditional configuration file into the in-memory database
- SAVE PROXYSQL SERVERS FROM MEMORY / SAVE PROXYSQL SERVERS TO DISK
  persists the proxysql servers configuration from the in-memory database into the on-disk database
Stats tables
Three new tables have been added to the stats schema.
1) The stats_proxysql_servers_checksums table
This table records the per-module checksums reported by every instance in the cluster.
Admin> SHOW CREATE TABLE stats.stats_proxysql_servers_checksums\G
*************************** 1. row ***************************
       table: stats_proxysql_servers_checksums
Create Table: CREATE TABLE stats_proxysql_servers_checksums (
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 6032,
    name VARCHAR NOT NULL,
    version INT NOT NULL,
    epoch INT NOT NULL,
    checksum VARCHAR NOT NULL,
    changed_at INT NOT NULL,
    updated_at INT NOT NULL,
    diff_check INT NOT NULL,
    PRIMARY KEY (hostname, port, name)
)
The fields have the following meanings:
- hostname: hostname of the peer ProxySQL instance
- port: port of the peer instance
- name: the module name as reported in the peer's runtime_checksums_values
- version: the checksum version as reported in the peer's runtime_checksums_values
  Note: a freshly started ProxySQL instance has version=1. An instance will never synchronize from a peer with version=1, because an instance that has just started is unlikely to be the source of truth; this prevents a newly joined node from wrecking the current cluster configuration.
- epoch: the checksum epoch (timestamp) as reported in the peer's runtime_checksums_values
- checksum: the checksum value as reported in the peer's runtime_checksums_values
- changed_at: timestamp at which a checksum change was detected
- updated_at: timestamp at which this row was last refreshed
- diff_check: a counter of how many consecutive checks have found the peer's checksum differing from the local one
Reconfiguration is triggered only once the threshold is reached. As explained earlier, when multiple ProxySQL instances change configuration at the same time (or within a very short window), this lets each instance wait through several checks before deciding whether to synchronize from a peer; diff_check is the field that records how many mismatches have been detected. If diff_check keeps growing without a synchronization being triggered, the peer is not a trusted configuration source, for example because it has version=1. Conversely, if some peer never synchronizes with the rest of the cluster, the cluster has no trusted source at all; this can happen when all instances started with different configurations and cannot decide on their own which one is correct. Running load-to-runtime on one node promotes it to the trusted source for that class of configuration. A sketch follows.
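For example, to watch the divergence counter for mysql_servers across peers, query the stats table described above:

SELECT hostname, port, name, version, epoch, diff_check FROM stats_proxysql_servers_checksums WHERE name='mysql_servers';

Then, on the node whose configuration should become the trusted source, run:

LOAD MYSQL SERVERS TO RUNTIME;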
2) The stats_proxysql_servers_metrics table
This table shows a few system metrics retrieved when the cluster module runs SHOW MYSQL STATUS against each instance. For now the table exists only for debugging; in the future these metrics may be used to reflect the health of each instance.
Admin> SHOW CREATE TABLE stats.stats_proxysql_servers_metrics\G
*************************** 1. row ***************************
       table: stats_proxysql_servers_metrics
Create Table: CREATE TABLE stats_proxysql_servers_metrics (
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 6032,
    weight INT CHECK (weight >= 0) NOT NULL DEFAULT 0,
    comment VARCHAR NOT NULL DEFAULT '',
    response_time_ms INT NOT NULL,
    Uptime_s INT NOT NULL,
    last_check_ms INT NOT NULL,
    Queries INT NOT NULL,
    Client_Connections_connected INT NOT NULL,
    Client_Connections_created INT NOT NULL,
    PRIMARY KEY (hostname, port)
)
The fields, populated from the metrics retrieved by SHOW MYSQL STATUS, are:
- hostname: hostname of the ProxySQL instance
- port: port of the instance
- weight: same as reported in proxysql_servers.weight
- comment: same as reported in proxysql_servers.comment
- response_time_ms: response time of SHOW MYSQL STATUS, in milliseconds
- Uptime_s: uptime of the instance, in seconds
- last_check_ms: how long ago the last check ran, in milliseconds
- Queries: number of queries the instance has executed
- Client_Connections_connected: number of client connections currently connected
- Client_Connections_created: number of client connections created so far
Note: these metrics currently serve debugging purposes only, but in the future they may act as health indicators for remote instances.
3) The stats_proxysql_servers_status table
As of version 1.4.6, this table is not yet enabled.
II. Re-configuration
All nodes in the cluster monitor one another, so each ProxySQL node quickly detects when some instance's configuration has changed. When that happens, the node first checks the change against its own configuration, because the remote instance and the local instance may have changed at the same time (or within a very short window).
If the comparison shows a difference:
- If the local version is 1, the node looks among the peers with version > 1 for the one with the highest epoch, pulls its configuration, applies it locally, and synchronizes immediately.
- If the local version is greater than 1, the node starts counting how many checks in a row have found a differing configuration.
- Once that count exceeds cluster_name_diffs_before_sync (and cluster_name_diffs_before_sync > 0), the node finds the peer with version > 1 and the highest epoch, pulls its configuration, and applies it immediately.
Configuration is synchronized as follows (see the sketch after this list):
- The connection used for health checks also executes a series of SELECT statements of the form SELECT _list_of_columns FROM runtime_module on the peer. For example:
SELECT hostgroup_id, hostname, port, status, weight, compression, max_connections, max_replication_lag, use_ssl, max_latency_ms, comment FROM runtime_mysql_servers;
SELECT writer_hostgroup, reader_hostgroup, comment FROM runtime_mysql_replication_hostgroups;
- The local configuration is deleted. For example:
DELETE FROM mysql_servers;
DELETE FROM mysql_replication_hostgroups;
- The new configuration retrieved from the remote node is inserted into the local configuration tables.
- LOAD module_name TO RUNTIME is executed internally: this increments the version and generates a new checksum.
- If cluster_name_save_to_disk=true, SAVE module_name TO DISK is also executed internally.
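Putting the steps together, a node that pulls mysql_servers from a peer effectively executes a sequence like the sketch below. The SELECT runs on the peer over the health-check connection; everything else runs locally, and the INSERT stands in for whatever rows were fetched (the address shown is hypothetical):

SELECT hostgroup_id, hostname, port, status, weight, compression, max_connections, max_replication_lag, use_ssl, max_latency_ms, comment FROM runtime_mysql_servers;
DELETE FROM mysql_servers;
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, '172.16.0.201', 3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;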
III. Network overhead
In the architecture described above, every ProxySQL node monitors every other node: a classic fully meshed, point-to-point network.
爲了減小網絡開銷,節點間並不老是交換全部的checksum 信息,而是將全部version 、全部checksum (注意:是每一個節點都有一個全局checksum,而不是全部節點共有一個全局checksum)。相結合產生的單個新的 checksum 進行交換。因此一旦這個新的checksum 發生變更,當全局checksum改變,將檢索該全局checksum對應的checksum列表,那麼獲得詳細的各個模塊的checksum。 經過該技術,200個節點的ProxySQL集羣中,若是每一個節點的監控時間間隔爲1000ms,每一個節點的進/出流量只需50KB的帶寬
1) Features ProxySQL currently implements
- support for MySQL Group Replication
- support for the Scheduler
2) Features ProxySQL may implement in the future
An incomplete list of Cluster features that may come later. None of these are implemented yet, and the final implementation may differ from what is described here:
- support for master election: internally ProxySQL will use the keyword master rather than leader
- only the master node will be writable/configurable
- MySQL-replication-like behaviour from master to slaves: this would allow configuration to be pushed in near real time instead of being pulled as it is today
- MySQL-replication-like behaviour from master to candidate masters
- MySQL-replication-like behaviour from candidate masters to slaves
- candidate masters defined as quorum-participating nodes; slaves do not take part in the quorum
3) Q: What happens if different nodes load different configurations at the same moment? Does the last one win?
Master and master election are not implemented yet. This means a load command can potentially run on several nodes at once (as if there were multiple masters); each instance detects configuration conflicts based on timestamps and then triggers automatic re-configuration. If all nodes load the same configuration at the same moment, nothing special happens. If nodes load different configurations at different moments, the last one wins. If nodes load different configurations at the same moment, the different configurations propagate normally until a conflict surfaces, and then a rollback occurs. Fortunately, every ProxySQL node knows the checksums of every other node, so divergent configurations are easy to monitor and detect.
4) Q: Who is responsible for writing the configuration to all nodes?
Currently, ProxySQL Cluster uses a pull mechanism: when a node detects that it needs to reconfigure itself, it pulls the configuration from the node with the most up-to-date configuration and applies it locally.
5) Q: How will election be implemented? With Raft?
Election is on the roadmap, but it will probably not use the Raft consensus protocol. ProxySQL already uses tables to store configuration, and the MySQL protocol to run peer health checks, to query configuration, and to exchange heartbeats; for ProxySQL, the MySQL protocol itself is simply more versatile than Raft.
6) Q: What happens if, for some reason, a node cannot fetch the new configuration from a remote node?
Configuration changes propagate asynchronously, so a ProxySQL node may temporarily be unable to fetch a new configuration, for example because of a network problem. As soon as the instance detects the newer configuration again, it automatically fetches it.
7) Q: Do ProxySQL clusters work across data centers? What is the best practice, one cluster per DC?
A ProxySQL cluster has no boundaries: a single cluster may span multiple DCs, and one DC may host several clusters; it depends on the use case. The only restriction is that each ProxySQL instance belongs to exactly one cluster. Clusters have no names, so to make sure an instance does not join the wrong cluster, give each cluster its own cluster credentials.
8) Q: How do you bootstrap a ProxySQL cluster?
Easy: just have more than one node in the proxysql_servers table.
9) Q: How do the other nodes in the cluster learn about a new node?
They cannot learn about it automatically; this is deliberate, to prevent a new node from wrecking the cluster. A new node pulls the cluster configuration as soon as it joins, but it does not advertise itself as a trusted configuration source. To make the existing nodes aware of the new one, insert it into their proxysql_servers tables and run LOAD PROXYSQL SERVERS TO RUNTIME on each of them.
2. ProxySQL Cluster + MGR high-availability deployment walkthrough
This section deploys a two-node ProxySQL Cluster in front of an MGR setup (the procedure is the same for a GTID-replication setup). Combined with keepalived and a floating VIP, this yields an HA scheme in which a ProxySQL node failure is transparent to applications.
I. Environment preparation
172.16.60.211    MGR-node1 (master1)
172.16.60.212    MGR-node2 (master2)
172.16.60.213    MGR-node3 (master3)
172.16.60.214    ProxySQL-node1
172.16.60.220    ProxySQL-node2

[root@MGR-node1 ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

To keep the experiment simple, disable the firewall on all nodes:
[root@MGR-node1 ~]# systemctl stop firewalld
[root@MGR-node1 ~]# firewall-cmd --state
not running

[root@MGR-node1 ~]# cat /etc/sysconfig/selinux |grep "SELINUX=disabled"
SELINUX=disabled
[root@MGR-node1 ~]# setenforce 0
setenforce: SELinux is disabled
[root@MGR-node1 ~]# getenforce
Disabled

One critical point: set the hostname on every MySQL node, and make sure every group member is reachable by hostname. That means binding the hostnames in /etc/hosts on every node; otherwise adding nodes to the group will later fail with members stuck in RECOVERING!
[root@MGR-node1 ~]# cat /etc/hosts
........
172.16.60.211    MGR-node1
172.16.60.212    MGR-node2
172.16.60.213    MGR-node3
II. Install MySQL 5.7 on the three MGR nodes
Install MySQL 5.7 with yum on the three MySQL nodes; see: https://www.cnblogs.com/kevingrace/p/8340690.html

Install the MySQL yum repository:
[root@MGR-node1 ~]# yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-8.noarch.rpm

Install MySQL 5.7:
[root@MGR-node1 ~]# yum install -y mysql-community-server

Start the MySQL server and enable it at boot:
[root@MGR-node1 ~]# systemctl start mysqld.service
[root@MGR-node1 ~]# systemctl enable mysqld.service

Set the login password. Since 5.7, MySQL no longer allows a first login with an empty password; for better security, a random password is generated for the administrator's first login and recorded in /var/log/mysqld.log. It can be read with:
[root@MGR-node1 ~]# cat /var/log/mysqld.log|grep 'A temporary password'
2019-01-11T05:53:17.824073Z 1 [Note] A temporary password is generated for root@localhost: TaN.k:*Qw2xs

Log in with the password TaN.k:*Qw2xs shown above and reset it to 123456:
[root@MGR-node1 ~]# mysql -p        # enter the default password: TaN.k:*Qw2xs
.............
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
mysql> set global validate_password_length=1;
Query OK, 0 rows affected (0.00 sec)
mysql> set password=password("123456");
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Check the MySQL version:
[root@MGR-node1 ~]# mysql -p123456
........
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.24    |
+-----------+
1 row in set (0.00 sec)

=====================================================================
Tip: after the default installation above, MySQL 5.7 may reject statements with:
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
This error is governed by the MySQL password policy variable validate_password_policy, which can take the values 0, 1, or 2.
Workaround:
set global validate_password_policy=0;
set global validate_password_length=1;
III. Deploy the MGR group replication environment (multi-primary mode)
See also: https://www.cnblogs.com/kevingrace/p/10260685.html

Because other tests were run earlier, wipe the MySQL environment on all three nodes first:
# systemctl stop mysqld
# rm -rf /var/lib/mysql
# systemctl start mysqld

Then reset the password:
# cat /var/log/mysqld.log|grep 'A temporary password'
# mysql -p123456
mysql> set global validate_password_policy=0;
mysql> set global validate_password_length=1;
mysql> set password=password("123456");
mysql> flush privileges;

=======================================================
1) On MGR-node1
[root@MGR-node1 ~]# mysql -p123456
.........
mysql> select uuid();
+--------------------------------------+
| uuid()                               |
+--------------------------------------+
| ae09faae-34bb-11e9-9f91-005056ac6820 |
+--------------------------------------+
1 row in set (0.00 sec)

[root@MGR-node1 ~]# cp /etc/my.cnf /etc/my.cnf.bak
[root@MGR-node1 ~]# >/etc/my.cnf
[root@MGR-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
symbolic-links = 0
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid

#GTID:
server_id = 1
gtid_mode = on
enforce_gtid_consistency = on
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE

#binlog
log_bin = mysql-bin
log-slave-updates = 1
binlog_format = row
sync-master-info = 1
sync_binlog = 1

#relay log
skip_slave_start = 1

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "172.16.60.211:24901"
loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=off
loose-group_replication_enforce_update_everywhere_checks=on
loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"

Restart the MySQL service:
[root@MGR-node1 ~]# systemctl restart mysqld

Log in to MySQL and run the setup statements:
[root@MGR-node1 ~]# mysql -p123456
............
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)
mysql> reset master;
Query OK, 0 rows affected (0.19 sec)
mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)
mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.33 sec)
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.03 sec)
mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
...............
...............
| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+
46 rows in set (0.00 sec)

mysql> SET GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (2.34 sec)
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)

Make sure group_replication_applier above reports MEMBER_STATE "ONLINE"!

Create a test database:
mysql> CREATE DATABASE kevin CHARACTER SET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.03 sec)
mysql> use kevin;
Database changed
mysql> create table if not exists haha (id int(10) PRIMARY KEY AUTO_INCREMENT,name varchar(50) NOT NULL);
Query OK, 0 rows affected (0.24 sec)
mysql> insert into kevin.haha values(1,"wangshibo"),(2,"guohuihui"),(3,"yangyang"),(4,"shikui");
Query OK, 4 rows affected (0.07 sec)
Records: 4  Duplicates: 0  Warnings: 0
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
|  3 | yangyang  |
|  4 | shikui    |
+----+-----------+
4 rows in set (0.00 sec)

=====================================================================
2) On MGR-node2
[root@MGR-node2 ~]# cp /etc/my.cnf /etc/my.cnf.bak
[root@MGR-node2 ~]# >/etc/my.cnf
[root@MGR-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
symbolic-links = 0
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid

#GTID:
server_id = 2
gtid_mode = on
enforce_gtid_consistency = on
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE

#binlog
log_bin = mysql-bin
log-slave-updates = 1
binlog_format = row
sync-master-info = 1
sync_binlog = 1

#relay log
skip_slave_start = 1

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "172.16.60.212:24901"
loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=off
loose-group_replication_enforce_update_everywhere_checks=on
loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"

Restart the MySQL service:
[root@MGR-node2 ~]# systemctl restart mysqld

Log in to MySQL and run the setup statements:
[root@MGR-node2 ~]# mysql -p123456
.........
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> reset master;
Query OK, 0 rows affected (0.17 sec)
mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)
mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.21 sec)
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.20 sec)
mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
.............
.............
| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+
46 rows in set (0.00 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (6.25 sec)
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 4281f7b7-34bb-11e9-8949-00505688047c | MGR-node2   | 3306        | ONLINE       |
| group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

A quick look shows that the data added on MGR-node1 has already been replicated over:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kevin              |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
|  3 | yangyang  |
|  4 | shikui    |
+----+-----------+
4 rows in set (0.00 sec)

=====================================================================
3) On MGR-node3
[root@MGR-node3 ~]# cp /etc/my.cnf /etc/my.cnf.bak
[root@MGR-node3 ~]# >/etc/my.cnf
[root@MGR-node3 ~]# vim /etc/my.cnf
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
symbolic-links = 0
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid

#GTID:
server_id = 3
gtid_mode = on
enforce_gtid_consistency = on
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE

#binlog
log_bin = mysql-bin
log-slave-updates = 1
binlog_format = row
sync-master-info = 1
sync_binlog = 1

#relay log
skip_slave_start = 1

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "172.16.60.213:24901"
loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=off
loose-group_replication_enforce_update_everywhere_checks=on
loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"

Restart the MySQL service:
[root@MGR-node3 ~]# systemctl restart mysqld

Log in to MySQL and run the setup statements:
[root@MGR-node3 ~]# mysql -p123456
..........
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)
mysql> reset master;
Query OK, 0 rows affected (0.10 sec)
mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)
mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.27 sec)
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.04 sec)
mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
.............
| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+
46 rows in set (0.00 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (4.54 sec)
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 4281f7b7-34bb-11e9-8949-00505688047c | MGR-node2   | 3306        | ONLINE       |
| group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   | 3306        | ONLINE       |
| group_replication_applier | 456216bd-34bb-11e9-bbd1-005056880888 | MGR-node3   | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

A quick look shows that the data added on the other nodes has been replicated over:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kevin              |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
|  3 | yangyang  |
|  4 | shikui    |
+----+-----------+
4 rows in set (0.00 sec)

=====================================================================
4) Group replication synchronization test
Run this on any one of the nodes:
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2658b203-1565-11e9-9f8b-005056880888 | MGR-node3   | 3306        | ONLINE       |
| group_replication_applier | 2c1efc46-1565-11e9-ab8e-00505688047c | MGR-node2   | 3306        | ONLINE       |
| group_replication_applier | 317e2aad-1565-11e9-9c2e-005056ac6820 | MGR-node1   | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

As shown above, a GTID-based group replication environment is now successfully deployed across MGR-node1, MGR-node2, and MGR-node3. Updating data on any one of the three nodes will now replicate the new data to the other two!
1) Update data on MGR-node1:
mysql> delete from kevin.haha where id>2;
Query OK, 2 rows affected (0.14 sec)

Then check on MGR-node2 and MGR-node3; the change has already been replicated:
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
+----+-----------+
2 rows in set (0.00 sec)

2) Update data on MGR-node2:
mysql> insert into kevin.haha values(11,"beijing"),(12,"shanghai"),(13,"anhui");
Query OK, 3 rows affected (0.06 sec)
Records: 3  Duplicates: 0  Warnings: 0

Then check on MGR-node1 and MGR-node3; the change has already been replicated:
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
| 11 | beijing   |
| 12 | shanghai  |
| 13 | anhui     |
+----+-----------+
5 rows in set (0.00 sec)

3) Update data on MGR-node3:
mysql> update kevin.haha set id=100 where name="anhui";
Query OK, 1 row affected (0.16 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> delete from kevin.haha where id=12;
Query OK, 1 row affected (0.22 sec)

Then check on MGR-node1 and MGR-node2; the change has already been replicated:
mysql> select * from kevin.haha;
+-----+-----------+
| id  | name      |
+-----+-----------+
|   1 | wangshibo |
|   2 | guohuihui |
|  11 | beijing   |
| 100 | anhui     |
+-----+-----------+
4 rows in set (0.00 sec)
IV. Install ProxySQL, configure read/write splitting, and deploy the cluster
1) Install the MySQL client on both ProxySQL nodes, so each machine can connect locally to ProxySQL's admin interface
[root@ProxySQL-node1 ~]# vim /etc/yum.repos.d/mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.3.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Install the mysql client:
[root@ProxySQL-node1 ~]# yum install -y MariaDB-client

============================================================================
If you hit this error:
Error: MariaDB-compat conflicts with 1:mariadb-libs-5.5.60-1.el7_5.x86_64
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Fix it like this:
[root@ProxySQL-node1 ~]# rpm -qa|grep mariadb
mariadb-libs-5.5.60-1.el7_5.x86_64
[root@ProxySQL-node1 ~]# rpm -e mariadb-libs-5.5.60-1.el7_5.x86_64 --nodeps
[root@ProxySQL-node1 ~]# yum install -y MariaDB-client
2) Install ProxySQL on both ProxySQL instance nodes
proxysql rpm package download: https://pan.baidu.com/s/1S1_b5DKVCpZSOUNmtCXrrg (extraction code: 5t1c)
All proxysql releases: https://github.com/sysown/proxysql/releases

[root@ProxySQL-node ~]# yum install -y perl-DBI perl-DBD-MySQL
[root@ProxySQL-node ~]# rpm -ivh proxysql-1.4.8-1-centos7.x86_64.rpm --force

Start proxysql (or start it with "/etc/init.d/proxysql start"):
[root@ProxySQL-node ~]# systemctl start proxysql
[root@ProxySQL-node ~]# systemctl restart proxysql
[root@ProxySQL-node ~]# ss -lntup|grep proxy
tcp    LISTEN   0   128   *:6080   *:*   users:(("proxysql",pid=29931,fd=11))
tcp    LISTEN   0   128   *:6032   *:*   users:(("proxysql",pid=29931,fd=28))
tcp    LISTEN   0   128   *:6033   *:*   users:(("proxysql",pid=29931,fd=27))
tcp    LISTEN   0   128   *:6033   *:*   users:(("proxysql",pid=29931,fd=26))
tcp    LISTEN   0   128   *:6033   *:*   users:(("proxysql",pid=29931,fd=25))
tcp    LISTEN   0   128   *:6033   *:*   users:(("proxysql",pid=29931,fd=24))

[root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
............
............
MySQL [(none)]> show databases;
+-----+---------------+-------------------------------------+
| seq | name          | file                                |
+-----+---------------+-------------------------------------+
| 0   | main          |                                     |
| 2   | disk          | /var/lib/proxysql/proxysql.db       |
| 3   | stats         |                                     |
| 4   | monitor       |                                     |
| 5   | stats_history | /var/lib/proxysql/proxysql_stats.db |
+-----+---------------+-------------------------------------+
5 rows in set (0.000 sec)

Next, initialize proxysql by deleting all leftover proxysql data from earlier tests:
MySQL [(none)]> delete from scheduler;
Query OK, 0 rows affected (0.000 sec)
MySQL [(none)]> delete from mysql_servers;
Query OK, 3 rows affected (0.000 sec)
MySQL [(none)]> delete from mysql_users;
Query OK, 1 row affected (0.000 sec)
MySQL [(none)]> delete from mysql_query_rules;
Query OK, 0 rows affected (0.000 sec)
MySQL [(none)]> delete from mysql_group_replication_hostgroups;
Query OK, 1 row affected (0.000 sec)

MySQL [(none)]> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
MySQL [(none)]> SAVE MYSQL VARIABLES TO DISK;
Query OK, 94 rows affected (0.175 sec)
MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.003 sec)
MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.140 sec)
MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
MySQL [(none)]> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.050 sec)
MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.096 sec)
MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.156 sec)
MySQL [(none)]>
3) Configure the ProxySQL Cluster
1) First, configure the proxysql.cnf file on both instance nodes, 172.16.60.214 and 172.16.60.220:
[root@ProxySQL-node1 ~]# cp /etc/proxysql.cnf /etc/proxysql.cnf.bak
[root@ProxySQL-node1 ~]# vim /etc/proxysql.cnf
...............
...............
# the part that needs changing
admin_variables=
{
    admin_credentials="admin:admin;cluster_kevin:123456"    # account used for communication between proxysql cluster instance nodes
#   mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
    mysql_ifaces="0.0.0.0:6032"                             # allow admin logins from anywhere
#   refresh_interval=2000
#   debug=true
    cluster_username="cluster_kevin"                        # cluster username, same as defined above
    cluster_password="123456"                               # cluster password, same as defined above
    cluster_check_interval_ms=200
    cluster_check_status_frequency=100
    cluster_mysql_query_rules_save_to_disk=true
    cluster_mysql_servers_save_to_disk=true
    cluster_mysql_users_save_to_disk=true
    cluster_proxysql_servers_save_to_disk=true
    cluster_mysql_query_rules_diffs_before_sync=3
    cluster_mysql_servers_diffs_before_sync=3
    cluster_mysql_users_diffs_before_sync=3
    cluster_proxysql_servers_diffs_before_sync=3
}
proxysql_servers =    # define the cluster members up front in this section
(
    { hostname="172.16.60.214" port=6032 weight=1 comment="ProxySQL-node1" },
    { hostname="172.16.60.220" port=6032 weight=1 comment="ProxySQL-node2" }
)
...............
...............

Copy the proxysql.cnf file over to the other instance node:
[root@ProxySQL-node1 ~]# rsync -e "ssh -p22" -avpgolr /etc/proxysql.cnf root@172.16.60.220:/etc/

2) Restart the proxysql service on the 172.16.60.214 and 172.16.60.220 instance nodes (note: do not restart the proxysql service on the 172.16.60.221 instance node for now).

Pay special attention here: if the "proxysql.db" file exists (under /var/lib/proxysql), the ProxySQL service reads and parses the proxysql.cnf file only on its very first start; later starts do not read proxysql.cnf at all! To make the settings in proxysql.cnf take effect on a restart (i.e. to force proxysql to read and parse proxysql.cnf again on restart), first delete the /var/lib/proxysql/proxysql.db database file and then restart the proxysql service. This amounts to an initial start of proxysql, and a pristine proxysql.db database file is generated again (any routing rules etc. configured earlier are wiped).

Restart the first instance node (172.16.60.214) so that it reads and parses proxysql.cnf:
[root@ProxySQL-node1 ~]# rm -rf /var/lib/proxysql/proxysql.db
[root@ProxySQL-node1 ~]# ll /var/lib/proxysql/proxysql.db
ls: cannot access /var/lib/proxysql/proxysql.db: No such file or directory
[root@ProxySQL-node1 ~]# systemctl restart proxysql
[root@ProxySQL-node1 ~]# ll /var/lib/proxysql/proxysql.db
-rw------- 1 root root 122880 Feb 25 14:42 /var/lib/proxysql/proxysql.db

Restart the second instance node (172.16.60.220) the same way:
[root@ProxySQL-node2 ~]# rm -rf /var/lib/proxysql/proxysql.db
[root@ProxySQL-node2 ~]# ll /var/lib/proxysql/proxysql.db
ls: cannot access /var/lib/proxysql/proxysql.db: No such file or directory
[root@ProxySQL-node2 ~]# systemctl restart proxysql
[root@ProxySQL-node2 ~]# ll /var/lib/proxysql/proxysql.db
-rw------- 1 root root 122880 Feb 25 14:43 /var/lib/proxysql/proxysql.db

3) Observe the cluster state (can be checked on either 172.16.60.214 or 172.16.60.220):
[root@ProxySQL-node1 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
.............
MySQL [(none)]> select * from proxysql_servers;
+---------------+------+--------+----------------+
| hostname      | port | weight | comment        |
+---------------+------+--------+----------------+
| 172.16.60.214 | 6032 | 1      | ProxySQL-node1 |
| 172.16.60.220 | 6032 | 1      | ProxySQL-node2 |
+---------------+------+--------+----------------+
2 rows in set (0.000 sec)

MySQL [(none)]> select * from stats_proxysql_servers_metrics;
+---------------+------+--------+----------------+------------------+----------+---------------+---------+------------------------------+----------------------------+
| hostname      | port | weight | comment        | response_time_ms | Uptime_s | last_check_ms | Queries | Client_Connections_connected | Client_Connections_created |
+---------------+------+--------+----------------+------------------+----------+---------------+---------+------------------------------+----------------------------+
| 172.16.60.220 | 6032 | 1      | ProxySQL-node2 | 1                | 82       | 1226          | 0       | 0                            | 0                          |
| 172.16.60.214 | 6032 | 1      | ProxySQL-node1 | 1                | 80       | 675           | 0       | 0                            | 0                          |
+---------------+------+--------+----------------+------------------+----------+---------------+---------+------------------------------+----------------------------+
2 rows in set (0.001 sec)

MySQL [(none)]> select hostname,port,comment,Uptime_s,last_check_ms from stats_proxysql_servers_metrics;
+---------------+------+----------------+----------+---------------+
| hostname      | port | comment        | Uptime_s | last_check_ms |
+---------------+------+----------------+----------+---------------+
| 172.16.60.220 | 6032 | ProxySQL-node2 | 102      | 9064          |
| 172.16.60.214 | 6032 | ProxySQL-node1 | 100      | 8526          |
+---------------+------+----------------+----------+---------------+
2 rows in set (0.000 sec)

MySQL [(none)]> select hostname,name,checksum,updated_at from stats_proxysql_servers_checksums;
+---------------+-------------------+--------------------+------------+
| hostname      | name              | checksum           | updated_at |
+---------------+-------------------+--------------------+------------+
| 172.16.60.220 | admin_variables   |                    | 1551109910 |
| 172.16.60.220 | mysql_query_rules | 0x0000000000000000 | 1551109910 |
| 172.16.60.220 | mysql_servers     | 0x0000000000000000 | 1551109910 |
| 172.16.60.220 | mysql_users       | 0x0000000000000000 | 1551109910 |
| 172.16.60.220 | mysql_variables   |                    | 1551109910 |
| 172.16.60.220 | proxysql_servers  | 0x7D769422A4719C2F | 1551109910 |
| 172.16.60.214 | admin_variables   |                    | 1551109910 |
| 172.16.60.214 | mysql_query_rules | 0x0000000000000000 | 1551109910 |
| 172.16.60.214 | mysql_servers     | 0x0000000000000000 | 1551109910 |
| 172.16.60.214 | mysql_users       | 0x0000000000000000 | 1551109910 |
| 172.16.60.214 | mysql_variables   |                    | 1551109910 |
| 172.16.60.214 | proxysql_servers  | 0x7D769422A4719C2F | 1551109910 |
+---------------+-------------------+--------------------+------------+
12 rows in set (0.001 sec)
4) On the first instance node (172.16.60.214), configure read/write splitting for MGR with transparent failover of the write node
Reference: https://www.cnblogs.com/kevingrace/p/10384691.html

1) Create the accounts proxysql needs on the database side (run on any one of the three MGR nodes; it replicates to the others):
[root@MGR-node1 ~]# mysql -p123456
.........
mysql> CREATE USER 'proxysql'@'%' IDENTIFIED BY 'proxysql';
Query OK, 0 rows affected (0.07 sec)
mysql> GRANT ALL ON * . * TO 'proxysql'@'%';
Query OK, 0 rows affected (0.06 sec)
mysql> create user 'sbuser'@'%' IDENTIFIED BY 'sbpass';
Query OK, 0 rows affected (0.05 sec)
mysql> GRANT ALL ON * . * TO 'sbuser'@'%';
Query OK, 0 rows affected (0.08 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.07 sec)

2) Create the function and view used to check MGR node state (run on any one of the three MGR nodes; it replicates to the others).
On MGR-node1, create the system view sys.gr_member_routing_candidate_status, which exposes the group-replication monitoring metrics proxysql needs.
Download the addition_to_sys.sql script and import it into MySQL on MGR-node1 (once executed there, it replicates to the other two nodes).
Download: https://pan.baidu.com/s/1bNYHtExy2fmqwvEyQS3sWg (extraction code: wst7)

Import the addition_to_sys.sql file:
[root@MGR-node1 ~]# mysql -p123456 < /root/addition_to_sys.sql
mysql: [Warning] Using a password on the command line interface can be insecure.

The view can then be queried on all three mysql nodes:
[root@MGR-node1 ~]# mysql -p123456
............
mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        | 0                   | 0                    |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.01 sec)

3) Add the accounts in proxysql:
[root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...........
MySQL [(none)]> INSERT INTO MySQL_users(username,password,default_hostgroup) VALUES ('proxysql','proxysql',1);
Query OK, 1 row affected (0.000 sec)
MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_username';
Query OK, 1 row affected (0.001 sec)
MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_password';
Query OK, 1 row affected (0.002 sec)
MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.006 sec)
MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.387 sec)

4) Configure proxysql:
[root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
.............
MySQL [(none)]> delete from mysql_servers;
Query OK, 3 rows affected (0.000 sec)
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.211',3306);
Query OK, 1 row affected (0.001 sec)
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.212',3306);
Query OK, 1 row affected (0.000 sec)
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.213',3306);
Query OK, 1 row affected (0.000 sec)
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.211',3306);
Query OK, 1 row affected (0.000 sec)
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.212',3306);
Query OK, 1 row affected (0.000 sec)
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.213',3306);
Query OK, 1 row affected (0.000 sec)

MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.001 sec)

Confirm that no proxysql read/write-splitting rules are in place (rules from an earlier test were still configured here, so delete them to keep them from interfering with the tests below):
MySQL [(none)]> delete from mysql_query_rules;
Query OK, 2 rows affected (0.000 sec)
MySQL [(none)]> commit;
Query OK, 0 rows affected (0.000 sec)

Finally, load the global_variables, mysql_servers, and mysql_users tables to RUNTIME, and further save them to DISK:
MySQL [(none)]> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
MySQL [(none)]> SAVE MYSQL VARIABLES TO DISK;
Query OK, 94 rows affected (0.080 sec)
MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.003 sec)
MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.463 sec)
MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
MySQL [(none)]> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.134 sec)

Then verify that logins through proxysql work (run the command several times; it lands on different mysql nodes):
[root@ProxySQL-node1 ~]# mysql -uproxysql -pproxysql -h 127.0.0.1 -P6033 -e"select @@hostname"
+------------+
| @@hostname |
+------------+
| MGR-node1  |
+------------+

5) Configure the scheduler.
First, download the scripts from https://github.com/ZzzCrazyPig/proxysql_groupreplication_checker
Three scripts are available there:
proxysql_groupreplication_checker.sh: for multi-primary mode; provides read/write splitting and failover; several nodes can take writes at the same time;
gr_mw_mode_cheker.sh: for multi-primary mode; provides read/write splitting and failover, but only one node takes writes at any moment;
gr_sw_mode_checker.sh: for single-primary mode; provides read/write splitting and failover.
Since this experiment uses multi-primary mode, choose the proxysql_groupreplication_checker.sh script.
All three scripts are also bundled on Baidu cloud: https://pan.baidu.com/s/1lUzr58BSA_U7wmYwsRcvzQ (extraction code: 9rm7)
Put the downloaded proxysql_groupreplication_checker.sh script into /var/lib/proxysql/ and make it executable:
[root@ProxySQL-node ~]# chmod a+x /var/lib/proxysql/proxysql_groupreplication_checker.sh
[root@ProxySQL-node ~]# ll /var/lib/proxysql/proxysql_groupreplication_checker.sh
-rwxr-xr-x 1 root root 6081 Feb 20 14:25 /var/lib/proxysql/proxysql_groupreplication_checker.sh

====================================================================
Special note: the proxysql_groupreplication_checker.sh monitoring script must be downloaded on all three nodes and placed under /var/lib/proxysql on each.

MySQL [(none)]> INSERT INTO scheduler(id,interval_ms,filename,arg1,arg2,arg3,arg4, arg5) VALUES (1,'10000','/var/lib/proxysql/proxysql_groupreplication_checker.sh','1','2','1','0','/var/lib/proxysql/proxysql_groupreplication_checker.log');
Query OK, 1 row affected (0.000 sec)

MySQL [(none)]> select * from scheduler;
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| id | active | interval_ms | filename                                               | arg1 | arg2 | arg3 | arg4 | arg5                                                    | comment |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| 1  | 1      | 10000       | /var/lib/proxysql/proxysql_groupreplication_checker.sh | 1    | 2    | 1    | 0    | /var/lib/proxysql/proxysql_groupreplication_checker.log |         |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
1 row in set (0.000 sec)

MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.118 sec)

==============================================================================
Meaning of the scheduler columns:
active:      1: enable scheduler to schedule the script we provide
interval_ms: invoke one by one in cycle (eg: 5000(ms) = 5s represent every 5s invoke the script)
filename:    represent the script file path
arg1~arg5:   represent the input parameters the script received

Parameters taken by proxysql_groupreplication_checker.sh:
arg1 is the hostgroup_id for write
arg2 is the hostgroup_id for read
arg3 is the number of writers we want active at the same time
arg4 represents if we want that the member acting for writes is also candidate for reads
arg5 is the log file

Once the schedule information is loaded, the script analyzes the current environment, and mysql_servers shows that only 172.16.60.211 accepts writes while 172.16.60.212 and 172.16.60.213 serve reads:
MySQL [(none)]> select * from mysql_servers;    // run this a little while after the steps above to get the result below
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)

Because arg4 of the schedule was set to 0, the writable node is not used for reads. Now try setting arg4 to 1:
MySQL [(none)]> update scheduler set arg4=1;
Query OK, 1 row affected (0.000 sec)

MySQL [(none)]> select * from scheduler;
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| id | active | interval_ms | filename                                               | arg1 | arg2 | arg3 | arg4 | arg5                                                    | comment |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| 1  | 1      | 10000       | /var/lib/proxysql/proxysql_groupreplication_checker.sh | 1    | 2    | 1    | 1    | /var/lib/proxysql/proxysql_groupreplication_checker.log |         |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
1 row in set (0.000 sec)

MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.286 sec)
MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)

MySQL [(none)]> select * from mysql_servers;    // again, wait a moment after the steps above before running this
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)

With arg4 set to 1, 172.16.60.211 serves reads as well as writes.

To make the tests below easier to follow, set arg4 back to 0:
MySQL [(none)]> update scheduler set arg4=0;
Query OK, 1 row affected (0.000 sec)
MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.197 sec)
MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)

MySQL [(none)]> select * from mysql_servers;    // wait a moment before running this to get the result below
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)

Each node's gr_member_routing_candidate_status view also reports whether the node is currently healthy; proxysql reads this view to decide whether a node is usable:
[root@MGR-node1 ~]# mysql -p123456
...........
mysql> select * from sys.gr_member_routing_candidate_status\G;
*************************** 1. row ***************************
    viable_candidate: YES
           read_only: NO
 transactions_behind: 0
transactions_to_cert: 0
1 row in set (0.00 sec)

ERROR: No query specified

6) Configure read/write splitting:
MySQL [(none)]> insert into mysql_query_rules (active, match_pattern, destination_hostgroup, apply) values (1,"^SELECT",2,1);
Query OK, 1 row affected (0.001 sec)
MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.264 sec)

SELECT ... FOR UPDATE must run against group 1, so add this rule as well:
MySQL [(none)]> insert into mysql_query_rules(active,match_pattern,destination_hostgroup,apply) values(1,'^SELECT.*FOR UPDATE$',1,1);
Query OK, 1 row affected (0.001 sec)

Check from the proxysql host or another client machine: SELECT statements keep landing on 172.16.60.212 and 172.16.60.213:
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node3  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node2  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node2  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+ | @@hostname | +------------+ | MGR-node3 | +------------+ 7)驗證數據的讀寫分離效果 [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname" +------------+ | @@hostname | +------------+ | MGR-node2 | +------------+ [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha" +-----+-----------+ | id | name | +-----+-----------+ | 1 | wangshibo | | 2 | guohuihui | | 11 | beijing | | 100 | anhui | +-----+-----------+ [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "delete from kevin.haha where id=1;" [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "delete from kevin.haha where id=2;" [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha" +-----+---------+ | id | name | +-----+---------+ | 11 | beijing | | 100 | anhui | +-----+---------+ [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e 'insert into kevin.haha values(21,"zhongguo"),(22,"xianggang"),(23,"taiwan");' [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha" +-----+-----------+ | id | name | +-----+-----------+ | 11 | beijing | | 21 | zhongguo | | 22 | xianggang | | 23 | taiwan | | 100 | anhui | 最後在proxysql管理端查看讀寫分離狀況 [root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032 .......... MySQL [(none)]> select hostgroup,username,digest_text,count_star from stats_mysql_query_digest; +-----------+----------+------------------------------------------------------+------------+ | hostgroup | username | digest_text | count_star | +-----------+----------+------------------------------------------------------+------------+ | 1 | proxysql | insert into kevin.haha values(?,?),(?,?),(?,?) | 1 | | 1 | proxysql | insert into kevin.haha values(?,yangyang) | 1 | | 1 | proxysql | delete from kevin.haha where id=? | 2 | | 1 | proxysql | select @@version_comment limit ? | 120 | | 1 | proxysql | KILL ? | 8 | | 1 | proxysql | select @@hostname | 11 | | 1 | proxysql | KILL QUERY ? | 10 | | 2 | proxysql | select @@hostname, sleep(?) 
| 53 | | 1 | proxysql | insert into kevin.haha values(?,yangyang),(?,shikui) | 2 | | 1 | proxysql | show databases | 1 | | 2 | proxysql | select @@hostname | 31 | | 2 | proxysql | select * from kevin.haha | 4 | | 1 | proxysql | insert into kevin.haha values(?,wawa) | 3 | +-----------+----------+------------------------------------------------------+------------+ 13 rows in set (0.002 sec) MySQL [(none)]> select * from mysql_servers; +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+ | hostgroup_id | hostname | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment | +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+ | 1 | 172.16.60.211 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | | | 1 | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1 | 0 | 1000 | 0 | 0 | 0 | | | 1 | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1 | 0 | 1000 | 0 | 0 | 0 | | | 2 | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1 | 0 | 1000 | 0 | 0 | 0 | | | 2 | 172.16.60.212 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | | | 2 | 172.16.60.213 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | | +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+ 6 rows in set (0.000 sec) 經過上面能夠看到: 寫操做都分配到了group1組內,即寫操做分配到172.16.60.211節點上。 讀操做都分配到了group2組內,即讀操做分配到172.16.60.2十二、172.16.60.213節點上。 8)設置故障應用無感應 在上面的讀寫分離規則中,我設置了172.16.60.211爲可寫節點,172.16.60.212,172.16.60.213爲只讀節點 若是此時172.16.60.211變成只讀模式的話,應用能不能直接連到其它的節點進行寫操做? 現手動將172.16.60.211變成只讀模式: [root@MGR-node1 ~]# mysql -p123456 ........ mysql> set global read_only=1; Query OK, 0 rows affected (0.00 sec) 接着觀察一下mysql_servers的狀態,自動將group1的172.16.60.212改爲了online,group2的172.16.60.211, 172.16.60.213變成online了,就表示將172.16.60.212變爲可寫節點,其它兩個節點變爲只讀節點了。 [root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032 ........ MySQL [(none)]> select * from mysql_servers; +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+ | hostgroup_id | hostname | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment | +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+ | 1 | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1 | 0 | 1000 | 0 | 0 | 0 | | | 1 | 172.16.60.212 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | | | 1 | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1 | 0 | 1000 | 0 | 0 | 0 | | | 2 | 172.16.60.211 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | | | 2 | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1 | 0 | 1000 | 0 | 0 | 0 | | | 2 | 172.16.60.213 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | | +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+ 6 rows in set (0.001 sec) 經過模擬的鏈接也能夠看到select語句都鏈接到172.16.60.211和172.16.60.213進行了。 (模擬時能夠稍微間隔一段時間,快速測試可能會鏈接同一個讀節點) [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname" mysql: [Warning] Using a password on the command line interface can be insecure. 
8) Make backend failures invisible to the application

In the read/write-split rules above, 172.16.60.211 is the writable node, while 172.16.60.212 and 172.16.60.213 are read-only nodes.
If 172.16.60.211 switches to read-only mode, can the application transparently write through one of the other nodes?

Manually put 172.16.60.211 into read-only mode:

[root@MGR-node1 ~]# mysql -p123456
........
mysql> set global read_only=1;
Query OK, 0 rows affected (0.00 sec)

Then observe mysql_servers again: 172.16.60.212 in hostgroup 1 was automatically switched to ONLINE, and 172.16.60.211 and 172.16.60.213 in hostgroup 2 became ONLINE. In other words, 172.16.60.212 is now the writable node and the other two nodes serve reads.

[root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
........
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.001 sec)

Simulated connections also show that SELECT statements now go to 172.16.60.211 and 172.16.60.213.
(Leave a short pause between attempts; firing them back-to-back may land on the same read node.)

[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node3  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node1  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node3  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node1  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node1  |
+------------+

After switching 172.16.60.211 back to writable mode, mysql_servers recovers as well:

[root@MGR-node1 ~]# mysql -p123456
........
mysql> set global read_only=0;
Query OK, 0 rows affected (0.00 sec)

Check the state of mysql_servers again:

[root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
.........
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)

Testing also shows that if group replication is stopped on 172.16.60.211 (stop group_replication) or the node goes down (the mysql service dies), the mysql_servers table correctly switches over to a new node.
Once 172.16.60.211 recovers and rejoins the group, mysql_servers marks it ONLINE again as expected.
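While running such failover drills, it is handy to watch the status transitions live from the admin interface. A convenience sketch using the same admin credentials as above:

watch -n1 'mysql -uadmin -padmin -h127.0.0.1 -P6032 -N -e "select hostgroup_id,hostname,port,status from mysql_servers order by hostgroup_id,hostname"'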
5) Observe configuration sync between the two instances (172.16.60.214 and 172.16.60.220) of the ProxySQL cluster configured above
1) Log into the proxysql admin port on the 172.16.60.220 instance. The mysql rules configured earlier on the 172.16.60.214 instance have been synced over to this node.

[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
..........
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)

2) Since the cluster mechanism does not sync the scheduler configuration, it has to be set up manually on the 172.16.60.220 node.
The proxysql_groupreplication_checker.sh script was downloaded in advance to the /var/lib/proxysql directory; its logic is sketched after this step.

[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
..........
MySQL [(none)]> INSERT INTO scheduler(id,interval_ms,filename,arg1,arg2,arg3,arg4, arg5) VALUES (1,'10000','/var/lib/proxysql/proxysql_groupreplication_checker.sh','1','2','1','0','/var/lib/proxysql/proxysql_groupreplication_checker.log');
Query OK, 1 row affected (0.000 sec)

MySQL [(none)]> select * from scheduler;
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| id | active | interval_ms | filename                                               | arg1 | arg2 | arg3 | arg4 | arg5                                                    | comment |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| 1  | 1      | 10000       | /var/lib/proxysql/proxysql_groupreplication_checker.sh | 1    | 2    | 1    | 0    | /var/lib/proxysql/proxysql_groupreplication_checker.log |         |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
1 row in set (0.000 sec)

MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)

MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.099 sec)

With this, the second instance 172.16.60.220 also fails over MySQL nodes transparently.
A simple "1+1" ProxySQL cluster is now in place: add a read/write-split rule on either of the two instances, 172.16.60.214 or 172.16.60.220, and the configuration is synced to the other instance in the cluster!
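For context, the core loop of the checker script invoked by the scheduler works roughly as follows. This is a simplified sketch, not the actual proxysql_groupreplication_checker.sh: it only illustrates the idea of probing each backend's read_only flag and flipping statuses in mysql_servers accordingly; the hostgroup arguments mirror the scheduler row above, and the backend credentials are the ones used throughout this test.

#!/bin/bash
# arg1 = writer hostgroup, arg2 = reader hostgroup (as passed in the scheduler row)
WRITER_HG=$1
READER_HG=$2
PROXYSQL="mysql -uadmin -padmin -h127.0.0.1 -P6032 -N -e"

for server in $($PROXYSQL "SELECT hostname FROM mysql_servers WHERE hostgroup_id=$WRITER_HG"); do
    # probe the backend's read_only flag (assumption: the proxysql account can log in)
    ro=$(mysql -uproxysql -pproxysql -h"$server" -N -e "SELECT @@global.read_only" 2>/dev/null)
    if [ "$ro" = "0" ]; then
        # writable node: ONLINE in the writer hostgroup, soft-offline among the readers
        $PROXYSQL "UPDATE mysql_servers SET status='ONLINE' WHERE hostgroup_id=$WRITER_HG AND hostname='$server'"
        $PROXYSQL "UPDATE mysql_servers SET status='OFFLINE_SOFT' WHERE hostgroup_id=$READER_HG AND hostname='$server'"
    else
        # read-only or unreachable: demote it in the writer hostgroup, serve reads instead
        $PROXYSQL "UPDATE mysql_servers SET status='OFFLINE_SOFT' WHERE hostgroup_id=$WRITER_HG AND hostname='$server'"
        $PROXYSQL "UPDATE mysql_servers SET status='ONLINE' WHERE hostgroup_id=$READER_HG AND hostname='$server'"
    fi
done
$PROXYSQL "LOAD MYSQL SERVERS TO RUNTIME"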
3) For example, test inserting data on the second instance, 172.16.60.220:

[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
............
............
# existing data
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.001 sec)

# insert new test rows
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.202',3306);
Query OK, 1 row affected (0.000 sec)

MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.202',3306);
Query OK, 1 row affected (0.000 sec)

# persist, and load into the runtime environment
MySQL [(none)]> save mysql servers to disk;
Query OK, 0 rows affected (0.197 sec)

MySQL [(none)]> load mysql servers to runtime;
Query OK, 0 rows affected (0.006 sec)

4) Then observe the data on the other node, 172.16.60.214:

[root@ProxySQL-node1 ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
............
............
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
8 rows in set (0.001 sec)

MySQL [(none)]> select * from runtime_mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
8 rows in set (0.003 sec)

# The test rows just inserted on the 172.16.60.220 node have been propagated to both the memory and runtime layers of the 172.16.60.214 instance.
# Note: the diff check works against the runtime layer; changes made only to memory and disk do not trigger a sync.

5) Check the ProxySQL log on the 172.16.60.214 instance

[root@ProxySQL-node1 ~]# tail -10000 /var/lib/proxysql/proxysql.log
...........
...........
# a new configuration checksum advertised by a peer is detected
2019-02-25 15:31:24 [INFO] Cluster: detected a new checksum for mysql_servers from peer 172.16.60.214:6032, version 841, epoch 1551079884, checksum 0x8C28F2C5130ACBAE . Not syncing yet ...
# the received checksum, version and epoch are compared with the local ones to decide whether to sync
2019-02-25 15:31:24 [INFO] Cluster: checksum for mysql_servers from peer 172.16.60.214:6032 matches with local checksum 0x8C28F2C5130ACBAE , we won't sync.
..........
..........
# fetch the changed configuration from the remote peer
2019-02-25 15:32:26 [INFO] Cluster: Fetching MySQL Servers from peer 172.16.60.220:6032 completed
2019-02-25 15:32:26 [INFO] Cluster: Fetching checksum for MySQL Servers from peer 172.16.60.220:6032 before proceessing
# after fetching, compute a local checksum and compare it with the checksum reported by the peer
2019-02-25 15:32:26 [INFO] Cluster: Fetching checksum for MySQL Servers from peer 172.16.60.220:6032 successful. Checksum: 0xC18FFC0511F726C9
...........
...........
# write the mysql_servers table
2019-02-25 15:32:26 [INFO] Cluster: Writing mysql_servers table
2019-02-25 15:32:26 [INFO] Cluster: Writing mysql_replication_hostgroups table
# load the configuration just received (saved to memory) into the runtime layer, then persist it to disk
2019-02-25 15:32:47 [INFO] Cluster: Loading to runtime MySQL Servers from peer 172.16.60.220:6032
2019-02-25 15:32:47 [INFO] Cluster: Saving to disk MySQL Servers from peer 172.16.60.220:6032
...........
...........
# first, the previous local configuration is dumped
2019-02-25 15:36:13 [INFO] Dumping mysql_servers
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
| hostgroup_id | hostname      | port | weight | status | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment | mem_pointer     |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
| 1            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134116992 |
| 2            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117760 |
| 2            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117632 |
| 2            | 172.16.60.211 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117504 |
| 1            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117376 |
| 1            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117248 |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
...................
...................
# then the incoming configuration received from the peer is dumped
2019-02-25 15:36:13 [INFO] Dumping mysql_servers_incoming
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | weight | status | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.202 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.202 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2019-02-25 15:36:13 [INFO] New mysql_replication_hostgroups table
2019-02-25 15:36:13 [INFO] New mysql_group_replication_hostgroups table
2019-02-25 15:36:13 [INFO] Dumping current MySQL Servers structures for hostgroup ALL
HID: 1 , address: 172.16.60.211 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 1 , address: 172.16.60.212 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 1 , address: 172.16.60.213 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 1 , address: 172.16.60.202 , port: 3306 , weight: 1 , status: OFFLINE_SOFT , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 2 , address: 172.16.60.211 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 2 , address: 172.16.60.212 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 2 , address: 172.16.60.213 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 2 , address: 172.16.60.202 , port: 3306 , weight: 1 , status: OFFLINE_SOFT , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
# finally, the updated local configuration is dumped
2019-02-25 15:36:13 [INFO] Dumping mysql_servers
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
| hostgroup_id | hostname      | port | weight | status | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment | mem_pointer     |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
| 1            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134116992 |
| 2            | 172.16.60.202 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         | 139810132824192 |
| 2            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117760 |
| 2            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117632 |
| 2            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117504 |
| 1            | 172.16.60.202 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         | 139810132824320 |
| 1            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117376 |
| 1            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117248 |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
2019-02-25 15:36:13 [INFO] Cluster: detected a new checksum for mysql_servers from peer 172.16.60.214:6032, version 1031, epoch 1551080173, checksum 0x9715B5645359B3BD . Not syncing yet ...
2019-02-25 15:36:13 [INFO] Cluster: checksum for mysql_servers from peer 172.16.60.214:6032 matches with local checksum 0x9715B5645359B3BD , we won't sync.

6) Since the rows inserted on the 172.16.60.220 instance above were only test data, delete them here on the 172.16.60.214 instance; the deletion is synced to the 172.16.60.220 instance in exactly the same way.
(Deleting the test data on the 172.16.60.220 node instead would likewise be synced to the other node.)

[root@ProxySQL-node1 ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
..................
..................
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
8 rows in set (0.000 sec)

MySQL [(none)]> delete from mysql_servers where hostname="172.16.60.202";
Query OK, 2 rows affected (0.000 sec)

MySQL [(none)]> save mysql servers to disk;
Query OK, 0 rows affected (0.233 sec)

MySQL [(none)]> load mysql servers to runtime;
Query OK, 0 rows affected (0.004 sec)
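To confirm that both peers have converged after a change like this, the checksums each node has observed can be queried from the admin interface via the built-in stats_proxysql_servers_checksums table; the query below is simply a convenient way to read it:

SELECT hostname, port, name, version, epoch, checksum, diff_check
FROM stats_proxysql_servers_checksums
WHERE name = 'mysql_servers';

When both peers report the same checksum for mysql_servers and diff_check stays at 0, the sync has completed.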
7) Check on the 172.16.60.220 instance; the deleted rows are gone there as well:

[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...............
...............
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)

The proxysql log of the 172.16.60.220 instance can be examined in the same way:

[root@ProxySQL-node2 ~]# tail -f /var/lib/proxysql/proxysql.log
.............
.............

==========================================================================================
Note the problem visible above: on the second instance, 172.16.60.220, every "status" shows as "OFFLINE_SOFT".
Check its proxysql.log:

[root@ProxySQL-node2 ~]# tail -f /var/lib/proxysql/proxysql.log|grep -i error|grep -v 172.16.60.214
2019-02-26 00:27:44 MySQL_Monitor.cpp:408:monitor_connect_thread(): [ERROR] Server 172.16.60.211:3306 is returning "Access denied" for monitoring user
2019-02-26 00:27:44 MySQL_Monitor.cpp:408:monitor_connect_thread(): [ERROR] Server 172.16.60.212:3306 is returning "Access denied" for monitoring user
2019-02-26 00:27:44 MySQL_Monitor.cpp:408:monitor_connect_thread(): [ERROR] Server 172.16.60.213:3306 is returning "Access denied" for monitoring user

[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...............
...............
MySQL [(none)]> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 10;
+---------------+------+------------------+-------------------------+-------------------------------------------------------------------------+
| hostname      | port | time_start_us    | connect_success_time_us | connect_error                                                           |
+---------------+------+------------------+-------------------------+-------------------------------------------------------------------------+
| 172.16.60.213 | 3306 | 1551112664169293 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES)  |
| 172.16.60.212 | 3306 | 1551112664161534 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES)  |
| 172.16.60.211 | 3306 | 1551112664153844 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES)  |
| 172.16.60.213 | 3306 | 1551112604169034 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES)  |
| 172.16.60.212 | 3306 | 1551112604161305 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES)  |
| 172.16.60.211 | 3306 | 1551112604153591 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES)  |
| 172.16.60.213 | 3306 | 1551112544169298 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES)  |
| 172.16.60.212 | 3306 | 1551112544161558 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES)  |
+---------------+------+------------------+-------------------------+-------------------------------------------------------------------------+
10 rows in set (0.000 sec)

The error log shows a permission problem: proxysql's monitoring user cannot even log in to the backends. Diagnose as follows:

MySQL [(none)]> select * from global_variables;
.........
.........
| mysql-monitor_username | monitor |
| mysql-monitor_password | monitor |
..........
..........

MySQL [(none)]> select * from MySQL_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| proxysql | *BF27B4C7AAD278126E228AA8427806E870F64F39 | 1      | 0       | 1                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
| proxysql | *BF27B4C7AAD278126E228AA8427806E870F64F39 | 1      | 0       | 1                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
2 rows in set (0.000 sec)

So although the proxysql account in mysql_users was synced to the second instance, 172.16.60.220 never switched its monitor credentials to it: mysql-monitor_username/mysql-monitor_password are global variables, which this cluster mechanism does not sync, so the node was still probing the backends with the default monitor/monitor account, which they reject.
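If instead the backends simply lacked an account for the monitor module, the fix would be on the MySQL side. A sketch, assuming the monitor account is proxysql/proxysql as in this setup: the connect/ping checks only need a valid login, while REPLICATION CLIENT additionally covers the replication-lag probe.

CREATE USER 'proxysql'@'172.16.60.%' IDENTIFIED BY 'proxysql';
GRANT REPLICATION CLIENT ON *.* TO 'proxysql'@'172.16.60.%';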
The fix on the 172.16.60.220 node:

[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...............
MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_username';
Query OK, 1 row affected (0.002 sec)

MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_password';
Query OK, 1 row affected (0.002 sec)

MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.006 sec)

MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.418 sec)

MySQL [(none)]> select * from global_variables;
..........
..........
| mysql-monitor_username | proxysql |
| mysql-monitor_password | proxysql |
..........
..........

Check the status on the 172.16.60.220 instance again; the entries now show "ONLINE":

[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...............
...............
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)
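One caveat about the transcript above: it reloads the servers table, but changes to mysql-* entries in global_variables are normally activated and persisted with the variable variants of those statements. If the new monitor credentials do not take effect after the steps above, run these as well:

LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;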
This completes a simple two-node ProxySQL Cluster with automatic configuration sync between the nodes. The final step is to combine it with Keepalived and use VIP failover to make the ProxySQL pair itself transparently highly available: expose a single VIP to applications, and configure a health-check script for the proxysql service in keepalived.conf, so that when a node goes down or the proxysql service dies, the VIP floats to the healthy node and the proxy layer keeps serving without the application noticing.
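A minimal keepalived.conf sketch for such a setup. The VIP 172.16.60.222, the interface name and the priorities here are hypothetical values for illustration; the track script only checks that a proxysql process is alive.

vrrp_script chk_proxysql {
    script "/usr/bin/killall -0 proxysql"   # exits non-zero when no proxysql process exists
    interval 2
    weight -20
}

vrrp_instance VI_PROXYSQL {
    state BACKUP                # both nodes BACKUP + nopreempt avoids flapping when a node recovers
    nopreempt
    interface eth0              # assumption: adjust to the actual NIC
    virtual_router_id 61
    priority 100                # use a lower value, e.g. 90, on the peer node
    advert_int 1
    virtual_ipaddress {
        172.16.60.222/24        # hypothetical VIP exposed to applications
    }
    track_script {
        chk_proxysql
    }
}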