MHA (Master High Availability) was developed by Yoshinori Matsunobu at the Japanese company DeNA (he later moved to Facebook). MHA can complete a database failover automatically within 10–30 seconds, and during the failover it preserves data consistency to the greatest extent possible.
The software consists of two parts: MHA Manager (the management node) and MHA Node (the data-node component).
Semi-synchronous replication greatly reduces the risk of data loss, and MHA can be combined with it.
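This walkthrough never shows the semi-sync setup itself; as a minimal sketch (standard MySQL 5.6 plugin names, run once per node, assuming the local root client can log in), enabling it looks roughly like this:

# Hedged sketch: enable semi-synchronous replication on an existing master/slave pair.
mysql -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
          SET GLOBAL rpl_semi_sync_master_enabled = 1;"                 # run on the master
mysql -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
          SET GLOBAL rpl_semi_sync_slave_enabled = 1;
          STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;"                 # run on each slave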
An MHA high-availability cluster requires at least three database servers in one replication group, one master and two slaves, and does not support multiple MySQL instances on a single host. To cut machine cost, Taobao built its own variant on top of MHA; Taobao's TMHA supports a one-master, one-slave topology.
Note: every node must be a standalone database server; running multiple instances on one machine is not supported.
The Manager toolkit provides the tools run on the management node.
The Node toolkit (whose tools are normally triggered by MHA Manager's scripts and require no manual operation) provides the helpers run on each data node. The commonly used tools in both packages are listed below.
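For reference, these are the binaries shipped with mha4mysql-manager and mha4mysql-node 0.58, as documented upstream:

Manager tools:
masterha_check_ssh        # check SSH connectivity between all nodes
masterha_check_repl       # check MySQL replication health
masterha_manager          # start the monitoring / automatic-failover daemon
masterha_check_status     # query the current state of a running manager
masterha_master_monitor   # check whether the master is down
masterha_master_switch    # trigger a failover or an online master switch
masterha_conf_host        # add or remove a host entry in the configuration file
masterha_stop             # stop the manager

Node tools:
save_binary_logs          # save and copy the (dead) master's binary logs
apply_diff_relay_logs     # work out relay-log differences between slaves and apply them
purge_relay_logs          # purge relay logs without blocking the SQL thread
filter_mysqlbinlog        # strip unneeded ROLLBACK events (deprecated in recent versions)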
1. The Manager process monitors every known Node (all nodes: 1 master and 2 slaves).
2. When the master goes down unexpectedly:
2.1 The mysqld instance fails but the host is still reachable over SSH
1. The master failure is detected and a new master is chosen (its slave role is removed with reset slave). Selection criterion: the slave with the most up-to-date data becomes the new master (judged from show slave status\G).
2. Using MHA's own scripts, the slaves immediately save the missing portion of the binlog.
3. The second slave re-establishes replication with the new master and service continues.
4. If a VIP mechanism is configured, the VIP floats from the old master to the new master, so the application notices nothing.
2.2 The master server itself is down (SSH can no longer connect)
1. The master failure is detected; an SSH connection is attempted and fails.
2. The slave with the most up-to-date data is chosen as the new master (its slave role is removed with reset slave); the decision is based on show slave status\G (see the position-comparison sketch after this list).
3. The relay-log differences between the slaves are calculated and applied to the second slave.
4. The second slave re-establishes replication with the new master and service continues.
5. If a VIP mechanism is configured, the VIP floats from the old master to the new master, so the application notices nothing.
6. If a binlog server mechanism is in place, the transactions missing from the new master are replayed onto it from the binlog server's copy of the binlog.
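The selection in step 2 boils down to comparing how far each surviving slave has gotten. A minimal sketch of doing the same comparison by hand, assuming GTID replication and that the mha monitoring account created further down can log in remotely:

# Compare the replication positions of the candidate slaves, the way MHA does
# before promoting one of them. Purely illustrative.
for host in 172.16.1.52 172.16.1.53; do
    echo "== $host =="
    mysql -umha -pmha -h"$host" -e 'show slave status\G' \
        | grep -E 'Master_Log_File|Read_Master_Log_Pos|Exec_Master_Log_Pos|Retrieved_Gtid_Set|Executed_Gtid_Set'
done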
relay_log_purge=0    # keep relay logs on every MySQL node; set this on all nodes
Every node also needs host-name resolution for the others:
172.16.1.51 db01
172.16.1.52 db02
172.16.1.53 db03
MHA download: https://github.com/yoshinorim/mha4mysql-manager/releases
Install the mha4mysql-node package on every node:
yum -y install mha4mysql-node-0.58-0.el7.centos.noarch.rpm
grant all privileges on *.* to mha@'172.16.1.%' identified by 'mha';
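The configuration below also references a replication account (repl_user=repl, repl_password=123). If that account was not already created when replication was first set up, a grant along these lines would be needed on the master (credentials taken from the config, shown here only for completeness):

grant replication slave on *.* to repl@'172.16.1.%' identified by '123';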
ln -s /application/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
ln -s /application/mysql/bin/mysql /usr/bin/mysql
yum -y install mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
mkdir -p /etc/mha
mkdir -p /var/log/mha/app1        # one work/log directory per application; MHA can manage several replication groups
vim /etc/mha/app1.cnf
[server default]
manager_log=/var/log/mha/app1/manager
manager_workdir=/var/log/mha/app1
master_binlog_dir=/data/binlog
user=mha
password=mha
ping_interval=2
repl_password=123
repl_user=repl
ssh_user=root
[server1]
hostname=172.16.1.51
port=3306
[server2]
hostname=172.16.1.52
port=3306
[server3]
hostname=172.16.1.53
port=3306
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa >/dev/null 2>&1
ssh-copy-id -i /root/.ssh/id_dsa.pub root@172.16.1.51
ssh-copy-id -i /root/.ssh/id_dsa.pub root@172.16.1.52
ssh-copy-id -i /root/.ssh/id_dsa.pub root@172.16.1.53
[root@db03 tools]# masterha_check_ssh --conf=/etc/mha/app1.cnf
Tue Apr 30 20:04:52 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Apr 30 20:04:52 2019 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Tue Apr 30 20:04:52 2019 - [info] Reading server configuration from /etc/mha/app1.cnf..
Tue Apr 30 20:04:52 2019 - [info] Starting SSH connection tests..
Tue Apr 30 20:04:53 2019 - [debug]
Tue Apr 30 20:04:52 2019 - [debug]  Connecting via SSH from root@172.16.1.51(172.16.1.51:22) to root@172.16.1.52(172.16.1.52:22)..
Tue Apr 30 20:04:52 2019 - [debug]   ok.
Tue Apr 30 20:04:52 2019 - [debug]  Connecting via SSH from root@172.16.1.51(172.16.1.51:22) to root@172.16.1.53(172.16.1.53:22)..
Tue Apr 30 20:04:53 2019 - [debug]   ok.
Tue Apr 30 20:04:53 2019 - [debug]
Tue Apr 30 20:04:52 2019 - [debug]  Connecting via SSH from root@172.16.1.52(172.16.1.52:22) to root@172.16.1.51(172.16.1.51:22)..
Tue Apr 30 20:04:53 2019 - [debug]   ok.
Tue Apr 30 20:04:53 2019 - [debug]  Connecting via SSH from root@172.16.1.52(172.16.1.52:22) to root@172.16.1.53(172.16.1.53:22)..
Tue Apr 30 20:04:53 2019 - [debug]   ok.
Tue Apr 30 20:04:53 2019 - [error][/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm, ln63]
Tue Apr 30 20:04:53 2019 - [debug]  Connecting via SSH from root@172.16.1.53(172.16.1.53:22) to root@172.16.1.51(172.16.1.51:22)..
Warning: Permanently added '172.16.1.53' (ECDSA) to the list of known hosts.
Permission denied (publickey,password).
Tue Apr 30 20:04:53 2019 - [error][/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm, ln111] SSH connection from root@172.16.1.53(172.16.1.53:22) to root@172.16.1.51(172.16.1.51:22) failed!
SSH Configuration Check Failed!
 at /usr/bin/masterha_check_ssh line 44.
[root@db03 tools]#
[root@db03 tools]# masterha_check_repl --conf=/etc/mha/app1.cnf
Tue Apr 30 20:05:38 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Apr 30 20:05:38 2019 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Tue Apr 30 20:05:38 2019 - [info] Reading server configuration from /etc/mha/app1.cnf..
Tue Apr 30 20:05:38 2019 - [info] MHA::MasterMonitor version 0.58.
Tue Apr 30 20:05:40 2019 - [info] GTID failover mode = 1
Tue Apr 30 20:05:40 2019 - [info] Dead Servers:
Tue Apr 30 20:05:40 2019 - [info] Alive Servers:
Tue Apr 30 20:05:40 2019 - [info]   172.16.1.51(172.16.1.51:3306)
Tue Apr 30 20:05:40 2019 - [info]   172.16.1.52(172.16.1.52:3306)
Tue Apr 30 20:05:40 2019 - [info]   172.16.1.53(172.16.1.53:3306)
Tue Apr 30 20:05:40 2019 - [info] Alive Slaves:
Tue Apr 30 20:05:40 2019 - [info]   172.16.1.52(172.16.1.52:3306)  Version=5.6.43-log (oldest major version between slaves) log-bin:enabled
Tue Apr 30 20:05:40 2019 - [info]     GTID ON
Tue Apr 30 20:05:40 2019 - [info]     Replicating from 172.16.1.51(172.16.1.51:3306)
Tue Apr 30 20:05:40 2019 - [info]   172.16.1.53(172.16.1.53:3306)  Version=5.6.43-log (oldest major version between slaves) log-bin:enabled
Tue Apr 30 20:05:40 2019 - [info]     GTID ON
Tue Apr 30 20:05:40 2019 - [info]     Replicating from 172.16.1.51(172.16.1.51:3306)
Tue Apr 30 20:05:40 2019 - [info] Current Alive Master: 172.16.1.51(172.16.1.51:3306)
Tue Apr 30 20:05:40 2019 - [info] Checking slave configurations..
Tue Apr 30 20:05:40 2019 - [info]  read_only=1 is not set on slave 172.16.1.52(172.16.1.52:3306).
Tue Apr 30 20:05:40 2019 - [info]  read_only=1 is not set on slave 172.16.1.53(172.16.1.53:3306).
Tue Apr 30 20:05:40 2019 - [info] Checking replication filtering settings..
Tue Apr 30 20:05:40 2019 - [info]  binlog_do_db= , binlog_ignore_db=
Tue Apr 30 20:05:40 2019 - [info]  Replication filtering check ok.
Tue Apr 30 20:05:40 2019 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Tue Apr 30 20:05:40 2019 - [info] Checking SSH publickey authentication settings on the current master..
Tue Apr 30 20:05:40 2019 - [info] HealthCheck: SSH to 172.16.1.51 is reachable.
Tue Apr 30 20:05:40 2019 - [info]
172.16.1.51(172.16.1.51:3306) (current master)
 +--172.16.1.52(172.16.1.52:3306)
 +--172.16.1.53(172.16.1.53:3306)
Tue Apr 30 20:05:40 2019 - [info] Checking replication health on 172.16.1.52..
Tue Apr 30 20:05:40 2019 - [info]  ok.
Tue Apr 30 20:05:40 2019 - [info] Checking replication health on 172.16.1.53..
Tue Apr 30 20:05:40 2019 - [info]  ok.
Tue Apr 30 20:05:40 2019 - [warning] master_ip_failover_script is not defined.
Tue Apr 30 20:05:40 2019 - [warning] shutdown_script is not defined.
Tue Apr 30 20:05:40 2019 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
[root@db03 tools]#
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
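Once the manager is running in the background, a quick way to confirm it is alive (a hedged check, using MHA's own status tool and the config above) is:

masterha_check_status --conf=/etc/mha/app1.cnf
tail -f /var/log/mha/app1/manager        # follow the monitoring log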
[root@db01 tools]# /etc/init.d/mysqld stop
Shutting down MySQL...... SUCCESS!
[root@db03 ~]# cat /var/log/mha/app1/manager
......
CHANGE MASTER TO MASTER_HOST='172.16.1.52', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';   # command for bringing the failed host back in as a slave
......
----- Failover Report -----
app1: MySQL Master failover 172.16.1.51(172.16.1.51:3306) to 172.16.1.52(172.16.1.52:3306) succeeded
Master 172.16.1.51(172.16.1.51:3306) is down!
Check MHA Manager logs at db03:/var/log/mha/app1/manager for details.
Started automated(non-interactive) failover.
Selected 172.16.1.52(172.16.1.52:3306) as a new master.
172.16.1.52(172.16.1.52:3306): OK: Applying all logs succeeded.
172.16.1.53(172.16.1.53:3306): OK: Slave started, replicating from 172.16.1.52(172.16.1.52:3306)
172.16.1.52(172.16.1.52:3306): Resetting slave info succeeded.
Master failover to 172.16.1.52(172.16.1.52:3306) completed successfully.   # the switch to 172.16.1.52 has completed successfully
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.16.1.52   # replication now points at db02
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 805
               Relay_Log_File: db03-relay-bin.000002
                Relay_Log_Pos: 408
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 805
              Relay_Log_Space: 611
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 52
                  Master_UUID: 901b2d37-6af9-11e9-9c6c-000c29481d4a
             Master_Info_File: /application/mysql/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set: fac6353b-6a35-11e9-9770-000c29c0e349:1-3
                Auto_Position: 1
1 row in set (0.00 sec)
1. On db01, rejoin db01 to the replication group:
[root@db01 tools]# /etc/init.d/mysqld start
CHANGE MASTER TO MASTER_HOST='172.16.1.52', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';
start slave;
2. Add db01 back into the MHA configuration file:
[root@db03 tools]# vim /etc/mha/app1.cnf
[server default]
......
[server1]
hostname=172.16.1.51
port=3306
......
3. Start the MHA manager again:
masterha_check_ssh --conf=/etc/mha/app1.cnf
masterha_check_repl --conf=/etc/mha/app1.cnf
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[root@db03 bin]# cat master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '10.0.0.55/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth1:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth1:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

[root@db03 bin]# dos2unix /usr/local/bin/master_ip_failover
[root@db03 bin]# chmod +x master_ip_failover
[root@db03 bin]# vim /etc/mha/app1.cnf
[server default]
......
master_ip_failover_script=/usr/local/bin/master_ip_failover
......
my $vip = '172.16.1.100/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth1:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth1:$key down";
masterha_stop --conf=/etc/mha/app1.cnf
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
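With the script now defined, masterha_check_repl can be run again as a sanity check; as far as the MHA documentation describes it, the check also exercises master_ip_failover, and the earlier warning that the script is not defined should disappear:

masterha_check_repl --conf=/etc/mha/app1.cnf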
ifconfig eth1:1 172.16.1.100/24    # on a minimal install ifconfig is missing; install it with: yum -y install net-tools
[root@db02 tools]# ip a
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:48:1d:54 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.52/24 brd 172.16.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 172.16.1.100/24 brd 172.16.1.255 scope global secondary eth1:1
       valid_lft forever preferred_lft forever
Stop the (current) master:
[root@db02 tools]# /etc/init.d/mysqld stop
Shutting down MySQL...... SUCCESS!
The VIP has now moved over to db01:
[root@db01 tools]# ip a
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c0:e3:53 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.51/24 brd 172.16.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 172.16.1.100/24 brd 172.16.1.255 scope global secondary eth1:1
       valid_lft forever preferred_lft forever
mysql> CHANGE MASTER TO MASTER_HOST='172.16.1.51', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';
Add db02 back into the MHA configuration file:
[server2]
hostname=172.16.1.52
port=3306
Then start MHA again.
In production the binlog server is usually a separate, dedicated machine; it must run MySQL 5.6 or later with GTID supported and enabled. Here we simply reuse db03.
[root@db03 bin]# vim /etc/mha/app1.cnf
[server default]
manager_log=/var/log/mha/app1/manager
manager_workdir=/var/log/mha/app1
master_binlog_dir=/data/binlog
master_ip_failover_script=/usr/local/bin/master_ip_failover
password=mha
ping_interval=2
repl_password=123
repl_user=repl
ssh_user=root
user=mha
[server1]
hostname=172.16.1.51
port=3306
[server2]
hostname=172.16.1.52
port=3306
[binlog1]
no_master=1
hostname=172.16.1.53
master_binlog_dir=/data/mysql/binlog    # create this in advance; it must not be the same as the existing binlog directory

mkdir -p /data/mysql/binlog
chown -R mysql.mysql /data/mysql/*
cd /data/mysql/binlog        # the following must be run from the directory created above
mysqlbinlog -R --host=172.16.1.51 --user=mha --password=mha --raw --stop-never mysql-bin.000001 &
masterha_stop --conf=/etc/mha/app1.cnf
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
Rotate the binlog on the master:
mysql> flush logs;
Check the binlog-server directory:
[root@db03 binlog]# ls
mysql-bin.000001  mysql-bin.000002  mysql-bin.000003  mysql-bin.000004
ping_interval=2 (set in the [server default] section): the interval at which the manager probes the master; it probes 4 times in total before deciding the node is dead.
candidate_master=1 (set under a host section): marks the host as a candidate master. With this set, the host is promoted to master after a failover even if it is not the slave with the newest events in the cluster. By default, if a slave is more than 100MB of relay logs behind the master, MHA will not choose it as the new master, because recovering it would take a long time. Setting check_repl_delay=0 makes MHA ignore replication delay when choosing the new master; this is very useful together with candidate_master=1, because it guarantees that the candidate host really does become the new master during the switch.
check_repl_delay=0 (set under a host section): prevents replication delay on a slave from blocking the switchover when the master fails.
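As a hedged illustration building on the app1.cnf used above (making db02 the preferred candidate is an assumption for the example, not something the original setup does), the parameters would be placed like this:

vim /etc/mha/app1.cnf
[server default]
ping_interval=2
......
[server2]
hostname=172.16.1.52
port=3306
candidate_master=1
check_repl_delay=0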
Atlas is a MySQL-protocol data-layer middleware project developed and maintained by the infrastructure team of the Web platform department at Qihoo 360. It is built on top of MySQL-Proxy 0.8.2, released by MySQL, with a large number of bugs fixed and many features added. The project is widely used inside 360: many MySQL workloads have been moved onto the Atlas platform, which handles several billion read/write requests per day.
Source on GitHub: https://github.com/Qihoo360/Atlas
Atlas provides: read/write splitting, slave load balancing, automatic table sharding, IP filtering, SQL statement black/white lists, smooth online/offline of DB nodes by the DBA, and automatic removal of DB nodes that have gone down.
Usage scenario: Atlas is a middleware layer that sits between the front-end application and the back-end MySQL databases. It lets application developers stop worrying about MySQL details such as read/write splitting and sharding and focus on business logic, and it makes the DBA's operational work transparent to the front-end application: DB nodes can be taken online or offline without the application noticing.
Download: https://github.com/Qihoo360/Atlas/releases
Notes:
1. Atlas can only be installed and run on 64-bit systems.
2. On CentOS 5.x install Atlas-XX.el5.x86_64.rpm; on CentOS 6.x install Atlas-XX.el6.x86_64.rpm (testing shows the el6 build also works on CentOS 7).
3. The back-end MySQL version should be higher than 5.1; MySQL 5.6 or later is recommended.
1. Install Atlas:
rpm -ivh Atlas-2.2.1.el6.x86_64.rpm
2. Edit the configuration file:
cd /usr/local/mysql-proxy/conf/
cp test.cnf test.cnf.bak
vim /usr/local/mysql-proxy/conf/test.cnf
[mysql-proxy]
admin-username = user
admin-password = pwd
proxy-backend-addresses = 172.16.1.100:3306                             # writes go to the master VIP
proxy-read-only-backend-addresses = 172.16.1.52:3306,172.16.1.53:3306   # read-only slave addresses
pwds = repl:3yb5jEku5h4=,mha:O2jBXONX098=                                # database accounts; encrypt each password with /usr/local/mysql-proxy/bin/encrypt <password>
daemon = true
keepalive = true
event-threads = 8
log-level = message
log-path = /usr/local/mysql-proxy/log
sql-log=ON
proxy-address = 0.0.0.0:33060
admin-address = 0.0.0.0:2345
charset=utf8
3. Start Atlas:
/usr/local/mysql-proxy/bin/mysql-proxyd test start
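The pwds values are encrypted strings produced by Atlas's bundled encrypt tool. Generating them looks roughly like this (the plaintext passwords shown are the ones used for the repl and mha accounts earlier; paste each command's output into pwds):

/usr/local/mysql-proxy/bin/encrypt 123     # password of the repl account
/usr/local/mysql-proxy/bin/encrypt mha     # password of the mha account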
Read test:
mysql -umha -pmha -h172.16.1.53 -P33060
mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 52    |
+---------------+-------+
1 row in set (0.01 sec)
mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 53    |
+---------------+-------+
1 row in set (0.00 sec)
Write test:
set global read_only=1;    # set both slave nodes to read-only mode
mysql -umha -pmha -h172.16.1.53 -P33060
create database db1;
Connect to the management interface:
mysql -uuser -ppwd -h127.0.0.1 -P2345
Print the help:
mysql> select * from help;
+----------------------------+---------------------------------------------------------+
| command                    | description                                             |
+----------------------------+---------------------------------------------------------+
| SELECT * FROM help         | shows this help                                         |
| SELECT * FROM backends     | lists the backends and their state                      |
| SET OFFLINE $backend_id    | offline backend server, $backend_id is backend_ndx's id |
| SET ONLINE $backend_id     | online backend server, ...                              |
| ADD MASTER $backend        | example: "add master 127.0.0.1:3306", ...               |
| ADD SLAVE $backend         | example: "add slave 127.0.0.1:3306", ...                |
| REMOVE BACKEND $backend_id | example: "remove backend 1", ...                        |
| SELECT * FROM clients      | lists the clients                                       |
| ADD CLIENT $client         | example: "add client 192.168.1.2", ...                  |
| REMOVE CLIENT $client      | example: "remove client 192.168.1.2", ...               |
| SELECT * FROM pwds         | lists the pwds                                          |
| ADD PWD $pwd               | example: "add pwd user:raw_password", ...               |
| ADD ENPWD $pwd             | example: "add enpwd user:encrypted_password", ...       |
| REMOVE PWD $pwd            | example: "remove pwd user", ...                         |
| SAVE CONFIG                | save the backends to config file                        |
| SELECT VERSION             | display the version of Atlas                            |
+----------------------------+---------------------------------------------------------+
16 rows in set (0.00 sec)
List all back-end nodes:
mysql> SELECT * FROM backends;
+-------------+-------------------+-------+------+
| backend_ndx | address           | state | type |
+-------------+-------------------+-------+------+
|           1 | 172.16.1.100:3306 | up    | rw   |
|           2 | 172.16.1.52:3306  | up    | ro   |
|           3 | 172.16.1.53:3306  | up    | ro   |
+-------------+-------------------+-------+------+
3 rows in set (0.00 sec)
Dynamically remove a node:
mysql> REMOVE BACKEND 3;
Empty set (0.00 sec)
mysql> SELECT * FROM backends;
+-------------+-------------------+-------+------+
| backend_ndx | address           | state | type |
+-------------+-------------------+-------+------+
|           1 | 172.16.1.100:3306 | up    | rw   |
|           2 | 172.16.1.52:3306  | up    | ro   |
+-------------+-------------------+-------+------+
2 rows in set (0.00 sec)
Dynamically add a node:
mysql> ADD SLAVE 172.16.1.53:3306;
Empty set (0.00 sec)
mysql> SELECT * FROM backends;
+-------------+-------------------+-------+------+
| backend_ndx | address           | state | type |
+-------------+-------------------+-------+------+
|           1 | 172.16.1.100:3306 | up    | rw   |
|           2 | 172.16.1.52:3306  | up    | ro   |
|           3 | 172.16.1.53:3306  | up    | ro   |
+-------------+-------------------+-------+------+
3 rows in set (0.00 sec)
Save the changes to the configuration file:
SAVE CONFIG;
To use Atlas's sharding (sub-table) feature, first set the tables parameter in the test.cnf configuration file.
The tables parameter format is: database_name.table_name.shard_key.number_of_subtables
For example:
if your database is named school, the table is stu, the shard key is id, and the data is split across 2 subtables, write school.stu.id.2; if there are other sharded tables, separate the entries with commas.
You must create the 2 subtables manually (stu_0 and stu_1; note that subtable numbering starts at 0). All subtables must live in the same database on the same back-end DB.
When a SELECT, DELETE, UPDATE, INSERT or REPLACE statement goes through Atlas, Atlas uses the sharding result (id % 2 = k) to locate the corresponding subtable (stu_k). For example, select * from stu where id=3; automatically returns its result from the subtable stu_1. If a statement does not carry the id column (for example select * from stu;), Atlas reports that table stu does not exist.
Atlas does not yet support automatic table creation or cross-database sharding.
The statements Atlas currently supports on sharded tables are SELECT, DELETE, UPDATE, INSERT and REPLACE.
Configuration file:
vim /usr/local/mysql-proxy/conf/test.cnf
tables = school.stu.id.5
Restart Atlas.
On the master, manually create the database and subtables defined above: school and stu_0 ... stu_4:
create database school;
use school;
create table stu_0 (id int,name varchar(20));
create table stu_1 (id int,name varchar(20));
create table stu_2 (id int,name varchar(20));
create table stu_3 (id int,name varchar(20));
create table stu_4 (id int,name varchar(20));
Test:
insert into stu values (3,'wang5');
insert into stu values (2,'li4');
insert into stu values (1,'zhang3');
insert into stu values (4,'m6');
insert into stu values (5,'zou7');
commit;
IP filtering: client-ips
This parameter implements IP filtering.
In the traditional development model the application connects to the DB directly, so the DB grants access by the IP of the machines the application is deployed on (for example, the web servers).
With a middle layer in between, it is Atlas that connects to the DB, so the DB instead grants access by the IP of the machines running Atlas. If any client were allowed to connect to Atlas, that would be a potential risk.
The client-ips parameter controls which client IPs may connect to Atlas. Entries can be exact IPs or IP prefixes (subnets), comma-separated on one line.
For example: client-ips=192.168.1.2, 192.168.2 means that the host 192.168.1.2 and any IP in 192.168.2.* may connect to Atlas, and no other IP can.
If the parameter is not set, any IP may connect to Atlas.
If client-ips is set and an LVS sits in front of Atlas, the lvs-ips parameter must also be set; otherwise lvs-ips can be left unset.
SQL statement black/white list:
Atlas blocks DELETE and UPDATE statements that have no WHERE clause, as well as the sleep() function.
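A hedged example of what the corresponding lines in test.cnf could look like (the IPs below are illustrative, chosen to fit this lab's 172.16.1.0/24 network, and are not part of the original setup):

vim /usr/local/mysql-proxy/conf/test.cnf
[mysql-proxy]
......
client-ips = 172.16.1.10, 172.16.1     # allow the app server 172.16.1.10 and any host in 172.16.1.*
#lvs-ips = 172.16.1.200                # only needed if an LVS sits in front of Atlas
......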