A master-slave database architecture solves many problems, especially for read-heavy applications:
1. All writes are executed on the master node; each slave's I/O thread streams the master's binlog (reconnecting every 60 s if the link drops).
2. User requests are spread across more nodes, relieving the pressure on any single one.
Drawbacks:
1. Slaves lag behind the master, so reads are not fully real-time.
2. High availability: the master is a single point of failure (SPOF).
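The slave-freshness drawback can be watched directly: `SHOW SLAVE STATUS\G` reports a `Seconds_Behind_Master` field. A minimal sketch that pulls that field out of the status text; the `lag_seconds` helper is my own illustration, not part of MySQL or MMM:

```shell
# lag_seconds: read `SHOW SLAVE STATUS\G` output on stdin and print
# the Seconds_Behind_Master value (prints nothing if the field is absent).
lag_seconds() {
  awk -F': *' '$1 ~ /Seconds_Behind_Master/ { print $2 }'
}

# Usage (hypothetical credentials):
#   mysql -uroot -e 'show slave status\G' | lag_seconds
```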
master-master replication
1. Two MySQL databases, db01 and db02, act as master and slave for each other.
2. From the application's point of view, only one of them is the master at any moment.
3. If the node currently acting as slave (db02) fails, the app sends all reads and writes to db01 until db02 comes back.
4. Likewise, if db01 fails while acting as slave, the app sends all reads and writes to db02 until db01 recovers.
Steps 3 and 4 are handled by MMM (the MySQL Multi-Master Replication Manager), which has three components:
mmmd_mon    -> the monitor: scripts that watch the working nodes and assign roles.
mmm_agent   -> the remote server-management agent: a set of services that make managing a server easier and monitoring a node more flexible.
mmm_control -> commands and scripts for driving mmmd_mon.
Every MySQL node runs mmm_agent, while mmmd_mon runs on another machine (either a dedicated host or one shared with the app server),
giving a 1 * mmmd_mon + n * mmm_agent deployment.
MMM relies on virtual IP technology: one NIC can carry several IP addresses at once
(so an MMM setup needs 2n+1 IPs, where n is the number of MySQL nodes, masters and slaves included).
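The 2n+1 count is easy to verify for this layout: each of the n nodes has one real IP and one reader VIP, plus a single writer VIP shared by the cluster. As a sanity check (`vip_total` is my own throwaway helper):

```shell
# Total addresses an MMM cluster needs in this layout:
# n real IPs + n reader VIPs + 1 writer VIP = 2n + 1.
vip_total() { echo $(( 2 * $1 + 1 )); }

vip_total 2   # prints 5: two real IPs, two reader VIPs, one writer VIP
```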
When a database node fails, mmmd_mon stops seeing heartbeats or service status from that node's mmm_agent. mmmd_mon then decides on a takeover and instructs the mmm_agent on a healthy node
to claim the failed node's virtual IP, so the virtual IP now points at a working machine.
MMM is therefore a good complement to MySQL master-slave replication.
Request flow: web client -> proxy (which splits reads from writes) -> the MMM layer, which routes reads and writes to the machines it has verified as alive.
Planning

Hostname  IP              Port  App    Notes
Node1     192.168.88.149  3306  mysql  database server 1
Node2     192.168.88.150  3306  mysql  database server 2
MON       192.168.88.191  3306  mysql  database management (monitor) server
PROXY     192.168.88.192  4040  proxy  database proxy / load balancer

node1 and node2 replicate bidirectionally (master-master). Only four VMs were available, so node1 and node2 each serve both reads and writes. The MMM host names map to:
db1    192.168.88.149
db2    192.168.88.150
MON    192.168.88.191
PROXY  192.168.88.192
Configuration steps
Give every machine a static IP; make sure the interface name reported by ifconfig -a matches the one in the network-scripts directory, and that the hardware (MAC) addresses agree.
1. Set up bidirectional master-master replication between node1 and node2.
2. Install MMM on node1 and node2 and configure mmm_agent.conf.
3. Install MMM on MON and configure mmm_mon.conf.
4. Install mysql-proxy on PROXY.
Pre-install preparation
1. Remove the stock MySQL libraries [mon, node1, node2, mysql-proxy]:
[root@localhost ~]# yum remove mysql-libs-5.1.71-1.el6.x86_64
2. Stop the firewall [mon, node1, node2, mysql-proxy]:
[root@localhost ~]# service iptables stop
3. Install MySQL:
[root@localhost ~]# yum install mysql*
yum install perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
Download the Perl modules MMM needs:
Algorithm-Diff-1.1902.tar.gz: http://ipkg.nslu2-linux.org/sources/Algorithm-Diff-1.1902.tar.gz
Proc-Daemon-0.03.tar.gz: ftp://ftp.auckland.ac.nz/pub/perl/CPAN/modules/by-module/Proc/Proc-Daemon-0.03.tar.gz
Every Perl tarball installs the same way:
perl Makefile.PL
make
make test
make install
Alternatively, install the modules through CPAN:
yum install cpan
cpan Proc::Daemon Log::Log4perl Algorithm::Diff DBD::mysql Net::ARP
Notes on the modules: without DBD::mysql, a slave will not follow a master switch and the primary master can fail to acquire the writer VIP, so writes become impossible. Without Net::ARP, VIPs cannot float at all and no virtual IP is ever assigned. Enable the agent program (or its service) on both masters and both slaves; if any other module is missing, the startup log will say so, and running /usr/lib/mysql-mmm/agent/configure_ip <virtual IP> during testing will also print the corresponding error.
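The repeated Makefile.PL / make / make test / make install sequence for the two tarballs above can be scripted. A sketch that assumes the tarballs sit in the current directory and skips any that are not downloaded yet (`src_dir` is my own helper name):

```shell
# Build each CPAN-style tarball with the standard four steps.
src_dir() { printf '%s\n' "${1%.tar.gz}"; }   # tarball name -> extract directory

for tarball in Algorithm-Diff-1.1902.tar.gz Proc-Daemon-0.03.tar.gz; do
  [ -f "$tarball" ] || continue               # skip tarballs not downloaded yet
  tar zxf "$tarball" &&
    ( cd "$(src_dir "$tarball")" &&
      perl Makefile.PL && make && make test && make install )
done
```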
Part 1: configure bidirectional master-master replication between node1 and node2 (mind the firewall)
1. Configure node1
my.cnf:
server-id = 1
log_bin = mysql-bin
[root@localhost ~]# /etc/rc.d/init.d/mysqld restart
Record node2's master position (node1 will replicate from it):
mysql> show master status;   # run on node2
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000006 | 106 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Create the replication account (run on both nodes):
mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';
On node1, point replication at node2 and start it:
change master to master_host='192.168.88.150',master_user='replication',master_password='slave', master_log_file='mysql-bin.000006',master_log_pos=106;
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.88.150
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000006
Read_Master_Log_Pos: 106
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000006
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
2. Configure node2
my.cnf:
server-id = 2
log_bin = mysql-bin
[root@localhost ~]# /etc/rc.d/init.d/mysqld restart
mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';
Record node1's master position (node2 will replicate from it):
mysql> show master status;   # run on node1
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 | 550 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
On node2, point replication at node1 and start it (use the File and Position that node1 just reported):
change master to master_host='192.168.88.149',master_user='replication',master_password='slave', master_log_file='mysql-bin.000003',master_log_pos=550;
mysql> start slave;
Check the result:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.88.149
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 106
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
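With both directions configured, the two Yes flags above are the thing to watch. A small helper (my own sketch, not part of MySQL) that reads `SHOW SLAVE STATUS\G` output and fails unless both replication threads are running:

```shell
# check_slave_ok: exit 0 only if both Slave_IO_Running and
# Slave_SQL_Running report Yes in the status text on stdin.
check_slave_ok() {
  awk '/Slave_IO_Running:/  { io  = $2 }
       /Slave_SQL_Running:/ { sql = $2 }
       END { exit !(io == "Yes" && sql == "Yes") }'
}

# Usage (hypothetical credentials):
#   mysql -uroot -e 'show slave status\G' | check_slave_ok && echo "replication OK"
```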
Part 2: install and deploy MMM
Target hosts:
Node1 192.168.88.149
Node2 192.168.88.150
MON 192.168.88.191
1. Install the software packages on the mon host:
wget http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
rpm -ivh epel-release-5-4.noarch.rpm
yum -y install mysql-mmm*
Part 3: set up the MMM monitor and agent services. On node1 and node2, create the two accounts; the names and passwords must match mmm_common.conf:
grant super, replication client, process on *.* to 'mmm_agent'@'%' identified by 'agent_password'; # agent account: mmm_agent uses it to put a node into read-only mode, change masters, and so on
grant replication client on *.* to 'mmm_monitor'@'%' identified by 'monitor_password'; # monitor account: mmm_monitor uses it to health-check each MySQL server
#SET PASSWORD FOR 'mmm_agent'@'%' = PASSWORD('');
flush privileges;
1. On all three servers, edit mmm_common.conf (the file is identical on each):
active_master_role writer
<host default>
cluster_interface eth1
pid_path /var/run/mmm_agentd.pid
bin_path /usr/lib/mysql-mmm/
replication_user replication
replication_password slave
agent_user mmm_agent
agent_password agent_password
</host>
<host db1>
ip 192.168.88.149
mode master
peer db2
</host>
<host db2>
ip 192.168.88.150
mode master
peer db1
</host>
<role writer>
hosts db1, db2
ips 192.168.88.101
mode exclusive
</role>
<role reader>
hosts db1, db2
ips 192.168.88.102,192.168.88.103
mode balanced
</role>
2. On the Node1 server, edit mmm_agent.conf:
include mmm_common.conf
this db1
3. On the Node2 server, edit mmm_agent.conf:
include mmm_common.conf
this db2
4. On the MON server, edit mmm_mon.conf:
include mmm_common.conf
<monitor>
ip 127.0.0.1
pid_path /var/run/mmm_mond.pid
bin_path /usr/lib/mysql-mmm/
status_path /var/lib/misc/mmm_mond.status
ping_ips 192.168.88.149,192.168.88.150
auto_set_online 10 # automatically bring a recovered host back online after 10 seconds
</monitor>
<host default>
monitor_user mmm_monitor
monitor_password monitor_password
</host>
debug 1
Also make sure the host names resolve on mon (check its hosts entries).
5. Enable the agent (it is enabled by default; shown here only for reference):
[root@MySQL-M1 mysql-mmm]# cat /etc/default/mysql-mmm-agent
# mysql-mmm-agent defaults
ENABLED=1
[root@MySQL-M2 mysql-mmm]# cat /etc/default/mysql-mmm-agent
# mysql-mmm-agent defaults
ENABLED=1
On the mon node:
[root@localhost mysql-mmm]# pwd
/etc/mysql-mmm
[root@localhost mysql-mmm]# vi mmm_mon_log.conf
#log4perl.logger = FATAL, MMMLog, MailFatal
# MailFatal is the e-mail alert module; FATAL sets the logging level
log4perl.logger = FATAL, MMMLog
log4perl.appender.MMMLog = Log::Log4perl::Appender::File
log4perl.appender.MMMLog.Threshold = INFO
log4perl.appender.MMMLog.filename = /var/log/mysql-mmm/mmm_mond.log
log4perl.appender.MMMLog.recreate = 1
log4perl.appender.MMMLog.layout = PatternLayout
log4perl.appender.MMMLog.layout.ConversionPattern = %d %5p %m%n
#log4perl.appender.MailFatal = Log::Dispatch::Email::MailSender
#log4perl.appender.MailFatal.Threshold = FATAL
#log4perl.appender.MailFatal.from = mmm@example.com
# sender address
#log4perl.appender.MailFatal.to = root,mmm@example.com
# recipients
#log4perl.appender.MailFatal.buffered = 0
# 0 = send immediately
#log4perl.appender.MailFatal.subject = FATAL error in mysql-mmm-monitor
# mail subject
#log4perl.appender.MailFatal.layout = PatternLayout
#log4perl.appender.MailFatal.layout.ConversionPattern = %d %m%n
[root@localhost mysql-mmm]# cd /root/soft/mysql-mmm-2.2.1/etc/mysql-mmm
[root@localhost mysql-mmm]# cp * /etc/mysql-mmm
Part 4: start the services
On node1 and node2:
[root@localhost mysql-mmm]# vi /etc/default/mysql-mmm-agent
ENABLED=1
[root@localhost init.d]# /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-agent start
[root@localhost Proc-Daemon-0.03]# /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-agent restart
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
(If startup complains that Class::Singleton is missing, install it first: cpan Class::Singleton.)
Start the monitor on the MON server:
/root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-monitor start
[root@localhost Proc-Daemon-0.03]# /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-monitor restart
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
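Whether the daemons actually stayed up can be checked against the pid files named in the configuration above. A sketch; the pidfile paths are the ones from mmm_agent/mmm_mon configuration, and `daemon_up` is my own helper:

```shell
# daemon_up: succeed if the pidfile exists and its process is still alive.
daemon_up() {
  pidfile=$1
  [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null
}

daemon_up /var/run/mmm_mond.pid   && echo "mmm_mond running"   || echo "mmm_mond not running"
daemon_up /var/run/mmm_agentd.pid && echo "mmm_agentd running" || echo "mmm_agentd not running"
```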
Part 5: test MMM
/root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
db1(192.168.88.149) master/HARD_OFFLINE. Roles:   # HARD_OFFLINE here usually means the host is unreachable or the accounts are wrong; verify with the mysql client
db2(192.168.88.150) master/HARD_OFFLINE. Roles:
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
db1(192.168.88.149) master/AWAITING_RECOVERY. Roles:
db2(192.168.88.150) master/AWAITING_RECOVERY. Roles:
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control mode
ACTIVE
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control set_online db1   # bring the host online
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control set_online db2   # bring the host online
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show checks all
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show   # check how the roles were assigned
db1(192.168.88.149) master/ONLINE. Roles: reader(192.168.88.103), writer(192.168.88.101)
db2(192.168.88.150) master/ONLINE. Roles: reader(192.168.88.102)
The output above shows a working cluster. The following, by contrast, shows a failure (this sample comes from a run whose VIPs were 192.168.1.7-192.168.1.9):
db1(192.168.88.149): master/ONLINE. Roles: reader(192.168.1.7;), writer(192.168.1.9;)
db2(192.168.88.150): master/REPLICATION_FAIL. Roles: None
The monitor has detected a replication failure on db2. After a moment MMM switches roles; in this sample, once db2 recovered, the writer VIP ended up on db2:
# mmm_control show
Servers status:
db1(192.168.88.149): master/ONLINE. Roles: reader(192.168.1.7;)
db2(192.168.88.150): master/ONLINE. Roles: reader(192.168.1.8;), writer(192.168.1.9;)
Telnet to any of the virtual IPs on port 3306 succeeds.
[root@localhost ~]# ps aux |grep mmm
root 12273 0.0 0.1 106064 1396 pts/1 S+ 03:29 0:00 /bin/sh /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-monitor restart
root 12276 0.0 0.1 106068 1452 pts/1 S+ 03:29 0:00 /bin/sh /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-monitor start
root 12278 0.0 1.4 161684 14964 pts/1 S+ 03:29 0:00 mmm_mond
root 12279 0.4 6.9 700808 70624 pts/1 Sl+ 03:29 0:56 mmm_mond
root 12287 0.1 0.9 150700 10064 pts/1 S+ 03:29 0:13 perl /usr/lib/mysql-mmm//monitor/checker ping_ip
root 12290 0.1 1.2 181776 12512 pts/1 S+ 03:29 0:13 perl /usr/lib/mysql-mmm//monitor/checker mysql
root 12292 0.0 0.9 150700 10060 pts/1 S+ 03:29 0:05 perl /usr/lib/mysql-mmm//monitor/checker ping
root 12294 0.1 1.2 181776 12548 pts/1 S+ 03:29 0:15 perl /usr/lib/mysql-mmm//monitor/checker rep_backlog
root 12296 0.1 1.2 181776 12552 pts/1 S+ 03:29 0:15 perl /usr/lib/mysql-mmm//monitor/checker rep_threads
root 13335 0.0 0.0 103244 864 pts/0 S+ 06:48 0:00 grep mmm
Stop MySQL on node1:
[root@localhost mysql-mmm]# service mysqld stop
Stopping mysqld: [ OK ]
db1(192.168.88.149) master/HARD_OFFLINE. Roles:   # node1 is down
db2(192.168.88.150) master/ONLINE. Roles: reader(192.168.88.102), reader(192.168.88.103), writer(192.168.88.101)
[root@localhost mysql-mmm]# service mysqld start   # restart node1
Starting mysqld: [ OK ]
[root@localhost mysql-mmm]#
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
db1(192.168.88.149) master/AWAITING_RECOVERY. Roles:
db2(192.168.88.150) master/ONLINE. Roles: reader(192.168.88.102), reader(192.168.88.103), writer(192.168.88.101)
mmm_control help
Valid commands are:
help - show this message #show this help text
ping - ping monitor #ping the monitor (checks that it is reachable)
show - show status #show status information
checks [<host>|all [<check>|all]] - show checks status #show check status (ping, mysql, rep_threads, rep_backlog)
set_online <host> - set host <host> online #set a host online
set_offline <host> - set host <host> offline #set a host offline
mode - print current mode. #print the current mode: ACTIVE, MANUAL, or PASSIVE (ACTIVE is the default)
set_active - switch into active mode. #switch to active mode
set_manual - switch into manual mode. #switch to manual mode
set_passive - switch into passive mode. #switch to passive mode
move_role [--force] <role> <host> - move exclusive role <role> to host <host> #move an exclusive role to another host, e.g. hand the writer role to the other master
(Only use --force if you know what you are doing!)
set_ip <ip> <host> - set role with ip <ip> to host <host> #bind a role IP to a host (passive mode only)
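During failover drills it helps to grab the current writer host from `mmm_control show` output programmatically. A tiny parser over the output format shown above (the `writer_host` name is my own):

```shell
# writer_host: print the host that currently holds the writer role,
# given `mmm_control show` output on stdin.
writer_host() {
  awk '/writer\(/ { sub(/\(.*/, "", $1); print $1 }'
}

# Usage:
#   mmm_control show | writer_host
```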
Part 6: why integrate mysql-proxy with MySQL MMM
1. Load balancing at the database layer
2. HA with dynamic switching between database nodes
3. Read/write splitting, taking load off the primary database
MySQL Proxy is exactly such a middle layer. Put simply, it is a connection pool that forwards front-end connection requests to the back-end databases; with Lua scripts it can implement sophisticated connection control and filtering, which is how it achieves read/write splitting and load balancing. To applications the proxy is completely transparent: they just connect to its listening port. The proxy machine can of course become a single point of failure itself, but that is easily covered by running several redundant proxies and listing all of them in the application's connection-pool configuration.
Part 7: install mysql proxy
1. Install the MySQL client:
[root@localhost soft]# tar -zxf mysql-5.1.45.tar.gz
[root@localhost soft]# cd mysql-5.1.45
[root@localhost mysql-5.1.45]# yum install ncurses
yum -y install ncurses-devel
[root@localhost mysql-5.1.45]# yum install gcc-c++ -y
[root@localhost mysql-5.1.45]# ./configure --without-server
[root@localhost mysql-5.1.45]# make && make install
2. Install Lua
wget http://www.lua.org/ftp/lua-5.1.4.tar.gz
tar zxvf lua-5.1.4.tar.gz
cd lua-5.1.4
Edit the Makefile (e.g. with vim) so that INSTALL_TOP=/usr/local/lua; this installs all of Lua's files under /usr/local/lua/.
make posix
make install
3. Install libevent
wget http://monkey.org/~provos/libevent-1.4.13-stable.tar.gz
tar zxvf libevent-1.4.13-stable.tar.gz
cd libevent-1.4.13
./configure --prefix=/usr/local/libevent
make && make install
4. Set the environment variables mysql-proxy needs; append the following to /etc/profile:
export LUA_CFLAGS="-I/usr/local/lua/include" LUA_LIBS="-L/usr/local/lua/lib -llua -ldl" LDFLAGS="-L/usr/local/libevent/lib -lm"
export CPPFLAGS="-I/usr/local/libevent/include"
export CFLAGS="-I/usr/local/libevent/include"
Then run: source /etc/profile
5. Install mysql-proxy
wget http://mysql.cdpa.nsysu.edu.tw/Downloads/MySQL-Proxy/mysql-proxy-0.6.1.tar.gz # this link now returns 404; upload the tarball manually
tar zxvf mysql-proxy-0.6.1.tar.gz
cd mysql-proxy-0.6.1
yum install glib*
./configure --prefix=/usr/local/mysql-proxy --with-mysql --with-lua
make && make install
6. Start mysql-proxy
On the mon node, check the current roles first:
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
db1(192.168.88.149) master/ONLINE. Roles: reader(192.168.88.103), writer(192.168.88.101)
db2(192.168.88.150) master/ONLINE. Roles: reader(192.168.88.102)
This setup gives read/write splitting across the two databases: the host holding the writer VIP is read-write, the other is read-only. Start the proxy against the MMM virtual IPs (the VIPs from mmm_common.conf, not the nodes' real addresses):
#/usr/local/mysql-proxy/sbin/mysql-proxy \
--proxy-address=192.168.88.192:4040 \
--proxy-read-only-backend-addresses=192.168.88.102:3306 \
--proxy-read-only-backend-addresses=192.168.88.103:3306 \
--proxy-backend-addresses=192.168.88.101:3306 \
--proxy-lua-script=/usr/local/share/mysql-proxy/rw-splitting.lua &
Note: on a normal start the terminal prints nothing. mysql-proxy listens on two ports: 4040 forwards SQL and 4041 manages the proxy itself. Additional read-only slaves can be added by appending more --proxy-read-only-backend-addresses options.
[root@localhost ~]# netstat -tlp | grep mysql-proxy
tcp 0 0 *:yo-main *:* LISTEN 23846/mysql-proxy
tcp 0 0 *:houston *:* LISTEN 23846/mysql-proxy
(yo-main and houston are the /etc/services names for ports 4040 and 4041.)
[root@localhost ~]#
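The same check can be done numerically (netstat -tln shows port numbers instead of service names). A parsing sketch over `netstat -tlnp` output; `proxy_ports` is my own helper:

```shell
# proxy_ports: list the local ports bound by mysql-proxy,
# given `netstat -tlnp` output on stdin.
proxy_ports() {
  awk '/mysql-proxy/ { n = split($4, a, ":"); print a[n] }'
}

# Usage:
#   netstat -tlnp | proxy_ports    # expect 4040 (SQL) and 4041 (admin)
```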
On the databases, create the account that clients will use through the proxy:
mysql> grant all on *.* to 'proxy1'@'%' identified by '123456';
#SET PASSWORD FOR 'proxy1'@'%' = PASSWORD('');
mysql> use tt;
Database changed
mysql> create table first_tb(id int, name varchar(30));
mysql> insert into first_tb values (7,'first');
mysql> insert into first_tb values (8,'second');
Query OK, 1 row affected (0.00 sec)
Stop the slave threads on node1:
mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)
Allow mysqld through TCP wrappers:
[root@localhost ~]# vi /etc/hosts.allow
mysqld : ALL : ALLOW
mysqld-max : ALL : ALLOW
Connect through the proxy:
mysql -uproxy1 -P4040 -h192.168.88.192
--proxy-backend-addresses=192.168.88.101:3306 : the writable backend (the writer VIP)
--proxy-read-only-backend-addresses=192.168.88.102:3306 : a read-only backend
--proxy-read-only-backend-addresses=192.168.88.103:3306 : another read-only backend
--proxy-lua-script=/usr/local/share/mysql-proxy/rw-splitting.lua : the Lua script to load; rw-splitting.lua implements read/write splitting
The full option list:
mysql-proxy --help-all
Start/stop/restart mysql proxy with:
# /etc/init.d/mysql-proxy start
# /etc/init.d/mysql-proxy stop
# /etc/init.d/mysql-proxy restart
ps -ef | grep mysql-proxy
Part 8: test results
Point the site deployed on the web server (e.g. Apache) at the proxy: set its database connection address to the proxy's IP, port 4040.
1. Write data into db1 and check that the two databases stay in sync.
2. On the mon server, run mmm_control show to watch the state.
For a quick check, connect to the proxy on port 4040 and observe where reads and writes land; I won't spell the method out in more detail.
Compilation may fail with various errors along the way; these links collect the common ones:
http://wenku.baidu.com/view/77897ad53186bceb19e8bb55.html?from=search
http://dev.mysql.com/doc/refman/5.0/en/perl-support-problems.html
http://my.oschina.net/barter/blog/89858html