Building a MySQL High-Availability Architecture with MHA

v1.0

MHA Overview

MHA's primary purpose is to automate master failover and promote a slave to master with only a short period of downtime (typically 10-30 seconds), while avoiding replication and consistency problems. It requires no additional servers, has no performance penalty, is easy to install, and needs no changes to the existing deployment.

MHA also offers scheduled online master switchover: the currently running master can be switched safely to a new master with only a few seconds of downtime (0.5-2 seconds), during which writes are blocked.

MHA provides the following capabilities and can meet the requirements of many deployments, such as high availability, data integrity, and master maintenance without interrupting service.

 

1. Prepare the test environment

Cloud1 (192.168.100.133), Cloud2 (192.168.100.216), Cloud3 (192.168.100.14)

Cloud2 and Cloud3 act as MHA nodes; Cloud1 acts as the MHA Manager.

 

2. Set up passwordless SSH trust between the three test machines

#cloud1 (MHA manager)

ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud1

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud2

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud3

#cloud2

ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud1

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud2

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud3

#cloud3

ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud1

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud2

ssh-copy-id -i ~/.ssh/id_rsa.pub cloud3

 

3. Download and install MHA. The mha4mysql-node package must be installed on every machine; the mha4mysql-manager package is only needed on the management node.

#MHA node(cloud1 cloud2 cloud3)

rpm -ivh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

yum -y install perl-DBD-MySQL ncftp

wget http://mysql-master-ha.googlecode.com/files/mha4mysql-node-0.53-0.noarch.rpm

rpm -ivh mha4mysql-node-0.53-0.noarch.rpm

 

#MHA manager(cloud1)

yum -y install perl-Config-Tiny perl-Params-Validate \
perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles

wget http://mysql-master-ha.googlecode.com/files/mha4mysql-manager-0.53-0.noarch.rpm

rpm -ivh mha4mysql-manager-0.53-0.noarch.rpm

 

4. Create the MHA configuration file

#MHA manager(cloud1)

mkdir /etc/masterha

mkdir -p /masterha/app1

vi /etc/masterha/app1.cnf

 

[server default]

user=mhauser

password=mhauser123

manager_workdir=/masterha/app1

manager_log=/masterha/app1/manager.log

remote_workdir=/masterha/app1

ssh_user=root

repl_user=rep

repl_password=rep123

ping_interval=1

 

[server1]

hostname=192.168.100.133

ssh_port=9999

master_binlog_dir=/data

no_master=1

 

[server2]

hostname=192.168.100.216

ssh_port=9999

master_binlog_dir=/data

candidate_master=1

 

[server3]

hostname=192.168.100.14

ssh_port=9999

master_binlog_dir=/data

candidate_master=1
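MHA itself is written in Perl and reads app1.cnf with Config::Tiny, but the file is plain INI: `[server default]` holds shared settings, and each `[serverN]` section describes one host, where `no_master=1` bars a host from promotion and `candidate_master=1` marks it as preferred. As an illustration only (not part of MHA), a short Python sketch showing how the sections map to hosts:

```python
# Illustrative only: parse an app1.cnf-style file to see which hosts
# are eligible for promotion. MHA's real parser is Perl's Config::Tiny.
from configparser import ConfigParser

APP1_CNF = """
[server default]
user=mhauser
ping_interval=1

[server1]
hostname=192.168.100.133
no_master=1

[server2]
hostname=192.168.100.216
candidate_master=1
"""

cfg = ConfigParser()
cfg.read_string(APP1_CNF)

defaults = dict(cfg["server default"])  # settings shared by all hosts

# hosts explicitly marked as preferred new masters
candidates = [
    cfg[s]["hostname"]
    for s in cfg.sections()
    if s != "server default" and cfg[s].get("candidate_master") == "1"
]
print(candidates)  # ['192.168.100.216']
```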

 

 

5. Verify that the SSH trust login works

#MHA manager(cloud1)

masterha_check_ssh --conf=/etc/masterha/app1.cnf

 

Tue Jan 15 15:36:38 2013 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Tue Jan 15 15:36:38 2013 - [info] Reading application default configurations from /etc/masterha/app1.cnf..

Tue Jan 15 15:36:38 2013 - [info] Reading server configurations from /etc/masterha/app1.cnf..

Tue Jan 15 15:36:38 2013 - [info] Starting SSH connection tests..

...

Tue Jan 15 15:36:39 2013 - [debug]   ok.

Tue Jan 15 15:36:39 2013 - [info] All SSH connection tests passed successfully.

Seeing "successfully." indicates the SSH test passed.

 

6. Deploy MySQL on all three machines, create the test database and table, and set up master-slave replication

# cloud3 is the master, cloud2 is slave1, cloud1 is slave2

# install MySQL via yum (mysql-server provides the mysqld service)

yum -y install mysql mysql-server

 

# configure /etc/my.cnf

[client]

port            = 3306

socket          = /var/lib/mysql/mysql.sock

[mysqld]

port            = 3306

socket          = /var/lib/mysql/mysql.sock

datadir         = /data

skip-locking

key_buffer = 16M

max_allowed_packet = 1M

table_cache = 64

sort_buffer_size = 512K

net_buffer_length = 8K

read_buffer_size = 256K

read_rnd_buffer_size = 512K

myisam_sort_buffer_size = 8M

skip-federated

log-bin=mysql-bin

server-id       = 1      # master: 1; slave1: 2; slave2: 3

[mysqldump]

quick

max_allowed_packet = 16M

[mysql]

no-auto-rehash

[isamchk]

key_buffer = 20M

sort_buffer_size = 20M

read_buffer = 2M

write_buffer = 2M

[myisamchk]

key_buffer = 20M

sort_buffer_size = 20M

read_buffer = 2M

write_buffer = 2M

[mysqlhotcopy]

interactive-timeout

 

# create the test database dbtest

create database dbtest;

# create the test table tb1

use dbtest;

 

CREATE TABLE `tb1` (

  `id` int(11) NOT NULL auto_increment,

  `name` varchar(32) default NULL,

   PRIMARY KEY  (`id`)

);

 

# create the replication account and grant privileges

GRANT REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO 'rep'@'%' IDENTIFIED BY 'rep123';

FLUSH PRIVILEGES;

# set up the master-slave replication relationship (run CHANGE MASTER TO on each slave)

reset master;

show master status;

 

change master to

master_host='192.168.100.14',

master_port=3306,

master_user='rep',

master_password='rep123',

master_log_file='mysql-bin.000001',

master_log_pos=98;

 

start slave;

show slave status\G

show full processlist;

# insert a row on the master and check that it replicates through the binlog

insert into tb1(name) values ('123');
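The `master_log_file` and `master_log_pos` values in CHANGE MASTER TO come from the `show master status;` output on the master. As a hypothetical helper (not part of this setup), a Python sketch that extracts those coordinates from the client's tabular output:

```python
# Hypothetical helper: pull the binlog file and position out of the
# ASCII-table output of `show master status;` for use in CHANGE MASTER TO.
import re

SAMPLE = """\
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |       98 |              |                  |
+------------------+----------+--------------+------------------+
"""

def master_coords(output: str):
    # match the data row: "| mysql-bin.NNNNNN |  pos |"
    m = re.search(r"\|\s*(mysql-bin\.\d+)\s*\|\s*(\d+)\s*\|", output)
    if not m:
        raise ValueError("no master status row found")
    return m.group(1), int(m.group(2))

log_file, log_pos = master_coords(SAMPLE)
print(log_file, log_pos)  # mysql-bin.000001 98
```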

 

# create the account used by MHA and grant privileges

grant all on *.* to mhauser@'cloud1' identified by 'mhauser123';

grant all on *.* to mhauser@'cloud2' identified by 'mhauser123';

grant all on *.* to mhauser@'cloud3' identified by 'mhauser123';

FLUSH PRIVILEGES;

 

7. Verify that MySQL replication is working

#MHA manager(cloud1)

masterha_check_repl --conf=/etc/masterha/app1.cnf

Tue Jan 15 16:15:22 2013 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Tue Jan 15 16:15:22 2013 - [info] Reading application default configurations from /etc/masterha/app1.cnf..

Tue Jan 15 16:15:22 2013 - [info] Reading server configurations from /etc/masterha/app1.cnf..

Tue Jan 15 16:15:22 2013 - [info] MHA::MasterMonitor version 0.53.

Tue Jan 15 16:15:22 2013 - [info] Dead Servers:

MySQL Replication Health is OK.

Seeing "is OK" indicates MySQL replication is working.

 

8. Start and verify the MHA manager

#MHA manager(cloud1)

# start the MHA manager

nohup masterha_manager --conf=/etc/masterha/app1.cnf > /tmp/mha_manager.log 2>&1 &

 

# check the MHA manager status

masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:5605) is running(0:PING_OK), master:192.168.100.14
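If a monitoring script needs to act on this status line, it can be parsed mechanically. A hypothetical parser (the field layout is assumed from the sample line above):

```python
# Hypothetical parser for the one-line masterha_check_status output,
# e.g. "app1 (pid:5605) is running(0:PING_OK), master:192.168.100.14"
import re

STATUS_LINE = "app1 (pid:5605) is running(0:PING_OK), master:192.168.100.14"

def parse_status(line: str):
    m = re.match(
        r"(\S+) \(pid:(\d+)\) is (\w+)\((\d+):(\w+)\), master:(\S+)", line
    )
    if not m:
        raise ValueError("unexpected status format")
    app, pid, state, code, health, master = m.groups()
    return {"app": app, "pid": int(pid), "state": state,
            "health": health, "master": master}

print(parse_status(STATUS_LINE)["master"])  # 192.168.100.14
```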

 

# stop the MHA manager

ps auxf|grep masterha_manager|grep -v grep|awk '{print $2}'|xargs kill

 

9. Test whether failover happens automatically when the master goes down

#MHA node(cloud3)

/etc/init.d/mysqld stop

#MHA manager(cloud1)

tail -f /masterha/app1/manager.log

----- Failover Report -----

app1: MySQL Master failover 192.168.100.14 to 192.168.100.216 succeeded

Master 192.168.100.14 is down!

Check MHA Manager logs at cloud1:/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.

The latest slave 192.168.100.133(192.168.100.133:3306) has all relay logs for recovery.

Selected 192.168.100.216 as a new master.

192.168.100.216: OK: Applying all logs succeeded.

192.168.100.133: This host has the latest relay log events.

Generating relay diff files from the latest slave succeeded.

192.168.100.133: OK: Applying all logs succeeded. Slave started, replicating from 192.168.100.216.

192.168.100.216: Resetting slave info succeeded.

Master failover to 192.168.100.216(192.168.100.216:3306) completed successfully.

Seeing "successfully." indicates the failover succeeded.
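The report shows the two configuration rules at work: 192.168.100.133 had the latest relay logs but was never promoted (it has `no_master=1`), so its diff logs were applied to 192.168.100.216, the `candidate_master=1` host. A toy Python sketch of that promotion rule (MHA's real logic is Perl and considers more factors; this only illustrates the two flags used in this setup):

```python
# Illustrative only: promotion rule reflected by the failover report above.
# no_master=1 hosts are never promoted; among the rest, prefer a
# candidate_master host, then the slave whose relay log is furthest along.
def pick_new_master(slaves):
    eligible = [s for s in slaves if not s.get("no_master")]
    eligible.sort(
        key=lambda s: (s.get("candidate_master", 0), s["relay_pos"]),
        reverse=True,
    )
    return eligible[0]["host"] if eligible else None

# the two surviving slaves from this tutorial's topology
slaves = [
    {"host": "192.168.100.133", "relay_pos": 500, "no_master": 1},
    {"host": "192.168.100.216", "relay_pos": 420, "candidate_master": 1},
]
print(pick_new_master(slaves))  # 192.168.100.216
```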

 

10. Restore the original primary database as master

#MHA node(cloud3)

# set up replication to the new master (cloud2)

change master to

master_host='192.168.100.216',

master_port=3306,

master_user='rep',

master_password='rep123',

master_log_file='mysql-bin.000002',

master_log_pos=98;

 

#MHA manager(cloud1)

ps auxf|grep masterha_manager|grep -v grep|awk '{print $2}'|xargs kill

masterha_master_switch --master_state=alive --conf=/etc/masterha/app1.cnf

It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before switching. Is it ok to execute on 192.168.100.216(192.168.100.216:3306)? (YES/no):yes

Starting master switch from 192.168.100.216(192.168.100.216:3306) to 192.168.100.14(192.168.100.14:3306)? (yes/NO): yes

Wed Jan 16 13:58:10 2013 - [info]  192.168.100.14: Resetting slave info succeeded.

Wed Jan 16 13:58:10 2013 - [info] Switching master to 192.168.100.14(192.168.100.14:3306) completed successfully.

 

# This completes the simplest possible master high-availability test.
