This article covers two topics: 1) MySQL master-slave replication and read-write splitting; 2) using RHCS to make mysql-proxy, mysql-master and LVS highly available.
Architecture diagram
Yum repositories that may be needed:
http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
System environment:
CentOS release 6.4 (Final), kernel 2.6.32-358.el6.x86_64
Address allocation
mysql-proxy-fip: 10.0.0.100
mysql-master-fip: 10.0.0.101
lvs-vip: 10.0.0.15
ha1: 10.0.0.11  ha2: 10.0.0.12  ha3: 10.0.0.13  ha4: 10.0.0.14
real1: 10.0.0.21  real2: 10.0.0.22
Background:
When the database server is very busy, MySQL read-write splitting is a good way to scale performance, for the following reasons:
1. In a MySQL dual-master setup it is hard to guarantee data consistency (mysql-mmm can do it, but it has yet to prove itself in production), and dual masters do not improve write performance anyway.
2. Using LVS to spread the load across the slaves, and pointing all read operations on mysql-proxy at the LVS cluster address, greatly improves read performance.
3. This scenario has three single points of failure: mysql-proxy, the MySQL master, and the LVS balancer in front of the slave cluster. The RHCS high-availability stack removes them:
a. DRBD mirroring removes the mysql-master single point of failure.
b. keepalived removes the LVS single point of failure.
c. RHCS's own failover mechanism switches mysql-proxy over automatically when it fails.
In this design, mysql-proxy, mysql-master and LVS each use the ha4 node in their respective failover domains, so when a service is interrupted it automatically moves to the backup host.
Deployment
Install mysql-5.6.10-linux-glibc2.5-x86_64.tar.gz
MySQL 5.6 introduced the GTID mechanism, which makes replication easier to configure, monitor and manage, and more robust. Briefly, GTIDs provide two main benefits:
1. After pulling data from the master, a slave can apply it with multiple threads, speeding up replication and reducing master-slave lag.
2. Global transaction IDs let a slave automatically identify where the last sync stopped, making replicated transactions simple to track and compare. This allows fast recovery when the master goes down, and even automatic promotion of a slave to master while keeping the remaining slaves consistent, i.e. HA built into MySQL replication itself (the HA in this article is based on RHCS rather than GTIDs).
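As a quick illustration of the second point, once gtid-mode is on, every server exposes the set of transactions it has executed, so sync positions can be compared directly between nodes (the GTID values themselves will of course differ per deployment):

mysql> SHOW MASTER STATUS;
mysql> SELECT @@GLOBAL.gtid_executed;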
Configuring DRBD
First install DRBD. Using the yum packages provided by elrepo, DRBD can be installed directly with yum.
rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
On ha2 and ha4, install the DRBD userland tools and kernel module:
yum install drbd84-utils kmod-drbd84 -y
DRBD global configuration
[root@ha4 ~]# grep '[[:space:]]*#.*$' -v /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "lust";
    }
    syncer {
        rate 1000M;
    }
}
DRBD resource configuration, using the dual-primary model; RHCS controls which node mounts the device
[root@ha4 ~]# cat /etc/drbd.d/mysql.res
resource mysql {
    net {
        protocol C;
        allow-two-primaries yes;
    }
    startup {
        become-primary-on both;
    }
    disk {
        fencing resource-and-stonith;
    }
    handlers {
        # Make sure the other node is confirmed dead after this!
        outdate-peer "/sbin/kill-other-node.sh";
    }
    on ha2 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   10.0.0.12:7789;
        meta-disk internal;
    }
    on ha4 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   10.0.0.14:7789;
        meta-disk internal;
    }
}
Copy the configuration files to ha2; they just need to be identical on both nodes.
Run the following steps on ha2 and ha4 in turn.
Initialize the resource
drbdadm create-md mysql
Start the service
service drbd start
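With the service running on both nodes, the resource will sit in a Connected/Inconsistent state; one node has to be forced to primary once so that the initial sync runs and the device can be formatted. A sketch, assuming the DRBD 8.4 syntax and the resource name mysql from the file above:

drbdadm primary --force mysql
cat /proc/drbd    # wait until the initial sync finishes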
Format the disk
mkfs.ext4 /dev/drbd0
After DRBD is set up, install the MySQL database, and when initializing MySQL point its datadir at the directory where the DRBD device will be mounted.
First mount the DRBD device
mkdir -p /mysql/data
mount /dev/drbd0 /mysql/data/
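MySQL can then be initialized with its datadir on the DRBD mount. A minimal sketch, assuming the 5.6 tarball was unpacked to /usr/local/mysql and a mysql system user exists (adjust paths to your layout):

chown -R mysql:mysql /mysql/data
/usr/local/mysql/scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/mysql/data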
Configuring MySQL replication
Install MySQL; the detailed installation steps are not listed here, but note the following two points.
Note 1:
When test-installing MySQL on ha4, the DRBD device may only be mounted on one node at a time; otherwise the filesystem will be corrupted.
Note 2:
Be sure to disable MySQL at boot and let RHCS manage starting and stopping it. The same applies to the resources configured on the other nodes, and this point will not be repeated below.
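For example, assuming the init script was registered as mysqld (adjust the name to whatever was actually installed):

chkconfig mysqld off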
master: the [mysqld] section (identical on ha2 and ha4; see the comments in the config for the parts that need to change)
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
binlog-format=ROW
log-bin=master-bin
log-slave-updates=true
gtid-mode=on
enforce-gtid-consistency=true
master-info-repository=TABLE
relay-log-info-repository=TABLE
sync-master-info=1
slave-parallel-workers=2
binlog-checksum=CRC32
master-verify-checksum=1
slave-sql-verify-checksum=1
binlog-rows-query-log_events=1
server-id=1
report-port=3306
port=3306
datadir=/mysql/data
socket=/tmp/mysql.sock
report-host=ha2    # presumably set to ha4 on the ha4 node
real1: the [mysqld] section
binlog-format=ROW
log-slave-updates=true
gtid-mode=on
enforce-gtid-consistency=true
master-info-repository=TABLE
relay-log-info-repository=TABLE
sync-master-info=1
slave-parallel-workers=2
binlog-checksum=CRC32
master-verify-checksum=1
slave-sql-verify-checksum=1
binlog-rows-query-log_events=1
server-id=11          # set to 12 on real2
report-port=3306
port=3306
log-bin=mysql-bin.log
datadir=/mysql/data
socket=/tmp/mysql.sock
report-host=slave1    # set to slave2 on real2
Create the replication user on the master
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'10.0.0.%' IDENTIFIED BY '123456';
Point the slaves at the master
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.101', MASTER_USER='repluser', MASTER_PASSWORD='123456', MASTER_AUTO_POSITION=1;
Start the replication threads
mysql> START SLAVE;
Check the replication status
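On each slave, SHOW SLAVE STATUS gives the health of replication; Slave_IO_Running and Slave_SQL_Running should both be Yes, and with GTIDs enabled the Retrieved_Gtid_Set and Executed_Gtid_Set fields show the sync position:

mysql> SHOW SLAVE STATUS\G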
Configuring mysql-proxy
First install the EPEL repository, which contains the mysql-proxy rpm
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Install mysql-proxy
yum install mysql-proxy -y
Configure mysql-proxy:
The /etc/init.d/mysql-proxy startup script
#!/bin/bash
#
# mysql-proxy   This script starts and stops the mysql-proxy daemon
#
# chkconfig: - 78 30
# processname: mysql-proxy
# description: mysql-proxy is a proxy daemon for mysql

# Source function library.
. /etc/rc.d/init.d/functions

prog="/usr/bin/mysql-proxy"

# Source networking configuration.
if [ -f /etc/sysconfig/network ]; then
    . /etc/sysconfig/network
fi

# Check that networking is up.
[ "${NETWORKING}" = "no" ] && exit 0

# Set default mysql-proxy configuration.
ADMIN_USER="admin"
ADMIN_PASSWORD="admin"
ADMIN_LUA_SCRIPT="/usr/lib64/mysql-proxy/lua/admin.lua"
PROXY_OPTIONS="--daemon"
PROXY_PID=/var/run/mysql-proxy.pid
PROXY_USER="mysql-proxy"

# Source mysql-proxy configuration.
if [ -f /etc/sysconfig/mysql-proxy ]; then
    . /etc/sysconfig/mysql-proxy
fi

RETVAL=0

start() {
    echo -n $"Starting $prog: "
    daemon $prog $PROXY_OPTIONS --pid-file=$PROXY_PID \
        --proxy-address="$PROXY_ADDRESS" --user=$PROXY_USER \
        --admin-username="$ADMIN_USER" \
        --admin-lua-script="$ADMIN_LUA_SCRIPT" \
        --admin-password="$ADMIN_PASSWORD"
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        touch /var/lock/subsys/mysql-proxy
    fi
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $PROXY_PID -d 3 $prog
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        rm -f /var/lock/subsys/mysql-proxy
        rm -f $PROXY_PID
    fi
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    condrestart|try-restart)
        if status -p $PROXY_PID $prog >&/dev/null; then
            stop
            start
        fi
        ;;
    status)
        status -p $PROXY_PID $prog
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status|condrestart|try-restart}"
        RETVAL=1
        ;;
esac

exit $RETVAL
Configuration parameters
cat /etc/sysconfig/mysql-proxy
ADMIN_USER="admin"
ADMIN_PASSWORD="admin"
ADMIN_ADDRESS=""
ADMIN_LUA_SCRIPT="/usr/lib64/mysql-proxy/lua/admin.lua"
PROXY_ADDRESS=""
PROXY_USER="mysql-proxy"
PROXY_OPTIONS="--daemon --log-level=info --log-use-syslog --plugins=proxy --plugins=admin --proxy-backend-addresses=10.0.0.101:3306 --proxy-read-only-backend-addresses=10.0.0.15:3306 --proxy-lua-script=/usr/lib64/mysql-proxy/lua/proxy/balance.lua"
Deploy mysql-proxy on ha4 in the same way.
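To sanity-check the proxy before handing it over to RHCS, connect through it from any client; a sketch assuming mysql-proxy's default proxy port 4040, using the floating IP once it is active (or the node's own address while testing) and an account that exists on the backends:

mysql -h 10.0.0.100 -P 4040 -u repluser -p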
Installing LVS
LVS load-balances read operations across the MySQL slaves. For LVS high availability, keepalived is used because it is lightweight; here, however, keepalived only serves as a convenient way to apply the ipvs rules when it starts. Only a single keepalived master is therefore configured, and its high availability comes from RHCS rather than from keepalived's own VRRP.
Install keepalived
yum install keepalived -y
The configuration file
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from root
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lust
    }
    virtual_ipaddress {
        10.0.0.15
    }
}

virtual_server 10.0.0.15 3306 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.0.0.21 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 10.0.0.22 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}
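When testing keepalived by hand (in the final setup the RHCS service group starts it), the applied ipvs rules can be inspected with ipvsadm, which may need to be installed separately:

service keepalived start
ipvsadm -L -n    # should list 10.0.0.15:3306 with both real servers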
On each mysql-slave real server, create /etc/init.d/rs with the following content:
#!/bin/bash
#
# Script to start LVS DR real server.
# description: LVS DR real server
#
. /etc/rc.d/init.d/functions

VIP=10.0.0.15
host=`/bin/hostname`

case "$1" in
    start)
        # Start LVS-DR real server on this machine.
        /sbin/ifconfig lo down
        /sbin/ifconfig lo up
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        /sbin/route add -host $VIP dev lo:0
        ;;
    stop)
        # Stop LVS-DR real server loopback device(s).
        /sbin/ifconfig lo:0 down
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        ;;
    status)
        # Status of LVS-DR real server.
        islothere=`/sbin/ifconfig lo:0 | grep $VIP`
        isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
        if [ ! "$islothere" -o ! "$isrothere" ]; then
            # Either the route or the lo:0 device was not found.
            echo "LVS-DR real server Stopped."
        else
            echo "LVS-DR real server Running."
        fi
        ;;
    *)
        # Invalid entry.
        echo "$0: Usage: $0 {start|status|stop}"
        exit 1
        ;;
esac
Make it executable
chmod +x /etc/init.d/rs
Start rs on both real servers
service rs start
All the preparation for the RHCS cluster is now complete; next, configure RHCS-based high availability.
The latest Red Hat RHCS cluster suite uses corosync for the underlying messaging, but the cman mechanism still exists: cman now runs as a corosync plugin, so service cman start still works.
Edit the /etc/hosts file on ha1-ha4
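Based on the address allocation above, each node's /etc/hosts would contain entries like:

10.0.0.11  ha1
10.0.0.12  ha2
10.0.0.13  ha3
10.0.0.14  ha4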
The ha4 host doubles as the luci management host; install luci on ha4
yum install luci -y
On each of the four hosts ha1, ha2, ha3 and ha4, run:
yum install ricci -y
Set a password for ricci on each node
echo 123456 | passwd ricci --stdin
Start luci on ha4
service luci start
Start ricci on nodes ha1-ha4
service ricci start
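It is also common to enable ricci on all nodes (and luci on ha4) at boot, so the management channel survives reboots:

chkconfig ricci on
chkconfig luci on    # ha4 only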
Open the luci management page and create the mysql cluster (log in with the system root account)
Add each node to a new cluster
Create three failover domains (note: within a failover domain, a priority of 1 is the highest; the lower the priority number, the higher the priority)
Add two IP resources, 10.0.0.100 and 10.0.0.101. The LVS IP is managed by keepalived itself; a sample is shown here.
Create the filesystem mount resource
Create the mysqld resource
Create the mysql-proxy resource
Create the LVS (keepalived) resource
Create three service groups
mysql-proxy-ha contains: IP address 10.0.0.100, the mysql-proxy service
mysql-m-ha contains: IP address 10.0.0.101, drbd-mount, mysql
mysql-lvs-ha contains: lvs-keepalived
The configuration is now complete.
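As a final check, rgmanager's command-line tools can confirm service state and exercise failover; a sketch using the service group and node names configured above:

clustat                           # show cluster members and where each service group runs
clusvcadm -r mysql-m-ha -m ha4    # relocate the mysql master group to ha4 as a failover test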