塗抹MySQL notes: building a MySQL high-availability architecture

MySQL's high-availability architecture
<>Pursuing a service architecture with higher stability
Scalability: scale out (add more nodes) and scale up (upgrade the hardware of existing nodes).
High availability
<>Slave + LVS + Keepalived for high availability: deploy a load balancer in front of the slaves.
<>Install and configure LVS, which acts as the load balancer. We install LVS on 192.168.1.9, hostname linux04.
1. Run modprobe -l | grep ipvs to check whether the ipvs module exists in the current OS.
2. Run lsmod | grep ip_vs to check whether the ip_vs module is loaded; if it is not, modprobe ip_vs loads it into the kernel:
[root@linux04 ipvsadm-1.26]# lsmod |grep ip_vs
ip_vs 115643 0
libcrc32c 1246 1 ip_vs
ipv6 321422 36 ip_vs,ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
3. Create a symlink to the kernel source tree:
ln -s /usr/src/kernels/2.6.32-573.3.1.el6.x86_64/ /usr/src/linux
4. Download the ipvsadm management tool for routine administration: http://www.linux-vs.org/software/index.html
wget http://www.linux-vs.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
tar zxvf ipvsadm-1.26.tar.gz
[root@linux04 /]# chmod -R 775 ipvsadm-1.26/
[root@linux04 /]# cd ipvsadm-1.26/
5. Compile and install:
[root@linux04 ipvsadm-1.26]# make
make -C libipvs
make[1]: Entering directory `/soft/ipvsadm-1.26/libipvs'
gcc -Wall -Wunused -Wstrict-prototypes -g -fPIC -DLIBIPVS_USE_NL -DHAVE_NET_IP_VS_H -c -o libipvs.o libipvs.c
gcc -Wall -Wunused -Wstrict-prototypes -g -fPIC -DLIBIPVS_USE_NL -DHAVE_NET_IP_VS_H -c -o ip_vs_nl_policy.o ip_vs_nl_policy.c
ar rv libipvs.a libipvs.o ip_vs_nl_policy.o
ar: creating libipvs.a
a - libipvs.o
a - ip_vs_nl_policy.o
gcc -shared -Wl,-soname,libipvs.so -o libipvs.so libipvs.o ip_vs_nl_policy.o
make[1]: Leaving directory `/soft/ipvsadm-1.26/libipvs'
gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION=\"1.26\" -DSCHEDULERS=\""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"\" -DPE_LIST=\""sip"\" -DHAVE_NET_IP_VS_H -c -o ipvsadm.o ipvsadm.c
ipvsadm.c: In function 'print_largenum':
ipvsadm.c:1383: warning: field width should have type 'int', but argument 2 has type 'size_t'
gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION=\"1.26\" -DSCHEDULERS=\""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"\" -DPE_LIST=\""sip"\" -DHAVE_NET_IP_VS_H -c -o config_stream.o config_stream.c
gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION=\"1.26\" -DSCHEDULERS=\""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"\" -DPE_LIST=\""sip"\" -DHAVE_NET_IP_VS_H -c -o dynamic_array.o dynamic_array.c
gcc -Wall -Wunused -Wstrict-prototypes -g -o ipvsadm ipvsadm.o config_stream.o dynamic_array.o libipvs/libipvs.a -lnl
ipvsadm.o: In function `parse_options':
/soft/ipvsadm-1.26/ipvsadm.c:432: undefined reference to `poptGetContext'
/soft/ipvsadm-1.26/ipvsadm.c:435: undefined reference to `poptGetNextOpt'
/soft/ipvsadm-1.26/ipvsadm.c:660: undefined reference to `poptBadOption'
/soft/ipvsadm-1.26/ipvsadm.c:502: undefined reference to `poptGetNextOpt'
/soft/ipvsadm-1.26/ipvsadm.c:667: undefined reference to `poptStrerror'
/soft/ipvsadm-1.26/ipvsadm.c:667: undefined reference to `poptBadOption'
/soft/ipvsadm-1.26/ipvsadm.c:670: undefined reference to `poptFreeContext'
/soft/ipvsadm-1.26/ipvsadm.c:677: undefined reference to `poptGetArg'
/soft/ipvsadm-1.26/ipvsadm.c:678: undefined reference to `poptGetArg'
/soft/ipvsadm-1.26/ipvsadm.c:679: undefined reference to `poptGetArg'
/soft/ipvsadm-1.26/ipvsadm.c:690: undefined reference to `poptGetArg'
/soft/ipvsadm-1.26/ipvsadm.c:693: undefined reference to `poptFreeContext'
collect2: ld returned 1 exit status
make: *** [ipvsadm] Error 1
The link errors mean popt-static is missing; download popt-static-1.13-7.el6.x86_64.rpm and install it with rpm.
Then re-extract ipvsadm-1.26.tar.gz, recompile, and the build succeeds, as sketched below.
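A compact sketch of the fix (the rpm is assumed to have been downloaded into /soft next to the tarball):

cd /soft
rpm -ivh popt-static-1.13-7.el6.x86_64.rpm       # provide the static popt library the linker wanted
rm -rf ipvsadm-1.26 && tar zxvf ipvsadm-1.26.tar.gz
cd ipvsadm-1.26 && make                          # relink against popt; make install follows below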
[root@linux04 ipvsadm-1.26]# make install
make -C libipvs
make[1]: Entering directory `/ipvsadm-1.26/libipvs'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/ipvsadm-1.26/libipvs'
if [ ! -d /sbin ]; then mkdir -p /sbin; fi
install -m 0755 ipvsadm /sbin
install -m 0755 ipvsadm-save /sbin
install -m 0755 ipvsadm-restore /sbin
[ -d /usr/man/man8 ] || mkdir -p /usr/man/man8
install -m 0644 ipvsadm.8 /usr/man/man8
install -m 0644 ipvsadm-save.8 /usr/man/man8
install -m 0644 ipvsadm-restore.8 /usr/man/man8
[ -d /etc/rc.d/init.d ] || mkdir -p /etc/rc.d/init.d
install -m 0755 ipvsadm.sh /etc/rc.d/init.d/ipvsadm
6. Configure LVS
Configure the VIP and add the real servers:
ipvsadm -A -t 192.168.1.10:3306 -s rr ---->192.168.1.10 is the VIP
ipvsadm -a -t 192.168.1.10:3306 -r 192.168.1.7:3306 -g
ipvsadm -a -t 192.168.1.10:3306 -r 192.168.1.8:3306 -g
Check the LVS virtual service configuration:
[root@linux04 ipvsadm-1.26]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.10:3306 rr
-> 192.168.1.7:3306 Route 1 0 0
-> 192.168.1.8:3306 Route 1 0 0
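Step 5 also installed ipvsadm-save and ipvsadm-restore into /sbin, so the rules just created can be persisted across reboots. A sketch (the rules file path is illustrative, not a fixed convention):

ipvsadm-save -n > /etc/sysconfig/ipvsadm.rules    # dump the current rules in numeric form
ipvsadm-restore < /etc/sysconfig/ipvsadm.rules    # reload them later, e.g. at boot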
Bind the newly created VIP to the NIC of the LVS server:
ifconfig eth0:0 192.168.1.10
Switch to the RealServer nodes and run:
[root@linux02 ~]# /sbin/ifconfig lo:10 192.168.1.10 broadcast 192.168.1.10 netmask 255.255.255.255
Verify with ifconfig that the IP is bound:
[root@linux02 ~]# ifconfig lo:10
lo:10 Link encap:Local Loopback
inet addr:192.168.1.10 Mask:255.255.255.255
UP LOOPBACK RUNNING MTU:16436 Metric:1
Suppress ARP on the loopback alias with the following kernel settings:
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
Repeat the same RealServer steps on 192.168.1.8, hostname linux03.
LVS configuration is now complete, and the application layer can reach the slave nodes through 192.168.1.10.
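The RealServer-side commands repeat on every slave, so they are worth collecting into a script. A minimal sketch, assuming the VIP 192.168.1.10 and the lo:10 alias used in this walkthrough:

#!/bin/bash
# realserver-vip.sh -- bind the LVS VIP on loopback and silence ARP for DR mode
VIP=192.168.1.10
/sbin/ifconfig lo:10 $VIP broadcast $VIP netmask 255.255.255.255 up
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce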
7. Test LVS
Run the following from a client:
[root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id | 613 |
+---------------+-------+
[root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id | 612 |
+---------------+-------+
This shows MySQL now has load-balancing capability.
<>Installing and configuring Keepalived
Shut down 192.168.1.7, hostname linux02, then test MySQL connectivity again:
[root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id | 613 |
+---------------+-------+
[root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.1.10' (111)
[root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id | 613 |
+---------------+-------+
[root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.1.10' (111)
[root@recover ~]#
The tests show there is no health checking or failover yet; this is where Keepalived comes in.
Keepalived serves three functions: floating the IP address, generating IPVS rules, and running health checks.
1. Download Keepalived from www.keepalived.org and install it on the LVS director, i.e. 192.168.1.9, hostname linux04.
As root:
tar -zxvf keepalived-1.2.7.tar.gz
chmod -R 775 keepalived-1.2.7/
cd keepalived-1.2.7
./configure --prefix=/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-358.el6.x86_64/
make
make install
2. As root, copy the files into standard paths for convenient invocation:
cp /keepalived/sbin/keepalived /usr/sbin/
cp /keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /keepalived/etc/sysconfig/keepalived /etc/sysconfig/
3. Configure keepalived:
mkdir /etc/keepalived
vi /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        jasoname@qq.com
    }
    notification_email_from jasoname@qq.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1_1
}

vrrp_instance V1_MYSQL_READ {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 3306
    }
    virtual_ipaddress {
        192.168.1.10
    }
}

virtual_server 192.168.1.10 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    net_mask 255.255.255.0
    #persistence_timeout 20
    protocol TCP

    real_server 192.168.1.7 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 192.168.1.8 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}

4. Start the keepalived service, first flushing the ipvsadm rules created earlier:
[root@linux04 ~]# ipvsadm -C
[root@linux04 ~]# service keepalived start
Starting keepalived: [ OK ]
5. View the IPVS rules generated by keepalived:
[root@linux04 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.10:3306 rr
-> 192.168.1.7:3306 Route 1 0 0
-> 192.168.1.8:3306 Route 1 0 0
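To watch the health check work, stop mysqld on one real server and observe the IPVS table shrink. A sketch using the hosts of this walkthrough (credentials as used in the earlier tests):

# on linux02 (192.168.1.7): stop the MySQL instance
mysqladmin -usystem -p'oralinux' shutdown
# on linux04: within roughly delay_loop (6s) the dead real server drops out
watch -n 2 'ipvsadm -L -n'
# once MySQL on linux02 is started again, keepalived adds it back automatically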

<>A Dual-Master high-availability environment
The LVS + Keepalived + MySQL slaves combination improves read reliability, but the master remains a single point for writes. How do we solve that? From a data-safety standpoint the master is not a single point, since its data is replicated; from the write side of a read/write split, however, every write funnels through that one node.
Next we configure bidirectional replication.
With nobody modifying objects, query the current binary log file and position on the original slave node (linux02):
system@(none)>show master status \G
*************************** 1. row ***************************
File: mysql-bin.000032
Position: 120
Switch to the original master node (linux01) and point it at the slave, so that it reads the binary log from that position:
system@5ienet>change master to master_host='192.168.1.7',master_port=3306,master_user='repl',master_password='oralinux',master_log_file='mysql-bin.000032',master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

system@5ienet>start slave;
Query OK, 0 rows affected (0.01 sec)

Run the following on linux02:
create table 5ienet.t4(id int not null auto_increment,v1 varchar(20),primary key(id));
On linux01, check whether the table was replicated:
system@5ienet> desc t4;
+-------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| v1 | varchar(20) | YES | | NULL | |
+-------+-------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)
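Before trusting the dual-master pair, it is worth confirming that replication is healthy in both directions. A quick sketch run from any client (IPs and credentials as used earlier in these notes):

# linux01 = 192.168.1.6, linux02 = 192.168.1.7
for h in 192.168.1.6 192.168.1.7; do
  echo "== $h =="
  mysql -usystem -p'oralinux' -h $h -P 3306 -e "show slave status \G" | egrep 'Slave_(IO|SQL)_Running|Last_Error'
done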
Bidirectional replication carries a hidden risk: both ends accept writes, potentially to the same object. For example, MySQL tables usually use auto-increment primary keys, and inserts typically omit the key value; in that case, if the two nodes insert into the same table at the same time, the generated primary keys can very well collide even when the explicitly specified column values differ. Let's simulate this.
Stop the slave threads on linux01: stop slave;
system@5ienet>stop slave;
Query OK, 0 rows affected (0.01 sec)
Run an insert on linux02: insert into 5ienet.t4 (v1) values('192.168.1.7');
system@(none)>insert into 5ienet.t4 (v1) values('192.168.1.7');
Query OK, 1 row affected (0.00 sec)
Query t4 on linux02:
system@(none)>select * from 5ienet.t4;
+----+-------------+
| id | v1 |
+----+-------------+
| 1 | 192.168.1.7 |
+----+-------------+
1 row in set (0.00 sec)
Query on linux01: select * from 5ienet.t4;
system@5ienet>select * from 5ienet.t4;
Empty set (0.00 sec)
The result is empty: the local slave threads are stopped, so linux02's insert has not been applied here. Now insert a row on linux01:
system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.6');
Query OK, 1 row affected (0.00 sec)
Start the slave threads on linux01:
system@5ienet>start slave;
Query OK, 0 rows affected (0.00 sec)
system@5ienet>show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.7
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000032
Read_Master_Log_Pos: 537
Relay_Log_File: mysql-relay-bin.000003
Relay_Log_Pos: 283
Relay_Master_Log_File: mysql-bin.000032
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.7')'
Skip_Counter: 0
Exec_Master_Log_Pos: 277
Relay_Log_Space: 1036
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.7')'
Replicate_Ignore_Server_Ids:
Master_Server_Id: 612
Master_UUID: 2d88ad71-23e0-11e7-8222-080027f93f02
Master_Info_File: /mysql/conf/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp: 170424 15:07:30
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
The slave SQL thread has stopped working, and the replication status reports an error: duplicate primary key. The other node reports the same kind of error:
system@(none)> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.6
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000024
Read_Master_Log_Pos: 473194323
Relay_Log_File: mysql-relay-bin.000019
Relay_Log_Pos: 283
Relay_Master_Log_File: mysql-bin.000024
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: '5ienet'. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.6')'
Skip_Counter: 0
Exec_Master_Log_Pos: 473194051
Relay_Log_Space: 728
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: '5ienet'. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.6')'
Replicate_Ignore_Server_Ids:
Master_Server_Id: 611
Master_UUID: 2584299a-2100-11e7-af61-080027196296
Master_Info_File: /mysql/conf/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp: 170424 15:05:25
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
Handling SQL-thread apply errors in bidirectional replication:
1. Delete the corresponding row on the source side, then re-run the statement.
2. Skip the error: sql_slave_skip_counter specifies skipping the application of the next n events; the default is 0.
set global sql_slave_skip_counter=1; skips the next event. Do the same on each node, then:
start slave;

linux01:
system@5ienet>set global sql_slave_skip_counter=1;
Query OK, 0 rows affected (0.00 sec)
system@5ienet>start slave;
Query OK, 0 rows affected (0.01 sec)

linux02:
system@(none)>set global sql_slave_skip_counter=1;
Query OK, 0 rows affected (0.00 sec)
system@(none)>start slave;
Query OK, 0 rows affected (0.06 sec)
On either node, run the following to repair the data:
system@5ienet>delete from 5ienet.t4 where v1 in('192.168.1.6','192.168.1.7');
Query OK, 1 row affected (0.00 sec)

system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.6');
Query OK, 1 row affected (0.00 sec)

system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.7');
Query OK, 1 row affected (0.01 sec)
Query on the other node:
system@(none)> select * from 5ienet.t4;
+----+-------------+
| id | v1 |
+----+-------------+
| 2 | 192.168.1.6 |
| 3 | 192.168.1.7 |
+----+-------------+
2 rows in set (0.00 sec)
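As an aside: if duplicate-key apply errors were considered always ignorable in some deployment (they rarely are, since the two masters can silently diverge), the skipping could be automated with the slave-skip-errors option in my.cnf instead of issuing sql_slave_skip_counter per incident. A hedged sketch:

[mysqld]
# skip every replication apply error 1062 (duplicate key) -- use with great care
slave-skip-errors = 1062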
Avoiding auto-increment collisions: have the application connect to only one node of the dual-master pair, or allow writes on one node only.
The values generated for an auto_increment column in MySQL are governed by two system variables:
auto_increment_increment: the step by which the auto-increment value grows, from 1 to 65535 with a default of 1; 0 may also be specified and behaves exactly like 1.
auto_increment_offset: the offset of the auto-increment sequence. "Offset" may not feel intuitive; think of it as the starting value. Its range and rules are identical to those of auto_increment_increment.
The two parameters combine: for example, to start the sequence at 6 and step by 10, set:
set auto_increment_increment=10;
set auto_increment_offset=6;
Create a table and insert rows to observe the generated values:
system@(none)>set auto_increment_increment=10;
Query OK, 0 rows affected (0.00 sec)

system@(none)>set auto_increment_offset=6;
Query OK, 0 rows affected (0.00 sec)

system@(none)>create table 5ienet.autoinc(col int not null auto_increment primary key);
Query OK, 0 rows affected (0.02 sec)

system@(none)>insert into 5ienet.autoinc values(null),(null),(null);
Query OK, 3 rows affected (0.01 sec)
Records: 3 Duplicates: 0 Warnings: 0

system@(none)>select * from 5ienet.autoinc;
+-----+
| col |
+-----+
| 6 |
| 16 |
| 26 |
+-----+
3 rows in set (0.00 sec)
Change the offset of the auto-increment sequence to 8:
system@(none)>set auto_increment_offset=8;
Query OK, 0 rows affected (0.00 sec)

system@(none)>insert into 5ienet.autoinc values(null),(null),(null);
Query OK, 3 rows affected (0.01 sec)
Records: 3 Duplicates: 0 Warnings: 0

system@(none)>select * from 5ienet.autoinc;
+-----+
| col |
+-----+
| 6 |
| 16 |
| 26 |
| 38 |
| 48 |
| 58 |
+-----+
6 rows in set (0.00 sec)
With the increment and offset we can give each MySQL instance its own auto-increment rule. For our current dual-master environment, set the increment to 2 on both nodes, the offset to 1 on one node and to 2 on the other; one node then always generates odd values and the other even values. Since the generation rules never overlap, the values cannot collide.
Concretely, modify my.cnf on both nodes, as sketched below.
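A sketch of that change (the file location follows /mysql/conf/my.cnf used elsewhere in these notes; restart the instances, or also set the global variables, for it to take effect):

# /mysql/conf/my.cnf on linux01 -- always generates odd auto-increment values
[mysqld]
auto_increment_increment=2
auto_increment_offset=1

# /mysql/conf/my.cnf on linux02 -- always generates even auto-increment values
[mysqld]
auto_increment_increment=2
auto_increment_offset=2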

<>Automatic IP failover in the dual-master environment (active/standby)
Because concurrent writes to both masters cause all kinds of problems, we give up load balancing and instead implement IP failover, raising the database's availability.
Install and configure keepalived on both nodes (omitted).
1. On the primary node:
Edit keepalived.conf:

vrrp_script check_run {
    script "/keepalived/bin/ka_check_mysql.sh"
    interval 10
}

vrrp_instance VPS {
    state BACKUP        #both servers start as BACKUP, avoiding the flapping (a fight over the MASTER role) a service restart could cause
    interface eth0
    virtual_router_id 34
    priority 100        #priority; set this value slightly lower on the other node
    advert_int 1
    nopreempt           #no preemption; set this only on the higher-priority machine, not on the lower one
    authentication {
        auth_type PASS
        auth_pass 3141
    }
    virtual_ipaddress {
        192.168.1.11
    }
    track_script {
        check_run
    }
}

ka_check_mysql.sh

#!/bin/bash
source /mysql/scripts/mysql_env.ini
MYSQL_CMD=/mysql/bin/mysql
CHECK_TIME=3   # number of connection attempts before giving up
MYSQL_OK=1     # 1 while the MySQL service responds normally, 0 otherwise

function check_mysql_health() {
    $MYSQL_CMD -u${MYSQL_USER} -p${MYSQL_PASS} -S /mysql/conf/mysql.sock -e "show status;" > /dev/null 2>&1
    if [ $? = 0 ]; then
        MYSQL_OK=1
    else
        MYSQL_OK=0
    fi
    return $MYSQL_OK
}

while [ $CHECK_TIME -ne 0 ]
do
    let "CHECK_TIME -= 1"
    check_mysql_health
    if [ $MYSQL_OK = 1 ]; then
        CHECK_TIME=0
        exit 0
    fi

    if [ $MYSQL_OK -eq 0 ] && [ $CHECK_TIME -eq 0 ]; then
        # all attempts failed: stop keepalived so the VIP floats to the peer
        /etc/init.d/keepalived stop
        exit 1
    fi
    sleep 1
done

This script checks whether the local MySQL instance accepts connections; if three consecutive attempts fail, it stops the local keepalived service, deliberately triggering a VIP failover.
Make the script executable:
chmod +x ka_check_mysql.sh
Start keepalived:
service keepalived start
The VIP held by keepalived does not show up in ifconfig output; use ip addr to see it:
[root@linux01 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:19:62:96 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.6/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.11/32 scope global eth0
inet6 fe80::a00:27ff:fe19:6296/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:b3:a6:de brd ff:ff:ff:ff:ff:ff
inet6 fe80::a00:27ff:feb3:a6de/64 scope link
valid_lft forever preferred_lft forever
The application layer can now reach the master instance of the replication environment through the VIP 192.168.1.11.

2. On the standby node:
Configure the other master node.
Keepalived installation omitted.

Edit keepalived.conf:

vrrp_script check_run {
    script "/keepalived/bin/ka_check_mysql.sh"
    interval 10
}

vrrp_instance VPS {
    state BACKUP
    interface eth0
    virtual_router_id 34
    priority 90         #lower the priority here
    advert_int 1
    nopreempt           #no preemption; set this only on the higher-priority machine, not on the lower one
    authentication {
        auth_type PASS
        auth_pass 3141
    }
    virtual_ipaddress {
        192.168.1.11
    }
    track_script {
        check_run
    }
}

Copy ka_check_mysql.sh over from the other node.
Start keepalived with service keepalived start.

3. Test high availability:
Run the following on a client:
[root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.11 -N -e "select @@hostname"
Warning: Using a password on the command line interface can be insecure.
+---------+
| linux01 |
+---------+
It returns linux01, which is expected since linux01 is the primary node. Now stop the MySQL service on linux01 to test failover.
Then run ip addr on node 2; the VIP has floated over to this node:
[root@linux02 bin]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 192.168.1.10/32 brd 192.168.1.10 scope global lo:10
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:f9:3f:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.7/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.11/32 scope global eth0
inet6 fe80::a00:27ff:fef9:3f02/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:fd:29:66 brd ff:ff:ff:ff:ff:ff
inet6 fe80::a00:27ff:fefd:2966/64 scope link
valid_lft forever preferred_lft forever
Connect from the client again to test:
[root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.11 -N -e "select @@hostname"
Warning: Using a password on the command line interface can be insecure.
+---------+
| linux02 |
+---------+
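One consequence of nopreempt: after linux01 recovers, the VIP stays on linux02 until linux02 itself fails. A sketch of bringing linux01 back as the new standby (commands as used elsewhere in these notes):

# on linux01, once the failed instance is ready to run again
mysqld_safe --defaults-file=/mysql/conf/my.cnf &   # start MySQL first
service keepalived start      # the check script stopped keepalived, so restart it
ip addr | grep 192.168.1.11   # expect no output: nopreempt leaves the VIP on linux02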

<>DRBD gives the master's data stronger protection: Distributed Replicated Block Device, i.e. distributed replication at the block-device level.
A DRBD + Pacemaker + Corosync architecture.

<>The official MySQL Cluster
Management node: the management service mentioned earlier, which manages the other nodes in the MySQL Cluster: it holds the configuration and can start and stop nodes, run backups, and so on. Because it manages the other nodes it must be started first; it is started with the ndb_mgmd command-line tool.
Data node: stores the cluster's data. The number of data nodes should normally equal the number of replicas multiplied by the number of data fragments. Replicas provide redundancy for the data; in any environment with high-availability requirements, each piece of data should have at least 2 replicas to be safe. Data nodes are started with the ndbd command-line tool.
SQL node: provides client access to the data in the cluster. Think of a SQL node as a MySQL server using the NDBCLUSTER engine (started with the --ndbcluster and --ndb-connectstring options); it is a special kind of API node. Although the SQL nodes in a MySQL Cluster are also started by a program named mysqld, note that it differs from the mysqld in the standard MySQL distribution: it is a dedicated mysqld that is not interchangeable with the standard build. Moreover, even the cluster-specific mysqld cannot read or write cluster data through the NDB engine unless it is connected to the MySQL Cluster management server.

In MySQL Cluster a "node" is a process of a certain kind; a machine that runs several nodes is referred to as a "host" within the cluster.
The MySQL Cluster Community Edition can be downloaded from http://dev.mysql.com/downloads/cluster

<>Installing and configuring the Cluster
Management node: 192.168.1.20
Data node 1: 192.168.1.21
Data node 2: 192.168.1.22
SQL node 1: 192.168.1.21
SQL node 2: 192.168.1.22

Install the Cluster from source, as root:
mkdir /mysql/conf
tar -zxvf
cd
cmake . -DCMAKE_INSTALL_PREFIX=/mysql \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_NDB_JAVA=OFF \
-DWITH_FEDERATED_STORAGE_ENGINE=1 \
-DWITH_NDBCLUSTER_STORAGE_ENGINE=1 \
-DCOMPILATION_COMMENT='JASON for MySQLCluster' \
-DWITH_READLINE=ON \
-DSYSCONFDIR=/mysql/conf \
-DMYSQL_UNIX_ADDR=/mysql/conf/mysql.sock

make && make install
The steps are basically the same as installing a regular MySQL Server, with two extra options:
WITH_NDB_JAVA: enables Java support. Introduced in Cluster 7.2.9 and enabled by default; to actually use Java support you must also pass WITH_CLASSPATH pointing at the JDK. None of the servers in this test environment has a JDK installed, so we disable it.
WITH_NDBCLUSTER_STORAGE_ENGINE: enables the NDBCLUSTER engine.
chown -R mysql:mysql /mysql
Add export PATH=/mysql/bin:$PATH to /home/mysql/.bash_profile so that the mysql user can invoke the Cluster command-line tools from any path.
Perform the above on all three servers. If the servers' hardware and software environments are identical, you can compile on one server only, then pack the whole installed directory, copy it to the other servers, and extract it there, as sketched below.
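A sketch of that copy step (the archive name is illustrative; /mysql is the install prefix from the cmake line above):

# on the build host
cd / && tar -zcf mysql-cluster-build.tar.gz mysql
scp /mysql-cluster-build.tar.gz 192.168.1.21:/
scp /mysql-cluster-build.tar.gz 192.168.1.22:/
# on each target host:
#   tar -zxf /mysql-cluster-build.tar.gz -C /
#   chown -R mysql:mysql /mysql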

Configuration, as the mysql user:
1. Configure the management node:
mkdir /mysql/mysql-cluster
vi /mysql/mysql-cluster/config.ini
Add the following:
[ndbd default]
NoOfReplicas=2 #number of replicas; at least 2 is recommended, otherwise the data has no redundancy
DataMemory=200M #memory allocated for data (test environment, so deliberately small)
IndexMemory=30M #memory allocated for indexes (test environment, so deliberately small)

[ndb_mgmd]
#management node options
hostname=192.168.1.20
datadir=/mysql/mysql-cluster

[ndbd]
#data node options
hostname=192.168.1.21
datadir=/mysql/mysql-cluster/data

[ndbd]
#data node options
hostname=192.168.1.22
datadir=/mysql/mysql-cluster/data

[mysqld]
#SQL node options
hostname=192.168.1.21

[mysqld]
#SQL node options
hostname=192.168.1.22

2. Configure the Data and SQL nodes; perform these steps on 192.168.1.21 and 192.168.1.22.
Add to /mysql/conf/my.cnf:
[mysqld]
ndbcluster

[mysql_cluster]
ndb-connectstring=192.168.1.20
Initialize the databases on 192.168.1.21/22:
/mysql/scripts/mysql_install_db --datadir=/mysql/data --basedir=/mysql
Only one extra parameter, ndb-connectstring, is defined here; it points at the management node. Once the ndbcluster and ndb-connectstring options are set and the MySQL server is started, create table and alter table statements cannot be executed while the cluster is down.

mkdir /mysql/mysql-cluster/data
This completes the configuration; now start the MySQL Cluster.
Start the management server first; on 192.168.1.20 run: ndb_mgmd -f /mysql/mysql-cluster/config.ini
[mysql@linux05 mysql-cluster]$ ndb_mgmd -f /mysql/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.6.14 ndb-7.3.3

Enter the dedicated ndb_mgm command-line client:
[mysql@linux05 mysql-cluster]$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm>
Run show to query the current cluster status:
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 192.168.1.21)
id=3 (not connected, accepting connect from 192.168.1.22)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.20 (mysql-5.6.14 ndb-7.3.3)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 192.168.1.21)
id=5 (not connected, accepting connect from 192.168.1.22)

Switch to 192.168.1.21/22 and start the data nodes with ndbd --initial. Note that a data node needs the --initial flag only the first time it starts; on later starts run ndbd without it, otherwise the local data is wiped.
[mysql@linux06 bin]$ ndbd --initial
2017-04-27 13:44:11 [ndbd] INFO -- Angel connected to '192.168.1.20:1186'
2017-04-27 13:44:11 [ndbd] INFO -- Angel allocated nodeid: 2

[mysql@linux07 conf]$ ndbd --initial
2017-04-27 13:44:43 [ndbd] INFO -- Angel connected to '192.168.1.20:1186'
2017-04-27 13:44:43 [ndbd] INFO -- Angel allocated nodeid: 3

Switch to 192.168.1.21/22 and start the SQL nodes: mysqld_safe --defaults-file=/mysql/conf/my.cnf &
On the management node, run show again:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.1.21 (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0, *)
id=3 @192.168.1.22 (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.20 (mysql-5.6.14 ndb-7.3.3)

[mysqld(API)] 2 node(s)
id=4 @192.168.1.21 (mysql-5.6.14 ndb-7.3.3)
id=5 @192.168.1.22 (mysql-5.6.14 ndb-7.3.3)
Shutting the cluster down: the SQL nodes are stopped with the traditional mysqladmin shutdown, and the data nodes can be stopped with the shutdown subcommand inside ndb_mgm, as sketched below.
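A sketch of an orderly full shutdown (credentials as used earlier in these notes):

# on each SQL node (192.168.1.21/22):
mysqladmin -usystem -p'oralinux' shutdown
# on the management node; this stops the data nodes and the management server:
ndb_mgm -e shutdown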

Trying the Cluster out:
nodeld4>use test;
Database changed
nodeld4>create table n1(id int not null auto_increment primary key,v1 varchar(20)) engine=ndb;
Query OK, 0 rows affected (0.26 sec)
When creating the table, the engine option must specify that it is an NDB table.
nodeld4>insert into n1 values(null,'a');
Query OK, 1 row affected (0.02 sec)
nodeld5>select * from test.n1;
+----+------+
| id | v1 |
+----+------+
| 1 | a |
+----+------+
1 row in set (0.01 sec)
nodeld5>insert into test.n1 values(null,'b');
Query OK, 1 row affected (0.00 sec)
nodeld4>select * from test.n1;
+----+------+
| id | v1 |
+----+------+
| 1 | a |
| 2 | b |
+----+------+
2 rows in set (0.00 sec)

Shut down the SQL node corresponding to nodeld5: mysqladmin shutdown
Then continue inserting on the nodeld4 node:

nodeld4>insert into n1 values(null,'c');
Query OK, 1 row affected (0.00 sec)
Start nodeld5 again: mysqld_safe --defaults-file=/mysql/conf/my.cnf &
nodeld5>select * from test.n1;
+----+------+
| id | v1 |
+----+------+
| 1 | a |
| 2 | b |
| 3 | c |
+----+------+
3 rows in set (0.01 sec)


SQL nodes in a Cluster environment can likewise sit behind LVS, with a VIP routed to the SQL nodes, giving application connections high availability and load balancing.
Why MySQL Cluster is not widely used: all table data being operated on must reside in memory. (Data is persisted on disk, but any data to be read or written must first be loaded into memory; this is not the traditional model where only the hottest data is cached in memory, but all data in memory.) In other words, the combined memory of the NDB nodes essentially determines the database size NDBCLUSTER can host. In the newest NDBCLUSTER versions, non-indexed column data can stay on disk, but index data must be loaded into memory. This is why it is called an in-memory database.

<>Scaling the database service further: split when it is time to split, and think the handling strategy through beforehand.
