Nginx + Keepalived dual-node hot standby (master/backup mode)

Load balancing is critical for a website's web server cluster, especially for a large site. A well-designed load-balancing architecture provides failover and high availability, avoids single points of failure, and keeps the site running healthily and continuously.
For an introduction to load balancing, see: a summary of Linux load balancing (layer-4 / layer-7 load balancing).

Because of business growth, site traffic keeps increasing and the load keeps climbing. We now need to put an nginx load balancer in front of the web tier, combined with keepalived to make the front-end nginx highly available (HA).
1) nginx is built on a Master + Slave (worker) multi-process model and has very robust child-process management. In this dispatch model the master process never handles business traffic itself; it only distributes tasks, which keeps the master process highly reliable. All business signals for the Slave (worker) processes are issued by the master, every timed-out worker task is terminated by the master, and the task model is non-blocking.
2) Keepalived is a highly reliable implementation of VRRP backup routing for Linux. A service designed around Keepalived can truly hand the IP over instantly and seamlessly when the primary or backup server fails. Combining the two yields a fairly stable software LB architecture.

So which of the three cluster components — Heartbeat, Corosync, Keepalived — should we choose?
First of all, Heartbeat and Corosync belong to the same category, while Keepalived is fundamentally a different kind of tool.
Keepalived uses VRRP, the Virtual Router Redundancy Protocol;
Heartbeat and Corosync provide high availability at the host or network-service level.
Put simply, Keepalived's goal is to emulate a highly available router, while Heartbeat or Corosync aims to make a service highly available.
Therefore Keepalived is usually used for front-end HA; the common front-end HA combinations are the familiar LVS+Keepalived, Nginx+Keepalived and HAproxy+Keepalived. Heartbeat or Corosync is used for service-level HA; common combinations are Heartbeat v3 (Corosync)+Pacemaker+NFS+Httpd for a highly available web server, or Heartbeat v3 (Corosync)+Pacemaker+NFS+MySQL for a highly available MySQL server. To summarize: Keepalived provides lightweight HA, is generally used for the front end, needs no shared storage, and is usually deployed on two nodes; Heartbeat (or Corosync) provides service HA, usually requires shared storage, and is generally used for multi-node clusters. That settles this question.

And between heartbeat and corosync, which should we pick?
Generally corosync: its runtime mechanism is superior to heartbeat's, and even pacemaker (which was split out of heartbeat) has said that future development will lean toward corosync, so corosync + pacemaker is currently the best combination.

Dual-node high availability is generally implemented with a virtual IP (floating IP), based on the IP-alias facility of Linux/Unix.
There are currently two dual-node HA modes: master/backup (a single VIP; the standby sits idle until the master fails) and dual-master (two VIPs, each node backing up the other). This article uses the master/backup mode: all domains configured on the load balancer resolve to the single VIP.

The application environment is as follows:
master-node load balancer: 103.110.98.14
slave-node load balancer: 103.110.98.24
VIP: 103.110.98.20
back-end real servers (internal network): 192.168.1.101, 192.168.1.102, 192.168.1.108, 192.168.1.118

Download the source packages:

[root@master-node src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
[root@master-node src]# wget http://www.keepalived.org/software/keepalived-1.3.2.tar.gz
Install nginx
[root@master-node src]# tar -zvxf nginx-1.9.7.tar.gz
[root@master-node src]# cd nginx-1.9.7
Add the www user; -M means do not create a home directory, -s specifies the shell:
[root@master-node nginx-1.9.7]# useradd www -M -s /sbin/nologin
[root@master-node nginx-1.9.7]# vim auto/cc/gcc
# comment out this line (around line 179) to disable the debug build:
#CFLAGS="$CFLAGS -g"
[root@master-node nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@master-node nginx-1.9.7]# make && make install
Install keepalived
[root@master-node src]# tar -zvxf keepalived-1.3.2.tar.gz
[root@master-node src]# cd keepalived-1.3.2
[root@master-node keepalived-1.3.2]# ./configure
[root@master-node keepalived-1.3.2]# make && make install
[root@master-node keepalived-1.3.2]# cp /usr/local/src/keepalived-1.3.2/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@master-node keepalived-1.3.2]# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@master-node keepalived-1.3.2]# mkdir /etc/keepalived
[root@master-node keepalived-1.3.2]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@master-node keepalived-1.3.2]# cp /usr/local/sbin/keepalived /usr/sbin/
Add the nginx and keepalived services to boot-time startup:
[root@master-node keepalived-1.3.2]# echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.local
[root@master-node keepalived-1.3.2]# echo "/etc/init.d/keepalived start" >> /etc/rc.local

Check the server's current IP (master-node, NIC em1):
[root@master-node ~]# ifconfig
em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 103.110.98.14 netmask 255.255.255.192 broadcast 103.110.98.63
inet6 fe80::46a8:42ff:fe17:3ddd prefixlen 64 scopeid 0x20<link>
ether 44:a8:42:17:3d:dd txqueuelen 1000 (Ethernet)
RX packets 133787818 bytes 14858530059 (13.8 GiB)
RX errors 0 dropped 644 overruns 0 frame 0
TX packets 2291619 bytes 426619870 (406.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16
.......

Next, bind the VIP. (Strictly speaking, binding it manually with ifconfig like this is not required: in a layer-7 HA setup such as Nginx or HAproxy + Keepalived, the VIP is configured directly in the keepalived configuration file and can be seen with the ip addr command; only in a layer-4 LVS + Keepalived setup must the VIP be created separately on the outside, i.e. via ifconfig eth0:0 ..., and it is then visible with ifconfig.)

[root@master-node ~]# ifconfig em1:0 103.110.98.20 broadcast 103.110.98.63 netmask 255.255.255.192 up
[root@master-node ~]# route add -host 103.110.98.20 dev em1:0

Check that the VIP was bound successfully:
[root@master-node ~]# ifconfig
em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 103.110.98.14 netmask 255.255.255.192 broadcast 103.110.98.63
inet6 fe80::46a8:42ff:fe17:3ddd prefixlen 64 scopeid 0x20<link>
ether 44:a8:42:17:3d:dd txqueuelen 1000 (Ethernet)
RX packets 133789569 bytes 14858744709 (13.8 GiB)
RX errors 0 dropped 644 overruns 0 frame 0
TX packets 2291620 bytes 426619960 (406.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16

em1:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 103.110.98.20 netmask 255.255.255.192 broadcast 103.110.98.63
ether 44:a8:42:17:3d:dd txqueuelen 1000 (Ethernet)
device interrupt 16
.......

[root@master-node ~]# ping 103.110.98.20
PING 103.110.98.20 (103.110.98.20) 56(84) bytes of data.
64 bytes from 103.110.98.20: icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from 103.110.98.20: icmp_seq=2 ttl=64 time=0.036 ms

[root@master-node ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 103.10.86.1 0.0.0.0 UG 100 0 0 em1
103.10.86.0 0.0.0.0 255.255.255.192 U 100 0 0 em1
103.110.98.20 0.0.0.0 255.255.255.255 UH 0 0 0 em1
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 em2

Disable SELinux on both machines (set SELINUX=disabled in /etc/selinux/config to persist across reboots):
[root@master-node ~]# setenforce 0                               # take effect immediately

[root@master-node ~]# vim /etc/sysconfig/iptables
.......
-A INPUT -s 103.110.98.0/24 -d 224.0.0.18 -j ACCEPT                        # allow multicast-address traffic
-A INPUT -s 192.168.1.0/24 -d 224.0.0.18 -j ACCEPT
-A INPUT -s 103.110.98.0/24 -p vrrp -j ACCEPT                                  # allow VRRP (Virtual Router Redundancy Protocol) traffic
-A INPUT -s 192.168.1.0/24 -p vrrp -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT      # open port 80

[root@master-node ~]# /etc/init.d/iptables restart                               # restart the firewall to apply the rules
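Once keepalived is up and running (configured below), a quick way to check that these rules actually pass VRRP (assuming tcpdump is installed) is to watch for the advertisements the master multicasts to 224.0.0.18:
[root@master-node ~]# tcpdump -i em1 ip proto 112
One VRRP packet per advert_int (1 second here) from the current master should be visible on both nodes.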

The nginx configuration is exactly the same on master-node and slave-node. The main work is the http block of /usr/local/nginx/conf/nginx.conf; you can also create a vhost virtual-host directory and put the per-site configuration, e.g. LB.conf, under vhost.
Where:
multiple domains are handled by virtual hosts (server blocks under http);
different virtual directories of the same domain are handled by different location blocks within a server;
the back-end servers are configured as an upstream in vhosts/LB.conf and referenced from a server or location via proxy_pass.
To implement the access scheme planned above, LB.conf is configured as follows (the proxy_cache_path and proxy_temp_path lines enable nginx's caching):

[root@master-node ~]# vim /usr/local/nginx/conf/nginx.conf
user  www;
worker_processes  8;
 
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
 
#pid        logs/nginx.pid;
 
 
events {
     worker_connections  65535;
}
 
 
http {
     include       mime.types;
     default_type  application/octet-stream;
     charset utf-8;
       
     ######
     ## set access log format
     ######
     log_format  main  '$http_x_forwarded_for $remote_addr $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_cookie" $host $request_time' ;
 
     #######
     ## http setting
     #######
     sendfile       on;
     tcp_nopush     on;
     tcp_nodelay    on;
     keepalive_timeout  65;
     proxy_cache_path /var/www/cache levels=1:2 keys_zone=mycache:20m max_size=2048m inactive=60m;
     proxy_temp_path /var/www/cache/tmp;
 
     fastcgi_connect_timeout 3000;
     fastcgi_send_timeout 3000;
     fastcgi_read_timeout 3000;
     fastcgi_buffer_size 256k;
     fastcgi_buffers 8 256k;
     fastcgi_busy_buffers_size 256k;
     fastcgi_temp_file_write_size 256k;
     fastcgi_intercept_errors on;
 
     #
     client_header_timeout 600s;
     client_body_timeout 600s;
    # client_max_body_size 50m;
     client_max_body_size 100m;               # maximum size of a single file in a client request
     client_body_buffer_size 256k;            # maximum buffer size for client request bodies; i.e. the body is saved locally first, then passed on
 
     gzip  on;
     gzip_min_length  1k;
     gzip_buffers     4 16k;
     gzip_http_version 1.1;
     gzip_comp_level 9;
     gzip_types       text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
     gzip_vary on;
 
     ## includes vhosts
     include vhosts/*.conf;
}

[root@master-node ~]# mkdir /usr/local/nginx/conf/vhosts
[root@master-node ~]# mkdir /var/www/cache
[root@master-node ~]# ulimit -n 65535
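Note that ulimit -n only raises the open-file limit for the current shell session. To make the limit persistent (a common approach, assuming pam_limits is in effect on this system), append to /etc/security/limits.conf:
[root@master-node ~]# cat >> /etc/security/limits.conf << EOF
*    soft    nofile    65535
*    hard    nofile    65535
EOF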

[root@master-node ~]# vim /usr/local/nginx/conf/vhosts/LB.conf
upstream LB-WWW {
       ip_hash;
       server 192.168.1.101:80 max_fails=3 fail_timeout=30s;     # max_fails=3: allowed number of failures; the default is 1
       server 192.168.1.102:80 max_fails=3 fail_timeout=30s;     # fail_timeout=30s: after max_fails failures, how long to stop sending requests to this back-end server
       server 192.168.1.118:80 max_fails=3 fail_timeout=30s;
     }
    
upstream LB-OA {
       ip_hash;
       server 192.168.1.101:8080 max_fails=3 fail_timeout=30s;
       server 192.168.1.102:8080 max_fails=3 fail_timeout=30s;
}
          
   server {
       listen      80;
       server_name dev.wangshibo.com;
    
       access_log  /usr/local/nginx/logs/dev-access.log main;
       error_log  /usr/local/nginx/logs/dev-error.log;
    
       location /svn {
          proxy_pass http://192.168.1.108/svn/;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header REMOTE-HOST $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_connect_timeout 300;             # timeout for connecting to the back-end server (handshake and waiting for a response)
          proxy_send_timeout 300;                # timeout for sending data to the back-end server; all data must be transferred within this window
          proxy_read_timeout 600;                # after the connection succeeds, how long to wait for the back-end response (the request may already be queued on the back end)
          proxy_buffer_size 256k;                # buffer for the proxied response headers, which nginx needs for processing
          proxy_buffers 4 256k;                  # as above: how many buffers nginx uses per connection and the maximum space
          proxy_busy_buffers_size 256k;          # buffer space that may be busy (sending to the client) when the system is under load
          proxy_temp_file_write_size 256k;       # size of proxy cache data written to a temp file at a time
          proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
          proxy_max_temp_file_size 128m;
          proxy_cache mycache;                                
          proxy_cache_valid 200 302 60m;                      
          proxy_cache_valid 404 1m;
        }
    
       location /submin {
          proxy_pass http://192.168.1.108/submin/;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header REMOTE-HOST $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_connect_timeout 300;
          proxy_send_timeout 300;
          proxy_read_timeout 600;
          proxy_buffer_size 256k;
          proxy_buffers 4 256k;
          proxy_busy_buffers_size 256k;
          proxy_temp_file_write_size 256k;
          proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
          proxy_max_temp_file_size 128m;
          proxy_cache mycache;        
          proxy_cache_valid 200 302 60m;
          proxy_cache_valid 404 1m;
         }
     }
    
server {
      listen       80;
      server_name  www.wangshibo.com;
  
       access_log  /usr/local/nginx/logs/www-access.log main;
       error_log  /usr/local/nginx/logs/www-error.log;
  
      location / {
          proxy_pass http://LB-WWW;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header REMOTE-HOST $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_connect_timeout 300;
          proxy_send_timeout 300;
          proxy_read_timeout 600;
          proxy_buffer_size 256k;
          proxy_buffers 4 256k;
          proxy_busy_buffers_size 256k;
          proxy_temp_file_write_size 256k;
          proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
          proxy_max_temp_file_size 128m;
          proxy_cache mycache;                                
          proxy_cache_valid 200 302 60m;                      
          proxy_cache_valid 404 1m;
         }
}
   
  server {
        listen       80;
        server_name  oa.wangshibo.com;
  
       access_log  /usr/local/nginx/logs/oa-access.log main;
       error_log  /usr/local/nginx/logs/oa-error.log;
  
        location / {
          proxy_pass http://LB-OA;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header REMOTE-HOST $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_connect_timeout 300;
          proxy_send_timeout 300;
          proxy_read_timeout 600;
          proxy_buffer_size 256k;
          proxy_buffers 4 256k;
          proxy_busy_buffers_size 256k;
          proxy_temp_file_write_size 256k;
          proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
          proxy_max_temp_file_size 128m;
          proxy_cache mycache;                                
          proxy_cache_valid 200 302 60m;                      
          proxy_cache_valid 404 1m;
         }
}
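Before moving on, it is worth letting nginx check the configuration syntax and then reloading it (standard nginx flags):
[root@master-node ~]# /usr/local/nginx/sbin/nginx -t
[root@master-node ~]# /usr/local/nginx/sbin/nginx -s reload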

Verification (make sure the load balancer host itself can reach the back-end real servers):
1) From this host, access by IP each back-end real server URL configured in LB.conf above;
2) Then, from this host, access by domain and path each back-end domain/virtual path configured in LB.conf.

----------------------------------------------------------------------------------------------------------------------------
Nginx configuration on the back-end application servers; 192.168.1.108 is used as the example here.
Because 192.168.1.108 is an openstack virtual machine with no public IP, it cannot resolve domain names,
so its IP is added to server_name as well, making the site reachable by IP too.
[root@108-server ~]# cat /usr/local/nginx/conf/vhosts/svn.conf
server {
    listen 80;
    #server_name dev.wangshibo.com;
    server_name dev.wangshibo.com 192.168.1.108;

    access_log /usr/local/nginx/logs/dev.wangshibo-access.log main;
    error_log /usr/local/nginx/logs/dev.wangshibo-error.log;

    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}

[root@108-server ~]# ll /var/www/html/
drwxr-xr-x. 2 www www 4096 Dec 7 01:46 submin
drwxr-xr-x. 2 www www 4096 Dec 7 01:45 svn
[root@108-server ~]# cat /var/www/html/svn/index.html
this is the page of svn/192.168.1.108
[root@108-server ~]# cat /var/www/html/submin/index.html
this is the page of submin/192.168.1.108

[root@108-server ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.108 dev.wangshibo.com

[root@108-server ~]# curl http://dev.wangshibo.com       // this internal machine has no Internet access and cannot resolve the domain, so accessing by domain gets no response; only the IP works
[root@ops-server4 vhosts]# curl http://192.168.1.108
this is 192.168.1.108 page!!!
[root@ops-server4 vhosts]# curl http://192.168.1.108/svn/           // the trailing / must be included, otherwise the request fails
this is the page of svn/192.168.1.108
[root@ops-server4 vhosts]# curl http://192.168.1.108/submin/
this is the page of submin/192.168.1.108
2. keepalived configuration
1) keepalived configuration on the master-node load balancer. (The alert mail below requires a working local sendmail; for sendmail setup see: notes on installing and operating the sendmail mail system on linux.)
[root@master-node ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@master-node ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived     # global definitions

global_defs {
notification_email {     # mailboxes keepalived notifies when an event occurs (e.g. a failover)
ops@wangshibo.cn   # alert address; several may be set, one per line. Requires the local sendmail service to be running
tech@wangshibo.cn
}

notification_email_from ops@wangshibo.cn   # From: address for the mail keepalived sends on events such as a failover
smtp_server 127.0.0.1      # smtp server used to send the mail
smtp_connect_timeout 30    # timeout for connecting to the smtp server
router_id master-node     # identifier of the machine running keepalived, usually its hostname; shown in the subject of alert mail when a failure occurs
}

vrrp_script chk_http_port {      # check whether the nginx service is running; there are many ways (process check, script, etc.)
     script "/opt/chk_nginx.sh"   # here a script does the monitoring
     interval 2                   # script execution interval: check every 2s
     weight -5                    # priority change caused by the check result: a failure (non-zero exit) lowers the priority by 5
     fall 2                    # only 2 consecutive failures count as a real failure; weight then lowers the priority (1-255)
     rise 1                    # 1 success counts as recovered, but the priority is not changed
}

vrrp_instance VI_1 {    # within one virtual_router_id, the node with the highest priority (0-255) becomes master and takes over the VIP; when it fails, the next-highest priority takes over
     state MASTER    # role of this keepalived host: MASTER means primary, BACKUP means standby. Note that state only sets the *initial* state of the instance; the actual master is still decided by election on priority. Even if this is set to MASTER, a node with a higher priority will, on seeing a lower-priority advertisement, preempt and become MASTER
     interface em1          # interface used for HA monitoring; the NIC the instance binds to, since the virtual IP must be added on an existing NIC
     mcast_src_ip 103.110.98.14  # source IP for the multicast packets, i.e. the address from which the VRRP advertisements are sent. This matters a lot: choose a stable NIC/port to send from (comparable to heartbeat's heartbeat port). If unset, the IP of the NIC given by interface is used
     virtual_router_id 51         # virtual router ID, a number unique per vrrp instance; within one vrrp_instance, MASTER and BACKUP must use the same value
     priority 101                 # priority: the larger the number, the higher the priority; within one vrrp_instance the MASTER's priority must be greater than the BACKUP's
     advert_int 1                 # interval in seconds between synchronization checks (VRRP advertisements) between MASTER and BACKUP
     authentication {             # authentication type and password; must match on master and backup
         auth_type PASS           # vrrp authentication type, mainly PASS or AH
         auth_pass 1111           # vrrp password; within one vrrp_instance, MASTER and BACKUP must use the same password to communicate
     }
     virtual_ipaddress {          # the VRRP HA virtual address; for multiple VIPs, add one per line
         103.110.98.20
     }

track_script {                      # the monitoring to run. Note: this block must not be written immediately after the vrrp_script block (a pitfall hit during testing), otherwise the nginx monitoring silently fails!!
    chk_http_port                    # reference to the VRRP script by the name given in the vrrp_script section; it is run periodically to change the priority and ultimately trigger a master/backup switchover
}
}
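A note on how these numbers interact (using the values from this config and the backup config below): the master advertises priority 101 and the backup 99. When the nginx check fails twice in a row (fall 2), weight -5 drops the master's effective priority to 101 - 5 = 96 < 99, so the backup preempts the VIP; once the check succeeds again (rise 1) the master is back at 101 > 99 and reclaims it. For this scheme to work at all, the weight must outweigh the priority gap, and here 5 > 101 - 99 = 2 holds.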

2) keepalived configuration on the slave-node load balancer
[root@slave-node ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@slave-node ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived    
  
global_defs {
notification_email {                
ops@wangshibo.cn                     
tech@wangshibo.cn
}
  
notification_email_from ops@wangshibo.cn  
smtp_server 127.0.0.1                    
smtp_connect_timeout 30                 
router_id slave-node                    
}
  
vrrp_script chk_http_port {         
     script "/opt/chk_nginx.sh"   
     interval 2                      
     weight -5                       
     fall 2                   
     rise 1                  
}
  
vrrp_instance VI_1 {            
     state BACKUP           
     interface em1            
     mcast_src_ip 103.110.98.24  
     virtual_router_id 51        
     priority 99               
     advert_int 1               
     authentication {            
         auth_type PASS         
         auth_pass 1111          
     }
     virtual_ipaddress {        
         103.110.98.20
     }
 
track_script {                     
    chk_http_port                 
}
 
}

Next, write the nginx monitoring script /opt/chk_nginx.sh that the keepalived configuration above references (the same script on both nodes). Its logic: if no nginx process is found, try to start nginx; wait 2 seconds and count again; if nginx still is not running, stop keepalived so that the VIP fails over to the other node:

[root@master-node ~]# vim /opt/chk_nginx.sh
#!/bin/bash
counter=$(ps -C nginx --no-heading| wc -l)
if [ "${counter}" = "0" ]; then
     /usr/local/nginx/sbin/nginx
     sleep 2
     counter=$( ps -C nginx --no-heading| wc -l)
     if [ "${counter}" = "0" ]; then
         /etc/init.d/keepalived stop
     fi
fi

[root@master-node ~]# chmod 755 /opt/chk_nginx.sh
[root@master-node ~]# sh /opt/chk_nginx.sh

Start keepalived on both machines, then check that the VIP is bound on the master node:
[root@master-node ~]# /etc/init.d/keepalived start
[root@master-node ~]# ip addr
.......
inet 103.110.98.20/32 scope global em1
valid_lft forever preferred_lft forever
inet 103.110.98.20/26 brd 103.10.86.63 scope global secondary em1:0
valid_lft forever preferred_lft forever
inet6 fe80::46a8:42ff:fe17:3ddd/64 scope link
valid_lft forever preferred_lft forever
......
3) Stop keepalived on the master server:
[root@master-node ~]# /etc/init.d/keepalived stop
Stopping keepalived (via systemctl): [ OK ]
[root@master-node ~]# /etc/init.d/keepalived status
[root@master-node ~]# ps -ef|grep keepalived
root 26952 24348 0 17:49 pts/0 00:00:00 grep --color=auto keepalived
[root@master-node ~]#
4) Then check on the slave server: it has taken over the VIP:
[root@slave-node ~]# ip addr
.......
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 44:a8:42:17:3c:a5 brd ff:ff:ff:ff:ff:ff
inet 103.110.98.24/26 brd 103.10.86.63 scope global em1
inet 103.110.98.20/32 scope global em1
valid_lft forever preferred_lft forever
inet 103.110.98.20/26 brd 103.10.86.63 scope global secondary em1:0
valid_lft forever preferred_lft forever
inet6 fe80::46a8:42ff:fe17:3ddd/64 scope link
valid_lft forever preferred_lft forever
......

After keepalived is started again on the master node, the master (having the higher priority) preempts the VIP, and it disappears from the slave:
[root@slave-node ~]# ip addr
.......
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 44:a8:42:17:3c:a5 brd ff:ff:ff:ff:ff:ff
inet 103.110.98.24/26 brd 103.10.86.63 scope global em1
inet6 fe80::46a8:42ff:fe17:3ca5/64 scope link
valid_lft forever preferred_lft forever

Next, verify nginx failure handling: does the keepalived script that monitors nginx status work correctly?
As follows: manually stop nginx on the master machine; at most 2 seconds later it is started again automatically (the script monitoring nginx runs every 2 seconds). Access via the domain is barely affected!
[root@master-node ~]# /usr/local/nginx/sbin/nginx -s stop
[root@master-node ~]# ps -ef|grep nginx
root 28401 24826 0 19:43 pts/1 00:00:00 grep --color=auto nginx
[root@master-node ~]# ps -ef|grep nginx
root 28871 28870 0 19:47 ? 00:00:00 /bin/sh /opt/chk_nginx.sh
root 28875 24826 0 19:47 pts/1 00:00:00 grep --color=auto nginx
[root@master-node ~]# ps -ef|grep nginx
root 28408 1 0 19:43 ? 00:00:00 nginx: master process /usr/local/nginx/sbin/nginx
www 28410 28408 0 19:43 ? 00:00:00 nginx: worker process
www 28411 28408 0 19:43 ? 00:00:00 nginx: worker process
www 28412 28408 0 19:43 ? 00:00:00 nginx: worker process
www 28413 28408 0 19:43 ? 00:00:00 nginx: worker process

Finally, you can watch /var/log/messages on both servers and observe the VIP floating between them in the VRRP log messages.
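For example, the VRRP state transitions can be followed live with something like the command below (on Debian-family systems the log file would be /var/log/syslog instead):
[root@master-node ~]# tail -f /var/log/messages | grep -Ei 'keepalived|vrrp'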

Finally, two problems often hit with this setup:
1) Split-brain, i.e. both nodes holding the VIP at the same time. Possible causes:
-> iptables is enabled without a rule allowing VRRP traffic (this alone can cause split-brain); one option is simply to disable iptables
-> an error in keepalived.conf, e.g. interface bound to the wrong device

2) The VIP is bound but cannot be pinged from outside. Possible causes:
-> a network fault; check whether the gateway works normally;
-> stale ARP cache on the gateway; refresh it with "arping -I <NIC> -c 5 -s <VIP> <gateway>"
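As a concrete example with the addresses used in this article (em1 as the NIC, and 103.10.86.1 as the gateway from the route -n output above), that would be:
[root@master-node ~]# arping -I em1 -c 5 -s 103.110.98.20 103.10.86.1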
