Nginx + Keepalived Dual-Machine Hot Standby (Master-Slave Mode)

 


Reference:

http://www.cnblogs.com/kevingrace/p/6138185.html

Dual-machine high availability is usually implemented with a virtual IP (a "floating" IP), based on the IP aliasing capability of Linux/Unix.

There are currently two dual-machine HA modes:

1. Master-slave mode: two servers sit at the front, a master and a hot standby. Under normal conditions the master binds a public virtual IP and provides the load-balancing service while the standby stays idle; when the master fails, the standby takes over the public virtual IP and continues the service. The drawback is that the standby is wasted as long as the master is healthy, so for sites with few servers this option is not economical.

2. Dual-master mode: two load balancers sit at the front, acting as master and backup for each other, and both are active at the same time, each binding its own public virtual IP and providing load balancing; when one of them fails, the other takes over the failed machine's public virtual IP (the surviving machine then carries all the requests). This option is economical and fits the current architecture very well.
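The "floating" VIP is nothing more than a secondary address that keepalived adds to, or removes from, a network interface; conceptually it is the same as doing the following by hand (using the VIP and NIC of the setup described below):

# what keepalived effectively does on the node that owns the VIP
ip addr add 172.16.12.226/32 dev eth1
# what happens when that node gives the VIP up
ip addr del 172.16.12.226/32 dev eth1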

This post is a working record of the master-slave mode of highly available load balancing with Nginx + keepalived:

keepalived can be regarded as an implementation of the VRRP protocol on Linux. It consists of three main modules: core, check and vrrp.

The core module is the heart of keepalived; it starts and maintains the main process and loads and parses the global configuration file.

The check module performs the health checks, including the various check methods that can be defined.

The vrrp module implements the VRRP protocol.

1. Environment

OS: CentOS release 6.9 (Final) minimal

web1: 172.16.12.223

web2: 172.16.12.224

vip:  172.16.12.226 (floating IP)

svn:  172.16.12.225

 

2. Installation

Install the nginx and keepalived services (the installation is exactly the same on web1 and web2).

 

2.1 Install dependencies

yum  clean all
yum -y update
yum -y install gcc-c++ gd libxml2-devel libjpeg-devel libpng-devel net-snmp-devel wget telnet vim zip unzip 
yum -y install curl-devel libxslt-devel pcre-devel libjpeg libpng libcurl4-openssl-dev 
yum -y install libcurl-devel libcurl freetype-config freetype freetype-devel unixODBC libxslt 
yum -y install gcc automake autoconf libtool openssl-devel
yum -y install perl-devel perl-ExtUtils-Embed 
yum -y install cmake ncurses-devel.x86_64  openldap-devel.x86_64 lrzsz  openssh-clients gcc-g77  bison 
yum -y install libmcrypt libmcrypt-devel mhash mhash-devel bzip2 bzip2-devel
yum -y install ntpdate rsync svn  patch  iptables iptables-services
yum -y install libevent libevent-devel  cyrus-sasl cyrus-sasl-devel
yum -y install gd-devel libmemcached-devel memcached git libssl-devel libyaml-devel auto make
yum -y groupinstall "Server Platform Development" "Development tools"
yum -y groupinstall "Development tools"
yum -y install gcc pcre-devel zlib-devel openssl-devel

2.2 Post-install tuning for CentOS 6

# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep SELINUX=disabled /etc/selinux/config
setenforce 0
getenforce
cat >> /etc/sysctl.conf << EOF
#
##custom
#
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096    87380   4194304
net.ipv4.tcp_wmem = 4096    16384   4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
#net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_synack_retries = 2
#net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
#net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024    65535
#net.ipv4.tcp_tw_len = 1
EOF

# Apply the kernel parameters
sysctl -p
cp /etc/security/limits.conf /etc/security/limits.conf.bak2017
cat >> /etc/security/limits.conf << EOF
#
###custom
#
*           soft   nofile       20480
*           hard   nofile       65535
*           soft   nproc        20480
*           hard   nproc        65535
EOF

2.3 Set the shell session timeout

Append the following line to /etc/profile (1800 seconds = 30 minutes; by default the shell never times out):
cp   /etc/profile   /etc/profile.bak2017
cat >> /etc/profile << EOF
export TMOUT=1800
EOF

2.4 Download the source packages

(Do this on both the master and slave load balancers.)
[root@web1 ~]# cd /usr/local/src/
[root@web1 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
[root@web1 src]# wget http://www.keepalived.org/software/keepalived-1.3.2.tar.gz

2.5 Install nginx

(Do this on both the master and slave load balancers.)
[root@web1 src]# tar -zxvf nginx-1.9.7.tar.gz
[root@web1 nginx-1.9.7]# cd nginx-1.9.7
# Add the www user: -M means do not create a home directory, -s sets the login shell
[root@web1 nginx-1.9.7]# useradd www -M -s /sbin/nologin 
[root@web1 nginx-1.9.7]# vim auto/cc/gcc 
# Comment out the CFLAGS line below (around line 179) to disable the debug build; after editing it looks like this:
# debug
# CFLAGS="$CFLAGS -g"
[root@web1 nginx-1.9.7]#  ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@web1 nginx-1.9.7]#  make && make install
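Optionally, verify the build before continuing; nginx -V prints the version together with the configure arguments it was compiled with:

[root@web1 nginx-1.9.7]# /usr/local/nginx/sbin/nginx -V    # confirm the modules requested in ./configure above
[root@web1 nginx-1.9.7]# id www                            # confirm the worker user exists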

2.6 Install keepalived

(Do this on both the master and slave load balancers.)
[root@web1 nginx-1.9.7]# cd /usr/local/src/
[root@web1 src]# tar -zvxf keepalived-1.3.2.tar.gz 
[root@web1 src]# cd keepalived-1.3.2
[root@web1 keepalived-1.3.2]# ./configure 
[root@web1 keepalived-1.3.2]# make && make install
[root@web1 keepalived-1.3.2]# cp /usr/local/src/keepalived-1.3.2/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@web1 keepalived-1.3.2]#  cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@web1 keepalived-1.3.2]# mkdir /etc/keepalived
[root@web1 keepalived-1.3.2]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@web1 keepalived-1.3.2]# cp /usr/local/sbin/keepalived /usr/sbin/
[root@web1 keepalived-1.3.2]# echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.local
[root@web1 keepalived-1.3.2]# echo "/etc/init.d/keepalived start" >> /etc/rc.local

3. Service configuration

3.1 Disable SELinux

 

First disable SELinux and configure the firewall (do this on both the master and slave load balancers).
[root@web1 keepalived-1.3.2]# cd /root/
[root@web1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@web1 ~]# grep SELINUX=disabled /etc/selinux/config
[root@web1 ~]# setenforce 0

 

3.2 Stop the firewall

[root@web1 ~]# /etc/init.d/iptables  stop

3.3 Configure nginx

  The nginx configuration is exactly the same on the master and the slave. The work is mainly in the http block of /usr/local/nginx/conf/nginx.conf; you can also create a vhost directory and keep the per-site configuration there, e.g. in vhost/LB.conf.
In particular:
multiple domain names are handled with virtual hosts (server blocks under http);
different virtual directories of the same domain are handled with different location blocks inside each server;
the backend servers are defined in an upstream block in vhost/LB.conf and then referenced with proxy_pass inside the server/location blocks.
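As a minimal illustration of that layout (the pool name, backend addresses and domain below are placeholders, not part of this deployment), a vhost file boils down to:

upstream example_pool {                 # backend pool referenced by proxy_pass
    server 192.168.0.11:80;
    server 192.168.0.12:80;
}

server {
    listen      80;
    server_name app.example.com;        # one server block per domain name

    location / {                        # one location per virtual directory
        proxy_pass http://example_pool;
    }
}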

To implement the access scheme planned above, nginx.conf and LB.conf are configured as follows (the added proxy_cache_path and proxy_temp_path lines enable nginx's proxy cache):

 

[root@web1 ~]# vim /usr/local/nginx/conf/nginx.conf
user  www;
worker_processes  8;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  65535;
}


http {
    include       mime.types;
    default_type  application/octet-stream;
    charset utf-8;
    ######
    ### set access log format
    #######
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;
    #######
    ## http setting
    #######

    sendfile        on;
    #tcp_nopush     on;
    tcp_nopush     on;
    tcp_nodelay    on;
    keepalive_timeout  65;
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=mycache:20m max_size=2048m inactive=60m;
    proxy_temp_path /var/www/cache/tmp;

    fastcgi_connect_timeout 3000;
    fastcgi_send_timeout 3000;
    fastcgi_read_timeout 3000;
    fastcgi_buffer_size 256k;
    fastcgi_buffers 8 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;


    #keepalive_timeout  0;
    #keepalive_timeout  65;

    #
    client_header_timeout 600s;
    client_body_timeout 600s;
   # client_max_body_size 50m;
    client_max_body_size 100m;               # maximum size of a single file/body a client may send
    client_body_buffer_size 256k;            # buffer size for client request bodies; the body is buffered locally before being passed to the backend


    #gzip  on;
    gzip_min_length  1k;
    gzip_buffers     4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 9;
    gzip_types       text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    gzip_vary on;

## includes vhosts
    include vhosts/*.conf;
}

# Create the required directories
[root@web1 ~]# mkdir -p /usr/local/nginx/conf/vhosts
[root@web1 ~]# mkdir -p /var/www/cache
[root@web1 ~]# ulimit -n 65535
[root@web2 ~]# vim /usr/local/nginx/conf/vhosts/LB.conf
upstream LB-WWW {
      ip_hash;
      server 172.16.12.223:80 max_fails=3 fail_timeout=30s;     # max_fails=3: number of failed attempts allowed (default 1)
      server 172.16.12.224:80 max_fails=3 fail_timeout=30s;     # fail_timeout=30s: how long to stop sending requests to this backend once max_fails is reached
      server 172.16.12.225:80 max_fails=3 fail_timeout=30s;
    }
    
upstream LB-OA {
      ip_hash;
      server 172.16.12.223:8080 max_fails=3 fail_timeout=30s;
      server 172.16.12.224:8080 max_fails=3 fail_timeout=30s;
}
          
  server {
      listen      80;
      server_name localhost;
    
      access_log  /usr/local/nginx/logs/dev-access.log main;
      error_log  /usr/local/nginx/logs/dev-error.log;
    
      location /svn {
         proxy_pass http://172.16.12.226/svn/;
         proxy_redirect off ;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header REMOTE-HOST $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_connect_timeout 300;             # timeout for establishing the connection (handshake) with the backend
         proxy_send_timeout 300;                # time the backend has to accept the forwarded request data
         proxy_read_timeout 600;                # once connected, how long to wait for the backend response (the request may be queued on the backend)
         proxy_buffer_size 256k;                # buffer for the first part of the backend response, which contains the headers nginx needs to process
         proxy_buffers 4 256k;                  # number and size of buffers for a single backend response
         proxy_busy_buffers_size 256k;          # how much buffer space may be busy sending to the client while the response is still being read
         proxy_temp_file_write_size 256k;       # size of data written to a proxy temporary file at a time
         proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
         proxy_max_temp_file_size 128m;
         proxy_cache mycache;                                
         proxy_cache_valid 200 302 60m;                      
         proxy_cache_valid 404 1m;
       }
    
      location /submin {
         proxy_pass http://172.16.12.226/submin/;
         proxy_redirect off ;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header REMOTE-HOST $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_connect_timeout 300;
         proxy_send_timeout 300;
         proxy_read_timeout 600;
         proxy_buffer_size 256k;
         proxy_buffers 4 256k;
         proxy_busy_buffers_size 256k;
         proxy_temp_file_write_size 256k;
         proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
         proxy_max_temp_file_size 128m;
         proxy_cache mycache;        
         proxy_cache_valid 200 302 60m;
         proxy_cache_valid 404 1m;
        }
    }
    
server {
     listen       80;
     server_name  localhost;
  
      access_log  /usr/local/nginx/logs/www-access.log main;
      error_log  /usr/local/nginx/logs/www-error.log;
  
     location / {
         proxy_pass http://LB-WWW;
         proxy_redirect off ;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header REMOTE-HOST $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_connect_timeout 300;
         proxy_send_timeout 300;
         proxy_read_timeout 600;
         proxy_buffer_size 256k;
         proxy_buffers 4 256k;
         proxy_busy_buffers_size 256k;
         proxy_temp_file_write_size 256k;
         proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
         proxy_max_temp_file_size 128m;
         proxy_cache mycache;                                
         proxy_cache_valid 200 302 60m;                      
         proxy_cache_valid 404 1m;
        }
}
   
 server {
       listen       80;
       server_name  localhost;
  
      access_log  /usr/local/nginx/logs/oa-access.log main;
      error_log  /usr/local/nginx/logs/oa-error.log;
  
       location / {
         proxy_pass http://LB-OA;
         proxy_redirect off ;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header REMOTE-HOST $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_connect_timeout 300;
         proxy_send_timeout 300;
         proxy_read_timeout 600;
         proxy_buffer_size 256k;
         proxy_buffers 4 256k;
         proxy_busy_buffers_size 256k;
         proxy_temp_file_write_size 256k;
         proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
         proxy_max_temp_file_size 128m;
         proxy_cache mycache;                                
         proxy_cache_valid 200 302 60m;                      
         proxy_cache_valid 404 1m;
        }
}
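After LB.conf is in place, the configuration can be syntax-checked (and, once nginx is already running, reloaded) with the standard nginx flags, using this article's install prefix:

[root@web1 ~]# /usr/local/nginx/sbin/nginx -t          # test the configuration files
[root@web1 ~]# /usr/local/nginx/sbin/nginx -s reload   # apply changes without stopping nginx (only when nginx is running)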

 

3.4 Verification setup

3.4.1 On the svn server

cat >/usr/local/nginx/conf/vhosts/svn.conf <<EOF
server {
listen 80;
server_name svn 172.16.12.225;

access_log /usr/local/nginx/logs/svn-access.log main;
error_log /usr/local/nginx/logs/svn-error.log;

location / {
root /var/www/html;
index index.html index.php index.htm;
}
}
EOF
[root@svn ~]# cat /usr/local/nginx/conf/vhosts/svn.conf
server {
listen 80;
server_name svn 172.16.12.225;

access_log /usr/local/nginx/logs/svn-access.log main;
error_log /usr/local/nginx/logs/svn-error.log;

location / {
root /var/www/html;
index index.html index.php index.htm;
}
}
[root@svn ~]# 
[root@svn ~]# mkdir -p /var/www/html
[root@svn ~]# mkdir -p /var/www/html/submin
[root@svn ~]# mkdir -p /var/www/html/svn
[root@svn ~]# cat /var/www/html/svn/index.html
this is the page of svn/172.16.12.225
[root@svn ~]#  cat /var/www/html/submin/index.html
this is the page of submin/172.16.12.225
[root@svn ~]# chown -R www.www /var/www/html/
[root@svn ~]# chmod -R 755 /var/www/html/
[root@svn ~]# cat  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@svn ~]# tail -4  /etc/rc.local 
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start

# Start nginx
[root@svn ~]# /usr/local/nginx/sbin/nginx 
# Test the URLs
[root@svn local]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@svn local]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225

3.4.2 On web1

[root@web1 ~]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@web1 ~]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225

cat >/usr/local/nginx/conf/vhosts/web.conf <<EOF
server {
listen 80;
server_name web 172.16.12.223;

access_log /usr/local/nginx/logs/web-access.log main;
error_log /usr/local/nginx/logs/web-error.log;

location / {
root /var/www/html;
index index.html index.php index.htm;
}
}
EOF

[root@web1 ~]# cat /usr/local/nginx/conf/vhosts/web.conf
server {
listen 80;
server_name web 172.16.12.223;

access_log /usr/local/nginx/logs/web-access.log main;
error_log /usr/local/nginx/logs/web-error.log;

location / {
root /var/www/html;
index index.html index.php index.htm;
}
}

[root@web1 ~]# mkdir -p /var/www/html
[root@web1 ~]# mkdir -p /var/www/html/web
[root@web1 ~]# cat  /var/www/html/web/index.html
this is the page of web/172.16.12.223
[root@web1 ~]# chown -R www.www /var/www/html/
[root@web1 ~]# chmod -R 755 /var/www/html/
[root@web1 ~]# cat  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@web1 ~]# tail -4  /etc/rc.local 
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start
[root@web1 ~]# /usr/local/nginx/sbin/nginx
[root@web1 ~]# curl http://172.16.12.223/web/
this is the page of web/172.16.12.223

3.4.3 On web2

[root@web2 ~]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@web2 ~]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225

cat >/usr/local/nginx/conf/vhosts/web.conf <<EOF
server {
listen 80;
server_name web 172.16.12.224;

access_log /usr/local/nginx/logs/web-access.log main;
error_log /usr/local/nginx/logs/web-error.log;

location / {
root /var/www/html;
index index.html index.php index.htm;
}
}
EOF

[root@web2 ~]# cat /usr/local/nginx/conf/vhosts/web.conf
server {
listen 80;
server_name web 172.16.12.224;

access_log /usr/local/nginx/logs/web-access.log main;
error_log /usr/local/nginx/logs/web-error.log;

location / {
root /var/www/html;
index index.html index.php index.htm;
}
}
[root@web2 ~]# 
[root@web2 ~]# mkdir -p /var/www/html
[root@web2 ~]# mkdir -p /var/www/html/web
[root@web2 ~]# cat /var/www/html/web/index.html
this is the page of web/172.16.12.224
[root@web2 ~]#  cat /var/www/html/web/index.html
this is the page of web/172.16.12.224
[root@web2 ~]# chown -R www.www /var/www/html/
[root@web2 ~]# chmod -R 755 /var/www/html/
[root@web2 ~]# cat  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@web2 ~]# tail -4  /etc/rc.local 
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start

# Start nginx
[root@web2 ~]# /usr/local/nginx/sbin/nginx 
# Test the URL
[root@web2 local]# curl http://172.16.12.224/web/
this is the page of web/172.16.12.224

3.4.4 Browser test

4. Keepalived configuration

4.1 Configuration on web1 (the master)

[root@web1 ~]#  cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@web1 ~]#  vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived     # global definitions
  
global_defs {
# notification_email {     # mailbox(es) keepalived notifies when an event such as a failover occurs
# ops@wangshibo.cn   # alert addresses, several allowed, one per line; the local sendmail service must be running
# tech@wangshibo.cn
# }
#
# notification_email_from ops@wangshibo.cn   # sender address used for the notification mails sent on failover and similar events
# smtp_server 127.0.0.1      # SMTP server used to send the mail
# smtp_connect_timeout 30    # timeout for connecting to the SMTP server
router_id master-node     # identifier of this keepalived machine, usually the hostname; shown in the mail subject when a failure occurs
}
  
vrrp_script chk_http_port {      # checks whether the nginx service is running; many methods exist (process check, script, and so on)
    script "/opt/chk_nginx.sh"   # here a script does the monitoring
    interval 2                   # run the script every 2 seconds
    weight -5                    # priority change caused by the script result: a failure (non-zero exit) lowers the priority by 5
    fall 2                    # only 2 consecutive failures count as a real failure; the weight then lowers the priority (range 1-255)
    rise 1                    # 1 success is enough to be considered healthy again; the priority is not changed
}
  
vrrp_instance VI_1 {    # within the same virtual_router_id the node with the highest priority (0-255) becomes MASTER and owns the VIP; when it fails, the next-highest priority takes over
    state MASTER    # role of this keepalived node: MASTER for the primary, BACKUP for the standby. This only sets the initial state of the instance; the final role is still decided by the priority election. If a node is declared MASTER but has a lower priority, the other node will advertise its higher priority and preempt the MASTER role
    interface eth1          # interface the HA instance is bound to; the virtual IP is added on top of this existing NIC
#    mcast_src_ip 103.110.98.14  # source IP for the VRRP advertisements, i.e. the address they are sent from; pick a stable NIC (comparable to heartbeat's heartbeat port); if unset, the IP of the interface specified above is used
    virtual_router_id 226         # virtual router ID, a number that must be unique per VRRP instance; MASTER and BACKUP of the same instance must use the same value
    priority 101                 # priority: the higher the number, the higher the priority; within one vrrp_instance the MASTER's priority must be higher than the BACKUP's
    advert_int 1                 # interval in seconds between the advertisements the MASTER sends to the BACKUP
    authentication {             # authentication type and password; must be identical on master and backup
        auth_type PASS           # VRRP authentication type, mainly PASS or AH
        auth_pass 1111           # VRRP password; MASTER and BACKUP of the same vrrp_instance must share it to communicate
    }
    virtual_ipaddress {          # VRRP HA virtual address(es); list additional VIPs on separate lines
        172.16.12.226
    }
 
track_script {                      # the monitor to run. Do not place this block immediately after the vrrp_script block (a pitfall hit during testing), otherwise the nginx monitoring silently stops working!
   chk_http_port                    # references the vrrp_script by name; it is run periodically to adjust the priority and ultimately trigger a master/backup switch
}
}

4.2 Configuration on web2 (the backup)

[root@web2 ~]#  cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@web2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived    
  
global_defs {
#  notification_email {                
#  ops@wangshibo.cn                     
#  tech@wangshibo.cn
#  }
#    
#  notification_email_from ops@wangshibo.cn  
#  smtp_server 127.0.0.1                    
#  smtp_connect_timeout 30                 
router_id slave-node                    
}
  
vrrp_script chk_http_port {         
    script "/opt/chk_nginx.sh"   
    interval 2                      
    weight -5                       
    fall 2                   
    rise 1                  
}
  
vrrp_instance VI_1 {            
    state BACKUP           
    interface eth1           
#    mcast_src_ip 103.110.98.24  
    virtual_router_id 226        
    priority 99               
    advert_int 1               
    authentication {            
        auth_type PASS         
        auth_pass 1111          
    }
    virtual_ipaddress {        
        172.16.12.226
    }
 
track_script {                     
   chk_http_port                 
}
 
}
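Before relying on the failover it can be worth confirming that the two nodes actually see each other's VRRP advertisements; they are multicast to 224.0.0.18 on the interface configured in vrrp_instance (eth1 here). An optional check, not part of the original procedure:

[root@web1 ~]# tcpdump -i eth1 -nn host 224.0.0.18
# roughly one advertisement per second (advert_int 1) should be visible from the current MASTER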

4.3 Monitoring notes

  Getting keepalived to monitor the state of nginx:
1) With the configuration so far, if keepalived on the master stops, the slave automatically takes over the VIP and serves the traffic;
as soon as keepalived on the master recovers, it takes the VIP back. But that is not quite what we need: we need the switchover to also happen automatically when nginx stops.
2) keepalived supports monitoring scripts. A script can watch the state of nginx and, when it is unhealthy, take a series of actions; if nginx still cannot be recovered, the script kills keepalived so that the slave can take over the service.

How to monitor the state of nginx:
the simplest approach is to watch the nginx process; a more reliable one is to check the nginx port; the most reliable is to check whether several URLs actually return their pages.

Note: the script referenced in the vrrp_script section of keepalived.conf is generally written in one of two styles:
1) The script's exit code changes the priority; keepalived keeps sending advertisements, and the backup compares priorities before deciding. This is the style used when directly monitoring the nginx process.
2) When the script detects a problem it simply shuts down the keepalived process; the backup stops receiving advertisements and claims the VIP. This is the style used when checking the nginx port.
Of the script variants above, "killall -0 nginx" belongs to the first style and "/opt/chk_nginx.sh" to the second. Personally I prefer deciding in a shell script that exits 1 on a problem and 0 when healthy, and letting keepalived use the dynamically adjusted vrrp_instance priority election to decide whether to claim the VIP:
if the script exits 0 and the configured weight is greater than 0, the priority is increased accordingly;
if the script exits non-zero and the configured weight is less than 0, the priority is decreased accordingly;
in all other cases the originally configured priority, i.e. the priority value in the configuration file, is unchanged.
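For reference, a check script of the first style could look roughly like this (a sketch only; it is not the /opt/chk_nginx.sh used later in this article):

#!/bin/bash
# exit 0 -> the nginx process exists; with a positive weight keepalived would raise the priority
# exit 1 -> the nginx process is missing; with the negative weight (-5) keepalived lowers the priority
if killall -0 nginx 2>/dev/null; then
    exit 0
else
    exit 1
fi

With weight -5 and the priorities 101/99 configured above, two consecutive failures (fall 2) drop the master's effective priority to 96, which is below the backup's 99, so the backup wins the next election and takes over the VIP.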

Tips:
The priority is not raised or lowered indefinitely.
You can write several check scripts and give each one its own weight (just list them in the configuration).
Whether the priority is raised or lowered, the final value always stays within [1,254]; it never becomes less than or equal to 0 or greater than or equal to 255.
Configure nopreempt in the vrrp_instance of the MASTER node; after it recovers from a failure it will not preempt even though its priority is higher, which avoids pointless switchovers under normal conditions (see the snippet below).
With the above, a script can check the state of the business process and dynamically adjust the priority, achieving master/backup switchover.
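A minimal sketch of the nopreempt tip above (keepalived only honours nopreempt when the instance's initial state is BACKUP, so in this variant both nodes declare state BACKUP and differ only in priority; this is an alternative, not the configuration used in this article):

vrrp_instance VI_1 {
    state BACKUP               # initial state must be BACKUP for nopreempt to take effect
    nopreempt                  # a recovered higher-priority node no longer snatches the VIP back
    interface eth1
    virtual_router_id 226
    priority 101               # the peer keeps a lower priority, e.g. 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.12.226
    }
}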

Note: the default keepalived.conf also contains virtual_server and real_server sections; they are intended for LVS and are not used here.

How the service recovery is attempted:
keepalived only checks whether keepalived itself is healthy on the local and peer node and moves the VIP accordingly; a failure of the local nginx alone will not make the VIP move.
So we write a script that checks whether the local nginx is healthy and restarts it if it is not; the script then waits 2 seconds and checks again, and if nginx is still down it gives up, stops keepalived, and the other host takes over the VIP.

Based on this strategy the monitoring script is easy to write. Note that the script only helps while the keepalived service is running! If keepalived has already been stopped, nginx will no longer be restarted automatically after it goes down.
The script checks whether nginx is running, tries to restart nginx when the process is missing, and stops keepalived if the restart fails, letting the other machine take over.

4.4 Monitoring script

The monitoring script is as follows (both master and slave need this script):
[root@web1 ~]# cat  /opt/chk_nginx.sh
#!/bin/bash
counter=$(ps -C nginx --no-heading|wc -l)
if [ "${counter}" = "0" ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    counter=$(ps -C nginx --no-heading|wc -l)
    if [ "${counter}" = "0" ]; then
        /etc/init.d/keepalived stop
    fi
fi
[root@web1 ~]# 
[root@web1 ~]#  chmod 755 /opt/chk_nginx.sh
[root@web1 ~]# sh /opt/chk_nginx.sh


[root@web2 ~]# cat  /opt/chk_nginx.sh
#!/bin/bash
counter=$(ps -C nginx --no-heading|wc -l)
if [ "${counter}" = "0" ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    counter=$(ps -C nginx --no-heading|wc -l)
    if [ "${counter}" = "0" ]; then
        /etc/init.d/keepalived stop
    fi
fi
[root@web2 ~]# 
[root@web2 ~]#  chmod 755 /opt/chk_nginx.sh
[root@web2 ~]# sh /opt/chk_nginx.sh

4.5 Points to consider

Points this architecture has to handle:
1) As long as the master is up, the master holds the VIP and nginx runs on the master.
2) When the master goes down, the slave claims the VIP and serves the traffic with its own nginx.
3) If the nginx service on the master dies, nginx is restarted automatically; if the restart fails, keepalived is shut down, so the VIP resource also moves to the slave (a drill for this case is sketched at the end of section 5).
4) The health of the backend servers must be checked.
5) nginx runs on both master and slave; no matter which node it is, once its keepalived service stops, the VIP floats to the node whose keepalived is still running.
If the VIP should also float to the other node when only the nginx service dies, that has to be driven by a script or by shell commands in the configuration file. (After nginx crashes it is restarted automatically; if the restart fails, keepalived is forcibly stopped, which makes the VIP resource drift to the other machine.)

 

5. Final verification

 

 

Final verification (point the configured backend application domain names at the VIP): when keepalived and nginx on the master are stopped, the VIP automatically floats to the slave.
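To watch the switchover from a client's point of view, a simple polling loop against the VIP can be run from any machine that can reach 172.16.12.226 (a sketch, not part of the original procedure):

while true; do
    # /web/ is answered by whichever node currently holds the VIP
    curl -s --connect-timeout 1 http://172.16.12.226/web/ || echo "request failed"
    sleep 1
done

During a failover there should be at most a brief gap before the responses continue, now served by the node that took over the VIP.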

Verifying the keepalived failure scenario:

1) Start nginx and keepalived on the master and then on the slave, and make sure both services are running normally:

 

[root@web2 ~]# /usr/local/nginx/sbin/nginx  -s stop
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web2 ~]# /etc/init.d/keepalived  stop
Stopping keepalived:                                       [FAILED]
[root@web2 ~]# 

[root@web1 ~]# /usr/local/nginx/sbin/nginx  -s stop
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web1 ~]# /etc/init.d/keepalived  stop
Stopping keepalived:                                       [FAILED]
[root@web1 ~]# 
[root@web1 ~]# /usr/local/nginx/sbin/nginx
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web1 ~]# /etc/init.d/keepalived  start
Starting keepalived:                                       [  OK  ]

 

2) On the master, check whether the virtual IP is bound:

 

[root@web1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ca:99:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.223/24 brd 10.0.2.255 scope global eth0
    inet 172.16.12.226/32 scope global eth0
    inet6 fe80::a00:27ff:feca:9956/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b3:a9:36 brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.223/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:feb3:a936/64 scope link 
       valid_lft forever preferred_lft forever

[root@web2 ~]# /usr/local/nginx/sbin/nginx
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web2 ~]# /etc/init.d/keepalived  start
Starting keepalived:                                       [  OK  ]
[root@web2 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:9a:0b:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.224/24 brd 10.0.2.255 scope global eth0
    inet6 fe80::a00:27ff:fe9a:b97/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:63:26:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.224/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:fe63:261a/64 scope link 
       valid_lft forever preferred_lft forever
[root@web2 ~]# 

 

5.1 Update the site configuration

[root@web1 ~]# cat  /usr/local/nginx/conf/vhosts/web.conf
server {
listen 80;
server_name localhost 172.16.12.223 172.16.12.226;

access_log /usr/local/nginx/logs/web-access.log main;
error_log /usr/local/nginx/logs/web-error.log;

location / {
root /var/www/html;
index index.html index.php index.htm;
}
}
[root@web1 ~]# 

[root@web2 ~]# cat  /usr/local/nginx/conf/vhosts/web.conf
server {
listen 80;
server_name localhost 172.16.12.224 172.16.12.226;

access_log /usr/local/nginx/logs/web-access.log main;
error_log /usr/local/nginx/logs/web-error.log;

location / {
root /var/www/html;
index index.html index.php index.htm;
}
}
[root@web2 ~]# 

5.2 Access verification
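The access check can also be done with curl against the VIP (172.16.12.226), which at this point is still held by web1; the expected output below is only indicative:

# run from any host on the 172.16.12.0/24 network, e.g. the svn server
curl http://172.16.12.226/web/
# expected while web1 holds the VIP: this is the page of web/172.16.12.223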

5.3 Stop the keepalived service on the master

[root@web1 ~]# /etc/init.d/keepalived  stop
Stopping keepalived:                                       [  OK  ]
[root@web1 ~]# 
[root@web1 ~]# tail -f  /var/log/messages
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_healthcheckers[7958]: Netlink reflector reports IP 172.16.12.226 added
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:43:51 web1 Keepalived[7956]: Stopping
Dec 14 13:43:51 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) sent 0 priority
Dec 14 13:43:51 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) removing protocol VIPs.
Dec 14 13:43:51 web1 Keepalived_healthcheckers[7958]: Netlink reflector reports IP 172.16.12.226 removed
Dec 14 13:43:51 web1 Keepalived_healthcheckers[7958]: Stopped
Dec 14 13:43:52 web1 Keepalived_vrrp[7959]: Stopped
Dec 14 13:43:52 web1 Keepalived[7956]: Stopped Keepalived v1.3.2 (12/14,2017)

[root@web1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ca:99:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.223/24 brd 10.0.2.255 scope global eth0
    inet6 fe80::a00:27ff:feca:9956/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b3:a9:36 brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.223/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:feb3:a936/64 scope link 
       valid_lft forever preferred_lft forever
[root@web1 ~]# 

5.4 Check the failover on web2

[root@web2 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:9a:0b:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.224/24 brd 10.0.2.255 scope global eth0
    inet 172.16.12.226/32 scope global eth0
    inet6 fe80::a00:27ff:fe9a:b97/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:63:26:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.224/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:fe63:261a/64 scope link 
       valid_lft forever preferred_lft forever
[root@web2 ~]# 

[root@web2 ~]# tail -f /var/log/messages
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_healthcheckers[8186]: Netlink reflector reports IP 172.16.12.226 added
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226

5.5 Verify page access

Page before the failover:

 

 

Page after the failover:

 

 

This shows that the failover has completed.
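As an additional drill for case 3 in section 4.5 (nginx dies and cannot be restarted), the following sequence can be used; it assumes the paths and the /opt/chk_nginx.sh script from this article and is only a sketch:

# on web1: make the automatic restart fail, then kill nginx
[root@web1 ~]# mv /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginx.bak
[root@web1 ~]# pkill nginx
# within a few seconds /opt/chk_nginx.sh fails to restart nginx and stops keepalived;
# the VIP should then show up on web2:
[root@web2 ~]# ip addr | grep 172.16.12.226
# restore web1 afterwards:
[root@web1 ~]# mv /usr/local/nginx/sbin/nginx.bak /usr/local/nginx/sbin/nginx
[root@web1 ~]# /usr/local/nginx/sbin/nginx
[root@web1 ~]# /etc/init.d/keepalived start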
