LVS + Keepalived: A Load-Balancing, High-Availability Cluster

First, check whether the kernel already supports the LVS ipvs modules:

[root@www ~]# modprobe -l | grep ipvs

kernel/net/netfilter/ipvs/ip_vs.ko

kernel/net/netfilter/ipvs/ip_vs_rr.ko

kernel/net/netfilter/ipvs/ip_vs_wrr.ko

kernel/net/netfilter/ipvs/ip_vs_lc.ko

kernel/net/netfilter/ipvs/ip_vs_wlc.ko

kernel/net/netfilter/ipvs/ip_vs_lblc.ko

kernel/net/netfilter/ipvs/ip_vs_lblcr.ko

kernel/net/netfilter/ipvs/ip_vs_dh.ko

kernel/net/netfilter/ipvs/ip_vs_sh.ko

kernel/net/netfilter/ipvs/ip_vs_sed.ko

kernel/net/netfilter/ipvs/ip_vs_nq.ko

kernel/net/netfilter/ipvs/ip_vs_ftp.ko

If entries such as ip_vs.ko and ip_vs_rr.ko are listed, the kernel already supports IPVS by default.
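On newer systems where modprobe -l is no longer available, a minimal alternative check (a sketch using lsmod and the IPVS proc interface):

modprobe ip_vs              # load the core IPVS module
lsmod | grep ip_vs          # confirm it registered
cat /proc/net/ip_vs         # once loaded, IPVS exposes its tables here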

ipvsadm can now be installed, in either of two ways.

Method 1: install from the source package

make

make install

This may fail because the build cannot find the matching kernel source.

Fix: ln -s /usr/src/kernels/2.6.32-279.el6.x86_64/ /usr/src/linux
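Putting the source build together, a minimal sketch (assuming the ipvsadm 1.25 tarball has been unpacked into ~/ipvsadm-1.25):

ln -s /usr/src/kernels/2.6.32-279.el6.x86_64/ /usr/src/linux   # point the build at the kernel source
cd ~/ipvsadm-1.25
make
make install                # installs ipvsadm into /sbin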

Method 2: install from the RPM package

Red Hat's LoadBalancer channel carries this package:

[root@www ~]# yum list |grep ipvs

ipvsadm.x86_64                      1.25-10.el6                 LoadBalancer

 

[root@www ~]# yum install ipvsadm.x86_64

Loaded plugins: product-id, subscription-manager

Updating certificate-based repositories.

Unable to read consumer identity

Setting up Install Process

Resolving Dependencies

--> Running transaction check

---> Package ipvsadm.x86_64 0:1.25-10.el6 will be installed

--> Finished Dependency Resolution

 

Dependencies Resolved

 

=========================================================================================================================================

 Package                       Arch                         Version                             Repository                          Size

=========================================================================================================================================

Installing:

 ipvsadm                       x86_64                       1.25-10.el6                         LoadBalancer                        41 k

 

Transaction Summary

=========================================================================================================================================

Install       1 Package(s)

 

Total download size: 41 k

Installed size: 74 k

Is this ok [y/N]: y

Downloading Packages:

ipvsadm-1.25-10.el6.x86_64.rpm                                                                                    |  41 kB     00:00    

Running rpm_check_debug

Running Transaction Test

Transaction Test Succeeded

Running Transaction

Installed products updated.

  Verifying  : ipvsadm-1.25-10.el6.x86_64                                                                                          1/1

 

Installed:

  ipvsadm.x86_64 0:1.25-10.el6                                                                                                          

 

Complete!

[root@www ~]# ipvsadm --help

ipvsadm v1.25 2008/5/15 (compiled with popt and IPVS v1.2.1)

Usage:

  ipvsadm -A|E -t|u|f service-address [-s scheduler] [-p [timeout]] [-M netmask]

  ipvsadm -D -t|u|f service-address

  ipvsadm -C

  ipvsadm -R

  ipvsadm -S [-n]

  ipvsadm -a|e -t|u|f service-address -r server-address [options]

  ipvsadm -d -t|u|f service-address -r server-address

  ipvsadm -L|l [options]

  ipvsadm -Z [-t|u|f service-address]

  ipvsadm --set tcp tcpfin udp

  ipvsadm --start-daemon state [--mcast-interface interface] [--syncid sid]

  ipvsadm --stop-daemon state

If this usage summary appears, IPVS has been installed successfully.

LVS can be configured in two ways: directly with ipvsadm, or with Piranha.

Method 1: ipvsadm

ipvsadm -A -t 192.168.1.250:80 -s rr -p 600              # add a virtual service on the VIP: round-robin, 600 s (10 min) persistence

ipvsadm -a -t 192.168.1.250:80 -r 192.168.1.200:80 -g    # add a real server; -g selects DR (direct routing) mode

ipvsadm -a -t 192.168.1.250:80 -r 192.168.1.201:80 -g    # add the second real server, also DR

ifconfig eth0:0 192.168.1.250 netmask 255.255.255.0 up   # bring up the virtual IP on the director

route add -host 192.168.1.250 dev eth0:0                 # add a host route for the VIP

echo "1" > /proc/sys/net/ipv4/ip_forward                 # enable IP forwarding; optional for DR, required for NAT

[root@www ~]# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.1.250:http rr

  -> www.ha1.com:http             Route   1      0          0        

  -> www.ha2.com:http             Route   1      0          0
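Rules created this way live only in the kernel and are lost on reboot. A sketch of persisting them with the init script shipped in the RHEL 6 ipvsadm package:

service ipvsadm save        # writes the current table to /etc/sysconfig/ipvsadm
chkconfig ipvsadm on        # reloads the saved table at boot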

For easier management, you can wrap these steps in a start/stop script:

#!/bin/sh

# description: Start LVS of Director server

VIP=192.168.1.250

RIP1=192.168.1.200

RIP2=192.168.1.201

. /etc/rc.d/init.d/functions

case "$1" in

    start)

        echo " start LVS of Director Server"

# set the Virtual  IP Address and sysctl parameter

 /sbin/ifconfig eth0:0 $VIP broadcast $VIP netmask 255.255.255.255 up

       echo "1" >/proc/sys/net/ipv4/ip_forward

#Clear IPVS table

       /sbin/ipvsadm -C

#set LVS

/sbin/ipvsadm -A -t $VIP:80 -s rr -p 600

/sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g

/sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g

#Run LVS

      /sbin/ipvsadm

       ;;

    stop)

        echo "close LVS Directorserver"

        echo "0" >/proc/sys/net/ipv4/ip_forward

        /sbin/ipvsadm -C

        /sbin/ifconfig eth0:0 down

        ;;

    *)

        echo "Usage: $0 {start|stop}"

        exit 1

esac
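To manage the script like any other service, a minimal sketch (the file name lvsdr is just an example):

cp lvsdr /etc/init.d/lvsdr
chmod +x /etc/init.d/lvsdr
/etc/init.d/lvsdr start     # builds the IPVS table and brings up the VIP
/etc/init.d/lvsdr stop      # clears the table and takes the VIP down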

 

Method 2: configure LVS with Piranha; Piranha can also be configured through its web interface (a sketch of enabling it follows the install command below).

First, install Piranha:

yum install piranha
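To use the web interface instead, a minimal sketch (assuming the default piranha-gui setup, which listens on port 3636):

piranha-passwd              # set the password for the web UI user "piranha"
service piranha-gui start
# then browse to http://192.168.1.202:3636 and log in as piranha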

Once Piranha is installed, it creates the file /etc/sysconfig/ha/lvs.cf.

Edit this file:

[root@www ha]# vim /etc/sysconfig/ha/lvs.cf

service = lvs

primary = 192.168.1.202

backup = 0.0.0.0

backup_active = 0

keepalive = 6

deadtime = 10

debug_level = NONE

network = direct

virtual server1 {

        address = 192.168.1.250 eth0:0

 #       vip_nmask = 255.255.255.255

        active = 1

        load_monitor = none

        timeout = 5

        reentry = 10

        port = http

        send = "GET / HTTP/1.0\r\n\r\n"

        expect = "HTTP"

        scheduler = rr

        protocol = tcp

        # sorry_server = 127.0.0.1

 

        server Real1 {

                address = 192.168.1.200

                active = 1

                weight = 1

        }

 

        server Real2 {

                address = 192.168.1.201

                active = 1

                weight = 1

        }

}

[root@www ha]# /etc/init.d/pulse status

pulse (pid  3211) is running...

[root@www ha]# cat  /proc/sys/net/ipv4/ip_forward

1

[root@www ha]# route add -host 192.168.1.250 dev eth0:0

 

Real server configuration

VIP=192.168.1.250

/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up

/sbin/route add -host $VIP dev lo:0

echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce

echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore

echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

sysctl -p

These arp_ignore/arp_announce settings keep the real servers from answering ARP requests for the VIP, so that only the director responds.
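They survive a reboot only if they are also written to /etc/sysctl.conf, which is what the sysctl -p above reloads. A sketch of the entries:

net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2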

Verification:

 

[root@www ha]# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.1.250:http rr

  -> www.ha1.com:http             Route   1      0          5        

  -> www.ha2.com:http             Route   1      0          5

When the httpd process on one node is stopped, the director kicks that node out of the IPVS table.

After that, only ha1's site is served.

This confirms that load balancing, with health checking of the real servers, is working.
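A quick client-side spot check of the balancing (a sketch, assuming each real server's index page identifies which host served it):

# repeated requests to the VIP; without persistence the replies alternate
# between ha1 and ha2 (with -p 600 set earlier, a single client is pinned
# to one real server for 10 minutes)
for i in $(seq 1 6); do curl -s http://192.168.1.250/; done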

What remains is to make the two LVS directors themselves highly available with Keepalived, i.e. using VRRP.

Install keepalived:

[root@www keepalived]# rpm -Uvh ~/keepalived-1.2.7-5.x86_64.rpm

Configure keepalived.conf:

vrrp_instance VI_1 {

        state MASTER            # master

        interface eth0

        lvs_sync_daemon_interface eth1

        virtual_router_id 50    # must be identical on master and backup

        priority 150

        advert_int 1

        authentication {

                auth_type PASS

                auth_pass uplooking

        }

        virtual_ipaddress {

                192.168.1.250  # vip

        }

}

virtual_server 192.168.1.250 80 {

        delay_loop 20

        lb_algo rr

        lb_kind DR

        nat_mask 255.255.255.0

        protocol TCP

 

        real_server 192.168.1.200 80 {

                weight 1

                TCP_CHECK {

                        connect_timeout 3

                }

        }

        real_server 192.168.1.201 80 {

                weight 1

                TCP_CHECK {

                        connect_timeout 3

                }

        }

}
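The same keepalived.conf goes on the backup director (ma2), with only the VRRP role and priority changed; a minimal sketch of ma2's vrrp_instance (the virtual_server block is identical on both directors):

vrrp_instance VI_1 {
        state BACKUP            # this node starts as backup
        interface eth0
        lvs_sync_daemon_interface eth1
        virtual_router_id 50    # must match the master
        priority 100            # lower than the master's 150
        advert_int 1
        authentication {
                auth_type PASS
                auth_pass uplooking
        }
        virtual_ipaddress {
                192.168.1.250  # same vip
        }
}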

Full verification.

Step 1: take down one of the keepalived nodes.

The VIP is no longer present on ma1.
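A sketch of how this step can be performed and checked (assuming keepalived was installed as a service on the directors):

service keepalived stop                    # on ma1: simulate a director failure
ip addr show eth0 | grep 192.168.1.250     # on ma1: the VIP should now be gone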

Now check ma2.

All of ma2's processes are running normally and the VIP has migrated to it, giving the LVS directors high availability.

Load balancing is still in effect across both web servers.

With the fault repaired, start keepalived on ma1 again.

The VIP moves back to ma1.

 

Next, test the web tier: stop the httpd service on ha1.

Check ma1:

The VIP has not moved, but ha1 has been kicked out of the IPVS table.

Now check ma2:

The VIP has not moved there either, and ha1 has likewise been removed from its IPVS table.

Accessing the site now shows only ha2's web page.

Start ha1's web service again.

ha1 is automatically added back into the IPVS tables on both ma1 and ma2, and browsing the site once more shows both pages in rotation.

Verification complete.

Summary

Network topology:

 

 

Cluster notes:

1. LVS provides a load-balanced, highly available web service.

2. Keepalived makes the LVS director nodes themselves highly available.

Verification notes:

1. Stop the keepalived process on either director and the VIP migrates to the other one, while the two web servers continue load balancing; this gives the LVS directors high availability.

2. Stop either web server and the VIP does not migrate, but LVS kicks that server out of its table, so the surviving server keeps serving; this gives the web tier high availability.
