NAT mode suits smaller deployments because it conserves public IP addresses. DR mode, by contrast, would waste public IPs badly once you have tens or hundreds of servers, which matters now that public IPv4 addresses are increasingly scarce. An alternative is to build the LVS entirely on the internal network, VIP included, and expose only a single public IP mapped to it: port 80 on the public IP is forwarded to port 80 on the internal VIP.
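The single-public-IP mapping just described is typically a DNAT rule on the edge box. A minimal sketch, assuming a made-up public IP of 203.0.113.10 on the edge machine (the public IP is an illustrative assumption, not part of the original setup; the internal VIP 192.168.75.200 is the one used below):

```shell
# On the edge router/firewall (not one of the LVS machines):
# allow the kernel to forward packets between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward

# map port 80 of the public IP to port 80 of the internal VIP
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.75.200:80

# masquerade traffic headed to the VIP so replies return through this box
iptables -t nat -A POSTROUTING -d 192.168.75.200 -p tcp --dport 80 \
    -j MASQUERADE
```

Because the whole LVS cluster, VIP included, stays on the internal network, only this one public address is consumed no matter how many real servers sit behind it.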
Three machines are used: one dir (the scheduler) and two real servers (rs). The dir's script:
#!/bin/bash
# Enable port forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/usr/sbin/ipvsadm
vip=192.168.75.200
rs1=192.168.75.134
rs2=192.168.75.130
# Note the NIC name here; the virtual NIC is ens33:2.
# Restarting ens33 first means the binding commands below
# won't have to be run again later.
ifdown ens33
ifup ens33
ifconfig ens33:2 $vip broadcast $vip netmask 255.255.255.255 up
# Add a host route for the VIP
route add -host $vip dev ens33:2
# Flush existing rules
$ipv -C
$ipv -A -t $vip:80 -s wrr
# -g means gateway, i.e. DR mode
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
Then we run the script:
[root@lijie-01 ~]# sh !$
sh /usr/local/sbin/lvs_dr.sh
Device 'ens33' successfully disconnected.
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/10)
[root@lijie-01 ~]#
We can see that 75.200 is now bound:
[root@lijie-01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:21:5e:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.136/24 brd 192.168.75.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.75.200/32 brd 192.168.75.200 scope global ens33:2
       valid_lft forever preferred_lft forever
    inet 192.168.75.150/24 brd 192.168.75.255 scope global secondary ens33:0
       valid_lft forever preferred_lft forever
    inet6 fe80::8ace:f0ca:bb6e:d1f0/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::d652:b567:6190:8f28/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:21:5e:ca brd ff:ff:ff:ff:ff:ff
[root@lijie-01 ~]#
On both real servers, the following script (lvs_rs.sh) is used:

#!/bin/bash
vip=192.168.75.200
# Bind the VIP on lo so the rs can return results directly to the client
ifdown lo
ifup lo
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
# The following changes ARP kernel parameters so the rs can
# successfully answer clients with the right MAC address
# Reference: www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
Then we run this script:
[root@lijie-02 ~]# sh !$
sh /usr/local/sbin/lvs_rs.sh
[root@lijie-02 ~]#
Checking the routing table again, 75.200 is now there:
[root@lijie-03 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.75.2    0.0.0.0         UG    100    0        0 ens33
192.168.75.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.75.200  0.0.0.0         255.255.255.255 UH    0      0        0 lo
[root@lijie-03 ~]#
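As a quick sanity check on each rs (a sketch; the expected values are simply the ones lvs_rs.sh writes), the ARP kernel parameters and the VIP binding can be confirmed like this:

```shell
# After running lvs_rs.sh, arp_ignore should read 1 and
# arp_announce should read 2 for both lo and all
sysctl net.ipv4.conf.lo.arp_ignore
sysctl net.ipv4.conf.lo.arp_announce
sysctl net.ipv4.conf.all.arp_ignore
sysctl net.ipv4.conf.all.arp_announce

# The VIP should appear on lo with a /32 mask
ip addr show lo
```

If any of the four values is wrong, the rs will answer ARP requests for the VIP itself and steal traffic from the dir, which is the classic DR-mode failure.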
On the dir, ipvsadm shows both real servers:

[root@lijie-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.75.200:80 wrr
  -> 192.168.75.130:80            Route   1      1          0
  -> 192.168.75.134:80            Route   1      0          0
[root@lijie-01 ~]#
Alternatively, create a fresh virtual machine and curl the VIP from there.
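A test from such a client VM might look like the following sketch. It assumes rs1 and rs2 serve distinguishable index pages, and that the client is not one of the three LVS machines (they all hold the VIP locally, so testing from them would bypass the scheduler):

```shell
# Hit the VIP several times; with wrr and equal weights of 1,
# responses should alternate between the two real servers
for i in $(seq 1 6); do
    curl -s http://192.168.75.200/
done
```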
LVS has a single point of failure: the director. If the director goes down, the whole service becomes unavailable, so giving the director a high-availability setup solves the problem nicely. keepalived, introduced earlier, is exactly such an HA tool, and since LVS is built into it, it also handles load balancing. There is one more reason to use this scheme: with plain LVS and no extra measures, if one rs is shut down, the director keeps forwarding requests to the dead rs; keepalived's health checks exist precisely to fix that.
Configuring keepalived + LVS DR

A complete architecture requires two servers (in the dir role), each with keepalived installed, to achieve high availability. But since keepalived itself also provides the load-balancing function, this experiment installs keepalived on just one machine.
keepalived has ipvsadm's functionality built in, so there is no need to install the ipvsadm package, nor to write and run the earlier lvs_dir script. The three machines are:

dir: 192.168.75.136
rs1: 192.168.75.134
rs2: 192.168.75.130

The dir's keepalived configuration:
vrrp_instance VI_1 {
    # BACKUP on the standby server
    state MASTER
    # NIC that the vip binds to is ens33; yours may differ
    # from Aming's, so change it accordingly
    interface ens33
    virtual_router_id 51
    # 90 on the standby server
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.75.200
    }
}
virtual_server 192.168.75.200 80 {
    # query realserver state every 10 seconds
    delay_loop 10
    # lvs scheduling algorithm
    lb_algo wlc
    # DR mode
    lb_kind DR
    # connections from the same IP go to the same realserver within 60 seconds
    persistence_timeout 60
    # check realserver state over TCP
    protocol TCP

    real_server 192.168.75.134 80 {
        # weight
        weight 100
        TCP_CHECK {
            # time out after 10 seconds without a response
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.75.130 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
[root@lijie-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.75.200:80 wrr
  -> 192.168.75.130:80            Route   1      0          0
  -> 192.168.75.134:80            Route   1      0          0
[root@lijie-01 ~]# systemctl stop keepalived
[root@lijie-01 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:21:5e:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.136/24 brd 192.168.75.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.75.200/32 brd 192.168.75.200 scope global ens33:2
       valid_lft forever preferred_lft forever
    inet 192.168.75.150/24 brd 192.168.75.255 scope global secondary ens33:0
       valid_lft forever preferred_lft forever
    inet6 fe80::8ace:f0ca:bb6e:d1f0/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::d652:b567:6190:8f28/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:21:5e:ca brd ff:ff:ff:ff:ff:ff
[root@lijie-01 ~]#
[root@lijie-01 ~]# ipvsadm -C
[root@lijie-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lijie-01 ~]#
[root@lijie-01 ~]# systemctl start keepalived
[root@lijie-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.75.200:80 wlc persistent 60
  -> 192.168.75.130:80            Route   100    0          0
  -> 192.168.75.134:80            Route   100    0          0
[root@lijie-01 ~]#
Further reading

haproxy + keepalived: http://blog.csdn.net/xrt95050/article/details/40926255
Comparing nginx, lvs and haproxy: http://www.csdn.net/article/2014-07-24/2820837
Custom scripts in keepalived with vrrp_script: http://my.oschina.net/hncscwc/blog/158746
Using a single public IP with lvs DR mode: http://storysky.blog.51cto.com/628458/338726