A cluster, simply put, is a group of (several) independent computers joined by a high-speed network into one larger computing service system; each cluster node (i.e. each machine in the cluster) is an independent server running its own services.
Characteristics of a cluster: high performance, cost-effectiveness, scalability, high availability, transparency, manageability, and programmability.
Types of clusters:
- Load-balancing clusters (LBC or LB)
- High-availability clusters (HAC)
- High-performance computing clusters (HPC)
- Grid computing clusters
Common open-source cluster software:
- High availability: Keepalived, Heartbeat
- Load balancing: Keepalived, Nginx, LVS, Haproxy
We will use keepalived to build the high-availability cluster: heartbeat has some issues on CentOS 6 that interfere with the experiment, and heartbeat stopped being updated in 2010, so the focus here is on keepalived.
keepalived implements high availability through VRRP (Virtual Router Redundancy Protocol).
In this protocol, several routers with identical functions form a group, in which there is one master role and N (N >= 1) backup roles.
The master sends VRRP packets to each backup via multicast; when a backup stops receiving VRRP packets from the master, it assumes the master is down, and the backups' priorities then decide which of them becomes the new master.
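These advertisements can be seen on the wire: VRRP uses the multicast address 224.0.0.18, so a short capture on the master's interface shows them directly (a sketch; ens33 is the interface used later in this experiment):

# watch the master's VRRP advertisements (multicast group 224.0.0.18)
tcpdump -nn -i ens33 host 224.0.0.18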
keepalived consists of three modules: core, check, and vrrp. The core module is the heart of keepalived, responsible for starting and maintaining the main process and for loading and parsing the global configuration file; the check module handles health checks; and the vrrp module implements the VRRP protocol.
Experiment preparation
Prepare two machines, ying01 and ying02; ying01 acts as the master, ying02 as the backup.
Run yum install -y keepalived on both machines.
Both machines need nginx; it was already compiled and installed on ying01, so only ying02 still needs it.
Install and configure nginx on ying02:
[root@ying02 src]# scp 192.168.112.136:/usr/local/src/nginx-1.4.7.tar.gz ./   //copy the source tarball from ying01
[root@ying02 src]# tar zxf nginx-1.4.7.tar.gz                                 //unpack
[root@ying02 nginx-1.4.7]# ./configure --prefix=/usr/local/nginx              //configure the build
[root@ying02 nginx-1.4.7]# echo $?
0
[root@ying02 nginx-1.4.7]# make                                               //compile
[root@ying02 nginx-1.4.7]# echo $?
0
[root@ying02 nginx-1.4.7]# make install                                       //install
[root@ying02 nginx-1.4.7]# echo $?
0
Create the nginx startup script (same as the one on ying01; a sketch of such a script follows the commands below):
[root@ying02 ~]# vim /etc/init.d/nginx        //create the startup script, same content as on ying01
[root@ying02 ~]# chmod 755 /etc/init.d/nginx  //give it 755 permissions
[root@ying02 ~]# chkconfig --add nginx        //register it as a service
[root@ying02 ~]# chkconfig nginx on
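The original notes refer to ying01's startup script without reproducing it. A minimal SysV-style init script for a source-built nginx might look like the sketch below; the paths assume the --prefix=/usr/local/nginx build above, and it is an illustration rather than the exact script used on ying01.

#!/bin/bash
# chkconfig: - 30 21
# description: nginx web server
# Minimal init-script sketch for nginx built with --prefix=/usr/local/nginx

NGINX_SBIN="/usr/local/nginx/sbin/nginx"
NGINX_CONF="/usr/local/nginx/conf/nginx.conf"
prog="Nginx"

start() {
    echo -n $"Starting $prog: "
    $NGINX_SBIN -c $NGINX_CONF
    echo
}

stop() {
    echo -n $"Stopping $prog: "
    $NGINX_SBIN -s stop
    echo
}

reload() {
    echo -n $"Reloading $prog: "
    $NGINX_SBIN -s reload
    echo
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    restart) stop; start ;;
    reload)  reload ;;
    *)       echo $"Usage: $0 {start|stop|restart|reload}"; exit 1 ;;
esac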
Edit the configuration file:
[root@ying02 ~]# cd /usr/local/nginx/conf/
[root@ying02 conf]# ls
fastcgi.conf          fastcgi_params          koi-utf   mime.types          nginx.conf
fastcgi.conf.default  fastcgi_params.default  koi-win   mime.types.default  nginx.conf.default
[root@ying02 conf]# mv nginx.conf nginx.conf.1
[root@ying02 conf]# vim nginx.conf    //same content as on ying01
Check the configuration for syntax errors, then start the nginx service:
[root@ying02 conf]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@ying02 conf]# /etc/init.d/nginx start
Starting nginx (via systemctl):                            [  OK  ]
[root@ying02 conf]# ps aux |grep nginx
root      9393  0.0  0.0  24844   788 ?        Ss   12:21   0:00 nginx: master process /usr/loc
nobody    9394  0.0  0.1  27148  3360 ?        S    12:21   0:00 nginx: worker process
nobody    9395  0.0  0.1  27148  3360 ?        S    12:21   0:00 nginx: worker process
root      9397  0.0  0.0 112720   984 pts/1    R+   12:21   0:00 grep --color=auto nginx
On ying01 (the master), install the keepalived package and locate its configuration file:
[root@ying01 ~]# yum install -y keepalived
[root@ying01 ~]# ls /etc/keepalived/keepalived.conf
/etc/keepalived/keepalived.conf
Empty the original configuration file and write in the following configuration:
[root@ying01 ~]# > /etc/keepalived/keepalived.conf
[root@ying01 ~]# cat /etc/keepalived/keepalived.conf
[root@ying01 ~]# vim /etc/keepalived/keepalived.conf

global_defs {
    notification_email {
        txwd188@126.com                        # recipient of alert mail
    }
    notification_email_from 27623694@qq.com    # sender address (not a real mailbox here)
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_nginx {
    script "/usr/local/sbin/check_ng.sh"       # script that monitors the nginx service
    interval 3
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33                            # network interface
    virtual_router_id 51
    priority 100                               # weight 100; must be higher than the backup's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ying                         # password
    }
    virtual_ipaddress {
        192.168.112.100                        # the VIP
    }
    track_script {
        chk_nginx                              # monitoring script; must match the vrrp_script name above
    }
}
The configuration references the check_ng.sh script, so create it now as follows:
[root@ying01 ~]# vim /usr/local/sbin/check_ng.sh

#!/bin/bash
# timestamp variable, used for logging
d=`date --date today +%Y%m%d_%H:%M:%S`
# count the nginx processes
n=`ps -C nginx --no-heading|wc -l`
# if the count is 0, start nginx and count again;
# if it is still 0, nginx cannot be started, so keepalived must be stopped
if [ $n -eq "0" ]; then
    /etc/init.d/nginx start    # start nginx
    n2=`ps -C nginx --no-heading|wc -l`
    if [ $n2 -eq "0" ]; then
        echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
        systemctl stop keepalived
    fi
fi
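Before keepalived ever calls it, the script can be exercised once by hand to confirm it behaves as intended (a quick check, not part of the original notes):

# trace the script to see which branch it takes
sh -x /usr/local/sbin/check_ng.sh
# the log file is only written when nginx cannot be restarted
cat /var/log/check_ng.log 2>/dev/null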
Give the script 755 permissions, otherwise keepalived cannot invoke it:
[root@ying01 ~]# ls -l /usr/local/sbin/check_ng.sh
-rw-r--r-- 1 root root 567 Jul 21 10:48 /usr/local/sbin/check_ng.sh
[root@ying01 ~]# chmod 755 /usr/local/sbin/check_ng.sh
[root@ying01 ~]# ls -l /usr/local/sbin/check_ng.sh
-rwxr-xr-x 1 root root 567 Jul 21 10:48 /usr/local/sbin/check_ng.sh
Start the keepalived service, stop the firewall, and make sure SELinux is disabled:
[root@ying01 ~]# systemctl start keepalived
[root@ying01 ~]# ps aux |grep keep
root      2162  0.1  0.0 118652  1392 ?        Ss   10:51   0:00 /usr/sbin/keepalived -D
root      2163  0.0  0.1 127516  3340 ?        S    10:51   0:00 /usr/sbin/keepalived -D
root      2164  0.2  0.1 127456  2844 ?        S    10:51   0:00 /usr/sbin/keepalived -D
root      2206  0.0  0.0 112720   980 pts/0    S+   10:51   0:00 grep --color=auto keep
[root@ying01 ~]# systemctl stop firewalld
[root@ying01 ~]# getenforce
Disabled
[root@ying01 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
On ying02 (the backup), install the keepalived package, empty the original configuration file, and write in the following configuration:
[root@ying02 ~]# yum install -y keepalived
[root@ying02 ~]# > /etc/keepalived/keepalived.conf
[root@ying02 ~]# cat /etc/keepalived/keepalived.conf
[root@ying02 ~]# vim /etc/keepalived/keepalived.conf

global_defs {
    notification_email {
        txwd1214@126.com
    }
    notification_email_from 1276700694@qq.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_nginx {
    script "/usr/local/sbin/check_ng.sh"
    interval 3
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90                      # weight 90, lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ying               # password, same as on the master
    }
    virtual_ipaddress {
        192.168.112.100
    }
    track_script {
        chk_nginx
    }
}
This configuration again references the check_ng.sh script; create it on ying02 as well, with the same content as on ying01:
[root@ying02 ~]# vim /usr/local/sbin/check_ng.sh

#!/bin/bash
# timestamp variable, used for logging
d=`date --date today +%Y%m%d_%H:%M:%S`
# count the nginx processes
n=`ps -C nginx --no-heading|wc -l`
# if the count is 0, start nginx and count again;
# if it is still 0, nginx cannot be started, so keepalived must be stopped
if [ $n -eq "0" ]; then
    /etc/init.d/nginx start
    n2=`ps -C nginx --no-heading|wc -l`
    if [ $n2 -eq "0" ]; then
        echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
        systemctl stop keepalived
    fi
fi
Give this script 755 permissions as well, otherwise keepalived cannot invoke it:
[root@ying02 conf]# ls -l /usr/local/sbin/check_ng.sh
-rw-r--r--. 1 root root 542 Jul 21 12:25 /usr/local/sbin/check_ng.sh
[root@ying02 conf]# chmod 755 /usr/local/sbin/check_ng.sh
[root@ying02 conf]# ls -l /usr/local/sbin/check_ng.sh
-rwxr-xr-x. 1 root root 542 Jul 21 12:25 /usr/local/sbin/check_ng.sh
Start the keepalived service, stop the firewall, and make sure SELinux is disabled:
[root@ying02 conf]# systemctl start keepalived
[root@ying02 conf]# ps aux |grep keep
root      9429  0.1  0.0 118652  1396 ?        Ss   12:26   0:00 /usr/sbin/keepalived -D
root      9430  0.0  0.1 127516  3296 ?        S    12:26   0:00 /usr/sbin/keepalived -D
root      9431  0.0  0.1 127456  2844 ?        S    12:26   0:00 /usr/sbin/keepalived -D
root      9470  0.0  0.0 112720   980 pts/1    S+   12:26   0:00 grep --color=auto keep
[root@ying02 ~]# getenforce
Disabled
[root@ying02 ~]# systemctl stop firewalld
Now let's lay out the machines:
192.168.112.136 — the master, ying01
192.168.112.138 — the backup, ying02
192.168.112.100 — the VIP
Check with the ip add command; the VIP 192.168.112.100 is currently on ying01:
[root@ying01 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:87:3f:91 brd ff:ff:ff:ff:ff:ff
    inet 192.168.112.136/24 brd 192.168.112.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.112.100/32 scope global ens33          //the VIP, on the master ying01
       valid_lft forever preferred_lft forever
    inet 192.168.112.158/24 brd 192.168.112.255 scope global secondary ens33:0
       valid_lft forever preferred_lft forever
    inet6 fe80::16dc:89c:b761:e115/64 scope link
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:87:3f:9b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ad38:a02e:964e:1b93/64 scope link
       valid_lft forever preferred_lft forever
The backup does not have the VIP; the master is the one serving clients:
[root@ying02 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:2c:24 brd ff:ff:ff:ff:ff:ff
    inet 192.168.112.138/24 brd 192.168.112.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::964f:be22:ddf2:54b7/64 scope link
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:2c:2e brd ff:ff:ff:ff:ff:ff
Testing in a Windows browser clearly shows that the master is the one providing the service:
The homepage of the master, 192.168.112.136
The virtual IP 192.168.112.100 shows the master's page
The page of the backup, 192.168.112.138
Now let's make the master ying01 "fail": stopping the keepalived service is enough to simulate this (if nginx itself died, the check script would stop keepalived, which amounts to the same thing).
[root@ying01 ~]# ps aux |grep keep
root      2162  0.0  0.0 118652  1392 ?        Ss   11:24   0:00 /usr/sbin/keepalived -D
root      2163  0.0  0.1 127516  3340 ?        S    11:24   0:00 /usr/sbin/keepalived -D
root      2164  0.0  0.1 127456  2848 ?        S    11:24   0:07 /usr/sbin/keepalived -D
root     39627  0.0  0.0 112720   984 pts/1    S+   16:23   0:00 grep --color=auto keep
[root@ying01 ~]# systemctl stop keepalived
[root@ying01 ~]# ps aux |grep keep
root     39699  0.0  0.0 112720   984 pts/1    R+   16:23   0:00 grep --color=auto keep
Checking on the master, the VIP is gone:
[root@ying01 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:87:3f:91 brd ff:ff:ff:ff:ff:ff
    inet 192.168.112.136/24 brd 192.168.112.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.112.158/24 brd 192.168.112.255 scope global secondary ens33:0
       valid_lft forever preferred_lft forever
    inet6 fe80::16dc:89c:b761:e115/64 scope link
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:87:3f:9b brd ff:ff:ff:ff:ff:ff
Checking on the backup, the VIP has moved to this machine:
[root@ying02 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:2c:24 brd ff:ff:ff:ff:ff:ff
    inet 192.168.112.138/24 brd 192.168.112.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.112.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::964f:be22:ddf2:54b7/64 scope link
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:2c:2e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::19f6:ebf0:2c32:5b7c/64 scope link
       valid_lft forever preferred_lft forever
Now observe again in the Windows browser and compare with before.
The VIP has transferred cleanly to the backup, without affecting users.
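To watch the transition from the backup's side, keepalived's log output can be followed while the master is stopped (a sketch; the exact log wording varies by keepalived version):

# on ying02: follow keepalived's messages; expect something like
# "VRRP_Instance(VI_1) Entering MASTER STATE" when the master disappears
journalctl -u keepalived -f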
Load-balancing software:
Mainstream open-source load balancers include LVS, keepalived, haproxy, nginx, and others.
Of these, LVS works at layer 4 (of the OSI 7-layer model), nginx works at layer 7, and haproxy can work at either layer 4 or layer 7.
keepalived's load-balancing functionality is in fact LVS.
Layer-4 load balancing such as LVS can distribute traffic on ports other than 80, for example MySQL, whereas nginx only handles http, https, and mail.
haproxy can also handle MySQL.
Comparing layer-4 and layer-7 load balancing:
LVS at layer 4 is more stable and can handle more requests.
nginx at layer 7 is more flexible and can meet more customized needs (a small layer-7 example follows).
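For reference, layer-7 balancing in nginx is just an upstream block plus proxy_pass. The sketch below reuses this experiment's two real-server addresses purely as an illustration; it is not part of the keepalived/LVS setup:

# inside the http {} block of nginx.conf
upstream web_pool {
    server 192.168.112.138:80 weight=1;
    server 192.168.112.139:80 weight=1;
}
server {
    listen 80;
    location / {
        proxy_pass http://web_pool;
    }
}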
LVS: Linux Virtual Server, a virtual server cluster system that does routing and forwarding on top of TCP/IP, with high stability and efficiency. The project was started in May 1998 by Dr. Wensong Zhang and is one of the earliest free-software projects to come out of China.
An LVS cluster uses IP load-balancing techniques and content-based request dispatching. The scheduler has very good throughput, spreads requests evenly across the servers, and automatically masks server failures, turning a group of servers into a single high-performance, highly available virtual server. The structure of the whole cluster is transparent to clients, and neither client nor server programs need to be modified. To achieve this, the design has to take transparency, scalability, high availability, and manageability into account.
通常來講,LVS集羣採用三層結構
A. The load balancer, also called the director (dispatcher): the front-end machine of the whole cluster, which forwards clients' requests to a group of servers, while clients believe the service comes from a single IP address (which we can call the virtual IP address).
B. The server pool: the group of servers that actually handle client requests, running services such as WEB, MAIL, FTP, and DNS.
C. Shared storage: provides the server pool with a shared storage area, which makes it easy for all servers in the pool to hold the same content and provide the same services.
LVS supports the following scheduling algorithms (see the ipvsadm example after this list):
Round-robin: Round-Robin, abbreviated rr
Weighted round-robin: Weight Round-Robin, abbreviated wrr
Least connections: Least-Connection, abbreviated lc
Weighted least connections: Weight Least-Connection, abbreviated wlc
Locality-based least connections: Locality-Based Least Connections, abbreviated lblc
Locality-based least connections with replication: Locality-Based Least Connections with Replication, abbreviated lblcr
Destination hashing: Destination Hashing, abbreviated dh
Source hashing: Source Hashing, abbreviated sh
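The scheduling algorithm is chosen per virtual service with ipvsadm's -s option. For example, defining the NAT experiment's virtual service with plain round-robin instead of wlc might look like the sketch below (same addresses as the experiment that follows; shown only as an illustration):

# virtual service on the director's outward-facing IP, round-robin scheduling
ipvsadm -A -t 192.168.24.128:80 -s rr
# two real servers, forwarded in NAT (masquerading) mode with equal weight
ipvsadm -a -t 192.168.24.128:80 -r 192.168.112.138:80 -m -w 1
ipvsadm -a -t 192.168.24.128:80 -r 192.168.112.139:80 -m -w 1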
Principle of the experiment:
LVS NAT mode is implemented with the help of iptables' nat table (a quick verification sketch follows the list below):
- After a user's request reaches the director, the preset iptables rules forward the request packets to the back-end rs.
- The rs must have their gateway set to the director's internal IP.
- Both the user's request packets and the packets returned to the user pass through the director, so the director becomes the bottleneck.
- In NAT mode only the director needs a public IP, which saves public IP resources.
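The two kernel-level ingredients behind these points are IP forwarding on the director and a MASQUERADE rule in its nat table; the lvs_nat.sh script later in this section sets exactly that, and both can be verified by hand (a quick check, not part of the original notes):

# on the director: forwarding must be on, and the nat table must masquerade the internal subnet
cat /proc/sys/net/ipv4/ip_forward      # expect 1 after lvs_nat.sh has run
iptables -t nat -nvL POSTROUTING       # expect a MASQUERADE rule for 192.168.112.0/24
# on each rs: the default route must point at the director's internal IP
ip route show default                  # expect "default via 192.168.112.136 dev ens33"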
Experiment preparation:
Three machines:
The director, also called the dispatcher (abbreviated dir): internal IP 192.168.112.136, external IP 192.168.24.128 (VMware host-only mode)
rs1: internal IP 192.168.112.138, gateway set to 192.168.112.136
rs2: internal IP 192.168.112.139, gateway set to 192.168.112.136
Run the following on all three machines (the iptables-services package must be installed first, as shown below):
systemctl stop firewalld; systemctl disable firewalld
systemctl start iptables; iptables -F; service iptables save
Note: ying01 and ying02 already exist; a third machine, ying03, now needs to be cloned, with its IP set to 192.168.112.139. The cloning itself is not covered in detail here.
The director needs two network cards, which means ying01 must have two NICs:
[root@ying01 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.112.136  netmask 255.255.255.0  broadcast 192.168.112.255
        inet6 fe80::16dc:89c:b761:e115  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:87:3f:91  txqueuelen 1000  (Ethernet)
        RX packets 20512  bytes 6845743 (6.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25704  bytes 4194777 (4.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.112.158  netmask 255.255.255.0  broadcast 192.168.112.255
        ether 00:0c:29:87:3f:91  txqueuelen 1000  (Ethernet)

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:87:3f:9b  txqueuelen 1000  (Ethernet)
        RX packets 1335  bytes 456570 (445.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3646  bytes 647124 (631.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 118  bytes 10696 (10.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 118  bytes 10696 (10.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
On the ying01 VM, configure the second NIC in VMware as host-only mode.
Now the ens37 NIC has the IP 192.168.24.128:
[root@ying01 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.112.136  netmask 255.255.255.0  broadcast 192.168.112.255
        inet6 fe80::16dc:89c:b761:e115  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:87:3f:91  txqueuelen 1000  (Ethernet)
        RX packets 20749  bytes 6864991 (6.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25824  bytes 4211329 (4.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.112.158  netmask 255.255.255.0  broadcast 192.168.112.255
        ether 00:0c:29:87:3f:91  txqueuelen 1000  (Ethernet)

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.24.128  netmask 255.255.255.0  broadcast 192.168.24.255
        inet6 fe80::ad38:a02e:964e:1b93  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:87:3f:9b  txqueuelen 1000  (Ethernet)
        RX packets 1360  bytes 464840 (453.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3670  bytes 651388 (636.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 118  bytes 10696 (10.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 118  bytes 10696 (10.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Check the routes and gateways:
[root@ying01 ~]# route -n
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.112.2    0.0.0.0         UG    100    0        0 ens33
192.168.24.0    0.0.0.0          255.255.255.0   U     100    0        0 ens37
192.168.112.0   0.0.0.0          255.255.255.0   U     100    0        0 ens33
Save the cleared iptables rules:
[root@ying01 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@ying01 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 8531 packets, 6290K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 22987 packets, 1814K bytes)
 pkts bytes target     prot opt in     out     source               destination
On ying02 (rs1), stop the firewall:
[root@ying02 ~]# systemctl stop firewalld
[root@ying02 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Install the iptables-services package:
[root@ying02 ~]# yum list |grep iptables-service
iptables-services.x86_64               1.4.21-24.1.el7_5              updates
[root@ying02 ~]# yum install -y iptables-services
Enable and start the iptables service, flush the rules, then save them:
[root@ying02 ~]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
[root@ying02 ~]# systemctl start iptables
[root@ying02 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
   20  1468 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 12 packets, 1680 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@ying02 ~]# iptables -F
[root@ying02 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
Now change ying02's gateway to 192.168.112.136:
[root@ying02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

IPADDR=192.168.112.138
NETMASK=255.255.255.0
GATEWAY=192.168.112.136    # changed to .136, the director's internal IP
DNS1=119.29.29.29
Restart the network service and check the gateway:
[root@ying02 ~]# systemctl restart network
[root@ying02 ~]# route -n
Kernel IP routing table
Destination     Gateway           Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.112.136   0.0.0.0         UG    100    0        0 ens33
192.168.112.0   0.0.0.0           255.255.255.0   U     100    0        0 ens33
On ying03 (rs2), install the iptables-services package:
[root@ying03 ~]# yum list |grep iptables-service
iptables-services.x86_64               1.4.21-24.1.el7_5              updates
[root@ying03 ~]# yum install -y iptables-services
Enable and start the iptables service, flush the rules, then save them:
[root@ying03 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 499 packets, 531K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 471 packets, 49395 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@ying03 ~]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
[root@ying03 ~]# systemctl start iptables
[root@ying03 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    8   576 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 5 packets, 684 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@ying03 ~]# iptables -F
[root@ying03 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
Now change ying03's gateway to 192.168.112.136 as well:
[root@ying03 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

IPADDR=192.168.112.139
NETMASK=255.255.255.0
GATEWAY=192.168.112.136    # changed to .136, the director's internal IP
DNS1=119.29.29.29
Restart the network service and check the gateway:
[root@ying03 ~]# systemctl restart network
[root@ying03 ~]# route -n
Kernel IP routing table
Destination     Gateway           Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.112.136   0.0.0.0         UG    100    0        0 ens33
192.168.112.0   0.0.0.0           255.255.255.0   U     100    0        0 ens33
On the director (ying01), install the ipvsadm package:
[root@ying01 ~]# yum install -y ipvsadm
Create the lvs_nat.sh script:
[root@ying01 ~]# vim /usr/local/sbin/lvs_nat.sh

#! /bin/bash
# enable packet forwarding on the director
echo 1 > /proc/sys/net/ipv4/ip_forward
# disable icmp redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# mind the NIC names; ying01's two NICs are ens33 and ens37
echo 0 > /proc/sys/net/ipv4/conf/ens33/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/ens37/send_redirects
# set up the nat rules on the director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.112.0/24 -j MASQUERADE
# configure ipvsadm on the director
IPVSADM='/usr/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.24.128:80 -s wlc -p 3
$IPVSADM -a -t 192.168.24.128:80 -r 192.168.112.138:80 -m -w 1
$IPVSADM -a -t 192.168.24.128:80 -r 192.168.112.139:80 -m -w 1
Start nginx on ying02 and redefine its homepage:
[root@ying02 ~]# ps aux |grep nginx
root      1028  0.0  0.0  24844   780 ?        Ss   22:13   0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody    1029  0.0  0.1  27148  3356 ?        S    22:13   0:00 nginx: worker process
nobody    1030  0.0  0.1  27148  3356 ?        S    22:13   0:00 nginx: worker process
root      1576  0.0  0.0 112724   984 pts/0    S+   23:10   0:00 grep --color=auto nginx
[root@ying02 ~]# echo 'ying02 192.168.112.138' > /usr/local/nginx/html/index.html
[root@ying02 ~]# curl localhost
ying02 192.168.112.138      //ying02's page content
Do the same on ying03: start the nginx service and redefine the page content:
[root@ying03 ~]# ps aux |grep nginx
root      1056  0.0  0.0  24844   788 ?        Ss   22:14   0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody    1058  0.0  0.1  27148  3360 ?        S    22:14   0:00 nginx: worker process
nobody    1059  0.0  0.1  27148  3360 ?        S    22:14   0:00 nginx: worker process
root      1612  0.0  0.0 112720   980 pts/0    R+   23:23   0:00 grep --color=auto nginx
[root@ying03 ~]# echo 'ying03 192.168.112.139' > /usr/local/nginx/html/index.html
[root@ying03 ~]# curl localhost
ying03 192.168.112.139      //ying03's page content
Run the script on the director and check the nat rules; the internal subnet now appears in a MASQUERADE rule:
[root@ying01 ~]# sh /usr/local/sbin/lvs_nat.sh
[root@ying01 ~]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 264 packets, 26088 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 3 packets, 236 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 17 packets, 1608 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 1 packets, 328 bytes)
 pkts bytes target      prot opt in     out     source               destination
  158 11991 MASQUERADE  all  --  *      *       192.168.112.0/24     0.0.0.0/0
Test the homepage of 192.168.24.128: every request shows ying02's page, because the -p 3 option in the script enables session persistence, so requests from the same client keep going to the same rs:
[root@ying01 ~]# curl 192.168.24.128
ying02 192.168.112.138
[root@ying01 ~]# curl 192.168.24.128
ying02 192.168.112.138
[root@ying01 ~]# curl 192.168.24.128
ying02 192.168.112.138
Edit the script and remove the persistence option (-p 3):
[root@ying01 ~]# vim /usr/local/sbin/lvs_nat.sh

$IPVSADM -A -t 192.168.24.128:80 -s wlc    //the "-p 3" persistence option has been removed
Run the script again; now each test alternates between ying02's and ying03's pages, distributed evenly:
[root@ying01 ~]# sh /usr/local/sbin/lvs_nat.sh
[root@ying01 ~]# curl 192.168.24.128
ying03 192.168.112.139
[root@ying01 ~]# curl 192.168.24.128
ying02 192.168.112.138
[root@ying01 ~]# curl 192.168.24.128
ying03 192.168.112.139
[root@ying01 ~]# curl 192.168.24.128
ying02 192.168.112.138
[root@ying01 ~]# curl 192.168.24.128
ying03 192.168.112.139
[root@ying01 ~]# curl 192.168.24.128
ying02 192.168.112.138
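The same alternation can be exercised in one line from any host that can reach 192.168.24.128 (a small convenience, not in the original notes):

for i in $(seq 1 6); do curl -s 192.168.24.128; done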
Check the LVS rules with ipvsadm -ln; the InActConn counter of an rs grows as requests are dispatched to it:
[root@ying01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.24.128:80 wlc
  -> 192.168.112.138:80           Masq    1      0          3
  -> 192.168.112.139:80           Masq    1      0          3
[root@ying01 ~]# curl 192.168.24.128
ying03 192.168.112.139
[root@ying01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.24.128:80 wlc
  -> 192.168.112.138:80           Masq    1      0          3
  -> 192.168.112.139:80           Masq    1      0          4