Nginx Load Balancing Configuration in Practice
Configure web02 the same way as web01. The nginx configuration file is as follows:
[root@web01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        access_log  logs/access.log  main;
    }
}
Next, create the site directory and a test file on each node, add the domain to /etc/hosts for resolution, and test.
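A minimal sketch of that step on web01, assuming the /application/nginx install prefix used later in this article (web02 is set up the same way with its own IP):

cd /application/nginx/html/
echo "192.168.100.107" > index.html                  # each node returns its own IP so load balancing is visible
echo "192.168.100.107 www.dmtest.com" >> /etc/hosts  # local test resolution
curl www.dmtest.com                                  # expected output: 192.168.100.107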
The LB01 nginx configuration file is as follows:
[root@lb01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream www_server_pools {
        server 192.168.100.107:80 weight=1;
        server 192.168.100.108:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.dmtest.com;
        location / {
            proxy_pass http://www_server_pools;
        }
    }
}
Then point the domain's hosts entry at the proxy's IP (or VIP) and reload the service:
[root@lb01 conf]# tail -1 /etc/hosts
192.168.100.105 www.dmtest.com
[root@lb01 conf]# systemctl restart nginx
[root@lb01 conf]# curl www.dmtest.com
192.168.100.107
[root@lb01 conf]# curl www.dmtest.com
192.168.100.108
Case: Reverse Proxying to Backend Nodes That Host Virtual Servers
Once the proxy adds a Host field to the HTTP request headers it sends to the backend, a backend server configured with multiple virtual hosts can tell which virtual host the proxied request is for. This is the key setting when backend nodes serve multiple virtual hosts. The full nginx proxy configuration is:
[root@lb01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream www_server_pools {
        server 192.168.100.107:80 weight=1;
        server 192.168.100.108:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.dmtest.com;
        location / {
            proxy_pass http://www_server_pools;
            # add a Host field to the request headers sent to the backend; a
            # backend with multiple virtual hosts can then tell which vhost the
            # proxied request is for. This is the key setting for multi-vhost nodes.
            proxy_set_header Host $host;
        }
    }
}
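To see why the directive matters, here is a hypothetical comparison against the backend: without it, nginx sends the upstream name as the Host header (the default is $proxy_host), so a name-based virtual host on the node cannot match; with it, the original domain comes through:

# what the backend sees without proxy_set_header Host $host:
curl -H "Host: www_server_pools" http://192.168.100.107/   # falls into the default vhost
# what it sees with the directive in place:
curl -H "Host: www.dmtest.com" http://192.168.100.107/     # matches the intended vhost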
Case: Recording the Client IP on Backend Nodes Behind a Reverse Proxy
Behind a reverse proxy, the first field of a backend node's access log records the reverse proxy's IP rather than the client's, and the last field is just "-". The log looks like this:
[root@web01 conf]# tail -2 ../logs/access.log
192.168.100.105 - - [14/Sep/2018:13:41:02 +0800] "GET / HTTP/1.0" 200 16 "-" "curl/7.29.0" "-"
192.168.100.105 - - [16/Sep/2018:13:57:45 +0800] "GET / HTTP/1.0" 200 16 "-" "curl/7.29.0" "-"
The fix is to have the reverse proxy add the client IP to the request headers it sends to the backend nodes; the backend can then read the real client IP from the X-Forwarded-For header through its application code or the appropriate configuration.
Configure LB01 as follows:
[root@lb01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream www_server_pools {
        server 192.168.100.107:80 weight=1;
        server 192.168.100.108:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.dmtest.com;
        location / {
            proxy_pass http://www_server_pools;
            # add an X-Forwarded-For field to the request headers sent to the
            # backend, so backend programs and logs can record the real client
            # IP instead of the proxy's IP
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
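One caveat worth noting: $remote_addr overwrites any X-Forwarded-For header already present. If another proxy layer sits in front of this one, a sketch using $proxy_add_x_forwarded_for (the variable also used in proxy.conf later in this article) preserves the whole chain:

location / {
    proxy_pass http://www_server_pools;
    # append $remote_addr to an incoming X-Forwarded-For header
    # instead of replacing it, keeping every hop's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}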
Note that if the access log on the backend nodes is to record the real client IP, the log format there must also be configured to capture the X-Forwarded-For header passed along by the proxy. The configuration is:
On web01:
[root@web01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    # "$http_x_forwarded_for" captures the proxied client IP; to show it in the
    # first field instead, replace the leading $remote_addr variable with it
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        access_log  logs/access.log  main;
    }
}
Check the site log again; the last field now carries the real client IP:
[root@web01 conf]# tail -5 ../logs/access.log
192.168.100.105 - - [16/Sep/2018:14:24:28 +0800] "GET / HTTP/1.0" 200 16 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36" "192.168.100.1"
192.168.100.105 - - [16/Sep/2018:14:24:30 +0800] "GET / HTTP/1.0" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36" "192.168.100.1"
192.168.100.105 - - [16/Sep/2018:14:24:30 +0800] "GET / HTTP/1.0" 200 16 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36" "192.168.100.1"
192.168.100.105 - - [16/Sep/2018:14:24:31 +0800] "GET / HTTP/1.0" 200 16 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36" "192.168.100.1"
192.168.100.105 - - [16/Sep/2018:14:24:32 +0800] "GET / HTTP/1.0" 200 16 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36" "192.168.100.1"
The important basic parameters for nginx reverse proxying are:
proxy_pass http://blog_server_pools; specifies the server pool to reverse proxy to.
proxy_set_header Host $host; when the backend web servers host multiple virtual hosts, this header tells them which host name the proxied request is for.
proxy_set_header X-Forwarded-For $remote_addr; if a program on the backend web servers needs the client IP, it reads it from this header.
These proxy parameters can also be kept in a separate file and pulled in with include proxy.conf.
vim proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 90;          # timeout for connecting to the backend
proxy_send_timeout 90;             # timeout for sending a request to the backend
proxy_read_timeout 90;             # timeout for reading the backend's response
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
Include it in the main nginx configuration file:
vim nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream web_pools {
        server 10.0.0.7:80 weight=5 max_fails=10 fail_timeout=10s;
        server 10.0.0.8:80 weight=5;
        #server 10.0.0.10:80 weight=5 backup;
    }
    server {
        listen       80;
        server_name  www.dmtest.com;
        location / {
            root   html;
            index  index.html index.htm;
            proxy_pass http://web_pools;
        }
        include proxy.conf;
    }
}
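After editing, a syntax check followed by a reload applies the change; the paths below assume this article's install prefix:

/application/nginx/sbin/nginx -t        # verify the configuration, including the pulled-in proxy.conf
/application/nginx/sbin/nginx -s reload # reload workers without dropping connections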
Note: dynamic/static separation with nginx means using reverse proxy rules so that dynamic resources, static resources, and other services are each handled by different servers, which addresses site performance, security, and user experience.
Requirements:
When a user requests www.dmtest.com/upload/xx, the request is handled by the upload server pool.
When a user requests www.dmtest.com/static/xx, the request is handled by the static server pool.
All other requests are handled by the default dynamic server pool.
With the requirements clear, configure the upstream server pools. static_pools is the static pool, with one server at 192.168.100.107, port 80:
upstream static_pools {
    server 192.168.100.107:80 weight=1;
}
upload_pools is the upload pool, with one server at 192.168.100.108, port 80:
upstream upload_pools {
    server 192.168.100.108:80 weight=1;
}
default_pools is the default pool, i.e. the dynamic pool, with one server at 192.168.100.109, port 80:
upstream default_pools {
    server 192.168.100.109:80 weight=1;
}
Next, use location blocks or if statements to send requests for different URIs (paths) to different server pools. The configuration is as follows:
Option 1: location blocks
Requests matching /static/ go to the static server pool static_pools:
location /static/ {
    proxy_pass http://static_pools;
    include proxy.conf;
}
Requests matching /upload/ go to the upload server pool upload_pools:
location /upload/ {
    proxy_pass http://upload_pools;
    include proxy.conf;
}
Requests that match neither rule go to the dynamic server pool default_pools by default:
location / {
    proxy_pass http://default_pools;
    include proxy.conf;
}
Option 2: if statements
if ($request_uri ~* "^/static/(.*)$") {
    proxy_pass http://static_pools/$1;
}
if ($request_uri ~* "^/upload/(.*)$") {
    proxy_pass http://upload_pools/$1;
}
location / {
    proxy_pass http://default_pools;
    include proxy.conf;
}
The complete nginx reverse proxy configuration for option 1:
[root@lb01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream static_pools {
        server 192.168.100.107:80 weight=1;
    }
    upstream upload_pools {
        server 192.168.100.108:80 weight=1;
    }
    upstream default_pools {
        server 192.168.100.109:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.dmtest.com;
        location / {
            proxy_pass http://default_pools;
            include proxy.conf;
        }
        location /static/ {
            proxy_pass http://static_pools;
            include proxy.conf;
        }
        location /upload/ {
            proxy_pass http://upload_pools;
            include proxy.conf;
        }
    }
}
The complete configuration for option 2:
[root@lb01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream static_pools {
        server 192.168.100.107:80 weight=1;
    }
    upstream upload_pools {
        server 192.168.100.108:80 weight=1;
    }
    upstream default_pools {
        server 192.168.100.109:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.dmtest.com;
        if ($request_uri ~* "^/static/(.*)$") {
            proxy_pass http://static_pools/$1;
            include proxy.conf;
        }
        if ($request_uri ~* "^/upload/(.*)$") {
            proxy_pass http://upload_pools/$1;
            include proxy.conf;
        }
        location / {
            proxy_pass http://default_pools;
            include proxy.conf;
        }
    }
}
The contents of proxy.conf:
vim proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
Set up test content on the three web servers:
web01 serves the static service at 192.168.100.107:80; configure it as follows:
cd /application/nginx/html/
mkdir static
echo static_pools >static/index.html
curl http://www.dmtest.com/static/index.html
web02 serves the upload service at 192.168.100.108:80; configure it as follows:
cd /application/nginx/html/
mkdir upload
echo upload_pools >upload/index.html
curl http://www.dmtest.com/upload/index.html
web03 is the dynamic server node at 192.168.100.109:80; configure it as follows:
cd /application/nginx/html/
mkdir default
echo default_pools >default/index.html
curl http://www.dmtest.com/default/index.html
Forwarding Based on the Client Device (user_agent)
Matching rules for different desktop browsers are set up as follows:
location / {
    if ($http_user_agent ~* "MSIE") {     # IE (MSIE) requests go to static_pools
        proxy_pass http://static_pools;
    }
    if ($http_user_agent ~* "Chrome") {   # Chrome requests go to upload_pools
        proxy_pass http://upload_pools;
    }
    proxy_pass http://default_pools;      # all other clients go to default_pools
    include proxy.conf;
}
The complete configuration file:
[root@lb01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream static_pools {
        server 192.168.100.107:80 weight=1;
    }
    upstream upload_pools {
        server 192.168.100.108:80 weight=1;
    }
    upstream default_pools {
        server 192.168.100.109:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.dmtest.com;
        location / {
            if ($http_user_agent ~* "MSIE") {     # IE (MSIE) requests go to static_pools
                proxy_pass http://static_pools;
            }
            if ($http_user_agent ~* "Chrome") {   # Chrome requests go to upload_pools
                proxy_pass http://upload_pools;
            }
            proxy_pass http://default_pools;      # all other clients go to default_pools
            include proxy.conf;
        }
        access_log off;
    }
}
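A quick way to verify the routing from any host that resolves www.dmtest.com, assuming each node's page identifies it as set up earlier (curl's -A flag sets the User-Agent header):

curl -A "MSIE 9.0" www.dmtest.com   # "MSIE" matches, answered by static_pools (web01)
curl -A "Chrome"   www.dmtest.com   # "Chrome" matches, answered by upload_pools (web02)
curl               www.dmtest.com   # no match, answered by default_pools (web03)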
Beyond desktop browsers, the $http_user_agent variable can also match mobile clients such as Android, iPhone, and iPad devices and send them to designated servers:
location / {
    if ($http_user_agent ~* "android") {
        proxy_pass http://android_pools;   # Android pool; the upstream must be defined beforehand
    }
    if ($http_user_agent ~* "iphone") {
        proxy_pass http://iphone_pools;    # iPhone pool; the upstream must be defined beforehand
    }
    proxy_pass http://pc_pools;
    include extra/proxy.conf;
}
Besides URI paths and user_agent, requests can also be forwarded by file extension.
The relevant server configuration:
Matching rules with the location method:
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
    proxy_pass http://static_pools;
    include proxy.conf;
}
Matching rules with if statements:
if ($request_uri ~* ".*\.(php|php5)$") {
    proxy_pass http://php_server_pools;
}
if ($request_uri ~* ".*\.(jsp|jsp*|do|do*)$") {
    proxy_pass http://java_server_pools;
}
Use cases for forwarding by extension
Forwarding by extension enables dynamic/static separation: images, video, and other static files are served by the static server pool, while PHP, JSP, and other dynamic requests go to the dynamic server pool. For example:
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
    proxy_pass http://static_pools;
    include proxy.conf;
}
location ~ .*\.(php|php3|php5)$ {
    proxy_pass http://dynamic_pools;
    include proxy.conf;
}
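A hypothetical verification, assuming suitably named test files exist on each pool's servers:

curl http://www.dmtest.com/logo.png    # the .png extension matches the first rule, served by static_pools
curl http://www.dmtest.com/index.php   # the .php extension matches the second rule, served by dynamic_pools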
The Taobao technical team developed nginx_upstream_check_module, a module for Tengine (an Nginx fork) that provides active health checks of backend servers. It monitors the health of the backend real servers; when a real server becomes unavailable, no requests are forwarded to that node.
Tengine supports this module natively, while stock Nginx must be patched to add it. The patch can be downloaded from https://github.com/yaoweibin/nginx_upstream_check_module. The following sections show how to use the module.
About upstream_check_module:
This module gives Tengine active health checking of backend servers. It was not enabled by default before Tengine 1.4.0 and can be turned on at compile time:
./configure --with-http_upstream_check_module
Official upstream_check_module documentation: http://tengine.taobao.org/document_cn/http_upstream_check_cn.html
upstream_check_module download: https://github.com/yaoweibin/nginx_upstream_check_module
Patching and rebuilding nginx
[root@lb01 ~]# cd /home/dm/tools/
[root@lb01 tools]# wget https://github.com/yaoweibin/nginx_upstream_check_module/archive/master.zip
[root@lb01 tools]# unzip master.zip
[root@lb01 tools]# cd nginx-1.8.1/
[root@lb01 nginx-1.8.1]# yum install -y patch
[root@lb01 nginx-1.8.1]# patch -p1 < ../nginx_upstream_check_module-master/check_1.7.2+.patch
patching file src/http/modules/ngx_http_upstream_ip_hash_module.c
patching file src/http/modules/ngx_http_upstream_least_conn_module.c
patching file src/http/ngx_http_upstream_round_robin.c
Hunk #1 succeeded at 9 with fuzz 2.
Hunk #2 FAILED at 88.
Hunk #3 FAILED at 142.
Hunk #4 FAILED at 199.
Hunk #5 FAILED at 305.
Hunk #6 FAILED at 345.
Hunk #7 succeeded at 418 (offset 16 lines).
Hunk #8 succeeded at 516 (offset 9 lines).
5 out of 8 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream_round_robin.c.rej
patching file src/http/ngx_http_upstream_round_robin.h
Hunk #1 succeeded at 31 (offset 1 line).
[root@lb01 nginx-1.8.1]# ./configure --user=www --group=www --prefix=/application/nginx-1.8.1 --with-http_stub_status_module --with-http_ssl_module --with-http_stub_status_module --add-module=../nginx_upstream_check_module-master/
[root@lb01 nginx-1.8.1]# make
[root@lb01 nginx-1.8.1]# mv /application/nginx/sbin/nginx{,.ori}
[root@lb01 nginx-1.8.1]# cp ./objs/nginx /application/nginx/sbin/
[root@lb01 nginx-1.8.1]# /application/nginx/sbin/nginx -t
nginx: the configuration file /application/nginx-1.8.1/conf/nginx.conf syntax is ok
nginx: configuration file /application/nginx-1.8.1/conf/nginx.conf test is successful
[root@lb01 nginx-1.8.1]# /application/nginx/sbin/nginx -V
nginx version: nginx/1.8.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --user=www --group=www --prefix=/application/nginx-1.8.1 --with-http_stub_status_module --with-http_ssl_module --with-http_stub_status_module --add-module=../nginx_upstream_check_module-master/
Notes on the patch parameters:
-p0 tells patch to look for the target files (or directories) starting from the current directory.
-p1 strips the first path component and looks from the current directory.
As an example, take the path old/modules/pcitable:
with -p0, patch looks under the current directory for a folder named old, then for modules/pcitable beneath it;
with -p1, the first component (old) is ignored and patch looks for modules/pcitable under the current directory, so the current directory must be the one that contains modules.

upstream_check_module syntax:
Syntax: check interval=milliseconds [fall=count] [rise=count] [timeout=milliseconds] [default_down=true|false] [type=tcp|http|ssl_hello|mysql|ajp] [port=check_port]
Default: if no parameters are given: interval=30000 fall=5 rise=2 timeout=1000 default_down=true type=tcp
Context: upstream

The parameters after the directive mean:
interval: interval between health-check packets sent to the backend, in milliseconds.
fall (fall_count): after this many consecutive failures the server is considered down.
rise (rise_count): after this many consecutive successes the server is considered up.
timeout: timeout for a backend health-check request, in milliseconds.
default_down: the server's initial state. true means it starts as down and is only marked up after enough successful checks; false means it starts as up. The default is true, i.e. a server is initially treated as unavailable until the health checks succeed often enough.
type: the kind of health-check packet; the following types are supported:
tcp: a plain TCP connection; if it succeeds, the backend is considered healthy.
ssl_hello: sends an initial SSL hello packet and expects the server's SSL hello in reply.
http: sends an HTTP request and judges backend health from the status of the reply.
mysql: connects to the MySQL server and judges health from the server's greeting packet.
ajp: sends an AJP Cping packet to the backend and judges health from receiving a Cpong reply.
port: the backend port to check. It may differ from the port serving real traffic, e.g. for an application served on port 443 you can check port 80 instead to judge the backend's health. The default is 0, meaning the same port as the real service. This option appeared in Tengine 1.4.0.
Configure nginx health checking as follows:
[root@lb01 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream www_server_pools {
        server 192.168.100.107:80 weight=1;
        server 192.168.100.108:80 weight=1;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }
    server {
        listen       80;
        server_name  www.dmtest.com;
        location / {
            proxy_pass http://www_server_pools;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
        location /status {
            check_status;
            access_log off;
            allow SOME.IP.ADD.RESS;
            deny all;
        }
    }
}
This configuration means: for every node in the www_server_pools upstream, run a check every 3 seconds; 2 consecutive successful checks mark the real server up, while 5 consecutive failures mark it down; the timeout is 1 second and the check protocol is HTTP.
The result can be seen on the status page (the original screenshot is omitted here).
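The page can also be fetched from a host permitted by the allow rule; a sketch against the LB's own IP from the earlier sections:

curl http://192.168.100.105/status
# the returned HTML lists every backend in www_server_pools together with
# its up/down state and the current rise/fall counters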
When nginx receives from a backend server a status code listed in the proxy_next_upstream parameter, such as 500, 502, 503, or 504, it passes the request on to a working backend server instead. This parameter improves the user's experience. The configuration is:
server {
    listen       80;
    server_name  www.dmtest.com;
    location / {
        proxy_pass http://www_server_pools;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        include proxy.conf;
    }
}
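A quick hypothetical way to observe the effect with the two-node pool from earlier: take one backend down, and the LB retries the request on the other node, so the client still gets a valid answer:

[root@web01 ~]# /application/nginx/sbin/nginx -s stop   # take web01 offline
[root@lb01 ~]# curl www.dmtest.com                      # expected: 192.168.100.108 (served by web02)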