nginx hmux module combined with resin (3.0) session sticky
Download:
svn checkout http://nginx-upstream-jvm-route.googlecode.com/svn/trunk/ nginx-upstream-jvm-route-read-only
(For now, do not download the session sticky code from the Downloads page; it has a bug.)
wget http://nginx-hmux-module.googlecode.com/files/nginx_hmux_module_v0.2.tar.gz
Apply the patches:
patch -p1 <hmux/hmux.patch
patch -p0 <nginx-upstream-jvm-route-read-only/jvm_route.patch
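The two patch commands above differ only in the -p level, which tells patch how many leading path components to strip from the file names recorded in the diff. A minimal sketch with a toy file and a hand-written diff (the /tmp paths are invented for the demo):

```shell
# -p1 strips one leading component ("a/target.txt" -> "target.txt");
# -p0 strips none. Toy demo in /tmp so no real source tree is touched.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'old line\n' > target.txt
cat > fix.patch <<'EOF'
--- a/target.txt
+++ b/target.txt
@@ -1 +1 @@
-old line
+new line
EOF
patch -p1 < fix.patch   # with -p0, patch would look for "a/target.txt" literally
cat target.txt          # now contains "new line"
```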
./configure --add-module=hmux/ --add-module=/home/wangbin/work/memcached/keepalive/ \
    --add-module=nginx-upstream-jvm-route-read-only/ --with-debug
Remove the optimization flags from objs/Makefile
make
make install
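The optimization flags mentioned above can be stripped with sed; a sketch, demonstrated on a throwaway copy (in a real build you would point sed at objs/Makefile in the nginx source tree):

```shell
# Sketch: drop gcc -O/-O1/-O2 flags so the --with-debug binary is easier to
# step through in gdb. Demonstrated on a throwaway file; adjust the path to
# your real nginx build tree.
mkdir -p /tmp/nginx-demo
printf 'CFLAGS =  -pipe  -O -W -Wall -g\n' > /tmp/nginx-demo/Makefile
# Remove any -O, -O1, or -O2 token (and its trailing space) in place
sed -i 's/-O[0-2]\{0,1\} //g' /tmp/nginx-demo/Makefile
cat /tmp/nginx-demo/Makefile   # CFLAGS =  -pipe  -W -Wall -g
```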
Edit resin.conf (this must be done on every machine):
Instance a:
<http server-id="a" host="61.135.250.217" port="18080"/>
<cluster>
<srun server-id="a" host="61.135.250.217" port="6800"/>
<srun server-id="b" host="61.135.250.217" port="6801"/>
</cluster>
Instance b:
<http server-id="b" host="61.135.250.217" port="18081"/>
<cluster>
<srun server-id="a" host="61.135.250.217" port="6800"/>
<srun server-id="b" host="61.135.250.217" port="6801"/>
</cluster>
Start resin:
sh httpd.sh -server a start
sh httpd.sh -server b start
Edit nginx.conf:
upstream resins{
server 61.135.250.217:6800 srun_id=a;
server 61.135.250.217:6801 srun_id=b;
jvm_route $cookie_JSESSIONID;
keepalive 1024;
}
server {
location /{
hmux_pass resins;
}
}
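The jvm_route module pins a request by comparing the text after the last dot of the JSESSIONID cookie against each server's srun_id. A minimal shell sketch of that matching rule (the addresses echo the config above; the cookie value itself is invented):

```shell
# Sketch of the matching rule: resin appends ".<srun_id>" to the session id,
# and jvm_route sends the request to the upstream server whose srun_id equals
# that suffix; with no match it falls back to normal balancing.
cookie="AC7EF1CAA8C6B0FEB68E77D7D375E2AF.b"   # invented session id
suffix="${cookie##*.}"                        # text after the last dot
case "$suffix" in
  a) backend="61.135.250.217:6800" ;;         # srun_id=a
  b) backend="61.135.250.217:6801" ;;         # srun_id=b
  *) backend="round-robin" ;;                 # no match: normal balancing
esac
echo "$backend"
```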
Start nginx
http://www.blogjava.net/gentoo1439/archive/2007/07/11/129527.html
Apache HTTP Server is used as the front-end load balancer, with two Tomcat instances as the back-end cluster. The configuration chosen here is Session Sticky: requests from the same user are always forwarded to one particular Tomcat server, which avoids replicating sessions across the cluster. The drawback is that each user talks to only one of the servers, so if that server goes down, the session is lost.
The module used is mod_proxy_ajp.so. All the relevant settings already exist as comments in the configuration files; you only need to make the corresponding changes.
We use Apache HTTP Server 2.2.4 and Tomcat 5.5.16.
First install Apache HTTP Server, then edit its configuration file httpd.conf. Start by loading three modules:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
Then append the following at the end of the file:
ProxyPass / balancer://tomcatcluster/ lbmethod=byrequests stickysession=JSESSIONID nofailover=Off timeout=5 maxattempts=3
ProxyPassReverse / balancer://tomcatcluster/
<Proxy balancer://tomcatcluster>
BalancerMember ajp://localhost:8009 route=a
BalancerMember ajp://localhost:9009 route=b
</Proxy>
The lines above configure the proxy. The <Proxy> block defines the load-balancing setup: both Tomcat servers run on the same machine, on ports 8009 and 9009, each with its own route, so Apache can forward each request to the specific Tomcat its route names.
Next, edit Tomcat's server.xml as follows:
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
The port is the one set in the <Proxy> block above. The route must be configured as well:
<!-- Define the top level container in our container hierarchy -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="a">
The jvmRoute value must match the route configured earlier.
Now test the load-balanced setup with JMeter. Start both Tomcat servers, then start the Apache server. Create a test plan in JMeter, create a test.jsp under jsp-examples on both Tomcat servers (any couple of lines of JSP will do), and run the test. Part of one sampler result follows:
HTTP response headers:
HTTP/1.1 200 OK
Date: Wed, 11 Jul 2007 02:17:55 GMT
Set-Cookie:
JSESSIONID=AC7EF1CAA8C6B0FEB68E77D7D375E2AF.b; Path=/jsp-examples
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 3
Keep-Alive: timeout=5, max=79
Connection: Keep-Alive
The Set-Cookie header above shows that the JSESSIONID now carries a route suffix: .b means this session is pinned to the Tomcat server whose route is b. You will find that some requests get a JSESSIONID ending in .a instead, i.e. they are forwarded to the Tomcat server whose route is a.
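The distribution JMeter shows can also be checked with a short script: collect the JSESSIONID values from a batch of responses and count the route suffixes. A sketch with invented sample values:

```shell
# Count how many sampled session ids were pinned to each route suffix.
# The ids below are invented; in practice they would be scraped from the
# Set-Cookie headers of real responses.
ids="AAA.a BBB.b CCC.a DDD.a EEE.b"
count_a=0; count_b=0
for id in $ids; do
  case "${id##*.}" in         # suffix after the last dot: the route name
    a) count_a=$((count_a + 1)) ;;
    b) count_b=$((count_b + 1)) ;;
  esac
done
echo "route a: $count_a, route b: $count_b"
```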
-------------------------------------
http://blog.sina.com.cn/s/blog_5dc960cd0100ipgt.html
nginx + resin (tomcat) session problem solved
(2010-05-19 09:34:07)
Reposted from: http://deidara.blog.51cto.com/400447/193887
The following changes are needed in the web server configuration:
In resin:
shell $> vim resin.conf
## Find
<http address="*" port="8080"/>
## Comment it out: <!--http address="*" port="8080"/-->
## Find
<server id="" address="127.0.0.1" port="6800">
## Replace with:
<server id="a" address="192.168.6.121" port="6800">
<!-- server2 address=192.168.6.162 -->
<http id="" port="8080"/>
</server>
<server id="b" address="192.168.6.121" port="6801">
<!-- server2 address=192.168.6.162 -->
<http id="" port="8081"/>
</server>
In tomcat (verified by testing: virtual hosts are also supported; just make the change below once):
Edit Tomcat's server.xml. In the configuration file on each of the two servers, find:
<Engine name="Catalina" defaultHost="localhost" >
and change it to, respectively:
Tomcat01: (192.168.0.100)
<Engine name="Catalina" defaultHost="localhost" jvmRoute="a">
Tomcat02: (192.168.0.101)
<Engine name="Catalina" defaultHost="localhost" jvmRoute="b">
Changes on the nginx side: nginx_upstream_jvm_route is an nginx extension module that implements cookie-based session sticky load balancing.
Installation:
1. Get the nginx_upstream_jvm_route module:
Address: http://sh0happly.blog.51cto.com/p_w_upload/201004/1036375_1271836572.zip; unpack it and upload it to /root.
2. Enter the nginx source directory:
cd nginx-0.7.61
patch -p0 < ../nginx-upstream-jvm-route/jvm_route.patch
The patch prints:
patching file src/http/ngx_http_upstream.c
Hunk #1 succeeded at 3869 (offset 132 lines).
Hunk #3 succeeded at 4001 (offset 132 lines).
Hunk #5 succeeded at 4100 (offset 132 lines).
patching file src/http/ngx_http_upstream.h
3. Install nginx:
shell $> ./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --add-module=/root/nginx-upstream-jvm-route/
shell $> make
shell $> make install
4. Edit the configuration, for example:
1. For resin:
upstream backend {
  server 192.168.0.100 srun_id=a;  # srun_id=a matches server id="a" in server1's resin configuration
  server 192.168.0.101 srun_id=b;
  jvm_route $cookie_JSESSIONID|sessionid;
}
2. For tomcat:
upstream tomcat {
  server 192.168.0.100:8080 srun_id=a;  # srun_id=a matches jvmRoute="a" in tomcat01's configuration
  server 192.168.0.101:8080 srun_id=b;  # srun_id=b matches jvmRoute="b" in tomcat02's configuration
  jvm_route $cookie_JSESSIONID|sessionid reverse;
}
server {
  server_name test.com;
  charset utf-8,GB2312;
  index index.html;
  if (-d $request_filename) {
    rewrite ^/(.*)([^/])$ http://$host/$1$2/ permanent;
  }
  location / {
    proxy_pass http://tomcat/;
    proxy_redirect off;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
  }
}
Add the following to the configuration on both tomcats:
<Host name="test.com" debug="0" appBase="/usr/local/tomcat/apps/" unpackWARs="true" >
<Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="crm_log." suffix=".txt" timestamp="true"/>
<Context path="" docBase="/usr/local/tomcat/apps/jsp" reloadable="true" debug="0" crossContext="false">
</Context>
</Host>
Under /usr/local/tomcat/apps/jsp, create an index.jsp:
<HTML>
<HEAD>
<TITLE>JSP TESTPAGE</TITLE>
</HEAD>
<BODY>
<%
String name=request.getParameter("name");
out.println("<h1>this is 192.168.0.100:hello "+name+"!<br></h1>"); // or 192.168.0.101
%>
</BODY>
</HTML>
When you visit http://test.com, the page keeps coming from 192.168.0.100; after clearing cookies and session and refreshing, it sticks to 192.168.0.101 instead.
A worked example: http://hi.baidu.com/scenkoy/blog/item/2cd89da9b57696f71e17a29e.html
Test environment:
server1 runs nginx + tomcat01
server2 runs only tomcat02
server1 IP address: 192.168.2.88
server2 IP address: 192.168.2.89
Installation steps:
1. Install and configure nginx + nginx_upstream_jvm_route on server1:
shell $> wget -c http://sysoev.ru/nginx/nginx-0.7.61.tar.gz
shell $> svn checkout http://nginx-upstream-jvm-route.googlecode.com/svn/trunk/ nginx-upstream-jvm-route-read-only
shell $> tar zxvf nginx-0.7.61.tar.gz
shell $> cd nginx-0.7.61
shell $> patch -p0 < ../nginx-upstream-jvm-route-read-only/jvm_route.patch
shell $> useradd www
shell $> ./configure --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --add-module=/root/nginx-upstream-jvm-route-read-only
shell $> make
shell $> make install
2. Install tomcat and java on both machines (omitted).
Edit Tomcat's server.xml. In the configuration file on each of the two servers, find:
<Engine name="Catalina" defaultHost="localhost" >
and change it to, respectively:
Tomcat01:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="a">
Tomcat02:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="b">
Create an aa folder under webapps and put the test index.jsp in it, with this content:
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<% %>
<html>
<head>
</head>
<body>
88 <!-- server1 shows 88 here -->
<br />
<%out.print(request.getSession()) ;%> <!-- print the session -->
<br />
<%out.println(request.getHeader("Cookie")); %> <!-- print the cookie -->
</body>
</html>
The two tomcats are identical apart from the marked line (the "88").
Start both tomcats.
3. Configure nginx:
shell $> cd /usr/local/nginx/conf
shell $> mv nginx.conf nginx.bak
shell $> vi nginx.conf
## The configuration follows ###
user www www;
worker_processes 4;
error_log logs/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
# Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
events {
  use epoll;
  worker_connections 2048;
}
http {
  upstream backend {
    server 192.168.2.88:8080 srun_id=a;
    server 192.168.2.89:8080 srun_id=b;
    jvm_route $cookie_JSESSIONID|sessionid reverse;
  }
  include mime.types;
  default_type application/octet-stream;
  #charset gb2312;
  charset UTF-8;
  server_names_hash_bucket_size 128;
  client_header_buffer_size 32k;
  large_client_header_buffers 4 32k;
  client_max_body_size 20m;
  limit_rate 1024k;
  sendfile on;
  tcp_nopush on;
  keepalive_timeout 60;
  tcp_nodelay on;
  fastcgi_connect_timeout 300;
  fastcgi_send_timeout 300;
  fastcgi_read_timeout 300;
  fastcgi_buffer_size 64k;
  fastcgi_buffers 4 64k;
  fastcgi_busy_buffers_size 128k;
  fastcgi_temp_file_write_size 128k;
  gzip on;
  #gzip_min_length 1k;
  gzip_buffers 4 16k;
  gzip_http_version 1.0;
  gzip_comp_level 2;
  gzip_types text/plain application/x-javascript text/css application/xml;
  gzip_vary on;
  #limit_zone crawler $binary_remote_addr 10m;
  server {
    listen 80;
    server_name 192.168.2.88;
    index index.html index.htm index.jsp;
    root /var/www;
    #location ~ .*\.jsp$
    location /aa/ {
      proxy_pass http://backend;
      proxy_redirect off;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $http_host;
    }
    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
      expires 30d;
    }
    location ~ .*\.(js|css)?$ {
      expires 1h;
    }
    location /Nginxstatus {
      stub_status on;
      access_log off;
    }
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';
    # access_log off;
  }
}
4. Test
Open a browser and go to http://192.168.2.88/aa/. After many refreshes it still shows 88, so the patch is working, and the cookie value is set. To cross-check, I opened a different browser (a fresh one, because of the existing session and cookie) and entered the same address: http://192.168.2.88/aa/
It shows 89, and keeps showing 89 after many refreshes. If in doubt, remove srun_id=a and srun_id=b from the nginx configuration and visit again; you will see the pages are then served round-robin.