1. Nginx + Keepalived for site high availability
Linux cluster types
LB (load balancing): nginx, varnish (director module), haproxy, LVS at the front end
HA (high availability): keepalived, heartbeat; redundancy provides a standby for the active device, and when the active device fails, the standby takes over its work
HP: high performance
keepalived implements IP address failover mainly through VRRP, the Virtual Router Redundancy Protocol, combined with scripts for service checks to achieve high availability
keepalived setup walkthrough
Prepare two machines:
192.168.1.198
192.168.1.196
Synchronize time on both machines: ntpdate ntp1.aliyun.com
Disable the firewall, or adjust its rules to allow keepalived's VRRP packets
keepalived is carried in the base repository, so it can be installed directly:
yum install keepalived # install keepalived on both nodes
The keepalived configuration file has three major sections:
GLOBAL CONFIGURATION # global settings
VRRPD CONFIGURATION # VRRP virtual router settings
LVS CONFIGURATION # LVS-related settings
A simple configuration example:
! Configuration File for keepalived
global_defs { # global settings
notification_email { # alert recipient addresses
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1 # mail server address
smtp_connect_timeout 30 # connection timeout, in seconds
router_id node1.com # identifier for this host
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.0.1 # multicast group used for VRRP advertisements
vrrp_iptables
}
vrrp_instance VI_1 { # one instance, i.e. one virtual router
state MASTER # this node is the master
interface ens33 # bind to the real NIC
virtual_router_id 51 # virtual router id
priority 100 # priority
advert_int 1
authentication { # authentication settings
auth_type PASS # authentication type
auth_pass 1111 # password
}
virtual_ipaddress { # the virtual router's IP, interface and label
192.168.1.254/24 brd 192.168.1.255 dev ens33 label ens33:1
}
}
After configuring, copy this file to the standby machine, change state MASTER to state BACKUP, and lower the priority; once changed, start the service and it takes effect.
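For reference, the standby's instance stanza differs only in those two directives (a sketch; 95 is an arbitrary lower priority):
vrrp_instance VI_1 {
state BACKUP # MASTER on the primary node
interface ens33
virtual_router_id 51 # must match the master's id
priority 95 # must be lower than the master's 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111 # must match the master's password
}
virtual_ipaddress {
192.168.1.254/24 brd 192.168.1.255 dev ens33 label ens33:1
}
}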
keepalived's notification mechanism
Notifications are implemented by calling scripts through the notify options:
# notify scripts, alert as above
notify_master <STRING>|<QUOTED-STRING> [username [groupname]] # defined in the instance; run when this host becomes master
notify_backup <STRING>|<QUOTED-STRING> [username [groupname]] # run when this host transitions to backup
notify_fault <STRING>|<QUOTED-STRING> [username [groupname]] # run when this host enters the fault state
notify_stop <STRING>|<QUOTED-STRING> [username [groupname]] # executed when stopping vrrp
notify <STRING>|<QUOTED-STRING> [username [groupname]]
Using the notification scripts:
Example notification script:
#!/bin/bash
#
contact='root@localhost'
notify() {
local mailsubject="$(hostname) to be $1, vip floating"
local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}
case $1 in
master)
notify master
;;
backup)
notify backup
;;
fault)
notify fault
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac
How the script is called from keepalived:
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
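The script can be exercised by hand before wiring it into keepalived; a quick sanity check (assumes the mailx package provides the mail command):
chmod +x /etc/keepalived/notify.sh
/etc/keepalived/notify.sh master # should mail "<hostname> to be master, vip floating" to root@localhost
mail # inspect root's local mailbox for the message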
Example: a highly available IPVS cluster:
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state MASTER
interface eno16777736
virtual_router_id 14
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 571f97b2
}
virtual_ipaddress {
10.1.0.93/16 dev eno16777736
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 10.1.0.93 80 { # virtual service: VIP and port
delay_loop 3 # health-check the backend real servers every 3 seconds
lb_algo rr # scheduling algorithm
lb_kind DR # LVS forwarding type
protocol TCP
sorry_server 127.0.0.1 80 # fallback "sorry" server used when all real servers are down
real_server 10.1.0.69 80 { # backend real server
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
real_server 10.1.0.71 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
Walkthrough
Prepare the machines:
IPVS and keepalived run on two directors, 192.168.1.196 and 192.168.1.198; the backend real servers are two nginx machines, 192.168.1.201 and 192.168.1.202.
Also deploy nginx on the director machines, to serve the sorry page when every backend is down.
Configure the real-server parameters for DR mode: a script adjusts the ARP kernel parameters and adds the VIP.
Run the following script on both real servers:
#!/bin/bash
# configure a real server for LVS-DR: suppress ARP for the VIP and bind it to lo
vip=192.168.1.254 # set to the virtual router's IP address
interface="lo:0"
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $interface $vip netmask 255.255.255.255 broadcast $vip up
route add -host $vip dev $interface
;;
stop)
ifconfig $interface down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "usage: $(basename $0) {start|stop}"
exit 1
esac
Modify the keepalived configuration, adding the virtual_server block with the two real_server entries; keepalived generates the matching ipvsadm rules automatically. A quick verification follows below.
Stop one real server: a few seconds after the connection drops, everything is scheduled to the remaining real server.
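A way to verify the generated rules and the failover, assuming ipvsadm is installed on the directors:
ipvsadm -Ln # the virtual_server/real_server settings should show up as LVS rules
curl http://192.168.1.254/ # requests to the VIP alternate between the two real servers (rr)
# stop the web service on one real server, wait a few seconds (delay_loop 3), then:
ipvsadm -Ln # the failed real server is gone until its health check passes again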
keepalived can call external helper scripts to monitor resources and dynamically adjust the node's priority based on the result.
Two steps: (1) define a script with vrrp_script; (2) reference it inside the vrrp instance with track_script:
vrrp_script <SCRIPT_NAME> {
script "" # command or script to run
interval INT # check interval, in seconds
weight -INT # subtracted from the priority while the check fails
rise 2 # consecutive successes before the check is considered up
fall 3 # consecutive failures before the check is considered down
}
track_script {
SCRIPT_NAME_1
SCRIPT_NAME_2
...
}
Note:
vrrp_script chk_down {
script "/bin/bash -c '[[ -f /etc/keepalived/down ]] && exit 1 || exit 0'"
interval 1
weight -10
}
[[ -f /etc/keepalived/down ]] must be run as an argument to bash: keepalived does not pass the script string through a shell, so the test and its exit logic have to sit inside the quoted bash -c argument.
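With chk_down referenced from track_script in the instance, a manual failover test might look like this (assuming the master runs at priority 100 and the backup sits above 90):
touch /etc/keepalived/down # the check now exits 1 and the master's priority drops by 10
ip addr show ens33 # the VIP leaves this node and appears on the peer
rm -f /etc/keepalived/down # the check succeeds again; priority recovers and the VIP is preempted back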
Example: a highly available nginx service
Modify the keepalived configuration to add an external script that monitors the nginx process; when nginx fails, the tracked check lowers this node's priority so the VIP fails over to the standby.
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.100.19
}
vrrp_script chk_nginx {
script "killall -0 nginx" # exits 0 while an nginx process exists, non-zero otherwise
interval 1
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface eno16777736
virtual_router_id 14
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 571f97b2
}
virtual_ipaddress {
10.1.0.93/16 dev eno16777736
}
track_script {
chk_down
chk_nginx
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
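A failover test for this setup, run on the master (a sketch; the backup's priority must lie between 95 and 100 for the weight -5 drop to matter):
systemctl stop nginx # or: nginx -s stop
ip addr show eno16777736 # after two failed checks (fall 2) the VIP moves to the backup
systemctl start nginx # one successful check (rise 1) restores the priority and the VIP returns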
2. A keepalived dual-master setup
Dual-master model
Two virtual router instances must be configured; each host acts as the MASTER of one instance and the BACKUP of the other.
! Configuration File for keepalived
global_defs { # global settings
notification_email { # alert recipient addresses
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1 # mail server address
smtp_connect_timeout 30 # connection timeout, in seconds
router_id node1.com # identifier for this host
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.0.1 # multicast group used for VRRP advertisements
vrrp_iptables
}
vrrp_instance VI_1 { # first instance, i.e. first virtual router
state MASTER # this node is the master for VI_1
interface ens33 # bind to the real NIC
virtual_router_id 51 # virtual router id
priority 100 # priority
advert_int 1
authentication { # authentication settings
auth_type PASS # authentication type
auth_pass 1111 # password
}
virtual_ipaddress { # the virtual router's IP, interface and label
192.168.1.254/24 brd 192.168.1.255 dev ens33 label ens33:1
}
}
vrrp_instance VI_2 { # define the second virtual router
state BACKUP # this node is the backup in this instance
interface ens33 # NIC name
virtual_router_id 55 # must differ from the first instance's id
priority 98 # priority; as the backup it must be lower than the peer's
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress { # the second VIP; this node is the standby for it
192.168.1.253/24 brd 192.168.1.255 dev ens33 label ens33:3
}
}
After defining this on one node, copy the file to the other node and there make the second virtual router the MASTER (and the first the BACKUP).
Configuration complete.
Start the service with systemctl start keepalived; here the second machine is started first.
Once up, the second machine acquires both addresses and advertises twice: once for virtual_router_id 55 at priority 100 (it is the master of the second virtual router) and once for virtual_router_id 51 at priority 99 (it is the backup node of the first).
Now start the first machine: systemctl start keepalived
After starting, it preempts the VIP of the virtual router for which it holds the higher priority and becomes that instance's master.
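To verify the dual-master state, check the addresses on both nodes (interface and labels as configured above):
ip addr show ens33 # node1 should carry ens33:1 (192.168.1.254), node2 ens33:3 (192.168.1.253)
systemctl stop keepalived # run on either node; re-running ip addr on the survivor shows both VIPs there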
3. HAProxy + Keepalived for site high availability
Create the HAProxy check script
Give it execute permission with chmod +x check_haproxy.sh; its content is as follows:
#!/bin/bash
# auto check haproxy process
if ! killall -0 haproxy; then
/etc/init.d/keepalived stop
fi
The keepalived.conf for the HAProxy + keepalived MASTER node:
! Configuration File for keepalived
global_defs {
notification_email {
xxx@139.com
}
notification_email_from wgkgood@139.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_haproxy {
script "/data/sh/check_haproxy.sh"
interval 2
weight 2
}
# VIP1
vrrp_instance VI_1 {
state MASTER
interface eth0
lvs_sync_daemon_interface eth0
virtual_router_id 151
priority 100
advert_int 5
nopreempt
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
192.168.0.133
}
track_script {
chk_haproxy
}
}
On the backup node, create the same check_haproxy.sh script and give it execute permission (chmod +x check_haproxy.sh).
The keepalived.conf for the HAProxy + keepalived BACKUP node:
! Configuration File for keepalived
global_defs {
notification_email {
xxx@139.com
}
notification_email_from wgkgood@139.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_haproxy {
script "/data/sh/check_haproxy.sh"
interval 2
weight 2
}
# VIP1
vrrp_instance VI_1 {
state BACKUP
interface eth0
lvs_sync_daemon_interface eth0
virtual_router_id 151
priority 90
advert_int 5
nopreempt
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
192.168.0.133
}
track_script {
chk_haproxy
}
}
4. Set up a Tomcat server and access it through an nginx reverse proxy
Software architecture patterns:
Layered architecture: presentation, business, persistence and database layers
Event-driven architecture: distributed asynchronous architecture
Microkernel architecture, i.e. plug-in architecture
Microservices architecture
JDK: the Java development kit
Servlet: the Java class library for building web server pages
Install the JDK; openjdk is used here:
yum install java-1.8.0-openjdk-devel # the -devel package resolves the remaining dependencies automatically
wget http://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-8/v8.5.45/bin/apache-tomcat-8.5.45.tar.gz # download the tomcat binary tarball
tar xf apache-tomcat-8.5.45.tar.gz -C /usr/local/ # extract into /usr/local
cd /usr/local
ln -s apache-tomcat-8.5.45 tomcat # symlink the extracted directory, so future upgrades only repoint the link
cd tomcat
useradd tomcat # add the user and adjust ownership; tomcat runs as an unprivileged user, so file permissions must allow it
chown -R .tomcat .
chmod g+r conf/*
chmod g+rx conf/
chown -R tomcat logs/ temp/ work/
vim /etc/profile.d/cols.sh # put tomcat's bin directory on PATH (and set a colored prompt)
PS1='[\e[32;40m\u@\h \W\e[m]$ '
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/tomcat/bin
catalina.sh start # start tomcat
Port 8080 serves HTTP; 8009 is the AJP connector; 8005 is the management (shutdown) port.
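The listeners can be confirmed after startup; a quick check:
ss -tnl | grep -E ':(8080|8005|8009)'
# 8080 = HTTP connector, 8005 = shutdown port, 8009 = AJP connector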
Tomcat's key internal classes
Tomcat's core components, as structured in server.xml:
<Server>
<Service>
<connector/>
<connector/>
...
<Engine>
<Host>
<Context/>
<Context/>
...
</Host>
<Host>
...
</Host>
...
</Engine>
</Service>
</Server>
Each component is implemented by a Java class; the components fall roughly into the following types:
Top-level component: Server
Service component: Service
Connector components: http, https, ajp (apache jserv protocol)
Containers: Engine, Host, Context
Nested components: valve, logger, realm, loader, manager, ...
Cluster components: listener, cluster, ...
Operations related to deploying a webapp:
deploy: place the webapp's source files into the target directory (where the web program files live), configure tomcat to serve this webapp based on the paths defined in web.xml and context.xml, and load its own classes and dependencies into the JVM through a class loader.
There are two deployment modes:
automatic deployment: auto deploy
manual deployment:
cold deploy: copy the webapp into place, then start tomcat;
hot deploy: deploy without stopping tomcat;
deployment tools: manager, ant scripts, tcd (tomcat client deployer), etc.
undeploy: stop the webapp and unload it from the tomcat instance;
start: start a webapp that is in the stopped state;
stop: stop the webapp so it no longer serves users; its classes remain in the JVM;
redeploy: deploy again;
Layout of a JSP webapp:
/: the webapp's root directory
index.jsp, index.html: home page;
WEB-INF/: the webapp's private resource path, normally holding its web.xml and context.xml configuration files;
META-INF/: similar to WEB-INF/;
classes/: class files provided by this webapp;
lib/: classes provided by this webapp, packaged as jar files;
tomcat's configuration files:
server.xml: the main configuration file;
web.xml: a webapp can be accessed only after being "deployed"; deployment is normally defined by its web.xml, stored in the WEB-INF/ directory; the global copy provides default deployment settings for all webapps;
context.xml: each webapp can have its own context configuration, defined by a dedicated context.xml in its WEB-INF/ directory; the global copy provides defaults for all webapps;
tomcat-users.xml: accounts and passwords for user authentication;
catalina.policy: the security policy applied when tomcat is started with the -security option;
catalina.properties: Java property definitions, setting class loader paths and some JVM tuning parameters;
logging.properties: logging configuration; log4j
Manually provide a test application and cold-deploy it: # example
# mkdir -pv /usr/local/tomcat/webapps/myapp/{classes,lib,WEB-INF}
Create the file /usr/local/tomcat/webapps/myapp/index.jsp:
<%@ page language="java" %>
<%@ page import="java.util.*" %>
<html>
<head>
<title>Test Page</title>
</head>
<body>
<% out.println("hello world");
%>
</body>
</html>
# once the index.jsp file is placed in the myapp directory it is deployed automatically
The work directory keeps the source code generated when the JSP was translated.
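A quick request against the freshly deployed app (path as created above):
curl http://localhost:8080/myapp/ # should return the rendered page containing "hello world"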
Logging in to tomcat's GUI backend
Accessing the tomcat admin backend prompts for a username and password; the account must be enabled in tomcat-users.xml and associated with the matching roles:
<role rolename="admin-gui"/> <!-- role for the GUI management interface -->
<user name="admin" password="adminadmin" roles="admin,manager,admin-gui,admin-script,manager-gui,manager-script,manager-jmx,manager-status" /> <!-- one account associated with several roles, including the GUI ones -->
Configuration of tomcat's common components:
Server: represents the tomcat instance, i.e. the visible java process; it listens on port 8005 and accepts only "SHUTDOWN". Each Server must listen on a distinct port, so when starting several instances on one physical host, change their listening ports;
Service: associates one or more connector components with one engine component;
Connector component: an endpoint
Responsible for receiving requests; the three common kinds are http/https/ajp;
Requests entering tomcat fall into two classes:
(1) standalone: the request comes directly from a client browser;
(2) reverse-proxied by another web server: the request comes from a front-end proxy;
nginx --> http connector --> tomcat
httpd(proxy_http_module) --> http connector --> tomcat
httpd(proxy_ajp_module) --> ajp connector --> tomcat
httpd(mod_jk) --> ajp connector --> tomcat
Attributes:
port="8080"
protocol="HTTP/1.1"
connectionTimeout="20000" # in milliseconds
address: the IP address to listen on; defaults to all local addresses;
maxThreads: maximum number of concurrent connections, 200 by default;
enableLookups: whether to perform DNS lookups;
acceptCount: maximum length of the accept queue;
secure:
sslProtocol:
Engine component: the servlet engine; it contains one or more host components that define sites, and usually needs the defaultHost attribute to name the default virtual host;
Attributes:
name=
defaultHost="localhost"
jvmRoute=
Host component: a host or virtual host inside the engine that receives and processes requests; example:
<Host name="localhost" appBase="webapps" # tomcat only supports name-based virtual hosts
unpackWARs="true" autoDeploy="true">
</Host>
WAR: Webapp ARchives
Common attribute notes:
(1) appBase: the default directory holding this Host's webapps, either unpacked application directories or WAR archives; it can be given relative to the path defined by the $CATALINA_BASE variable;
(2) autoDeploy: whether a webapp placed into the appBase directory while tomcat is running is automatically deployed;
Example:
<Host name="tc1.magedu.com" appBase="/appdata/webapps" unpackWARs="true" autoDeploy="true">
</Host>
# mkdir -pv /appdata/webapps
# mkdir -pv /appdata/webapps/ROOT/{lib,classes,WEB-INF}
Provide a test page there and it will be served;
Context component:
Example:
# URL path, local filesystem path, and whether reloading is supported
<Context path="/PATH" docBase="/PATH/TO/SOMEDIR" reloadable=""/>
Valve component:
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log" suffix=".txt"
pattern="%h %l %u %t &quot;%r&quot; %s %b" />
# official access-log documentation: https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/valves/AccessLogValve.html
Valves come in several types:
access logging: org.apache.catalina.valves.AccessLogValve
access control: org.apache.catalina.valves.RemoteAddrValve
<Valve className="org.apache.catalina.valves.RemoteAddrValve" deny="172\.16\.100\.67"/>
nginx as the reverse proxy
Client (http) --> nginx (reverse proxy)(http) --> tomcat (http connector) # reverse proxy on the same host
location / {
proxy_pass http://tc1.magedu.com:8080;
}
location ~* \.(jsp|do)$ {
proxy_pass http://tc1.magedu.com:8080;
}
Because images and JSP pages do not live under the same path, proxying only the .jsp/.do requests leaves no location for the image paths, so the proxied pages cannot load their images; see the sketch below.
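One common fix is to split dynamic and static locations explicitly; a sketch (the static root path is an assumption):
location ~* \.(jsp|do)$ {
proxy_pass http://tc1.magedu.com:8080; # dynamic requests go to tomcat
}
location ~* \.(jpg|png|gif|css|js)$ {
root /data/static; # images and other static files served by nginx itself
}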
LAMT: Linux Apache(httpd) MySQL Tomcat
httpd's proxy modules:
proxy_module
proxy_http_module: adapter for http-protocol backends;
proxy_ajp_module: adapter for ajp-protocol backends;
Client (http) --> httpd (proxy_http_module)(http) --> tomcat (http connector)
Client (http) --> httpd (proxy_ajp_module)(ajp) --> tomcat (ajp connector)
Client (http) --> httpd (mod_jk)(ajp) --> tomcat (ajp connector)
proxy_http_module configuration example:
<VirtualHost *:80>
ServerName tc1.magedu.com
ProxyRequests Off
ProxyVia On
ProxyPreserveHost On
<Proxy *>
Require all granted
</Proxy>
ProxyPass / http://tc1.magedu.com:8080/
ProxyPassReverse / http://tc1.magedu.com:8080/
<Location />
Require all granted
</Location>
</VirtualHost>
<LocationMatch "\.(jsp|do)$">
ProxyPassMatch http://tc1.magedu.com:8080
</LocationMatch>
proxy_ajp_module configuration example:
<VirtualHost *:80>
ServerName tc1.magedu.com
ProxyRequests Off
ProxyVia On
ProxyPreserveHost On
<Proxy *>
Require all granted
</Proxy>
ProxyPass / ajp://tc1.magedu.com:8009/
ProxyPassReverse / ajp://tc1.magedu.com:8009/
<Location />
Require all granted
</Location>
</VirtualHost>
Load balancing tomcat
docker pull tomcat:8.5-slim # pull the tomcat image to use for the backend servers
docker run --name tc1 --hostname tc1.com -d -v /data/tc1:/usr/local/tomcat/webapps/myapp tomcat:8.5-slim
docker run --name tc2 --hostname tc2.com -d -v /data/tc2:/usr/local/tomcat/webapps/myapp tomcat:8.5-slim # start the containers with bind-mounted volumes and explicit hostnames
[root@centos7 tc1]$ mkdir -p lib classes WEB-INF # create the directories and an index.jsp; both volumes need their own index file
[root@centos7 tc1]$ vim index.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1>
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
Modify the nginx configuration to define the load-balancing upstream group and the reverse-proxy location:
upstream tcsrvs {
server 172.17.0.2:8080;
server 172.17.0.3:8080;
}
location /myapp/ {
proxy_pass http://tcsrvs/myapp/;
}
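Requests through the proxy should now alternate between the two containers; a quick loop to confirm (assuming the two test pages are distinguishable, e.g. TomcatA/TomcatB titles):
for i in 1 2 3 4; do curl -s http://localhost/myapp/ | grep -m1 -o 'Tomcat[AB]'; done
# round-robin output alternates between the two backends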
How httpd implements session stickiness:
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<proxy balancer://tcsrvs>
BalancerMember http://172.18.100.67:8080 route=TomcatA loadfactor=1
BalancerMember http://172.18.100.68:8080 route=TomcatB loadfactor=2
ProxySet lbmethod=byrequests
ProxySet stickysession=ROUTEID
</Proxy>
<VirtualHost *:80>
ServerName lb.magedu.com
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
<Proxy *>
Require all granted
</Proxy>
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
<Location />
Require all granted
</Location>
</VirtualHost>
Enabling the management interface:
<Location /balancer-manager>
SetHandler balancer-manager
ProxyPass !
Require all granted
</Location>
Sample program:
To demonstrate the effect, serve the following page from some context on TomcatA (e.g. /test):
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1>
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
To demonstrate on TomcatB, serve the following page from the same context (e.g. /test):
<%@ page language="java" %>
<html>
<head><title>TomcatB</title></head>
<body>
<h1><font color="blue">TomcatB.magedu.com</font></h1>
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
The second approach, using AJP members:
<proxy balancer://tcsrvs>
BalancerMember ajp://172.18.100.67:8009
BalancerMember ajp://172.18.100.68:8009
ProxySet lbmethod=byrequests
</Proxy>
<VirtualHost *:80>
ServerName lb.magedu.com
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
<Proxy *>
Require all granted
</Proxy>
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
<Location />
Require all granted
</Location>
<Location /balancer-manager>
SetHandler balancer-manager
ProxyPass !
Require all granted
</Location>
</VirtualHost>
Session persistence is kept the same way as in the previous approach.
Tomcat Session Replication Cluster:
(1) Enable the cluster by placing the following configuration inside <Engine> or <Host>;
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564"
frequency="500"
dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto"
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Make sure the Engine's jvmRoute attribute is configured correctly.
(2) Configure the webapps
Edit WEB-INF/web.xml and add the <distributable/> element;
Note: the configuration example in the tomcat documentation shipped with CentOS 7 has a syntax error in these two lines:
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
When the bound address is auto, the local hostname is resolved and the resulting IP address is used;
5. Set up Tomcat with memcached-based session sharing
https://github.com/magro/memcached-session-manager/wiki/SetupAndConfiguration implemented with msm (memcached-session-manager), a Java extension library
Set up the backend tomcat session cluster
Backend tomcat server addresses: 192.168.80.134, 192.168.80.130
Front-end nginx scheduler addresses: 192.168.80.133, 192.168.1.196
First download the required extension jars:
wget http://repo1.maven.org/maven2/de/javakaffee/msm/memcached-session-manager/2.3.2/memcached-session-manager-2.3.2.jar
wget http://repo1.maven.org/maven2/de/javakaffee/msm/memcached-session-manager-tc7/2.3.2/memcached-session-manager-tc7-2.3.2.jar
wget http://repo1.maven.org/maven2/net/spy/spymemcached/2.12.3/spymemcached-2.12.3.jar
wget http://repo1.maven.org/maven2/de/javakaffee/msm/msm-kryo-serializer/2.3.2/msm-kryo-serializer-2.3.2.jar
wget http://repo1.maven.org/maven2/com/esotericsoftware/kryo/4.0.2/kryo-4.0.2.jar
wget http://repo1.maven.org/maven2/de/javakaffee/kryo-serializers/0.42/kryo-serializers-0.42.jar
wget http://repo1.maven.org/maven2/com/esotericsoftware/minlog/1.3.0/minlog-1.3.0.jar
wget http://repo1.maven.org/maven2/com/esotericsoftware/reflectasm/1.11.7/reflectasm-1.11.7.jar
wget http://repo1.maven.org/maven2/org/ow2/asm/asm/6.2/asm-6.2.jar
wget http://repo1.maven.org/maven2/org/objenesis/objenesis/2.6/objenesis-2.6.jar
mv *.jar /usr/share/java/tomcat/ # move all the downloaded jar packages into tomcat's extension library directory, /usr/share/java/tomcat/
vim /etc/tomcat/server.xml # edit the configuration: add a Context for the app directory, and list the memcached nodes and ports for shared sessions
Do the same on both backend machines, adjusting the details (IP addresses, failoverNodes, and so on):
<Context path="/myapp" docBase="/webapps/myapp" reloadable="">
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="m1:192.168.80.134:11211,m2:192.168.80.130:11211"
failoverNodes="m1"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
# start the memcached service
systemctl start memcached
Start tomcat
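To confirm that sessions are being written to memcached (nc is from the nmap-ncat package; a sketch):
echo "stats items" | nc 192.168.80.130 11211 # non-empty item stats indicate stored sessions
# request the app through the front end and watch the JSESSIONID: with msm active its
# suffix names the memcached node (e.g. ...-m2) instead of changing per backend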
6. Set up Nginx + Tomcat
Build the backend tomcat session replication cluster
Backend tomcat server addresses: 192.168.80.132, 192.168.80.130
Front-end nginx scheduler addresses: 192.168.80.133, 192.168.1.196
Install the JDK and the tomcat packages:
yum install java-1.8.0-openjdk-devel tomcat tomcat-webapps tomcat-admin-webapps tomcat-docs-webapp -y
Create the test page directory and the test page # both backend machines need this
mkdir /webapps/myapp/{lib,classes,WEB-INF} -pv
vim /webapps/myapp/index.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.com</font></h1>
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
# Modify the tomcat configuration
Add the cluster configuration recommended by the official documentation:
https://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564"
frequency="500"
dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.80.132"
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
In the Host section, configure an appBase/alias pointing at the directory just created.
As the official docs instruct, edit web.xml and add the <distributable/> element:
[root@centos7 tomcat]# cp web.xml /webapps/myapp/WEB-INF/
vim web.xml
Start the service.
Modify the nginx configuration:
upstream tcsrvs {
server 192.168.80.130:8080;
server 192.168.80.132:8080;
}
location / {
proxy_pass http://tcsrvs;
}
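Finally, verify from a client that the session survives round-robin scheduling (a sketch using curl's cookie jar):
curl -c /tmp/cookies -s http://192.168.80.133/myapp/ | grep -A1 'Session ID'
curl -b /tmp/cookies -s http://192.168.80.133/myapp/ | grep -A1 'Session ID'
# with <distributable/> replication working, both requests report the same session ID
# even though they may land on different tomcats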