Nginx + Keepalived + Tomcat + Memcached: dual-VIP load balancing with session persistence
IP address list:
Name       IP                Software
-------------------------------------------------
VIP1 192.168.200.254
VIP2 192.168.200.253
nginx-1 192.168.200.101 nginx keepalived
nginx-2 192.168.200.102 nginx keepalived
tomcat-1 192.168.200.103 tomcat memcached
tomcat-2 192.168.200.104 tomcat memcached
Disable the firewall and SELinux on every machine:
[root@localhost ~]# service iptables stop
[root@localhost ~]# setenforce 0
Install and configure the JDK and Tomcat servers:
=================================================================================================================
Install and configure the JDK:
Extract jdk-7u65-linux-x64.tar.gz (removing the pre-installed java link first):
[root@tomcat-1 ~]# rm -rf /usr/bin/java
[root@tomcat-1 ~]# tar xf jdk-7u65-linux-x64.tar.gz
Extraction produces a jdk1.7.0_65 directory; move it to /usr/local and rename it to java:
[root@tomcat-1 ~]# mv jdk1.7.0_65/ /usr/local/java
Append the Java environment variables at the end of /etc/profile (equivalently, the same two lines could go in a new /etc/profile.d/java.sh):
[root@tomcat-1 ~]# vim /etc/profile #append at the end
export JAVA_HOME=/usr/local/java #Java root directory
export PATH=$PATH:$JAVA_HOME/bin #add Java's bin subdirectory to PATH
Reload the profile so the new variables take effect:
[root@tomcat-1 ~]# source /etc/profile
Run java -version or javac -version to check the Java version:
[root@tomcat-1 ~]# java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (rhel-2.5.1.2.el6_5-x86_64 u65-b17)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@tomcat-1 ~]# javac -version
javac 1.7.0_65
Install and configure Tomcat:
Extract the package:
[root@tomcat-1 ~]# tar xf apache-tomcat-7.0.54.tar.gz
Extraction produces apache-tomcat-7.0.54; move it to /usr/local and rename it to tomcat7:
[root@tomcat-1 ~]# mv apache-tomcat-7.0.54 /usr/local/tomcat7
Start Tomcat:
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/startup.sh
Using CATALINA_BASE: /usr/local/tomcat7
Using CATALINA_HOME: /usr/local/tomcat7
Using CATALINA_TMPDIR: /usr/local/tomcat7/temp
Using JRE_HOME: /usr/local/java
Using CLASSPATH: /usr/local/tomcat7/bin/bootstrap.jar:/usr/local/tomcat7/bin/tomcat-juli.jar
Tomcat started.
Tomcat listens on port 8080 by default:
[root@tomcat-1 ~]# netstat -anpt |grep :8080
tcp 0 0 :::8080 :::* LISTEN 55349/java
Stop Tomcat:
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/shutdown.sh
Test in a browser: http://192.168.200.103:8080
Create the Java web site:
First create a webapp directory in the filesystem root to hold the site files:
[root@tomcat-1 ~]# mkdir /webapp
Create a test page index.jsp in the /webapp directory:
[root@tomcat-1 ~]# vim /webapp/index.jsp
Server Info:
SessionID:<%=session.getId()%>
<br>
SessionIP:<%=request.getServerName()%>
<br>
SessionPort:<%=request.getServerPort()%>
<br>
<%
out.println("server one");
%>
Modify Tomcat's server.xml file.
Define a virtual host whose document root points at the newly created /webapp by adding a Context inside the Host block:
[root@tomcat-1 ~]# cp /usr/local/tomcat7/conf/server.xml{,.bak}
[root@tomcat-1 ~]# vim /usr/local/tomcat7/conf/server.xml
124 <Host name="localhost" appBase="webapps"
125 unpackWARs="true" autoDeploy="true">
126 <Context docBase="/webapp" path="" reloadable="false">
127 </Context>
docBase="/webapp" #document base directory of the web application
path="" #URL context path; "" makes this the Host's default application
reloadable="false" #whether to watch classes for changes and reload automatically
Stop Tomcat, then start it again:
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/shutdown.sh
Using CATALINA_BASE: /usr/local/tomcat7
Using CATALINA_HOME: /usr/local/tomcat7
Using CATALINA_TMPDIR: /usr/local/tomcat7/temp
Using JRE_HOME: /usr/local/java
Using CLASSPATH: /usr/local/tomcat7/bin/bootstrap.jar:/usr/local/tomcat7/bin/tomcat-juli.jar
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/startup.sh
Using CATALINA_BASE: /usr/local/tomcat7
Using CATALINA_HOME: /usr/local/tomcat7
Using CATALINA_TMPDIR: /usr/local/tomcat7/temp
Using JRE_HOME: /usr/local/java
Using CLASSPATH: /usr/local/tomcat7/bin/bootstrap.jar:/usr/local/tomcat7/bin/tomcat-juli.jar
Tomcat started.
Test in a browser: http://192.168.200.103:8080
=================================================================================================================
Tomcat 2 is configured almost exactly like Tomcat 1:
Install the JDK and set up the Java environment, same version as on Tomcat 1.
Install Tomcat, same version as on Tomcat 1.
[root@tomcat-2 ~]# vim /webapp/index.jsp
Server Info:
SessionID:<%=session.getId()%>
<br>
SessionIP:<%=request.getServerName()%>
<br>
SessionPort:<%=request.getServerPort()%>
<br>
<%
out.println("server two");
%>
[root@tomcat-2 ~]# cp /usr/local/tomcat7/conf/server.xml{,.bak}
[root@tomcat-2 ~]# vim /usr/local/tomcat7/conf/server.xml
124 <Host name="localhost" appBase="webapps"
125 unpackWARs="true" autoDeploy="true">
126 <Context docBase="/webapp" path="" reloadable="false" >
127 </Context>
[root@tomcat-2 ~]# /usr/local/tomcat7/bin/shutdown.sh
[root@tomcat-2 ~]# /usr/local/tomcat7/bin/startup.sh
Test in a browser: http://192.168.200.104:8080
=================================================================================================================
Notes on the Tomcat layout:
/usr/local/tomcat7 #main directory
bin #scripts for starting and stopping Tomcat on Windows or Linux
conf #Tomcat's global configuration files, most importantly server.xml and web.xml
lib #library files (JARs) Tomcat needs at runtime
logs #Tomcat's log files
webapps #Tomcat's main web deployment directory (including the sample applications)
work #class files produced by compiling JSPs
[root@tomcat-1 ~]# ls /usr/local/tomcat7/conf/
catalina.policy #security policy configuration
catalina.properties #Tomcat property configuration
context.xml #context configuration
logging.properties #logging configuration
server.xml #main configuration file
tomcat-users.xml #users for the manager-gui console (the management UI installed with Tomcat; enable access here)
web.xml #servlet, servlet-mapping, filter, MIME and related settings
server.xml is the main configuration file; it is where you change the listen port, set the web root, define virtual hosts, enable HTTPS, and so on.
Structure of server.xml:
<Server>
<Service>
<Connector />
<Engine>
<Host>
<Context> </Context>
</Host>
</Engine>
</Service>
</Server>
Content inside <!-- --> is a comment.
Server
The Server element represents the entire Catalina servlet container.
Service
A Service is a grouping: it consists of one or more Connectors plus a single Engine that handles all client requests received by those Connectors.
Connector
A Connector listens for client requests on a given port, hands them to the Engine, and returns the Engine's response to the client.
A typical Tomcat Engine has two Connectors: one listens for HTTP requests coming straight from browsers, the other for requests forwarded by another web server.
The Coyote HTTP/1.1 Connector listens on port 8080, while the AJP/1.3 Connector listens on port 8009 for servlet/JSP proxy requests from another web server such as Apache.
Engine
An Engine can be configured with multiple virtual hosts (Hosts), each with its own domain name.
When the Engine receives a request, it matches the request to one of its Hosts and hands it over for processing.
The Engine has a default virtual host that handles any request that matches no other Host.
Host
A Host represents a virtual host, matched against a network domain name.
Each virtual host can deploy one or more web apps; each web app corresponds to a Context with its own context path.
When the Host receives a request, it matches the request to a Context by longest prefix, so a Context with path="" becomes the Host's default Context.
Context
A Context corresponds to one web application, which consists of one or more servlets.
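Putting these elements together, the nesting described above can be illustrated with a minimal server.xml sketch (for illustration only; attribute values are examples, and it is not a replacement for the stock file shipped with Tomcat 7):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- Coyote HTTP/1.1 Connector: browsers connect here -->
    <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" />
    <!-- AJP Connector: proxy requests from another web server (e.g. Apache) -->
    <Connector port="8009" protocol="AJP/1.3" />
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
        <!-- path="" makes this the Host's default Context -->
        <Context docBase="/webapp" path="" reloadable="false" />
      </Host>
    </Engine>
  </Service>
</Server>
```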
=================================================================================================================
Configure the nginx-1 server:
[root@nginx-1 ~]# yum -y install pcre-devel zlib-devel openssl-devel
[root@nginx-1 ~]# useradd -M -s /sbin/nologin nginx
[root@nginx-1 ~]# tar xf nginx-1.6.2.tar.gz
[root@nginx-1 ~]# cd nginx-1.6.2
[root@nginx-1 nginx-1.6.2]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-file-aio --with-http_stub_status_module --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module && make && make install
--prefix=/usr/local/nginx #installation directory
--user=nginx --group=nginx #user and group to run as
--with-file-aio #enable asynchronous file I/O
--with-http_stub_status_module #enable the status statistics module
--with-http_ssl_module #enable the SSL module
--with-http_flv_module #enable the FLV module (time-based seeking within FLV files)
--with-http_gzip_static_module #enable serving precompressed gzip files
Configure nginx.conf:
[root@nginx-1 nginx-1.6.2]# cp /usr/local/nginx/conf/nginx.conf{,.bak}
[root@nginx-1 nginx-1.6.2]# vim /usr/local/nginx/conf/nginx.conf
=================================================================================================================
user nginx;
worker_processes 1;
error_log logs/error.log;
pid logs/nginx.pid;
events {
use epoll;
worker_connections 10240;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
keepalive_timeout 65;
upstream tomcat_server {
server 192.168.200.103:8080 weight=1;
server 192.168.200.104:8080 weight=1;
}
server {
listen 80;
server_name localhost;
location / {
root html;
index index.html index.htm;
proxy_pass http://tomcat_server;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
=================================================================================================================
[root@nginx-1 nginx-1.6.2]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@nginx-1 nginx-1.6.2]# /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
[root@nginx-1 nginx-1.6.2]# netstat -anpt |grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 7184/nginx
[root@nginx-1 nginx-1.6.2]# ps aux |grep nginx
root 7184 0.0 0.2 45000 1052 ? Ss 01:18 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nginx 7185 0.0 1.1 49256 5452 ? S 01:18 0:00 nginx: worker process
root 7193 0.0 0.1 103256 848 pts/1 S+ 01:18 0:00 grep nginx
Client test:
Open http://192.168.200.101 in a browser #refresh repeatedly; because the weights are equal, the two pages alternate
Configure the nginx-2 server:
Same configuration as nginx-1.
Client test:
Open http://192.168.200.102 in a browser #refresh repeatedly; because the weights are equal, the two pages alternate
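The upstream block above relies on plain weighted round-robin. Nginx can additionally be told when to consider a backend failed; a sketch with illustrative values (max_fails and fail_timeout are standard parameters of the upstream server directive):

```nginx
upstream tomcat_server {
    server 192.168.200.103:8080 weight=1 max_fails=2 fail_timeout=10s;   # tomcat-1
    server 192.168.200.104:8080 weight=1 max_fails=2 fail_timeout=10s;   # tomcat-2
}
```

With these parameters, a backend that fails 2 requests within 10 seconds is skipped for the next 10 seconds.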
=================================================================================================================
How it works: Keepalived creates two VRRP instances across the two Nginx machines, so the two VIPs back each other up. If either Nginx machine suffers a hardware failure, Keepalived automatically moves its VIP to the other machine and client access is unaffected.
Build and install Keepalived on nginx-1 and nginx-2:
[root@nginx-1 ~]# yum -y install kernel-devel openssl-devel
[root@nginx-1 ~]# tar xf keepalived-1.2.13.tar.gz
[root@nginx-1 ~]# cd keepalived-1.2.13
[root@nginx-1 keepalived-1.2.13]# ./configure --prefix=/ --with-kernel-dir=/usr/src/kernels/2.6.32-504.el6.x86_64/ && make && make install
[root@nginx-1 ~]# chkconfig --add keepalived
[root@nginx-1 ~]# chkconfig keepalived on
[root@nginx-1 ~]# chkconfig --list keepalived
Edit the Keepalived configuration file:
[root@nginx-1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
crushlinux@163.com
}
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 123
}
virtual_ipaddress {
192.168.200.254
}
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 52
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 123
}
virtual_ipaddress {
192.168.200.253
}
}
=================================================================================================================
[root@nginx-2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
crushlinux@163.com
}
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 123
}
virtual_ipaddress {
192.168.200.254
}
}
vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 52
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 123
}
virtual_ipaddress {
192.168.200.253
}
}
[root@nginx-1 ~]# service keepalived start
[root@nginx-1 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:2d:3d:97 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0
inet 192.168.200.254/32 scope global eth0
inet6 fe80::20c:29ff:fe2d:3d97/64 scope link
valid_lft forever preferred_lft forever
[root@nginx-2 ~]# service keepalived start
[root@nginx-2 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:6f:7d:87 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.102/24 brd 192.168.200.255 scope global eth0
inet 192.168.200.253/32 scope global eth0
inet6 fe80::20c:29ff:fe6f:7d87/64 scope link
valid_lft forever preferred_lft forever
Client test:
Open http://192.168.200.253 in a browser #refresh repeatedly; because the weights are equal, the two pages alternate
Client test:
Open http://192.168.200.254 in a browser #refresh repeatedly; because the weights are equal, the two pages alternate
Run the following Nginx process-watchdog script on both nginx-1 and nginx-2:
[root@nginx-1 ~]# cat nginx_pidcheck
#!/bin/bash
while :
do
    nginxpid=$(ps -C nginx --no-header | wc -l)      # count running nginx processes
    if [ $nginxpid -eq 0 ]; then
        /usr/local/nginx/sbin/nginx                  # nginx died: try to restart it
        keeppid=$(ps -C keepalived --no-header | wc -l)
        if [ $keeppid -eq 0 ]; then
            /etc/init.d/keepalived start             # restart keepalived too if it died
        fi
        sleep 5
        nginxpid=$(ps -C nginx --no-header | wc -l)
        if [ $nginxpid -eq 0 ]; then
            /etc/init.d/keepalived stop              # nginx would not start: release the VIP
        fi
    fi
    sleep 5
done
[root@nginx-1 ~]# sh nginx_pidcheck &
[root@nginx-1 ~]# vim /etc/rc.local
sh nginx_pidcheck &
This script runs in an endless loop on both Nginx machines, checking every 5 seconds. It uses ps -C to count nginx processes: if the count is 0, the nginx process has died and the script tries to restart it. If the count is still 0 five seconds later, nginx failed to start, so the script stops the local Keepalived service; the VIP is then taken over by the standby machine, whose Nginx serves the whole site from that point on. This keeps the Nginx service highly available.
Script test:
[root@nginx-1 ~]# netstat -anpt |grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 4321/nginx
[root@nginx-1 ~]# killall -s QUIT nginx
[root@nginx-1 ~]# netstat -anpt |grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 59418/nginx
VIP failover test:
[root@nginx-1 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:2d:3d:97 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0
inet 192.168.200.254/32 scope global eth0
inet6 fe80::20c:29ff:fe2d:3d97/64 scope link
valid_lft forever preferred_lft forever
[root@nginx-2 ~]# service keepalived stop
Stopping keepalived: [ OK ]
[root@nginx-1 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:2d:3d:97 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0
inet 192.168.200.254/32 scope global eth0
inet 192.168.200.253/32 scope global eth0
inet6 fe80::20c:29ff:fe2d:3d97/64 scope link
valid_lft forever preferred_lft forever
Client test:
Open http://192.168.200.253 in a browser #refresh repeatedly; because the weights are equal, the two pages alternate
Client test:
Open http://192.168.200.254 in a browser #refresh repeatedly; because the weights are equal, the two pages alternate
=================================================================================================================
Install libevent and Memcached on tomcat-1 (repeat the same steps on tomcat-2):
[root@tomcat-1 ~]# yum -y install gcc openssl-devel pcre-devel zlib-devel
[root@tomcat-1 ~]# tar xf libevent-2.0.15-stable.tar.gz
[root@tomcat-1 ~]# cd libevent-2.0.15-stable
[root@tomcat-1 libevent-2.0.15-stable]# ./configure --prefix=/usr/local/libevent && make && make install
[root@tomcat-1 ~]# tar xf memcached-1.4.5.tar.gz
[root@tomcat-1 ~]# cd memcached-1.4.5
[root@tomcat-1 memcached-1.4.5]# ./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent/ && make && make install
[root@tomcat-1 memcached-1.4.5]# ldconfig -v |grep libevent
libevent_pthreads-2.0.so.5 -> libevent_pthreads.so
libevent-2.0.so.5 -> libevent.so
libevent_extra-2.0.so.5 -> libevent_extra.so
libevent_core-2.0.so.5 -> libevent_core.so
libevent_openssl-2.0.so.5 -> libevent_openssl.so
libevent_extra-1.4.so.2 -> libevent_extra-1.4.so.2.1.3
libevent_core-1.4.so.2 -> libevent_core-1.4.so.2.1.3
libevent-1.4.so.2 -> libevent-1.4.so.2.1.3
[root@tomcat-1 memcached-1.4.5]# /usr/local/memcached/bin/memcached -u root -m 512M -n 10 -f 2 -d -vvv -c 512
/usr/local/memcached/bin/memcached: error while loading shared libraries: libevent-2.0.so.5: cannot open shared object file: No such file or directory
memcached cannot find the shared library; add the libevent library path to /etc/ld.so.conf and refresh the linker cache:
[root@localhost memcached-1.4.5]# vim /etc/ld.so.conf
include ld.so.conf.d/*.conf
/usr/local/libevent/lib/
[root@localhost memcached-1.4.5]# ldconfig
[root@localhost memcached-1.4.5]# /usr/local/memcached/bin/memcached -u root -m 512M -n 10 -f 2 -d -vvv -c 512
Options:
-h #show help
-p #port for memcached to listen on (default 11211)
-l #IP address to bind to
-u #user to run as (required when launching as root)
-m #maximum memory to use for data, in MB (default 64)
-c #maximum number of concurrent connections
-vvv #print very verbose information
-n #minimum space allocated per chunk, in bytes
-f #chunk size growth factor (default 1.25)
-d #run as a daemon in the background
[root@tomcat-1 ~]# netstat -antp| grep :11211 #check that memcached is alive (memcached listens on port 11211)
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 71559/memcached
tcp 0 0 :::11211 :::* LISTEN 71559/memcached
Test that memcached can store and retrieve data:
[root@tomcat-1 ~]# yum -y install telnet
[root@localhost ~]# telnet 192.168.200.103 11211
set username 0 0 8
zhangsan
STORED
get username
VALUE username 0 8
zhangsan
END
quit
Connection closed by foreign host.
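The 8 in "set username 0 0 8" is the length of the value in bytes (the two zeros are the flags and the expiry time). The same request line can be built non-interactively; a minimal bash sketch, using the key and value from the telnet session above:

```shell
key=username
value=zhangsan
# memcached text protocol: set <key> <flags> <exptime> <bytes>\r\n<value>\r\n
bytes=${#value}      # length of the value in bytes (8 for "zhangsan")
printf 'set %s 0 0 %s\r\n%s\r\n' "$key" "$bytes" "$value"
```

Piping this output into a TCP client such as nc 192.168.200.103 11211 reproduces the interactive session without telnet.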
Finally, connect Tomcat-1 and Tomcat-2 to Memcached through memcached-session-manager (msm).
Copy the *.jar files from the session package into /usr/local/tomcat7/lib/:
[root@tomcat-1 ~]# cp session/* /usr/local/tomcat7/lib/
Edit the Tomcat configuration file to point at the memcached servers.
The tomcat-1 and tomcat-2 configuration files are identical; both follow the sample below:
[root@tomcat-1 ~]# vim /usr/local/tomcat7/conf/context.xml
<Context>
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="memA:192.168.200.103:11211 memB:192.168.200.104:11211"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
[root@tomcat-2 ~]# vim /usr/local/tomcat7/conf/context.xml
<Context>
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="memA:192.168.200.103:11211 memB:192.168.200.104:11211"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/shutdown.sh
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/startup.sh
If it worked, Tomcat and Memcached will hold established connections to each other, visible in the before/after netstat output.
Output on Tomcat-1 (Tomcat-2 is analogous):
[root@tomcat-1 ~]# netstat -antp|grep java
tcp 0 0 ::ffff:127.0.0.1:8005 :::* LISTEN 62496/java
tcp 0 0 :::8009 :::* LISTEN 62496/java
tcp 0 0 :::8080 :::* LISTEN 62496/java
tcp 0 0 ::ffff:192.168.200.10:28232 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java
tcp 0 0 ::ffff:192.168.200.10:28231 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java
tcp 0 0 ::ffff:192.168.200.10:28230 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java
tcp 0 0 ::ffff:192.168.200.10:28228 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java
tcp 0 0 ::ffff:192.168.200.10:28229 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java
[root@tomcat-1 ~]# netstat -antp|grep memcached
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 62402/memcached
tcp 0 0 192.168.200.103:11211 192.168.200.103:28230 ESTABLISHED 62402/memcached
tcp 45 0 192.168.200.103:11211 192.168.200.103:28228 ESTABLISHED 62402/memcached
tcp 0 0 192.168.200.103:11211 192.168.200.103:28232 ESTABLISHED 62402/memcached
tcp 0 0 192.168.200.103:11211 192.168.200.103:28229 ESTABLISHED 62402/memcached
tcp 0 0 192.168.200.103:11211 192.168.200.103:28231 ESTABLISHED 62402/memcached
tcp 0 0 :::11211 :::* LISTEN 62402/memcached