d. Install apache + tomcat on node4
Install jdk1.7.0_67 (tomcat needs a JDK to run). jdk1.7.0_67 has a compatibility issue with apache-tomcat-7.0.55.tar.gz; apache-tomcat-7.0.42.tar.gz can be used instead.
# rpm -ivh jdk-7u67-linux-x64.rpm
# tar xf apache-tomcat-7.0.55.tar.gz -C /usr/local/
Export the environment variables:
# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk1.7.0_67    # this path format is fixed
export PATH=$JAVA_HOME/bin:$PATH
# . /etc/profile.d/java.sh
# ln -sv apache-tomcat-7.0.55 tomcat
# cat /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export PATH=$CATALINA_HOME/bin:$PATH
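Taken together, the two profile.d snippets above put both toolchains on PATH; a quick sanity check (the paths are the ones used in this article):

```shell
# Combined effect of java.sh and tomcat.sh (paths from this article).
export JAVA_HOME=/usr/java/jdk1.7.0_67
export CATALINA_HOME=/usr/local/tomcat
export PATH=$JAVA_HOME/bin:$CATALINA_HOME/bin:$PATH
# Sanity check: both bin directories must now be on PATH.
echo ":$PATH:" | grep -q ":$JAVA_HOME/bin:"     && echo "JAVA_HOME OK"
echo ":$PATH:" | grep -q ":$CATALINA_HOME/bin:" && echo "CATALINA_HOME OK"
```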
Use the following as tomcat's startup script by adding it to the file /etc/rc.d/init.d/tomcat:
#!/bin/sh
# Tomcat initscript for Linux.
#
# chkconfig: 2345 96 14
# description: The Apache Tomcat servlet/JSP container.
# JAVA_OPTS='-Xms64m -Xmx128m'
JAVA_HOME=/usr/java/latest
CATALINA_HOME=/usr/local/tomcat
export JAVA_HOME CATALINA_HOME
case $1 in
start)
exec $CATALINA_HOME/bin/catalina.sh start ;;
stop)
exec $CATALINA_HOME/bin/catalina.sh stop ;;
restart)
$CATALINA_HOME/bin/catalina.sh stop
sleep 2
exec $CATALINA_HOME/bin/catalina.sh start ;;
*)
echo "Usage: `basename $0` {start|stop|restart}"
exit 1
;;
esac
Grant it execute permission:
# chmod +x /etc/rc.d/init.d/tomcat
Add it to the service list:
# chkconfig --add tomcat
Start tomcat:
# service tomcat start
# ss -tnlp | grep 8080
Add the following after the existing <Host> section (in server.xml):
<Host name="www.tree.com" appBase="/tomcat"
      unpackWARs="true" autoDeploy="true">
<Context path="" docBase="webapps" reloadable="true" />
</Host>
Or alternatively:
<Host name="www.tree.com" appBase="/tomcat/webapps"
      unpackWARs="true" autoDeploy="true">
</Host>
but then the resources must live under the /tomcat/webapps/ROOT directory.
Change
<Engine name="Catalina" defaultHost="localhost"> to <Engine name="Catalina" defaultHost="www.tree.com">
Change the listening port to 80 (the default is 8080):
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
Create the directory layout:
# tree /tomcat/
/tomcat/
└── webapps
    └── index.jsp
The test page /tomcat/webapps/index.jsp has the following content:
# cat /tomcat/webapps/index.jsp
<%@ page import="java.util.*" %>
<html>
<head>
<title> JSP Test Page</title>
</head>
<body>
<%
out.println("Hello How are you.");
out.println("Hello there.");
%>
</body>
</html>
# catalina.sh start
Check whether the port is up:
# ss -tnl
Access test:
http://192.168.21.166
With tomcat, run one tomcat per application: neither Context entries nor virtual hosts are recommended; instead run multiple instances on a single machine.
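The single-machine multi-instance setup suggested above can be sketched as one CATALINA_BASE directory per instance (the /tmp location and instance names here are hypothetical, chosen only for illustration):

```shell
# One CATALINA_BASE per instance; CATALINA_HOME stays shared.
CATALINA_HOME=/usr/local/tomcat
for i in 1 2; do
    base=/tmp/tomcat-instances/instance$i
    mkdir -p "$base/conf" "$base/logs" "$base/temp" "$base/webapps" "$base/work"
    # Each instance needs its own conf/server.xml with unique shutdown,
    # HTTP and AJP ports, and is then started with:
    #   CATALINA_BASE=$base $CATALINA_HOME/bin/catalina.sh start
done
ls /tmp/tomcat-instances
```

Because each instance has its own conf, logs and work directories, applications stay fully isolated while sharing one tomcat installation.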
Compile and install httpd
Prepare the build environment:
# yum install -y gcc gcc-c++ pcre-devel openssl-devel
# tar xf apr-1.4.6.tar.bz2 -C /usr/local
# cd apr-1.4.6
# ./configure --prefix=/usr/local/apr
# make && make install
apr-util is apr's utility library; it lets programmers make better use of apr's functionality. The apr sources can be obtained from http://apr.apache.org/; the latest version at the time was 1.4.1.
# tar xf apr-util-1.4.1.tar.bz2 -C /usr/local
# cd apr-util-1.4.1
# ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr
# make && make install
Install apache
The current httpd 2.4 series introduces the event MPM, which brings a significant performance improvement over the other MPMs.
# tar xf httpd-2.4.2.tar.bz2 -C /usr/local
# cd httpd-2.4.2
# ./configure --prefix=/usr/local/apache --sysconfdir=/etc/httpd --enable-so --enable-ssl --enable-cgi --enable-rewrite --with-zlib --with-pcre --with-apr=/usr/local/apr --with-apr-util=/usr/local/apr-util --enable-mpms-shared=all --with-mpm=event --enable-proxy --enable-proxy-http --enable-proxy-ajp --enable-proxy-balancer --enable-lbmethod-heartbeat --enable-heartbeat --enable-slotmem-shm --enable-slotmem-plain --enable-watchdog
# make && make install
Provide an init script for apache so the service can be controlled. Create the file /etc/rc.d/init.d/httpd and add the content below.
This is a script file, so it needs execute permission; and for the httpd service to start automatically at boot, it must also be added to the service list.
# cat /etc/rc.d/init.d/httpd
#!/bin/bash
#
# httpd        Startup script for the Apache HTTP Server
#
# chkconfig: - 85 15
# description: Apache is a World Wide Web server. It is used to serve \
#              HTML files and CGI.
# processname: httpd

# Source function library.
. /etc/rc.d/init.d/functions

if [ -f /etc/sysconfig/httpd ]; then
. /etc/sysconfig/httpd
fi

# Start httpd in the C locale by default.
HTTPD_LANG=${HTTPD_LANG-"C"}

# This will prevent initlog from swallowing up a pass-phrase prompt if
# mod_ssl needs a pass-phrase from the user.
INITLOG_ARGS=""

# Set HTTPD=/usr/sbin/httpd.worker in /etc/sysconfig/httpd to use a server
# with the thread-based "worker" MPM; BE WARNED that some modules may not
# work correctly with a thread-based MPM; notably PHP will refuse to start.

# Path to the apachectl script, server binary, and short-form for messages.
apachectl=/usr/local/apache/bin/apachectl
httpd=${HTTPD-/usr/local/apache/bin/httpd}
prog=httpd
pidfile=${PIDFILE-/var/run/httpd.pid}
lockfile=${LOCKFILE-/var/lock/subsys/httpd}
RETVAL=0
start(){
echo -n $"Starting $prog: "
LANG=$HTTPD_LANG daemon --pidfile=${pidfile} $httpd $OPTIONS
RETVAL=$?
echo
[ $RETVAL = 0 ] && touch ${lockfile}
return $RETVAL
}
stop(){
echo -n $"Stopping $prog: "
killproc -p ${pidfile} -d 10 $httpd
RETVAL=$?
echo
[ $RETVAL = 0 ] && rm -f ${lockfile} ${pidfile}
}
reload(){
echo -n $"Reloading $prog: "
if ! LANG=$HTTPD_LANG $httpd $OPTIONS -t >&/dev/null; then
RETVAL=$?
echo $"not reloading due to configuration syntax error"
failure $"not reloading $httpd due to configuration syntax error"
else
killproc -p ${pidfile} $httpd -HUP
RETVAL=$?
fi
echo
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p ${pidfile} $httpd
RETVAL=$?
;;
restart)
stop
start
;;
condrestart)
if [ -f ${pidfile} ] ; then
stop
start
fi
;;
reload)
reload
;;
graceful|help|configtest|fullstatus)
$apachectl $@
RETVAL=$?
;;
*)
echo $"Usage: $prog {start|stop|restart|condrestart|reload|status|fullstatus|graceful|help|configtest}"
exit 1
esac
exit $RETVAL
# chmod +x /etc/rc.d/init.d/httpd
# chkconfig --add httpd
Before starting httpd, first change the port tomcat's HTTP connector listens on (httpd will take port 80).
# service httpd start
Startup reports success, but no listening port appears, and `service httpd status` shows it is not running. Check the error log /usr/local/apache/logs/error_log:
[Thu Aug 06 08:01:25.328782 2015] [proxy_balancer:emerg] [pid 3966:tid 139927044081408] AH01177: Failed to lookup provider 'shm' for 'slotmem': is mod_slotmem_shm loaded??
[Thu Aug 06 08:01:25.336850 2015] [:emerg] [pid 3966:tid 139927044081408] AH00020: Configuration Failed, exiting
Enable the following in /etc/httpd/httpd.conf:
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
httpd then starts normally, but the error log still contains:
[Thu Aug 06 08:04:39.837549 2015] [lbmethod_heartbeat:notice] [pid 4005:tid 139928518989568] AH02282: No slotmem from mod_heartmonitor
Comment out this module:
#LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so
Now when httpd starts, the error log shows no further errors.
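The comment-out step can be done with sed; the demo below runs against a throwaway copy rather than the real /etc/httpd/httpd.conf:

```shell
# Comment out the heartbeat load-balancing module (demo on a temp file).
conf=$(mktemp)
echo 'LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so' > "$conf"
# '&' in the replacement stands for the matched text, so '#' is prepended.
sed -i 's|^LoadModule lbmethod_heartbeat_module|#&|' "$conf"
result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```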
But httpd now cannot be stopped, and stopping reports an error. Investigation shows that after startup the pid file is not written to /var/run/httpd.pid at all, but to /usr/local/apache/logs/httpd.pid, so the pidfile entry in the init script /etc/rc.d/init.d/httpd needs to be changed as follows:
Original:
pidfile=${PIDFILE-/var/run/httpd.pid}
Changed:
pidfile=${PIDFILE-/usr/local/apache/logs/httpd.pid}
Alternatively, specify PidFile "/var/run/httpd.pid" in the /etc/httpd/httpd.conf configuration file.
Also uncomment and set ServerName.
httpd can now be started and stopped normally.
Combining Apache with tomcat via mod_proxy
Append the following at the end of the main configuration file /etc/httpd/httpd.conf:
ProxyRequests Off
ProxyPass / http://192.168.21.166:8080/
ProxyPassReverse / http://192.168.21.166:8080/
<Proxy *>
Require all granted
</Proxy>
<Location />
Require all granted
</Location>
This can also be placed in a virtual host; in that case comment out DocumentRoot in the main configuration file, then add the following to the virtual host configuration file:
<VirtualHost *:80>
ProxyVia Off
ProxyRequests Off
ProxyPass / http://192.168.21.166:8080/
ProxyPassReverse / http://192.168.21.166:8080/
<Proxy *>
Require all granted
</Proxy>
<Location />
Require all granted
</Location>
</VirtualHost>
Browser requests are now correctly proxied to the backend tomcat.
e. Deploy DRBD and MySQL high availability on node8 and node88. Normally MySQL runs master/slave here, with several slave servers; if read pressure is still not relieved with multiple slaves, add a read-cache layer in front of them. Read requests that hit the cache return its result as long as the entry has not expired, or has expired but the underlying data is unchanged; on a miss, the result is looked up on a slave, cached, and then returned.
Prepare the MySQL source package:
# ls /usr/local/src/
mariadb-10.0.13.tar.gz
The installation here is automated with a script:
#!/bin/bash
useradd -r -s /sbin/nologin mysql > /dev/null
#vgextend $(vgs | awk '{if(NR==2) {print $1}}') /dev/sdb > /dev/null
lvcreate -L 18G -n data vg_lvm > /dev/null
mkdir /mysql
mkfs.ext4 /dev/vg_lvm/data > /dev/null
mount /dev/vg_lvm/data /mysql
mkdir /mysql/data
chown -R mysql.mysql /mysql/data
tar -xf /usr/local/src/mariadb-10.0.13.tar.gz -C /usr/local/
yum groupinstall -y "Development tools" "Server Platform Development" > /dev/null
echo -e "\033[42mGroupinstall is OK.\033[0m"
yum install -y libxml2-devel cmake > /dev/null
echo -e "\033[42mInstall is OK.\033[0m"
cd /usr/local/mariadb-10.0.13/
cmake . -DMYSQL_DATADIR=/mysql/data -DWITH_SSL=system -DWITH_SPHINX_STORAGE_ENGINE=1 > /dev/null
echo -e "\033[42mCmake is OK.\033[0m"
make && make install > /dev/null
echo -e "\033[42mMake and make install is OK.\033[0m"
cd /usr/local/mysql
echo 'export PATH=/usr/local/mysql/bin:$PATH' > /etc/profile.d/mysql.sh
source /etc/profile.d/mysql.sh
cp -f support-files/my-large.cnf /etc/my.cnf
sed -i '/^\[mysqld\]/a datadir=/mysql/data' /etc/my.cnf
cp /usr/local/mysql/support-files/mysql.server /etc/rc.d/init.d/mysqld
chmod +x /etc/rc.d/init.d/mysqld
chkconfig --add mysqld
chkconfig mysqld on
chown -R root.mysql /usr/local/mysql/*
/usr/local/mysql/scripts/mysql_install_db --user=mysql --datadir=/mysql/data > /dev/null
echo -e "\033[42mMysql initial is OK.\033[0m"
service mysqld start
ss -tnlp | grep 3306
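The sed line in the script inserts datadir directly under the [mysqld] section header; the demo below shows the same edit on a throwaway copy instead of the real /etc/my.cnf:

```shell
# Demonstrate the datadir insertion on a temp copy of a my.cnf-style file.
cnf=$(mktemp)
printf '[mysqld]\nport = 3306\n' > "$cnf"
sed -i '/^\[mysqld\]/a datadir=/mysql/data' "$cnf"
second=$(sed -n 2p "$cnf")   # the appended line lands right after [mysqld]
echo "$second"
rm -f "$cnf"
```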
High availability implemented with corosync + pacemaker
Prerequisites for configuring HA:
time synchronization, hostname-based mutual communication, and mutual SSH trust
Install the corosync and pacemaker packages.
For MySQL high availability, the resources needed are: vip, mysqld, and a filesystem [rsync+inotify, nfs] or block store [DRBD, iscsi].
Here the block store is implemented with DRBD.
Given multiple resources, there are two ways to organize them:
group: define a resource group
constraint: define constraints
location: location constraint, which node the service's resources prefer
order: order constraint, the start and stop ordering of the service's resources
colocation: colocation constraint, how strongly the service's resources stay together
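As a sketch of the two approaches in crmsh (the resource names myvip, mystore and mysqld are hypothetical placeholders that would first be defined as primitives):

```
# Option 1: a group implies colocation and start/stop ordering of its members
crm configure group mysqlservice myvip mystore mysqld

# Option 2: explicit constraints between individually defined resources
crm configure colocation mysqld_with_mystore inf: mysqld mystore
crm configure order mystore_before_mysqld inf: mystore mysqld
```

A group is the simpler form; explicit constraints give finer control when resources should not all live or move together.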
Installing pacemaker pulls in corosync as a dependency:
# yum install -y pacemaker
Generate the authentication key:
# corosync-keygen
The key is generated directly from the entropy pool by default; if there is not enough randomness the command blocks, waiting for keystrokes to generate more. Installing an ordinary package with yum can also produce randomness faster.
/etc/corosync/authkey
-r-------- 1 rootroot 128 Aug 8 15:38/etc/corosync/authkey
Copy the example configuration /etc/corosync/corosync.conf.example to /etc/corosync/corosync.conf:
# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
Edit /etc/corosync/corosync.conf:
secauth: off    # whether each node is authenticated securely; change to on
threads: 0    # number of threads to start; adjust to the number of CPU cores
bindnetaddr: 192.168.1.0    # the network address to bind to; change to 192.168.21.0
mcastaddr: 239.255.1.1    # the multicast address; change to 226.194.25.36 (must lie within the multicast range)
Both logging methods are enabled here:
to_syslog: yes    # change to no
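After these edits, the totem section looks roughly like this (a sketch based on the stock example file; other values such as mcastport keep their defaults):

```
totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.21.0
                mcastaddr: 226.194.25.36
                mcastport: 5405
        }
}
```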
amf controls whether corosync enables OpenAIS's AMF mechanism, i.e. whether it is compatible with that API. corosync must run pacemaker in plugin mode: before 2.0, pacemaker was a corosync plugin; from 2.0 on, pacemaker is a standalone service. A plugin definition needs to be added.
Add the following:
service {    # declare the plugin
ver: 0
name: pacemaker
# use_mgmtd: yes
}
aisexec {    # user and group the AIS process runs as
user: root
group: root
}
amf {
mode: disabled
}
Set up mutual trust between the two HA nodes:
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i /root/.ssh/id_rsa.pub nodeXX
Copy the authentication key and configuration file to the other HA node:
# scp /etc/corosync/corosync.conf node88:/etc/corosync/corosync.conf
# scp /etc/corosync/authkey node88:/etc/corosync/authkey
Start corosync:
# service corosync start
Check whether the corosync engine started correctly:
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Aug 08 15:56:29 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Aug 08 15:56:29 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check that the initial membership notifications were sent out correctly:
# grep TOTEM /var/log/cluster/corosync.log
Aug 08 15:56:29 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Aug 08 15:56:29 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Aug 08 15:56:29 corosync [TOTEM ] The network interface [192.168.21.159] is now up.
Aug 08 15:56:29 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Aug 08 15:56:49 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether any errors occurred during startup. The error messages below indicate that pacemaker will soon no longer run as a corosync plugin, and that cman is recommended as the cluster infrastructure layer instead; they can safely be ignored here.
# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
Aug 08 15:56:29 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Aug 08 15:56:29 corosync [pcmk  ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
Aug 08 15:56:32 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process cib terminated with signal 6 (pid=32326, core=true)
...
Aug 08 15:59:32 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process crmd exited (pid=1050, rc=201)
Check whether pacemaker started correctly:
# grep pcmk_startup /var/log/cluster/corosync.log
Aug 08 15:56:29 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Aug 08 15:56:29 corosync [pcmk  ] Logging: Initialized pcmk_startup
Aug 08 15:56:29 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Aug 08 15:56:29 corosync [pcmk  ] info: pcmk_startup: Service: 9
Aug 08 15:56:29 corosync [pcmk  ] info: pcmk_startup: Local hostname: node8
pacemaker's configuration interfaces:
crmsh: used before CentOS 6.4, provided by SUSE
pcs: used from CentOS 6.4 on, provided by Red Hat
crmsh is used here; it depends on the pssh package.
# wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
# cp network\:ha-clustering\:Stable.repo /etc/yum.repos.d/
# yum -y install crmsh
Copy the repo file to the other HA node:
# scp /root/network\:ha-clustering\:Stable.repo node88:/etc/yum.repos.d
In an HA setup, the VIP needs particular attention.
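For example, the VIP can later be defined as a cluster resource in crmsh (a sketch; the address 192.168.21.100, the netmask, and the monitor settings are hypothetical):

```
crm configure primitive myvip ocf:heartbeat:IPaddr2 \
        params ip=192.168.21.100 cidr_netmask=24 \
        op monitor interval=10s timeout=20s
```

The monitor operation lets the cluster detect a failed address and move it, together with the DRBD storage and mysqld, to the surviving node.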