----- Outline -----
Introduction
Terminology
Environment
Implementation
Command-line management tools
-------------------
1. Introduction
RHCS stands for RedHat Cluster Suite. The Red Hat Cluster Suite is a comprehensive set of software components that can be deployed in different configurations to meet your needs for high availability, load balancing, scalability, file sharing and cost savings. For applications that require maximum uptime, a Red Hat Enterprise Linux cluster running the Red Hat Cluster Suite is the best choice. Designed specifically for Red Hat Enterprise Linux, the suite provides two different types of clustering:

1. Application/service failover: build an n-node server cluster to provide failover for critical applications and services.
2. IP load balancing: balance incoming IP network requests across a farm of servers.

With the Red Hat Cluster Suite, applications can be deployed in a high-availability configuration so that they are always running, which gives enterprises the ability to scale out their Linux deployments. For widely deployed open-source applications such as NFS, Samba and Apache, the Red Hat Cluster Suite provides a complete, ready-to-use failover solution.
2. Terminology
Distributed cluster manager (CMAN): the membership layer that runs on every node and keeps track of cluster membership and quorum; it is administered later in this article with cman_tool.
3. Environment
OS                | Role                    | IP address    | Package |
CentOS 6.5 x86_64 | cluster management node | 192.168.1.110 | luci    |
CentOS 6.5 x86_64 | web node (node1)        | 192.168.1.103 | ricci   |
CentOS 6.5 x86_64 | web node (node2)        | 192.168.1.109 | ricci   |
CentOS 6.5 x86_64 | web node (node3)        | 192.168.1.108 | ricci   |
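luci and ricci address the machines by host name, so name resolution has to work on all four hosts before anything else. A minimal /etc/hosts sketch, assuming the naming used later in this article (essun.node4.com for the management host, essun.nodeN.com for the web nodes; adjust to your own names):

# /etc/hosts -- identical on all four machines (host names are assumptions)
192.168.1.110   essun.node4.com   node4
192.168.1.103   essun.node1.com   node1
192.168.1.109   essun.node2.com   node2
192.168.1.108   essun.node3.com   node3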
4. Implementation
Prerequisites
SSH mutual trust (passwordless SSH) between the management host and all nodes.
Note: if an epel repository is configured in yum, disable it. The suite only trusts packages from Red Hat's own release; with packages from an unrecognized source the services may fail to start. A sketch of both prerequisites follows.
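A minimal sketch, run as root on the management host. The node names come from the table above; the epel repo id and the use of yum-config-manager (from yum-utils) are assumptions, and the last line is only needed where an epel repo is actually configured:

# push an SSH key to every node for passwordless logins
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for n in node1 node2 node3; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$n; done

# disable the epel repository so that only distribution packages are used
yum-config-manager --disable epel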
On the cluster management node
[root@essun ~]# yum install -y luci
[root@essun ~]# service luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `essun.node4.com' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
    (none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd:                                        [  OK  ]
Start luci...                                              [  OK  ]
Point your web browser to https://essun.node4.com:8084 (or equivalent) to access luci
[root@essun ~]# ss -tnpl |grep 8084
LISTEN     0      5       *:8084       *:*       users:(("python",2920,5))
[root@essun ~]#
On each node, install ricci and set a password for the ricci user so that the cluster management service can control the node, and give every node its own test page. Only one node is shown here; the other two are installed the same way.
[root@essun .ssh]# yum install ricci -y
[root@essun ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@essun ~]# ss -tnlp |grep ricci
LISTEN     0      5       :::11111       :::*       users:(("ricci",2241,3))
# ricci listens on port 11111 by default
[root@essun .ssh]# echo "ricci" |passwd --stdin ricci
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
# the password is set to "ricci" here
[root@essun html]# echo "<h1>`hostname`</h1>" >index.html
[root@essun html]# service httpd start
Starting httpd:                                            [  OK  ]
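Two small additions worth making on every node that the output above does not show (a sketch; whether iptables is actually running on these nodes is an assumption): have ricci start on boot, and make sure the luci host can reach TCP port 11111, otherwise the node cannot be added to the cluster.

chkconfig ricci on
# only needed if iptables is active on the node
iptables -I INPUT -p tcp --dport 11111 -j ACCEPT
service iptables save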
Everything else can now be configured from the web interface: point a browser at https://essun.node4.com:8084 (the luci host, 192.168.1.110).
Log in with a valid user name and password; logging in as root triggers a warning message.
You can then administer the cluster under Manage Clusters.
Create a cluster
After the cluster has been created
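Once the cluster exists, luci pushes a copy of /etc/cluster/cluster.conf to every node. A heavily trimmed sketch of what that file might contain at this point (the exact config_version and any fencing sections are omitted and assumed):

<?xml version="1.0"?>
<cluster config_version="1" name="Cluster Node">
    <clusternodes>
        <clusternode name="node1" nodeid="1"/>
        <clusternode name="node2" nodeid="2"/>
        <clusternode name="node3" nodeid="3"/>
    </clusternodes>
    <cman/>
    <rm/>
</cluster>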
What the tabs mean
Define a failover domain
Set the failover domain's node priorities and whether resources should fail back when a node comes back online (a cluster.conf sketch follows below)
Status after adding the domain
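In cluster.conf the domain created above becomes a <failoverdomain> block under <rm>. A sketch, assuming the domain is named webdomain, is ordered and restricted, gives node1 the highest priority, and has no-failback enabled as described here:

<rm>
    <failoverdomains>
        <failoverdomain name="webdomain" ordered="1" restricted="1" nofailback="1">
            <failoverdomainnode name="node1" priority="1"/>
            <failoverdomainnode name="node2" priority="2"/>
            <failoverdomainnode name="node3" priority="3"/>
        </failoverdomain>
    </failoverdomains>
</rm>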
Resource types that can be selected
Add an IP address resource
Add this resource to a group (resources can also be defined directly in the service group)
More resources can be added here
Add the already-defined resource to the group (the IP address defined earlier)
Add a web service
Once the definitions are complete they can be submitted; to withdraw a resource, click remove in the upper right corner
The group's resources after submitting (a matching cluster.conf sketch follows below)
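For reference, the IP address plus web service defined in the GUI correspond roughly to the following <resources>/<service> section of cluster.conf. This is a sketch: the VIP 192.168.1.150 and the service name webservice appear later in this article, while the use of the script resource agent for httpd (rather than luci's Apache resource type) and the relocate recovery policy are assumptions:

<rm>
    <resources>
        <ip address="192.168.1.150" monitor_link="on"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service domain="webdomain" name="webservice" recovery="relocate" autostart="1">
        <ip ref="192.168.1.150"/>
        <script ref="httpd"/>
    </service>
</rm>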
Test access: the service is currently running on node1
To check whether the resource has moved, look at the service group in luci or simply request the page again
The service has switched to node3; open the page again to see the effect
The resource has indeed moved. When node1 comes back online the resource will not fail back to it, because no failback was set when the failover domain was defined.
5. Command-line management tools
1) clustat
clustat shows the cluster status: member information, the quorum view, the state of all high-availability services, and the node on which clustat was run (the local node).
Command options
[root@essun html]# clustat --help
clustat: invalid option -- '-'
usage: clustat <options>
    -i <interval>      Refresh every <interval> seconds.  May not be used with -x.
    -I                 Display local node ID and exit
    -m <member>        Display status of <member> and exit
    -s <service>       Display status of <service> and exit
    -v                 Display version and exit
    -x                 Dump information as XML
    -Q                 Return 0 if quorate, 1 if not (no output)
    -f                 Enable fast clustat reports
    -l                 Use long format for services

# show node and service status
[root@essun html]# clustat -l
Cluster Status for Cluster Node @ Wed May  7 11:32:55 2014
Member Status: Quorate

 Member Name                     ID   Status
 ------ ----                     ---- ------
 node1                               1 Online, Local, rgmanager
 node2                               2 Online
 node3                               3 Online

Service Information
------- -----------

Service Name      : service:webservice
  Current State   : started (112)
  Flags           : none (0)
  Owner           : node1
  Last Owner      : none
  Last Transition : Wed May  7 10:06:48 2014
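Two more invocations taken straight from the option list above that are useful day to day:

clustat -s webservice    # show the status of a single service and exit
clustat -i 2             # keep refreshing the whole view every 2 seconds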
2) clusvcadm
HA services are managed with the clusvcadm command. With it you can, among other things,
enable, disable, relocate, stop, freeze and restart services (all listed in the help output below).
Command options
[root@essun html]# clusvcadm
usage: clusvcadm [command]

Resource Group Control Commands:
  -v                     Display version and exit
  -d <group>             Disable <group>.  This stops a group until an administrator enables it again, the cluster loses and regains quorum, or an administrator-defined event script explicitly enables it again.
  -e <group>             Enable <group>
  -e <group> -F          Enable <group> according to failover domain rules (deprecated; always the case when using central processing)
  -e <group> -m <member> Enable <group> on <member>
  -r <group> -m <member> Relocate <group> [to <member>]  Stops a group and starts it on another cluster member.
  -M <group> -m <member> Migrate <group> to <member> (e.g. for live migration of VMs)
  -q                     Quiet operation
  -R <group>             Restart a group in place.
  -s <group>             Stop <group>.  This temporarily stops a group.  After the next group or cluster member transition, the group will be restarted (if possible).
  -Z <group>             Freeze resource group.  This prevents transitions and status checks, and is useful if an administrator needs to administer part of a service without stopping the whole service.
  -U <group>             Unfreeze (thaw) resource group.  Restores a group to normal operation.
  -c <group>             Convalesce (repair, fix) resource group.  Attempts to start failed, non-critical resources within a resource group.

Resource Group Locking (for cluster Shutdown / Debugging):
  -l                     Lock local resource group managers.  This prevents resource groups from starting.
  -S                     Show lock state
  -u                     Unlock resource group managers.  This allows resource groups to start.

# relocate the resource group to node1
[root@essun html]# clusvcadm -r webservice -m node1
Trying to relocate service:webservice to node1...Success
service:webservice is now running on node1
[root@essun html]# curl http://192.168.1.150
<h1>essun.node1.com</h1>
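A few more examples built only from the options shown above, handy for routine maintenance (group and node names as used in this article):

clusvcadm -R webservice            # restart the service in place
clusvcadm -Z webservice            # freeze it: no transitions or status checks while you work on it
clusvcadm -U webservice            # unfreeze it again
clusvcadm -d webservice            # disable it until an administrator re-enables it
clusvcadm -e webservice -m node2   # enable it on a specific member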
3) cman_tool
cman_tool is a utility for managing the CMAN cluster-management subsystem. It can be used to join a node to the cluster, kill another cluster node, or change the expected-votes value of the cluster.
Note: commands issued with cman_tool affect all nodes in your cluster.
Command options
[root@essun html]# cman_tool -h
Usage:
cman_tool <join|leave|kill|expected|votes|version|wait|status|nodes|services|debug> [options]

Options:
  -h               Print this help, then exit
  -V               Print program version information, then exit
  -d               Enable debug output

join
  Cluster & node information is taken from configuration modules.
  These switches are provided to allow those values to be overridden.
  Use them with extreme care.

  -m <addr>        Multicast address to use
  -v <votes>       Number of votes this node has
  -e <votes>       Number of expected votes for the cluster
  -p <port>        UDP port number for cman communications
  -n <nodename>    The name of this node (defaults to hostname)
  -c <clustername> The name of the cluster to join
  -N <id>          Node id
  -C <module>      Config file reader (default: xmlconfig)
  -w               Wait until node has joined a cluster
  -q               Wait until the cluster is quorate
  -t               Maximum time (in seconds) to wait
  -k <file>        Private key file for Corosync communications
  -P               Don't set corosync to realtime priority
  -X               Use internal cman defaults for configuration
  -A               Don't load openais services
  -D<fail|warn|none>  What to do about the config. Default (without -D) is to validate the config. with -D no validation will be done. -Dwarn will print errors but allow the operation to continue.
  -z               Disable stderr debugging output.

wait               Wait until the node is a member of a cluster
  -q               Wait until the cluster is quorate
  -t               Maximum time (in seconds) to wait

leave
  -w               If cluster is in transition, wait and keep trying
  -t               Maximum time (in seconds) to wait
  remove           Tell other nodes to ajust quorum downwards if necessary
  force            Leave even if cluster subsystems are active

kill
  -n <nodename>    The name of the node to kill (can specify multiple times)

expected
  -e <votes>       New number of expected votes for the cluster

votes
  -v <votes>       New number of votes for this node

status             Show local record of cluster status

nodes              Show local record of cluster nodes
  -a               Also show node address(es)
  -n <nodename>    Only show information for specific node
  -F <format>      Specify output format (see man page)

services           Show local record of cluster services

version
  -r               Reload cluster.conf and update config version.
  -D <fail,warn,none>  What to do about the config. Default (without -D) is to validate the config. with -D no validation will be done. -Dwarn will print errors but allow the operation to continue
  -S               Don't run ccs_sync to distribute cluster.conf (if appropriate)

# show this node's view of the cluster
[root@essun html]# cman_tool status
Version: 6.2.0
Config Version: 6
Cluster Name: Cluster Node
Cluster Id: 26887
Cluster Member: Yes
Cluster Generation: 36
Membership state: Cluster-Member
Nodes: 3
Expected votes: 3
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 9
Flags:
Ports Bound: 0 11 177
Node name: node1
Node ID: 1
Multicast addresses: 239.192.105.112
Node addresses: 192.168.1.103
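Two more subcommands from the help above that come up often: listing the members with their addresses, and pushing an edited cluster.conf to all nodes after bumping its config_version:

cman_tool nodes -a       # show each node's ID, status and address(es)
cman_tool version -r     # reload cluster.conf and distribute the new version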