Host environment: Red Hat 6.5, 64-bit
Lab environment: server node 1, IP 172.25.29.1, hostname server1.example.com (ricci)
Server node 2, IP 172.25.29.2, hostname server2.example.com (ricci)
Management node 1, IP 172.25.29.3, hostname server3.example.com (luci)
Management node 2, IP 172.25.29.250 (fence_virtd)
Firewall status: disabled
1. Install ricci and luci, and create the cluster nodes
1. Install and start ricci (server node 1)
[root@server1 yum.repos.d]# vim dvd.repo    # before installing, configure the yum repositories
#repos on instructor for classroom use
#Main rhel6.5 server
[base]
name=Instructor Server Repository
baseurl=http://172.25.29.250/rhel6.5
gpgcheck=0
#HighAvailability rhel6.5
[HighAvailability]
name=Instructor HighAvailability Repository
baseurl=http://172.25.29.250/rhel6.5/HighAvailability
gpgcheck=0
#LoadBalancer packages
[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://172.25.29.250/rhel6.5/LoadBalancer
gpgcheck=0
#ResilientStorage
[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://172.25.29.250/rhel6.5/ResilientStorage
gpgcheck=0
#ScalableFileSystem
[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://172.25.29.250/rhel6.5/ScalableFileSystem
gpgcheck=0
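Optionally, once dvd.repo is saved, it is worth a quick sanity check that all five repositories resolve before installing anything; this verification step is not part of the original procedure:
[root@server1 yum.repos.d]# yum repolist    # every repo defined above should appear with a non-zero package count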
[root@server1 yum.repos.d]# yum clean all    # clear the yum cache
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: HighAvailability LoadBalancer ResilientStorage
              : ScalableFileSystem base
Cleaning up Everything
[root@server1 yum.repos.d]# yum install ricci -y    # install ricci
[root@server1 yum.repos.d]# passwd ricci    # set a password for the ricci user
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@server1 yum.repos.d]# /etc/init.d/ricci start    # start ricci
Starting system message bus: [ OK ]
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]
[root@server1 yum.repos.d]# chkconfig ricci on    # start automatically at boot
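If you want to confirm that ricci is really up before continuing, it listens on TCP port 11111 by default; a quick optional check:
[root@server1 yum.repos.d]# netstat -tnlp | grep 11111    # ricci's default listening port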
Perform the same steps on server node 2 as on server node 1.
2. Install and start luci (management node 1)
Before installing, configure the yum repositories the same way as on server node 1.
[root@server3 yum.repos.d]# yum install luci -y    # install luci
[root@server3 yum.repos.d]# /etc/init.d/luci start    # start luci
Start luci... [ OK ]
Point your web browser to https://server3.example.com:8084 (or equivalent) to access luci
Before logging in, name resolution must be in place, which means adding an entry to /etc/hosts,
for example: 172.25.29.3 server3.example.com
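For this lab the relevant /etc/hosts additions, taken from the environment table at the top, would look roughly like this (adjust to your own addresses) on the machine whose browser you use and on the cluster nodes:
172.25.29.1   server1.example.com
172.25.29.2   server2.example.com
172.25.29.3   server3.example.com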
3. Create the cluster nodes
Log in at https://server3.example.com:8084    # luci listens on port 8084
At the security certificate warning, choose I Understand the Risks,
then click Confirm Security Exception.
This opens the luci management interface; the login password is the root password of the virtual machine on which luci is installed.
Select Manage Clusters, then click Create to create a cluster.
As shown in the figure, Cluster Name is the name of the cluster to create. Checking Use the Same Password for All Nodes means all nodes use the same password. Fill in the node names and password: the names are the server hostnames, and the password is the one set earlier with passwd ricci. Check the Download Packages, Reboot, and Enable options, then select Create Cluster.
The nodes are being created, as shown in the figure.
Creation is complete, as shown in the figure.
After creation completes, a cluster.conf file is generated under /etc/cluster/ on server node 1 and server node 2; view it as follows:
[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf  cman-notify.d
[root@server1 cluster]# cat cluster.conf    # view the file contents
<?xml version="1.0"?>
<cluster config_version="1" name="wen">    # cluster name
<clusternodes>
<clusternode name="server1.example.com" nodeid="1"/>    # node 1
<clusternode name="server2.example.com" nodeid="2"/>    # node 2
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices/>
<rm/>
</cluster>
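At this point both server nodes should already be cluster members. Assuming the cman/rgmanager packages were pulled in during cluster creation (the Download Packages option), membership can be confirmed from either node with an optional check:
[root@server1 cluster]# clustat    # both server1.example.com and server2.example.com should be listed as Online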
2. Install fence_virtd and create the fence device
1. Install and start fence_virtd (management node 2)
[root@foundation29 Desktop]# yum install fence-virtd* -y
[root@foundation29 Desktop]# fence_virtd -c    # interactive configuration
Module search path [/usr/lib64/fence-virt]:
Available backends:
libvirt 0.1
Available listeners:
multicast 1.2
serial 0.4
Listener modules are responsible for accepting requests
from fencing clients.
Listener module [multicast]:    # multicast
The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.
The multicast address is the address that a client will use to
send fencing requests to fence_virtd.
Multicast IP Address [225.0.0.12]:    # multicast IP
Using ipv4 as family.
Multicast IP Port [1229]:    # multicast port number
Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.
Interface [br0]:
The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.
Key File [/etc/cluster/fence_xvm.key]:    # path of the key file
Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.
Backend module [libvirt]:
Configuration complete.
=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}
}
listeners {
multicast {
port = "1229";
family = "ipv4";
interface = "br0";
address = "225.0.0.12";
key_file = "/etc/cluster/fence_xvm.key";
}
}
fence_virtd {
module_path = "/usr/lib64/fence-virt";
backend = "libvirt";
listener = "multicast";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation29 Desktop]# mkdir /etc/cluster    # create the cluster directory
[root@foundation29 Desktop]# cd /etc/cluster/
[root@foundation29 cluster]# dd if=/dev/urandom of=fence_xvm.key bs=128 count=1    # generate the key file
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000167107 s, 766 kB/s
[root@foundation29 cluster]# scp fence_xvm.key root@172.25.29.1:/etc/cluster/    # copy the key file to server node 1
root@172.25.29.1's password:
fence_xvm.key 100% 512 0.5KB/s 00:00
Test
[root@server1 cluster]# ls    # check that the key arrived
cluster.conf  cman-notify.d  fence_xvm.key
Transfer the file to server node 2 in the same way, as sketched below.
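A sketch of the equivalent copy to server node 2, plus an optional verification that all copies of the key are identical (the md5sum step is extra and not part of the original write-up):
[root@foundation29 cluster]# scp fence_xvm.key root@172.25.29.2:/etc/cluster/
[root@foundation29 cluster]# md5sum /etc/cluster/fence_xvm.key    # run on foundation29, server1 and server2; the sums must match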
[root@foundation29 Desktop]# systemctl start fence_virtd.service    # start fence_virtd (management node 2 runs a 7.1 system, so the start command differs slightly; on a 6.5 system, use /etc/init.d/fence_virtd start instead)
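If fence_virtd should also survive a reboot of the management host, enable it at boot as well (optional; this assumes the systemd-based 7.1 host):
[root@foundation29 Desktop]# systemctl enable fence_virtd.service    # on a 6.5 host the equivalent would be: chkconfig fence_virtd on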
2. Create the fence device
Select Fence Devices.
Select Add; as shown in the figure below, Name is the name of the fence device being added. After filling it in, select Submit.
Result
Select server1.example.com.
Click Add Fence Method and fill in a Method Name.
As shown in the figure, select Add Fence Instance.
Fill in the Domain, then select Submit.
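Note that for a fence_xvm/fence_virt instance, the Domain field is the virtual machine's name (or UUID) as libvirt on the physical host knows it, which is not necessarily the guest's hostname. It can be looked up on management node 2:
[root@foundation29 Desktop]# virsh list --all    # the names shown here are the values to enter as Domain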
After finishing, configure server node 2 the same way as server node 1.
Test
[root@server1 cluster]# fence_node server2.example.com    # fence node server2 from server1
fence server2.example.com success
[root@server2 cluster]# fence_node server1.example.com    # fence node server1 from server2
fence server1.example.com success
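A successful fence_node call should actually power-cycle the target guest, so as a final optional sanity check you can watch the fenced machine restart on management node 2 and confirm it rejoins the cluster afterwards:
[root@foundation29 Desktop]# virsh list --all    # the fenced guest reboots
[root@server1 cluster]# clustat                  # once it is back up, both nodes should show as Online again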
At this point the high availability (HA) cluster is fully set up, and you can deploy whatever services you need on top of it.