Configuration steps:
(1) Install and configure DRBD, then build and install the Master Server node
(2) Install and configure corosync + pacemaker using pcs
(3) Install crmsh and configure the mfs + DRBD + corosync + pacemaker high-availability cluster
(4) Build and install the Chunk Server and Metalogger hosts
(5) Install the mfs client and test the high-availability cluster
(In my view it is best to install DRBD first, then the Master Server, and only then the Chunk Server and Metalogger hosts. In an earlier attempt, data could not be written into the mounted directory; after troubleshooting, I ended up reformatting the DRBD-backed disk and reinstalling the Chunk Server and Metalogger hosts.)
DRBD:
DRBD is a software-based, shared-nothing storage replication solution that mirrors the content of block devices between servers. Its data mirroring is real-time, transparent, and either synchronous (the write returns only after all servers have succeeded) or asynchronous (the write returns as soon as the local server has succeeded). DRBD's core functionality is implemented in the Linux kernel, as close as possible to the system's I/O stack, but it cannot magically add features from the layers above it, such as detecting an EXT3 filesystem crash. DRBD sits below the filesystem, closer to the operating-system kernel and the I/O stack.
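To make the mirroring modes concrete: in DRBD they map to replication protocols A (asynchronous), B (semi-synchronous), and C (fully synchronous), chosen in the configuration. A minimal illustrative excerpt (protocol C is what this article's own configuration uses below):
# excerpt from /etc/drbd.d/global_common.conf (illustrative only)
common {
    protocol C;    # A = return after the local write; B = after the peer has received; C = after the peer has written to disk
}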
MooseFS:
MooseFS (mfs) is often described as object storage; it offers strong scalability, high reliability, and durability. It distributes files across different physical machines while presenting them externally as one transparent pool of storage. It also features online expansion (a major advantage), chunked file storage, and efficient reads and writes.
An MFS distributed file system consists of a metadata server (Master Server), a metadata log server (Metalogger Server), data storage servers (Chunk Servers), and clients (Client).
(1) Metadata server: the core component of an MFS system. It stores the metadata of every file and is responsible for read/write scheduling, space reclamation, and copying data between chunk servers. MFS currently supports only one metadata server, so it can become a single point of failure. To address this, the metadata server should run on a very stable machine, which lowers the probability of a single-point failure.
(2) Metadata log server: the backup node for the metadata server. At a configured interval it downloads the files holding the metadata, change logs, and session information from the metadata server into a local directory. When the metadata server fails, the necessary information can be taken from this server's files to recover the whole system.
Moreover, backing up via the metalogger is a conventional log-based backup technique; in some situations it cannot take over the service seamlessly, and data loss is still possible. This article therefore runs the metadata node as an active/standby hot-backup pair on a shared replicated disk, implemented below with DRBD.
(3) Data storage servers: they connect to the metadata server, follow its scheduling, provide storage space, and transfer data to and from clients. MooseFS lets you manually set a replication goal per directory: with a goal of n, the chunks of every file written into the system are copied to n different chunk servers. Raising the goal does not improve the system's write performance, but it does improve read performance and availability, so it is essentially a strategy that trades storage capacity for read performance and availability (see the goal example after this list).
(4) Clients: using mfsmount, they attach the data storage managed by the remote master server to a local directory through the FUSE kernel interface; after that the MFS filesystem can be used just like a local one.
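As a small illustration of the per-directory replication goal mentioned in (3) (a hedged example: the directory name is made up, while mfssetgoal/mfsgetgoal are the stock MFS client utilities):
# on a client with MFS mounted at /mfsdata, keep 2 copies of everything under 'important'
/usr/local/mfs/bin/mfssetgoal 2 /mfsdata/important/
/usr/local/mfs/bin/mfsgetgoal /mfsdata/important/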
Personal summary notes:
Distributed storage: the metadata drives scheduling, so the metadata itself must also be made highly available.
ceph: cloud-oriented (OpenStack, Kubernetes); still new at the time, so possibly not very stable
glusterfs: stores large files; supports block devices and FUSE, can be mounted directly
mogilefs: high performance for huge numbers of small files, but its FUSE support is poor and takes tinkering; supports object storage driven through language APIs, and having an API is its biggest advantage
fastDFS: a C-language take on mogilefs, developed in China; no FUSE support. Also handles huge numbers of small files; everything is kept in memory, so it is very fast (with correspondingly large drawbacks)
HDFS: huge large files (Hadoop's filesystem, modeled on Google's GFS).
moosefs: (the focus of this article, since it is quite popular in China) stores huge numbers of small files and supports FUSE. Adding a server and pointing its IP at the metadata server is enough to bring it into the HA setup.
Common high-availability cluster stacks:
Heartbeat + Pacemaker: gradually being phased out
CMAN + rgmanager
CMAN + Pacemaker
Corosync + Pacemaker (corosync only passes messages and performs heartbeat detection, nothing else; Pacemaker acts purely as the resource manager)
CMAN + cLVM (usually for block-level HA; CMAN is also being phased out, since corosync has a superior quorum/voting mechanism)
Environment:
OS version: CentOS 7
Yum repo: http://mirrors.aliyun.com/repo/
cml1 = Master Server (master): 192.168.5.101 (VIP: 192.168.5.200)
cml2 = Master Server (slave): 192.168.5.102
cml3 = Chunk Server: 192.168.5.104
cml4 = Chunk Server: 192.168.5.105
cml5 = Metalogger Server: 192.168.5.103
cml6 = Client: 192.168.5.129
1. Edit the hosts file so the hosts can reach one another by name:
[root@cml1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.101 cml1 mfsmaster
192.168.5.102 cml2
192.168.5.103 cml5
192.168.5.104 cml3
192.168.5.105 cml4
192.168.5.129 cml6
2. Set up SSH mutual trust:
[root@cml1 ~]# ssh-keygen
[root@cml1 ~]# ssh-copy-id cml2
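If you want key-based access from cml1 to every node (it makes the later scp/ssh steps smoother), a sketch of extending the same idea to all hosts in the environment table:
[root@cml1 ~]# for h in cml2 cml3 cml4 cml5 cml6; do ssh-copy-id "$h"; done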
3. Set up clock synchronization:
[root@cml1 ~]# crontab -l
*/5 * * * * ntpdate cn.pool.ntp.org
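On CentOS 7 the stock time-sync daemon is chronyd; as an alternative to the ntpdate cron entry (a judgment call, not part of the original walkthrough), you could simply enable it:
[root@cml1 ~]# systemctl enable chronyd
[root@cml1 ~]# systemctl start chronyd
[root@cml1 ~]# chronyc sources    # verify that time sources are reachable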
4. Install DRBD:
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# yum install -y kmod-drbd84 drbd84-utils
5. The main configuration files:
/etc/drbd.conf    # main configuration file
/etc/drbd.d/global_common.conf    # global configuration file
6. Look at the main configuration file:
[root@cml1 ~]# cat /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
7. Configuration file walkthrough:
[root@cml1 ~]# vim /etc/drbd.d/global_common.conf
global {
    usage-count no;    # whether to take part in DRBD usage statistics (default yes); the project uses it to count installations
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;    # DRBD replication protocol to use
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach;    # I/O error handling policy: detach the backing device
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
    syncer {
        rate 1024M;    # network rate for primary/secondary synchronization
    }
}
8. Create the resource configuration file:
[root@cml1 ~]# cat /etc/drbd.d/mfs.res
resource mfs {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on cml1 {
        disk /dev/sdb1;
        address 192.168.5.101:7789;
    }
    on cml2 {
        disk /dev/sdb1;
        address 192.168.5.102:7789;
    }
}
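Before copying the files to the peer, you can sanity-check that the resource definition parses (an optional extra step, not shown in the original transcript):
[root@cml1 ~]# drbdadm dump mfs    # prints the parsed resource; fails loudly on syntax errors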
9. Then copy the configuration files to the peer machine:
scp -rp /etc/drbd.d/* cml2:/etc/drbd.d/
10. Bring the resource up on cml1:
[root@cml1 ~]# drbdadm create-md mfs
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@cml1 ~]# modprobe drbd
##Check whether the kernel has loaded the module:
[root@cml1 drbd.d]# lsmod | grep drbd
drbd                  396875  1
libcrc32c              12644  4 xfs,drbd,ip_vs,nf_conntrack
###
[root@cml1 ~]# drbdadm up mfs
[root@cml1 ~]# drbdadm -- --force primary mfs
Check the status:
[root@cml1 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----s
    ns:0 nr:0 dw:0 dr:912 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5240636
11. On the peer node (cml2), run:
[root@cml2 ~]# drbdadm create-md mfs
[root@cml2 ~]# modprobe drbd
[root@cml2 ~]# drbdadm up mfs
12. Format and mount:
[root@cml1 ~]# mkfs.ext4 /dev/drbd1
[root@cml1 ~]# mkdir /usr/local/mfs
[root@cml1 ~]# mount /dev/drbd1 /usr/local/mfs
[root@cml1 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.8G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   56M  456M  11% /dev/shm
tmpfs                   tmpfs     512M   33M  480M   7% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs
####Note: for the secondary to be able to mount, the primary must first be demoted to secondary; only then can you mount on the other node.
###Check the status:
[root@cml1 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:520744 nr:0 dw:252228 dr:300898 al:57 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
13. Install and configure the Master Server:
##MFS installation: download the 3.0 tarball:
[root@cml1 src]# yum install zlib-devel -y
[root@cml1 src]# wget https://github.com/moosefs/moosefs/archive/v3.0.96.tar.gz
(1) Install the master:
[root@cml1 src]# useradd mfs
[root@cml1 src]# tar -xf v3.0.96.tar.gz
[root@cml1 src]# cd moosefs-3.0.96/
[root@cml1 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
[root@cml1 moosefs-3.0.96]# make && make install
[root@cml1 moosefs-3.0.96]# ls /usr/local/mfs/
bin  etc  sbin  share  var
(The etc and var directories hold the configuration files and MFS's metadata structures, so back them up regularly to guard against disaster. With the Master Server now running as an active/standby pair, this risk is covered.)
##Note: the mfs user id and group id must be identical on every host in the cluster.
(2) Configure the master:
[root@cml1 mfs]# pwd
/usr/local/mfs/etc/mfs
[root@cml1 mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
##Only .sample files ship by default, so copy them to .cfg files:
[root@cml1 mfs]# cp mfsexports.cfg.sample mfsexports.cfg
[root@cml1 mfs]# cp mfsmaster.cfg.sample mfsmaster.cfg
(3) Review the default parameters:
[root@cml1 mfs]# vim mfsmaster.cfg
# WORKING_USER = mfs # user that runs the master server
# WORKING_GROUP = mfs # group that runs the master server
# SYSLOG_IDENT = mfsmaster # identity of the master server in syslog: marks entries as produced by the master server
# LOCK_MEMORY = 0 # whether to call mlockall() to keep mfsmaster from being swapped out (default 0)
# NICE_LEVEL = -19 # run priority (default -19 where possible; note: the process must be started by root)
# EXPORTS_FILENAME = /usr/local/mfs-1.6.27/etc/mfs/mfsexports.cfg # path to the file controlling the exported directories and their permissions
# TOPOLOGY_FILENAME = /usr/local/mfs-1.6.27/etc/mfs/mfstopology.cfg # path to the mfstopology.cfg file
# DATA_PATH = /usr/local/mfs-1.6.27/var/mfs # data path; roughly three kinds of files live here: changelog, sessions and stats
# BACK_LOGS = 50 # number of metadata changelog files to keep (default 50)
# BACK_META_KEEP_PREVIOUS = 1 # number of previous metadata copies to keep (default 1)
# REPLICATIONS_DELAY_INIT = 300 # initial replication delay (default 300 s)
# REPLICATIONS_DELAY_DISCONNECT = 3600 # replication delay after a chunkserver disconnect (default 3600)
# MATOML_LISTEN_HOST = * # IP address to listen on for metaloggers (default *, i.e. any IP)
# MATOML_LISTEN_PORT = 9419 # port to listen on for metaloggers (default 9419)
# MATOML_LOG_PRESERVE_SECONDS = 600
# MATOCS_LISTEN_HOST = * # IP address to listen on for chunkserver connections (default *, i.e. any IP)
# MATOCS_LISTEN_PORT = 9420 # port to listen on for chunkserver connections (default 9420)
# MATOCL_LISTEN_HOST = * # IP address to listen on for client mounts (default *, i.e. any IP)
# MATOCL_LISTEN_PORT = 9421 # port to listen on for client mounts (default 9421)
# CHUNKS_LOOP_MAX_CPS = 100000 # maximum rate of the chunk loop (default: 100000 chunks per second)
# CHUNKS_LOOP_MIN_TIME = 300 # minimum duration of the chunk loop (default: 300 seconds)
# CHUNKS_SOFT_DEL_LIMIT = 10 # soft limit: at most 10 chunks deleted per chunkserver in one loop
# CHUNKS_HARD_DEL_LIMIT = 25 # hard limit: at most 25 chunks deleted per chunkserver in one loop
# CHUNKS_WRITE_REP_LIMIT = 2 # maximum number of chunks replicated to one chunkserver per loop (default is 1)
# CHUNKS_READ_REP_LIMIT = 10 # maximum number of chunks replicated from one chunkserver per loop (default is 5)
# ACCEPTABLE_DIFFERENCE = 0.1 # maximum allowed difference in space usage between chunkservers (default 0.01, i.e. 1%)
# SESSION_SUSTAIN_TIME = 86400 # how long a disconnected client session is sustained: 86400 seconds, i.e. 1 day
# REJECT_OLD_CLIENTS = 0 # reject mounts from clients older than 1.6.0 (0 or 1, default 0)
##These are the official defaults; the file works as shipped.
(4) Edit the access-control file:
[root@cml1 mfs]# vim mfsexports.cfg
*       /       rw,alldirs,maproot=0,password=cml
*       .       rw
##In mfsexports.cfg every entry is one access rule, and each entry has three parts: the first is the MFS client's IP address or address range, the second is the exported directory, and the third sets the access rights that client is granted.
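For reference, two illustrative rule variants in the same three-part format (hedged examples, not used in this deployment; the addresses are made up from this article's subnet):
192.168.5.0/24    /    rw,alldirs,maproot=0    # one subnet, the whole tree, read-write
192.168.5.129     /    ro                      # a single client, read-only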
(5) Activate the metadata file; it ships as an empty template, so we have to put it in place by hand:
[root@cml1 mfs]# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs
(6) Start the master:
[root@cml1 mfs]# /usr/local/mfs/sbin/mfsmaster start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
(7) Check that the process is running:
[root@cml1 mfs]# ps -ef | grep mfs
mfs 8109 1 5 18:40 ? 00:00:02 /usr/local/mfs/sbin/mfsmaster start
root 8123 1307 0 18:41 pts/0 00:00:00 grep --color=auto mfs
(8) Check the listening ports:
[root@cml1 mfs]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp 0 0 0.0.0.0:9419 0.0.0.0:* LISTEN 8109/mfsmaster
tcp 0 0 0.0.0.0:9420 0.0.0.0:* LISTEN 8109/mfsmaster
tcp 0 0 0.0.0.0:9421 0.0.0.0:* LISTEN 8109/mfsmaster
(9) To shut it down, simply run:
[root@cml1 mfs]# /usr/local/mfs/sbin/mfsmaster stop
sending SIGTERM to lock owner (pid:8109)
waiting for termination ... terminated
##pcs-related configuration (on CentOS 7 pcs is well supported, while crmsh is more involved):
1. On both nodes, run:
[root@cml1 corosync]# yum install -y pacemaker pcs psmisc policycoreutils-python
2. Start pcsd and enable it at boot:
[root@cml1 corosync]# systemctl start pcsd.service
[root@cml1 corosync]# systemctl enable pcsd
3. Set the password of the hacluster user (do this on both nodes):
[root@cml1 corosync]# echo 123456 | passwd --stdin hacluster
4. Authenticate the cluster hosts with pcs (registration uses the default hacluster user and its password):
[root@cml1 corosync]# pcs cluster auth cml1 cml2   ##choose which cluster nodes to authenticate
cml2: Already authorized
cml1: Already authorized
5. Register the two nodes as a cluster:
[root@cml1 corosync]# pcs cluster setup --name mycluster cml1 cml2 --force
##Create the cluster
6. This generates the corosync configuration file on the node:
[root@cml1 corosync]# ls
corosync.conf  corosync.conf.example  corosync.conf.example.udpu  corosync.xml.example  uidgid.d
#We can see that corosync.conf has been generated.
7. Look at the generated file:
[root@cml1 corosync]# cat corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: cml1
        nodeid: 1
    }
    node {
        ring0_addr: cml2
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
8. Start the cluster:
[root@cml1 corosync]# pcs cluster start --all
cml1: Starting Cluster...
cml2: Starting Cluster...
##This effectively starts pacemaker and corosync.
9. Check the cluster for configuration errors:
[root@cml1 corosync]# crm_verify -L -V
   error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
##No STONITH device is configured, so we disable STONITH below.
10. Disable STONITH:
[root@cml1 corosync]# pcs property set stonith-enabled=false
[root@cml1 corosync]# crm_verify -L -V
[root@cml1 corosync]# pcs property list
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: mycluster
 dc-version: 1.1.16-12.el7_4.2-94ff4df
 have-watchdog: false
 stonith-enabled: false
1. Install crmsh:
We can drive the cluster with crmsh (download it from GitHub, unpack, and install directly). Installing on one node is enough, though having it on both nodes makes testing more convenient.
[root@cml1 ~]# cd /usr/local/src/
You have new mail in /var/spool/mail/root
[root@cml1 src]# ls
nginx-1.12.0  php-5.5.38.tar.gz  crmsh-2.3.2.tar  nginx-1.12.0.tar.gz  zabbix-3.2.7.tar.gz
[root@cml1 src]# tar -xf crmsh-2.3.2.tar
[root@cml1 src]# cd crmsh-2.3.2/
[root@cml1 crmsh-2.3.2]# python setup.py install
2. Manage the cluster with crmsh:
[root@cml1 ~]# crm help
Help overview for crmsh
Available topics:
Overview Help overview for crmsh
Topics Available topics
Description Program description
CommandLine Command line options
Introduction Introduction
Interface User interface
Completion Tab completion
Shorthand Shorthand syntax
Features Features
Shadows Shadow CIB usage
Checks Configuration semantic checks
Templates Configuration templates
Testing Resource testing
Security Access Control Lists (ACL)
Resourcesets Syntax: Resource sets
AttributeListReferences Syntax: Attribute list references
AttributeReferences Syntax: Attribute references
RuleExpressions Syntax: Rule expressions
Lifetime Lifetime parameter format
Reference Command reference
3. Configure the DRBD + mfs + corosync + pacemaker high-availability cluster with the crm tool:
##First unmount the mount point and stop the drbd service on both nodes
[root@cml1 ~]# umount /usr/local/mfs/
[root@cml1 ~]# systemctl stop drbd
[root@cml2 ~]# systemctl stop drbd
[root@cml1 ~]# crm
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:15:54 2017
Last change: Fri Oct 27 10:52:35 2017 by root via cibadmin on cml1
2 nodes configured
5 resources configured
Online: [ cml1 cml2 ]
No resources
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# property migration-limit=1   ###if a resource fails to start once, hand it to the other node
4. Write a systemd unit so mfsmaster can be managed as a service:
[root@cml1 mfs]# cat /etc/systemd/system/mfsmaster.service
[Unit]
Description=mfs
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfsmaster start
ExecStop=/usr/local/mfs/sbin/mfsmaster stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target
##Enable it at boot:
[root@cml1 mfs]# systemctl enable mfsmaster
##Stop the mfsmaster service (pacemaker will start it where appropriate):
[root@cml1 mfs]# systemctl stop mfsmaster
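Two housekeeping steps the transcript does not show but that this design depends on: systemd should re-read unit files after a new one is created, and the unit must also exist on cml2, since pacemaker may start the systemd:mfsmaster resource there after a failover. A sketch:
[root@cml1 mfs]# systemctl daemon-reload
[root@cml1 mfs]# scp /etc/systemd/system/mfsmaster.service cml2:/etc/systemd/system/
[root@cml1 mfs]# ssh cml2 systemctl daemon-reload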
5. Start the cluster stack:
[root@cml1 src]# systemctl start corosync
[root@cml1 src]# systemctl start pacemaker
[root@cml1 src]# ssh cml2 systemctl start corosync
[root@cml1 src]# ssh cml2 systemctl start pacemaker
6. Configure the DRBD resource:
crm(live)configure# primitive mfs_drbd ocf:linbit:drbd params drbd_resource=mfs op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# verify
crm(live)configure# ms ms_mfs_drbd mfs_drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit
7. Configure the mount (filesystem) resource:
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/usr/local/mfs fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation ms_mfs_drbd_with_mystore inf: mystore ms_mfs_drbd
crm(live)configure# order ms_mfs_drbd_before_mystore Mandatory: ms_mfs_drbd:promote mystore:start
8. Configure the mfs resource:
crm(live)configure# primitive mfs systemd:mfsmaster op monitor timeout=100 interval=30 op start timeout=30 interval=0 op stop timeout=30 interval=0
crm(live)configure# colocation mfs_with_mystore inf: mfs mystore
crm(live)configure# order mystor_befor_mfs Mandatory: mystore mfs
crm(live)configure# verify
crm(live)configure# commit
9. Configure the VIP:
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.5.200
crm(live)configure# colocation vip_with_msf inf: vip mfs
crm(live)configure# verify
crm(live)configure# commit
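If the VIP should be pinned to a specific interface with an explicit netmask (ens34/24 matches this article's environment), the IPaddr agent accepts extra params; an optional variant of the command above:
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.5.200 nic=ens34 cidr_netmask=24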
10. Review the configuration:
crm(live)configure# show
node 1: cml1 \
        attributes standby=off
node 2: cml2 \
        attributes standby=off
primitive mfs systemd:mfsmaster \
        op monitor timeout=100 interval=30 \
        op start timeout=30 interval=0 \
        op stop timeout=30 interval=0
primitive mfs_drbd ocf:linbit:drbd \
        params drbd_resource=mfs \
        op monitor role=Master interval=10 timeout=20 \
        op monitor role=Slave interval=20 timeout=20 \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0
primitive mystore Filesystem \
        params device="/dev/drbd1" directory="/usr/local/mfs" fstype=ext4 \
        op start timeout=60 interval=0 \
        op stop timeout=60 interval=0
primitive vip IPaddr \
        params ip=192.168.5.200
ms ms_mfs_drbd mfs_drbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
colocation mfs_with_mystore inf: mfs mystore
order ms_mfs_drbd_before_mystore Mandatory: ms_mfs_drbd:promote mystore:start
colocation ms_mfs_drbd_with_mystore inf: mystore ms_mfs_drbd
order mystor_befor_mfs Mandatory: mystore mfs
colocation vip_with_msf inf: vip mfs
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.16-12.el7_4.4-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=webcluster \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        migration-limit=1
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:27:23 2017
Last change: Fri Oct 27 10:52:35 2017 by root via cibadmin on cml1
2 nodes configured
5 resources configured
Online: [ cml1 cml2 ]
Full list of resources:
 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ cml1 ]
     Slaves: [ cml2 ]
 mystore (ocf::heartbeat:Filesystem):    Started cml1
 mfs     (systemd:mfsmaster):            Started cml1
 vip     (ocf::heartbeat:IPaddr):        Started cml1
##Check that the DRBD filesystem is mounted on cml1:
[root@cml1 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.8G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   41M  472M   8% /dev/shm
tmpfs                   tmpfs     512M   33M  480M   7% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs
[root@cml1 ~]# ip addr
2: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4d:47:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.101/24 brd 192.168.5.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 192.168.5.200/24 brd 192.168.5.255 scope global secondary ens34
##The VIP is now held by cml1 (the master).
Part 1: Install the Metalogger Server (these steps run on cml5; strictly speaking, with the mfsmaster HA pair in place this step is no longer required).
As introduced earlier, the Metalogger Server is the backup server for the Master Server, so its installation steps are the same as the Master Server's, and it is best run on a machine configured identically to the Master Server. That way, once the master fails, we only need to import the backed-up changelogs into the metadata file, and the backup server can directly take over from the failed master and keep serving.
1. Copy the tarball over from the master:
[root@cml1 mfs]# scp /usr/local/src/v3.0.96.tar.gz cml5:/usr/local/src/
v3.0.96.tar.gz
[root@cml5 src]# tar -xf v3.0.96.tar.gz
[root@cml5 src]# cd moosefs-3.0.96/
[root@cml5 moosefs-3.0.96]# useradd mfs
[root@cml5 moosefs-3.0.96]# yum install zlib-devel -y
[root@cml5 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
[root@cml5 moosefs-3.0.96]# make && make install
2. Configure the Metalogger Server:
[root@cml5 mfs]# cd /usr/local/mfs/etc/mfs/
[root@cml5 mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
[root@cml5 mfs]# cp mfsmetalogger.cfg.sample mfsmetalogger.cfg
[root@cml5 mfs]# vim mfsmetalogger.cfg
MASTER_HOST = 192.168.5.200    ##point at the VIP
# MASTER_PORT = 9419           ##connection port
# META_DOWNLOAD_FREQ = 24      ##how often to download the metadata backup, in hours; the default of 24 means one metadata.mfs.back download from the metadata server per day. If the metadata server shuts down or fails, metadata.mfs.back will be lost, and recovering the whole mfs then requires fetching the file from the metalogger; note that only this file together with the changelogs can restore the damaged distributed filesystem.
3. Start the Metalogger Server:
[root@cml5 ~]# /usr/local/mfs/sbin/mfsmetalogger start
open files limit has been set to: 4096
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
[root@cml5 ~]# netstat -lantp | grep metalogger
tcp        0      0 192.168.113.144:45620   192.168.113.143:9419    ESTABLISHED 1751/mfsmetalogger
[root@cml5 ~]# netstat -lantp | grep 9419
tcp        0      0 192.168.113.144:45620   192.168.113.143:9419    ESTABLISHED 1751/mfsmetalogger
4. Check the generated log files:
[root@cml5 ~]# ls /usr/local/mfs/var/mfs/
changelog_ml_back.0.mfs  changelog_ml_back.1.mfs  metadata.mfs.empty  metadata_ml.mfs.back
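For completeness, a hedged sketch of a manual recovery from these metalogger files, should both masters be lost (the host name newmaster is hypothetical, and the _ml files may first need renaming to the master's own naming scheme, metadata.mfs.back and changelog.*.mfs):
[root@cml5 ~]# scp /usr/local/mfs/var/mfs/metadata_ml.mfs.back /usr/local/mfs/var/mfs/changelog_ml*.mfs newmaster:/usr/local/mfs/var/mfs/
[root@newmaster ~]# /usr/local/mfs/sbin/mfsmaster -a    # -a asks the 3.0 master to auto-rebuild its metadata from the changelogs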
Part 2: Install the chunk servers (apply the same configuration on both cml3 and cml4)
1. Unpack the tarball and build:
[root@cml3 ~]# useradd mfs    ##note: the uid and gid must be identical across the whole cluster
[root@cml3 ~]# yum install zlib-devel -y
[root@cml3 ~]# cd /usr/local/src/
[root@cml3 src]# tar -xf v3.0.96.tar.gz
[root@cml3 src]# cd moosefs-3.0.96/
[root@cml3 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount
[root@cml3 moosefs-3.0.96]# make && make install
2. Configure the chunk server:
[root@cml3 moosefs-3.0.96]# cd /usr/local/mfs/etc/mfs/
You have new mail in /var/spool/mail/root
[root@cml3 mfs]# mv mfschunkserver.cfg.sample mfschunkserver.cfg
[root@cml3 mfs]# vim mfschunkserver.cfg
MASTER_HOST = 192.168.5.200    ##point at the VIP
3. Configure the mfshdd.cfg storage file:
mfshdd.cfg tells the Chunk Server which directory to share out for the Master Server to manage. Although what you list here is a directory, it is best backed by a dedicated partition.
[root@cml3 mfs]# cp /usr/local/mfs/etc/mfs/mfshdd.cfg.sample /usr/local/mfs/etc/mfs/mfshdd.cfg
[root@cml3 mfs]# vim /usr/local/mfs/etc/mfs/mfshdd.cfg
/mfsdata
##A directory of our own choosing
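mfshdd.cfg also accepts per-path space annotations; two illustrative variants (hedged examples, not used in this deployment):
/mfsdata -20GiB     # use the path but always leave 20GiB of it unused
*/mnt/oldhdd        # a leading * marks the path for removal: its chunks are replicated elsewhere first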
4. Start the chunk server:
[root@cml3 mfs]# mkdir /mfsdata
[root@cml3 mfs]# chown mfs:mfs /mfsdata/
[root@cml3 mfs]# /usr/local/mfs/sbin/mfschunkserver start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
setting glibc malloc arena max to 4
setting glibc malloc arena test to 4
initializing mfschunkserver modules ...
hdd space manager: path to scan: /mfsdata/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
###Check the connection to the master on port 9420:
[root@cml3 mfs]# netstat -lantp | grep 9420
tcp        0      0 192.168.113.145:45904   192.168.113.143:9420    ESTABLISHED 9896/mfschunkserver
###Observe the change on the master side:
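The original transcript does not show the master-side output here; assuming the mfscli tool was built along with the master (it ships with the 3.0 sources), one hedged way to list the connected chunk servers is:
[root@cml1 ~]# /usr/local/mfs/bin/mfscli -H 192.168.5.200 -SCS    # -SCS prints the connected chunk servers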
1. Install FUSE (on the client, cml6):
[root@cml6 mfs]# lsmod | grep fuse
[root@cml6 mfs]# yum install fuse fuse-devel
[root@cml6 ~]# modprobe fuse
[root@cml6 ~]# lsmod | grep fuse
fuse                   91874  0
2. Build and install the mount client:
[root@cml6 ~]# yum install zlib-devel -y
[root@cml6 ~]# yum install fuse-devel
[root@cml6 ~]# useradd mfs
[root@cml6 src]# tar -zxvf v3.0.96.tar.gz
[root@cml6 src]# cd moosefs-3.0.96/
[root@cml6 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount
[root@cml6 moosefs-3.0.96]# make && make install
3. Create a mount point on the client and mount the filesystem:
[root@cml6 moosefs-3.0.96]# mkdir /mfsdata
[root@cml6 moosefs-3.0.96]# chown -R mfs:mfs /mfsdata/
[root@cml6 ~]# /usr/local/mfs/bin/mfsmount -H 192.168.5.200 /mfsdata/ -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[root@cml6 ~]# df -TH
Filesystem                 Type      Size  Used Avail Use% Mounted on
/dev/mapper/vg_cml-lv_root ext4       19G  4.9G   13G  28% /
tmpfs                      tmpfs     977M     0  977M   0% /dev/shm
/dev/sda1                  ext4      500M   29M  445M   7% /boot
192.168.5.200:9421         fuse.mfs   38G   14G   25G  36% /mfsdata
[root@cml6 ~]# cd /mfsdata/
[root@cml6 mfsdata]# echo "test" > a.txt
[root@cml6 mfsdata]# ls
a.txt
[root@cml6 mfsdata]# cat a.txt
test
Test: take the active Master Server (master) down, fail over to the slave, and check whether the file is still there.
crm(live)# node standby
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:55:15 2017
Last change: Fri Oct 27 19:55:01 2017 by root via crm_attribute on cml1
2 nodes configured
5 resources configured
Node cml1: standby
Online: [ cml2 ]
Full list of resources:
 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ cml2 ]
     Stopped: [ cml1 ]
 mystore (ocf::heartbeat:Filesystem):    Started cml2
 mfs     (systemd:mfsmaster):            Started cml2
 vip     (ocf::heartbeat:IPaddr):        Started cml2
##The service has failed over to cml2:
[root@cml2 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.7G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   56M  456M  11% /dev/shm
tmpfs                   tmpfs     512M   14M  499M   3% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs
[root@cml2 ~]# ip addr
2: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5a:c5:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.102/24 brd 192.168.5.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 192.168.5.200/24 brd 192.168.5.255 scope global secondary ens34
##The mount point and VIP have moved over to cml2
##Remount on the client and check that the service still works
[root@cml6 ~]# umount /mfsdata/
[root@cml6 ~]# /usr/local/mfs/bin/mfsmount -H 192.168.5.200 /mfsdata/ -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[root@cml6 ~]# cd /mfsdata/
[root@cml6 mfsdata]# ls
a.txt
[root@cml6 mfsdata]# cat a.txt
test
##The a.txt file written earlier is still there, which proves the service is intact.
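To wrap up the test you would normally bring cml1 back into the cluster; a final hedged step with crmsh (cml1 rejoins as the DRBD slave, and the resources stay on cml2 until the next failover):
crm(live)# node online cml1
crm(live)# status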