ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives on which distributed applications can build synchronization, configuration maintenance, naming, and similar services. ZooKeeper is a Hadoop subproject. In distributed applications, engineers often struggle to use locking correctly, and message-based coordination is unsuitable for some applications, so a reliable, scalable, distributed, and configurable coordination mechanism is needed to keep system state consistent.
How it works: the core of ZooKeeper is an atomic broadcast, which keeps all servers in sync. The protocol that implements it is called the ZAB protocol. ZAB has two modes: recovery mode (leader election) and broadcast mode (synchronization). When the service starts, or after the leader crashes, ZAB enters recovery mode; recovery mode ends once a leader has been elected and a majority of servers have synchronized their state with it. State synchronization guarantees that the leader and the other servers hold the same system state. To guarantee ordered, consistent transactions, ZooKeeper tags each transaction with a monotonically increasing transaction id (zxid), and every proposal carries the zxid assigned when it was proposed. In the implementation the zxid is a 64-bit number: the high 32 bits are the epoch, which identifies a change of leadership (each newly elected leader gets a new epoch marking its term), and the low 32 bits are an incrementing counter.
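As a small illustration of that layout, the epoch and counter can be recovered from a zxid with two bit operations. The zxid value below is a made-up example for illustration, not one taken from a real cluster:
zxid=0x100000003                                            # hypothetical zxid: epoch 1, counter 3
printf 'epoch=%d counter=%d\n' $(( zxid >> 32 )) $(( zxid & 0xffffffff ))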
Each server is in one of three states while running:
LOOKING: the server does not yet know who the leader is and is searching for one
LEADING: the server is the elected leader
FOLLOWING: a leader has been elected and the server synchronizes with it
1. Preparation
The environment needs six machines, and the hosts file must be identical on all of them.
Edit /etc/hosts:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.105 node105
192.168.1.106 node106
192.168.1.107 node107
192.168.1.108 node108
192.168.1.109 node109
192.168.1.110 node110
The hostnames are defined as node105 through node110, in sequence.
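As an optional sanity check, name resolution can be verified on each host once /etc/hosts is in place, for example:
getent hosts node105 node110    # should print the two mapped addresses
ping -c 1 node107               # should resolve and reply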
Start the installation:
Log in as root and copy jdk-7u76-linux-x64.tar.gz to /opt:
cd /opt
tar zxvf jdk-7u76-linux-x64.tar.gz
Extraction produces a directory named jdk1.7.0_76.
vim /etc/profile
Append the following to the end of the file:
export JAVA_HOME=/opt/jdk1.7.0_76
export JAVA_BIN=/opt/jdk1.7.0_76/bin
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JAVA_BIN PATH CLASSPATH
source /etc/profile    # make the variables take effect
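To confirm the variables took effect, a quick check after sourcing /etc/profile should report the 1.7.0_76 JDK:
java -version      # expect: java version "1.7.0_76"
echo $JAVA_HOME    # expect: /opt/jdk1.7.0_76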
2. ZooKeeper deployment (on three nodes)
ZooKeeper is deployed on the three nodes 192.168.1.105, 192.168.1.106, and 192.168.1.107.
Upload zookeeper-3.4.6.tar.gz to /opt:
cd /opt
tar -zxvf zookeeper-3.4.6.tar.gz
ln -s zookeeper-3.4.6 zookeeper
mkdir /opt/zookeeper/data
This is the ZooKeeper data directory.
cd /opt/zookeeper/data
vi myid
If the node is node105 the content is 1, node106 is 2, and node107 is 3.
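For example, the myid file can be written directly with echo; run only the line that matches the node you are on:
echo 1 > /opt/zookeeper/data/myid    # on node105
echo 2 > /opt/zookeeper/data/myid    # on node106
echo 3 > /opt/zookeeper/data/myid    # on node107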
mkdir /opt/zookeeper/logs
This is the ZooKeeper log directory.
cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
ZooKeeper reads zookeeper/conf/zoo.cfg at startup. There is no zoo.cfg under zookeeper/conf/ by default, so we create it from zookeeper/conf/zoo_sample.cfg.
vi /opt/zookeeper/conf/zoo.cfg
Add the following:
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs
server.1=node105:2888:3888
server.2=node106:2888:3888
server.3=node107:2888:3888
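For reference, the complete zoo.cfg on all three ZooKeeper nodes might look like the sketch below; the tickTime, initLimit, syncLimit, and clientPort values are the defaults inherited from zoo_sample.cfg, so adjust them if your sample file differs:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs
server.1=node105:2888:3888
server.2=node106:2888:3888
server.3=node107:2888:3888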
Start the service on each node:
/opt/zookeeper/bin/zkServer.sh start
/opt/zookeeper/bin/zkServer.sh status
node105:
/opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
node106:
/opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
node107:
/opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
As shown above, node107 is the ZooKeeper leader.
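If the nc utility is installed, the ZooKeeper four-letter commands (answered by default in the 3.4.x series) give another quick health check:
echo ruok | nc node105 2181               # expect: imok
echo stat | nc node107 2181 | grep Mode   # expect: Mode: leader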
3. Single-cluster ActiveMQ deployment
1. Planning
192.168.1.105, 192.168.1.106, and 192.168.1.107 form the first cluster, named cluster001 (ZooKeeper + MQ).
If there is a second cluster:
192.168.1.108, 192.168.1.109, and 192.168.1.110 form the second cluster, named cluster002 (MQ only).
This document only demonstrates the deployment of cluster001; cluster002 is deployed the same way.
2. Deploying the cluster001 cluster
1) Create the /opt/activemq/cluster001 directory on each of the three hosts:
$ mkdir -p /opt/activemq/cluster001
Upload apache-activemq-5.11.1-bin.tar.gz to /opt/activemq/cluster001.
2) Extract and rename per node:
$ cd /opt/activemq/cluster001
$ tar -xvf apache-activemq-5.11.1-bin.tar.gz
$ mv apache-activemq-5.11.1 node-0X
(X is the node number: 1 means node105, 2 means node106, 3 means node107; the same applies below.)
3) Cluster configuration:
Configure the persistence adapter in conf/activemq.xml on all three ActiveMQ nodes, adjusting bind, zkAddress, hostname, and zkPath. Note: every ActiveMQ node must use the same brokerName, otherwise it cannot join the cluster.
vim /opt/activemq/cluster001/node-01/conf/activemq.xml
vim /opt/activemq/cluster001/node-02/conf/activemq.xml
vim /opt/activemq/cluster001/node-03/conf/activemq.xml
Configuration on node-01:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cluster001" dataDirectory="${activemq.data}">
<persistenceAdapter>
<!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:62621"
zkAddress="192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181"
hostname="node105"
zkPath="/activemq/leveldb-stores"
/>
</persistenceAdapter>
<!-- The destinationPolicy below is also inside the same <broker> element -->
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<!-- The following policyEntry is the code to add -->
<policyEntry queue=">" enableAudit="false">
<networkBridgeFilterFactory>
<conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
</networkBridgeFilterFactory>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
</broker>
Configuration on node-02:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cluster001" dataDirectory="${activemq.data}">
<persistenceAdapter>
<!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:62621"
zkAddress="192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181"
hostname="node106"
zkPath="/activemq/leveldb-stores"
/>
</persistenceAdapter>
<!-- The destinationPolicy below is also inside the same <broker> element -->
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<!-- The following policyEntry is the code to add -->
<policyEntry queue=">" enableAudit="false">
<networkBridgeFilterFactory>
<conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
</networkBridgeFilterFactory>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
</broker>
Configuration on node-03:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cluster001" dataDirectory="${activemq.data}">
<persistenceAdapter>
<!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:62621"
zkAddress="192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181"
hostname="node107"
zkPath="/activemq/leveldb-stores"
/>
</persistenceAdapter>
<!-- The destinationPolicy below is also inside the same <broker> element -->
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<!-- The following policyEntry is the code to add -->
<policyEntry queue=">" enableAudit="false">
<networkBridgeFilterFactory>
<conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
</networkBridgeFilterFactory>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
</broker>
4) Start the three ActiveMQ nodes in order:
$ nohup /opt/activemq/cluster001/node-01/bin/activemq start &
$ nohup /opt/activemq/cluster001/node-02/bin/activemq start &
$ nohup /opt/activemq/cluster001/node-03/bin/activemq start &
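With replicated LevelDB only one broker at a time (the master elected through ZooKeeper) accepts client connections, so clients should use the failover transport to reconnect automatically when the master changes. Assuming the default openwire port 61616, a typical broker URL would be:
failover:(tcp://node105:61616,tcp://node106:61616,tcp://node107:61616)
It is also worth confirming that the three brokers registered under the configured zkPath; a sketch using the ZooKeeper CLI (the election entries should appear once the brokers are up):
/opt/zookeeper/bin/zkCli.sh -server node105:2181
ls /activemq/leveldb-stores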
4. Multi-cluster ActiveMQ deployment
Only a two-cluster deployment is demonstrated here; three or more clusters work the same way.
Modify activemq.xml on every host.
Add the following before <persistenceAdapter>:
<networkConnectors>
<networkConnector uri="multicast://default"/>
</networkConnectors>
Add the following to the end of the <transportConnector name="openwire"> tag:
discoveryUri="multicast://default"
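For example, assuming the stock openwire connector shipped in the default activemq.xml (your file may differ slightly), the modified line would look like:
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600" discoveryUri="multicast://default"/>
With networkConnectors advertising over multicast and discoveryUri set on the openwire connector, brokers in the two clusters discover each other and forward messages across the network bridge.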
Usage and verification:
#jps
1522 QuorumPeerMain
2661 Jps
2139 activemq.jar
# ps -ef | grep activemq    # check the process
The web console listens on port 8161.
Open http://<node-ip>:8161/admin/ in a browser (for example http://192.168.1.105:8161/admin/; with replicated LevelDB only the current master broker serves the console). Account and password: admin / admin