ZooKeeper can be installed in three main modes: standalone, pseudo-cluster, and cluster.
ZooKeeper official download page: http://zookeeper.apache.org/releases.html#download
Proceed as shown in the figure. One note: unless you want to be a guinea pig, be sure to download a stable release; unstable versions can throw all kinds of unexpected exceptions during installation.
Taking version 3.4.14 as the example, we will install on a CentOS system. When I wrote installation tutorials in the past, readers asked that the steps be as detailed as possible, full installation paths included, so the tutorial could be followed copy-paste style. A bit demanding, but fine, request granted!
Enter the following command:
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
After the download completes, extract the archive:
tar -zxvf zookeeper-3.4.14.tar.gz
After extraction, move the unpacked directory to /usr:
mv zookeeper-3.4.14 /usr/
The directory unpacked from this archive is already named zookeeper-3.4.14, so no renaming is needed.
At this point, the ZooKeeper directory structure looks like this:
[root@instance-e5cf5719 zookeeper-3.4.14]# ls
bin        data             ivy.xml      logs        README.md             zookeeper-3.4.14.jar      zookeeper-3.4.14.jar.sha1  zookeeper-docs     zookeeper-recipes
build.xml  dist-maven       lib          NOTICE.txt  README_packaging.txt  zookeeper-3.4.14.jar.asc  zookeeper-client           zookeeper-it       zookeeper-server
conf       ivysettings.xml  LICENSE.txt  pom.xml     src                   zookeeper-3.4.14.jar.md5  zookeeper-contrib          zookeeper-jute
Enter the /usr/zookeeper-3.4.14/conf directory, where you will find zoo_sample.cfg, the sample configuration file. Copy it to your own configuration file, conventionally named zoo.cfg:
cp zoo_sample.cfg zoo.cfg
Take a look at the contents of zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
It looks complicated at first glance, but with the comments stripped out there are only a few lines:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
tickTime=2000: colloquially the "tick time", i.e. the heartbeat interval. The default is 2000 milliseconds, meaning a heartbeat fires every two seconds. The heartbeat serves two purposes: it monitors whether the machines in the cluster are still alive, and it acts as the base time unit for timeouts (for example, the minimum session timeout is twice the tickTime).
A friendly tip: do learn to read the official documentation and get your information first-hand. It is in English, but the wording and grammar are fairly simple and easy to follow. The official site describes these settings as follows:
- tickTime : the basic time unit in milliseconds used by ZooKeeper. It is used to do heartbeats and the minimum session timeout will be twice the tickTime.
- dataDir : the location to store the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.
- clientPort : the port to listen for client connections
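Putting these numbers together, a quick sketch of the timeouts implied by the sample values above (the session-timeout rule comes from the official description of tickTime):

# minimum session timeout = 2 * tickTime          = 2 * 2000 ms  = 4 s
# initial sync allowance  = initLimit * tickTime  = 10 * 2000 ms = 20 s
# request/ack allowance   = syncLimit * tickTime  = 5 * 2000 ms  = 10 s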
Create data and logs directories under the zookeeper-3.4.14 directory:
[root@instance-e5cf5719 zookeeper-3.4.14]# mkdir data
[root@instance-e5cf5719 zookeeper-3.4.14]# mkdir logs
The official documentation addresses this too: in production, ZooKeeper runs for long periods, so its storage (dataDir and logs) needs dedicated locations that are managed externally. The data directory stores the in-memory database snapshots, and in cluster mode it is also where the myid file lives.
For long running production systems ZooKeeper storage must be managed externally (dataDir and logs).
The modified zoo.cfg looks like this:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# dataDir=/tmp/zookeeper
# data directory
dataDir=/usr/zookeeper-3.4.14/data
# log directory
dataLogDir=/usr/zookeeper-3.4.14/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Enter ZooKeeper's bin directory:
[root@instance-e5cf5719 zookeeper-3.4.14]# cd bin/
[root@instance-e5cf5719 bin]# ls
README.txt  zkCleanup.sh  zkCli.cmd  zkCli.sh  zkEnv.cmd  zkEnv.sh  zkServer.cmd  zkServer.sh  zkTxnLogToolkit.cmd  zkTxnLogToolkit.sh  zookeeper.out
Start ZooKeeper:
./zkServer.sh start
A successful start looks as shown below.
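The console output, roughly (a sketch; the Using config path reflects the layout set up above):

ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED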
You can check ZooKeeper's status:
./zkServer.sh status
The status information is shown below.
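For a single instance it looks roughly like this (a sketch; a non-clustered instance reports Mode: standalone):

ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: standalone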
You can use help to see the commands ./zkServer.sh supports:
[root@instance-e5cf5719 bin]# ./zkServer.sh help
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
Connect to the server:
./zkCli.sh -server 127.0.0.1:2181
In other words:
./zkCli.sh -server <ip>:<port>
The result is shown below.
Use help to list more of the available commands:
[zk: 127.0.0.1:2181(CONNECTED) 0] help
ZooKeeper -server host:port cmd args
    stat path [watch]
    set path data [version]
    ls path [watch]
    delquota [-n|-b] path
    ls2 path [watch]
    setAcl path acl
    setquota -n|-b val path
    history
    redo cmdno
    printwatches on|off
    delete path [version]
    sync path
    listquota path
    rmr path
    get path [watch]
    create [-s] [-e] path data acl
    addauth scheme auth
    quit
    getAcl path
    close
    connect host:port
Command | Description |
---|---|
help | Show all available commands |
stat | View a node's status, i.e. check whether the node exists |
set | Update a node's data |
get | Retrieve a node's data |
ls path [watch] | List the contents (children) of the given znode |
create | Create a node; -s makes it sequential, -e makes it ephemeral (removed on restart or session timeout) |
delete | Delete a node |
rmr | Recursively delete a node |
Let's give these commands a quick test. First create a new znode by running create /zk_test my_data, which attaches the data "my_data" to it.
[zk: 127.0.0.1:2181(CONNECTED) 1] create /zk_test my_data
Created /zk_test
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /
[zookeeper, zk_test]
You can see that zk_test was created successfully. Use the get command to look at the data in the zk_test node:
[zk: 127.0.0.1:2181(CONNECTED) 3] get /zk_test
my_data
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x7
mtime = Thu Dec 05 16:32:20 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 0
Use set to modify the data in zk_test:
[zk: 127.0.0.1:2181(CONNECTED) 4] set /zk_test junk
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x8
mtime = Thu Dec 05 16:37:03 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
[zk: 127.0.0.1:2181(CONNECTED) 5] get /zk_test
junk
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x8
mtime = Thu Dec 05 16:37:03 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
Use delete to remove the node:
[zk: 127.0.0.1:2181(CONNECTED) 6] delete /zk_test
[zk: 127.0.0.1:2181(CONNECTED) 7] ls /
[zookeeper]
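The command table above also mentioned the -s and -e flags on create. A quick sketch of how they behave (the paths /zk_tmp and /zk_seq are made-up examples):

create -e /zk_tmp my_tmp_data   # ephemeral: removed automatically when this session closes
create -s /zk_seq my_seq_data   # sequential: the server appends a counter, e.g. /zk_seq0000000000
quit                            # reconnect afterwards and ls / will no longer show /zk_tmp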
Next we build a pseudo-cluster out of three ZooKeeper instances. We already set up zookeeper-3.4.14 above; now make two copies of it, named zookeeper-3.4.14-1 and zookeeper-3.4.14-2.
[root@instance-e5cf5719 usr]# cp -r zookeeper-3.4.14 zookeeper-3.4.14-1
[root@instance-e5cf5719 usr]# cp -r zookeeper-3.4.14 zookeeper-3.4.14-2
At this point the three ZooKeeper directories are identical. To build a pseudo-cluster, each instance's configuration file needs a few small changes.
Modify /conf/zoo.cfg in each of the three ZooKeepers. Three things need changing: the port numbers, the data/log paths, and the cluster configuration (a concrete sketch follows the field descriptions below).
The cluster configuration is a group of server entries added to zoo.cfg, one per node, indicating that the cluster has three nodes. A server entry has the format:
server.<myid>=<IP>:<Port1>:<Port2>
- myid: the node's number, an integer between 1 and 255 that must be unique within the cluster.
- IP: the IP address of the node; in a local environment, 127.0.0.1 or localhost.
- Port1: the port the leader and followers use for heartbeat checks and data synchronization.
- Port2: the port used for voting communication during leader election. In a pseudo-cluster all instances share the same IP, so they cannot share communication ports; each ZooKeeper instance must be assigned its own.
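As a concrete sketch (the client ports 2181-2183 and the 2888-2890/3888-3890 port pairs are choices of mine, not mandated values), the changed lines of the three zoo.cfg files could look like this:

# /usr/zookeeper-3.4.14/conf/zoo.cfg (instance 1)
dataDir=/usr/zookeeper-3.4.14/data
dataLogDir=/usr/zookeeper-3.4.14/logs
clientPort=2181
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

# /usr/zookeeper-3.4.14-1/conf/zoo.cfg (instance 2): same server.* lines, but
dataDir=/usr/zookeeper-3.4.14-1/data
dataLogDir=/usr/zookeeper-3.4.14-1/logs
clientPort=2182

# /usr/zookeeper-3.4.14-2/conf/zoo.cfg (instance 3): same server.* lines, but
dataDir=/usr/zookeeper-3.4.14-2/data
dataLogDir=/usr/zookeeper-3.4.14-2/logs
clientPort=2183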
In each ZooKeeper's /data directory, create a myid file whose only content is that server's number (1, 2, or 3), as shown below.
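For example (a minimal sketch; which number goes to which directory is a free choice, as long as it matches the server.<myid> lines in zoo.cfg):

echo 1 > /usr/zookeeper-3.4.14/data/myid
echo 2 > /usr/zookeeper-3.4.14-1/data/myid
echo 3 > /usr/zookeeper-3.4.14-2/data/myid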
Start the three ZooKeeper services, one per terminal window.
The results are as follows:
[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Mode: leader
[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Mode: follower
From the status output you can see that zookeeper-3.4.14-1 is the leader, while zookeeper-3.4.14 and zookeeper-3.4.14-2 are followers.
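You can also point a client at the whole pseudo-cluster by listing every endpoint (ports assumed from the sketch above); zkCli picks one and fails over if it becomes unavailable:

./zkCli.sh -server 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183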
The architecture diagram on the official site is a helpful reference for understanding this.
Now stop zookeeper-3.4.14-1 to watch a leader re-election take place.
[root@instance-e5cf5719 bin]# ./zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
Check the status of zookeeper-3.4.14 and zookeeper-3.4.14-2 respectively:
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Mode: leader
You can see that zookeeper-3.4.14-2 has become the new leader.
Setting up a real cluster is very similar to the pseudo-cluster. The difference is that the ZooKeeper instances are deployed on different machines rather than on the same one, and since the machines have different IPs, you do not need to vary the port numbers when editing /conf/zoo.cfg. Everything else is identical to the pseudo-cluster setup, so I won't go over it again.
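For instance (the 192.168.1.x addresses are placeholders for your own machines), every machine in a real cluster would carry the same server entries in zoo.cfg, and each instance can keep the default ports:

clientPort=2181
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888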
With that, we have completed standalone, pseudo-cluster, and cluster setups of ZooKeeper. In production, be sure to run a cluster to keep ZooKeeper highly available.