Fetch the official zookeeper image from Docker Hub:
docker pull zookeeper
After pulling the image, run
docker inspect zookeeper
to view some basic information about it:
...... "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin:/zookeeper-3.4.10/bin", "LANG=C.UTF-8", "JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre", "JAVA_VERSION=8u131", "JAVA_ALPINE_VERSION=8.131.11-r2", "ZOO_USER=zookeeper", "ZOO_CONF_DIR=/conf", "ZOO_DATA_DIR=/data", "ZOO_DATA_LOG_DIR=/datalog", "ZOO_PORT=2181", "ZOO_TICK_TIME=2000", "ZOO_INIT_LIMIT=5", "ZOO_SYNC_LIMIT=2", "ZOO_MAX_CLIENT_CNXNS=60", "ZOOCFGDIR=/conf" ], "Cmd": [ "zkServer.sh", "start-foreground" ], "Volumes": { "/data": {}, "/datalog": {} }, "WorkingDir": "/zookeeper-3.4.10", "Entrypoint": [ "/docker-entrypoint.sh" ], ......
That is, this zookeeper version is 3.4.10, the conf directory is /conf, and containers started from this image use the entrypoint
/docker-entrypoint.sh
with the default arguments
zkServer.sh start-foreground
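If only a few of these fields are needed, docker inspect's --format option can extract them directly via Go templates. A small sketch (the leading guard just makes the snippet degrade gracefully on hosts without docker):

```shell
# Extract individual fields from the image metadata with --format templates.
command -v docker >/dev/null 2>&1 || { echo 'docker not available'; exit 0; }

docker inspect --format '{{.Config.Entrypoint}}' zookeeper
docker inspect --format '{{json .Config.Cmd}}' zookeeper
docker inspect --format '{{.Config.WorkingDir}}' zookeeper
```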
Start a container from this image:
docker run -d zookeeper
Then, after entering the container, we can view the contents of docker-entrypoint.sh:
```shell
#!/bin/bash

set -e

# Allow the container to be started with `--user`
if [ "$1" = 'zkServer.sh' -a "$(id -u)" = '0' ]; then
    chown -R "$ZOO_USER" "$ZOO_DATA_DIR" "$ZOO_DATA_LOG_DIR"
    exec su-exec "$ZOO_USER" "$0" "$@"
fi

# Generate the config only if it doesn't exist
if [ ! -f "$ZOO_CONF_DIR/zoo.cfg" ]; then
    CONFIG="$ZOO_CONF_DIR/zoo.cfg"
    echo "clientPort=$ZOO_PORT" >> "$CONFIG"
    echo "dataDir=$ZOO_DATA_DIR" >> "$CONFIG"
    echo "dataLogDir=$ZOO_DATA_LOG_DIR" >> "$CONFIG"
    echo "tickTime=$ZOO_TICK_TIME" >> "$CONFIG"
    echo "initLimit=$ZOO_INIT_LIMIT" >> "$CONFIG"
    echo "syncLimit=$ZOO_SYNC_LIMIT" >> "$CONFIG"
    echo "maxClientCnxns=$ZOO_MAX_CLIENT_CNXNS" >> "$CONFIG"

    for server in $ZOO_SERVERS; do
        echo "$server" >> "$CONFIG"
    done
fi

# Write myid only if it doesn't exist
if [ ! -f "$ZOO_DATA_DIR/myid" ]; then
    echo "${ZOO_MY_ID:-1}" > "$ZOO_DATA_DIR/myid"
fi

exec "$@"
```
As you can see, this script mainly sets the user that starts zookeeper and creates the zoo.cfg configuration file. The key sections are:
```shell
for server in $ZOO_SERVERS; do
    echo "$server" >> "$CONFIG"
done
```

```shell
# Write myid only if it doesn't exist
if [ ! -f "$ZOO_DATA_DIR/myid" ]; then
    echo "${ZOO_MY_ID:-1}" > "$ZOO_DATA_DIR/myid"
fi
```
That is, by passing the ZOO_SERVERS variable we can supply the host information of every zookeeper server node, and by setting ZOO_MY_ID we can write this node's server ID.
This part of the script lets the image adapt cleanly to zookeeper's different deployment modes: standalone mode and cluster mode (including pseudo-cluster mode).
Based on the investigation above, the steps needed to build a multi-node zookeeper cluster are:
(1) Obtain the official zookeeper image
(2) Connect the networks of the containers hosting the different zk server nodes
(3) Give every zk server node the same zoo.cfg, passing in the host information of each node in the cluster
(4) Set each zk server node's server ID (by writing its own $dataDir/myid file)
(5) Start the cluster, mapping port 2181 to the host.
Putting these steps together gives the following shell script:
```shell
#!/bin/bash

# Get zookeeper image
zkimage=`docker images | grep zookeeper | awk '{print $1}'`
if [ -n "$zkimage" ]
then
    echo 'The zookeeper image already exists.'
else
    echo 'Pull the latest zookeeper image.'
    docker pull zookeeper
fi

# Create network for zookeeper containers
zknet=`docker network ls | grep zknetwork | awk '{print $2}'`
if [ -n "$zknet" ]
then
    echo 'The zknetwork already exists.'
else
    echo 'Create zknetwork.'
    docker network create zknetwork
fi

# Start zookeeper cluster
echo 'Start 3 zookeeper servers.'
ZOO_SERVERS="server.1=zkServer1:2888:3888 server.2=zkServer2:2888:3888 server.3=zkServer3:2888:3888"
docker run -d -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=1 --name zkServer1 --net zknetwork -p 2181:2181 zookeeper
docker run -d -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=2 --name zkServer2 --net zknetwork -p 2182:2181 zookeeper
docker run -d -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=3 --name zkServer3 --net zknetwork -p 2183:2181 zookeeper
```
Running this script starts a new zk cluster:
```
root@hadoop985:~/docker/zookeeper-docker# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                                        NAMES
bfc84ce7aa1d   zookeeper   "/docker-entrypoin..."   36 minutes ago   Up 36 minutes   2888/tcp, 3888/tcp, 0.0.0.0:2183->2181/tcp   zkServer3
18b6b1d9987c   zookeeper   "/docker-entrypoin..."   36 minutes ago   Up 36 minutes   2888/tcp, 3888/tcp, 0.0.0.0:2182->2181/tcp   zkServer2
0b6d1b69bb05   zookeeper   "/docker-entrypoin..."   36 minutes ago   Up 36 minutes   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   zkServer1
```
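Once the containers are up, each node's role can be checked through the mapped client ports. A hedged sketch using the `stat` four-letter command, assuming netcat (`nc`) is installed on the host; the guard lets the snippet skip cleanly when docker is not running:

```shell
# Query each mapped client port for the node's mode; a healthy 3-node
# cluster reports one leader and two followers.
command -v docker >/dev/null 2>&1 || { echo 'docker not available'; exit 0; }
for port in 2181 2182 2183; do
    echo "--- localhost:$port ---"
    echo stat | nc -w 2 localhost "$port" | grep Mode
done
```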