Building a Swarm Cluster Based on ZooKeeper

Introduction

Swarm is Docker's native cluster management tool: it manages a group of Docker hosts as a single virtual Docker host.

To a client, a Swarm cluster looks just like another ordinary Docker host.

Every host in a Swarm cluster runs a swarm node agent, and each agent registers the Docker daemon on its host with the cluster. The counterpart of the node agents is the Swarm manager, which is used to manage the cluster.

All Docker nodes running Swarm must run the same version of Docker.

Building a Swarm Cluster Based on ZooKeeper

1. Pull the Swarm image

docker pull swarm
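
Since the manager and every agent run from this image, it may be safer to pin an explicit tag rather than pulling latest, so that all nodes run the same Swarm version. The tag below is just an example, matching the swarm/1.2.8 version that shows up later in docker info:

docker pull swarm:1.2.8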

2. Create the swarm cluster

There are several ways to create a swarm cluster; they differ essentially in which discovery backend they use. The common options are listed below (the corresponding discovery URL schemes follow the list):

a. Use the default Docker Hub hosted discovery: register a cluster on Docker Hub, which returns a dedicated token.

b. Use etcd: register the swarm agent information in etcd.

c. Use a static file: write all agent information into a text file on the manager node (hard to scale).

d. Use Consul: similar to etcd.

e. Use ZooKeeper: similar to etcd.

f. A user-defined discovery mechanism: implement the DiscoveryService interface.
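
For reference, these backends correspond to different discovery URLs passed to swarm join and swarm manage. Roughly (the addresses, ports, and paths below are placeholders; check the Swarm documentation for your version):

token://<cluster_token>                        # Docker Hub hosted discovery
etcd://<etcd_ip1>:2379,<etcd_ip2>:2379/<path>  # etcd
file:///tmp/my_cluster                         # static file on the manager node
consul://<consul_ip>:8500/<path>               # Consul
zk://<zk_ip1>:2181,<zk_ip2>:2181/<path>        # ZooKeeper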

Below we focus on using ZooKeeper, which we are most familiar with, to build a swarm cluster:

First you need a ZooKeeper ensemble; see the earlier article for how to set one up. Then use the ZooKeeper client (zkClient) to create a znode /swarm-agent that will later store the swarm agent information.
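
For example, from one of the ZooKeeper containers the znode can be created with the ZooKeeper CLI (a minimal sketch; the zkCli.sh path and server address depend on how your ensemble is deployed):

bin/zkCli.sh -server zkServer1:2181
[zk: zkServer1:2181(CONNECTED) 0] create /swarm-agent ""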

Create the new swarm cluster on top of this ZooKeeper ensemble:

On one swarm agent machine, run the following command:

docker run --name swarm-agent --net zknetwork -d swarm join --addr=10.120.196.36:2375 zk://zkServer1:2181,zkServer2:2181,zkServer3:2181/swarm-agent

The --addr parameter tells ZooKeeper the IP and port on which this swarm agent's Docker daemon is serving. zk://<zk_addr>/<path> is the znode used by the ZooKeeper coordination service.

Here, because the agent machine is the same host on which the ZooKeeper containers are deployed, we simply attach the swarm agent container to the zknetwork network and address the ZooKeeper servers by container name. We did try connecting with zk://10.120.196.36:2181,10.120.196.36:2182,10.120.196.36:2183/swarm-agent instead, but the logs showed that the ZooKeeper ports could not be reached; on the other node below, that same <zk_addr>/<path> connects fine.
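
The container names zkServer1/2/3 only resolve because zknetwork is a user-defined Docker network that the ZooKeeper containers are attached to. If you are reproducing this setup, that network would have been created beforehand with something like the following (a sketch, assuming a single-host bridge network):

docker network create zknetwork
docker network inspect zknetwork   # confirm the zkServer* containers are attached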

Check the logs of the swarm agent container:

root@hadoop985:~# docker logs swarm-agent -f
time="2017-10-24T09:35:10Z" level=info msg="Initializing discovery without TLS"
time="2017-10-24T09:35:10Z" level=info msg="Registering on the discovery service every 1m0s..." addr="10.120.196.36:2375" discovery="zk://zkServer1:2181,zkServer2:2181,zkServer3:2181/swarm-agent"
2017/10/24 09:35:10 Connected to 192.168.16.3:2181
2017/10/24 09:35:10 Authenticated: id=170934675179241473, timeout=10000
2017/10/24 09:35:10 Re-submitting `0` credentials after reconnect
time="2017-10-24T09:36:10Z" level=info msg="Registering on the discovery service every 1m0s..." addr="10.120.196.36:2375" discovery="zk://zkServer1:2181,zkServer2:2181,zkServer3:2181/swarm-agent"
......

As you can see, the agent node has connected to the ZooKeeper ensemble, and the agent re-registers with the discovery service at a regular interval. (It is not obvious why the log says "every 1m0s"; both The Docker Book and Docker 容器與容器雲 describe the default as a heartbeat sent every 25 seconds.)
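
The "every 1m0s" most likely just reflects the default value of swarm join's --heartbeat option in this Swarm version. If you prefer the shorter interval those books describe, the interval can be set explicitly when joining (a sketch, assuming your swarm image supports --heartbeat; check docker run --rm swarm join --help, and remove or rename the earlier swarm-agent container before re-running):

docker run --name swarm-agent --net zknetwork -d swarm join --heartbeat=25s --addr=10.120.196.36:2375 zk://zkServer1:2181,zkServer2:2181,zkServer3:2181/swarm-agent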

Similarly, run the command on the other swarm agent machine (since this machine is not attached to zknetwork, a different <zk_addr> must be used: host IPs and ports instead of container names, while the /swarm-agent path stays the same):

docker run --name swarm-agent -d swarm join --addr=10.120.196.37:2375 zk://10.120.196.36:2181,10.120.196.36:2182,10.120.196.36:2183/swarm-agent

Check the container's logs to confirm that it connected successfully:

root@hadoop986:~/docker# docker logs swarm-agent -f
time="2017-10-24T09:50:03Z" level=info msg="Initializing discovery without TLS"
time="2017-10-24T09:50:03Z" level=info msg="Registering on the discovery service every 1m0s..." addr="10.120.196.37:2375" discovery="zk://10.120.196.36:2181,10.120.196.36:2182,10.120.196.36:2183/swarm-agent"
2017/10/24 09:50:03 Connected to 10.120.196.36:2181
2017/10/24 09:50:03 Authenticated: id=98877081139347460, timeout=10000
2017/10/24 09:50:03 Re-submitting `0` credentials after reconnect
......

Now let's look at the znodes in ZooKeeper:

WatchedEvent state:SyncConnected type:None path:null
ls /
[zookeeper, swarm-agent]
[zk: localhost:2181(CONNECTED) 1] ls /swarm-agent
[docker]
[zk: localhost:2181(CONNECTED) 2] ls /swarm-agent/docker
[swarm]
[zk: localhost:2181(CONNECTED) 3] ls /swarm-agent/docker/swarm
[nodes]
[zk: localhost:2181(CONNECTED) 4] ls /swarm-agent/docker/swarm/nodes
[10.120.196.36:2375, 10.120.196.37:2375]

As you can see, both nodes have successfully registered with ZooKeeper.

You can also inspect the node list with swarm list:

root@hadoop986:~/docker# docker run --rm swarm list zk://10.120.196.36:2181,10.120.196.36:2182,10.120.196.36:2183/swarm-agent
time="2017-10-24T09:55:40Z" level=info msg="Initializing discovery without TLS"
2017/10/24 09:55:40 Connected to 10.120.196.36:2183
2017/10/24 09:55:40 Authenticated: id=242992269245284353, timeout=10000
2017/10/24 09:55:40 Re-submitting `0` credentials after reconnect
10.120.196.36:2375
10.120.196.37:2375

The --rm flag makes Docker clean up the container's filesystem automatically when the container exits (--rm cannot be combined with -d, i.e. only foreground containers are cleaned up automatically, not detached ones).

3. Create the swarm manager

On the hadoop986 machine, run:

docker run --name swarm-manage -d -p 2380:2375 swarm manage zk://10.120.196.36:2181,10.120.196.36:2182,10.120.196.36:2183/swarm-agent

You can then inspect the swarm cluster with docker info:

root@hadoop986:~/docker# docker -H tcp://localhost:2380 info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: swarm/1.2.8
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint, whitelist
Nodes: 2
 (unknown): 10.120.196.36:2375
  └ ID:
  └ Status: Pending
  └ Containers: 0
  └ Reserved CPUs: 0 / 0
  └ Reserved Memory: 0 B / 0 B
  └ Labels:
  └ Error: Cannot connect to the Docker daemon at tcp://10.120.196.36:2375. Is the docker daemon running?
  └ UpdatedAt: 2017-10-24T10:42:31Z
  └ ServerVersion:
 (unknown): 10.120.196.37:2375
  └ ID:
  └ Status: Pending
  └ Containers: 0
  └ Reserved CPUs: 0 / 0
  └ Reserved Memory: 0 B / 0 B
  └ Labels:
  └ Error: Cannot connect to the Docker daemon at tcp://10.120.196.37:2375. Is the docker daemon running?
  └ UpdatedAt: 2017-10-24T10:42:31Z
  └ ServerVersion:
Plugins:
 Volume:
 Network:
Swarm:
 NodeID:
 Is Manager: false
 Node Address:
Kernel Version: 3.16.0-4-amd64
Operating System: linux
Architecture: amd64
CPUs: 0
Total Memory: 0 B
Name: d66603b3a60a
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
WARNING: No kernel memory limit support
Experimental: false
Live Restore Enabled: false

As you can see, both nodes are currently in the Pending (not connected) state:

Error: Cannot connect to the Docker daemon at tcp://10.120.196.37:2375. Is the docker daemon running?

This is because the Docker daemon on our agent machines does not accept TCP socket connections by default.

You need to modify /etc/default/docker to enable TCP socket connections:

DOCKER_OPTS="$DOCKER_OPTS -H 0.0.0.0:2375 -H unix:///var/run/docker.sock"

The default systemd unit files live under /lib/systemd/system:

root@hadoop985:/lib/systemd/system# ls -l | grep docker
-rw-r--r-- 1 root root 1037 Jan 17  2017 docker.service
-rw-r--r-- 1 root root  197 Jan 17  2017 docker.socket

After modifying /etc/default/docker, restart the Docker daemon:

/etc/init.d/docker restart
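
You can then verify on each agent host that the daemon is listening on the TCP socket (a quick check; adjust the address to the host in question):

curl http://10.120.196.36:2375/version
ss -tlnp | grep 2375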

Note: on Debian 8 there is a bug that can leave port 2375 down even after the steps above, because the DOCKER_OPTS setting in /etc/default/docker never takes effect. One workaround is described at http://blog.csdn.net/jcjc918/article/details/46564891.
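
A common workaround on systemd-based systems (a sketch we have not verified against the linked article; the dockerd path and flags may need adjusting for your Docker version) is to add a drop-in override so that docker.service actually reads /etc/default/docker:

# /etc/systemd/system/docker.service.d/docker-opts.conf (hypothetical drop-in file)
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_OPTS

followed by a reload and restart:

systemctl daemon-reload
systemctl restart docker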

After these changes, run docker info again and you can see that the nodes are now connected:

root@hadoop986:~#  docker -H tcp://localhost:2380 info
Containers: 10
 Running: 10
 Paused: 0
 Stopped: 0
Images: 5
Server Version: swarm/1.2.8
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint, whitelist
Nodes: 2
 hadoop985: 10.120.196.36:2375
  └ ID: VP32:TR4C:WQPL:6DRY:75BB:4Q7O:WRHQ:X5RL:Y2GG:VCMO:6KVV:5DU5|10.120.196.36:2375
  └ Status: Healthy
  └ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 33.02 GiB
  └ Labels: kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), ostype=linux, storagedriver=aufs
  └ UpdatedAt: 2017-10-25T01:41:30Z
  └ ServerVersion: 1.13.0
 hadoop986: 10.120.196.37:2375
  └ ID: VLY4:UIDQ:DLZC:XYLG:4FMY:5LET:OXPV:7R6O:3JDI:VR3G:MQJU:6VRZ|10.120.196.37:2375
  └ Status: Healthy
  └ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 33.02 GiB
  └ Labels: kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), ostype=linux, storagedriver=aufs
  └ UpdatedAt: 2017-10-25T01:41:54Z
  └ ServerVersion: 1.13.0
Plugins:
 Volume:
 Network:
Swarm:
 NodeID:
 Is Manager: false
 Node Address:
Kernel Version: 3.16.0-4-amd64
Operating System: linux
Architecture: amd64
CPUs: 8
Total Memory: 66.05 GiB
Name: d66603b3a60a
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
WARNING: No kernel memory limit support
Experimental: false
Live Restore Enabled: false

4. Create containers in the swarm cluster

From this point on, the whole swarm cluster can be treated exactly like an ordinary Docker host. Running

docker -H tcp://localhost:2380 run -d --name zkserver zookeeper

creates a container running the zookeeper image somewhere in the cluster. By default, swarm uses the "spread" strategy, which distributes containers roughly evenly across all nodes.
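
If spread is not what you want, the scheduling strategy can be chosen when the manager is started: swarm manage takes a --strategy option (spread, binpack, or random in this generation of Swarm). For example, replacing the manager started earlier:

docker run --name swarm-manage -d -p 2380:2375 swarm manage --strategy binpack zk://10.120.196.36:2181,10.120.196.36:2182,10.120.196.36:2183/swarm-agent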

Correspondingly, to list the containers running in the cluster:

docker -H tcp://localhost:2380 ps
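
To avoid typing -H on every command, you can also point the standard DOCKER_HOST environment variable at the manager for the current shell:

export DOCKER_HOST=tcp://localhost:2380
docker info
docker ps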