Testing SwarmKit
Official site: https://github.com/docker/swarmkit
Environment:
go-1.7
docker-1.12.0
CentOS 7.2
Manager1:192.168.8.201
Worker1:192.168.8.101
Worker2:192.168.8.102
Worker3:192.168.8.103
The chart above shows that SwarmKit already ranks third among Docker's development efforts, which makes clear how much importance Docker attaches to it.
The best way to experience SwarmKit is the swarm mode that ships with docker-1.12.0: SwarmKit is embedded directly in Docker 1.12.0 and later. Interested readers can refer to "Docker-1.12 swarm mode". The standalone swarmd and swarmctl tools are mainly for development and debugging.
Anyone who has worked with k8s will recognize that SwarmKit borrows many orchestration concepts from it: moving from kubectl to swarmctl is very easy, and deployment is far simpler than for k8s. Its feature set is not yet as complete as k8s's, but the project is still young; with continued improvement from Docker and the community its prospects look very good, and once SwarmKit exposes enterprise-grade APIs, quite a few companies will likely switch to it.
docker
Note: In theory Docker only needs to be installed on the Worker nodes; the Manager node, being purely a management node, can go without it. However, to make it easy to convert nodes between Worker and Manager with swarmctl node demote|promote, I recommend installing it everywhere.
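For reference, converting an existing node between the two roles is a single command on the manager; the node name below is just an example:
swarmctl node demote node1.example.com    # Manager -> Worker
swarmctl node promote node1.example.com   # Worker -> Manager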
swarmkit
I. Configure the Go environment
mkdir /var/tmp/go
cat >>/etc/profile <<'HERE'
export GOROOT=/opt/go
export GOPATH=/var/tmp/go
export PATH=$GOROOT/bin:$PATH
HERE
source /etc/profile
Note: Go is installed under /opt/go here. The important settings are GOROOT (the Go install path) and GOPATH (where your Go projects live; choose it yourself).
root@router:swarmkit#go version
go version go1.7 linux/amd64
II. Install SwarmKit
A. Binary packages
B. Build from source
Automatic build:
go get github.com/docker/swarmkit
cd $GOPATH/src/github.com/docker/swarmkit
make binaries
Or build manually, following https://golang.org/doc/code.html:
mkdir -p $GOPATH/src/github.com/docker
git clone https://github.com/docker/swarmkit.git
mv swarmkit $GOPATH/src/github.com/docker
cd $GOPATH/src/github.com/docker/swarmkit
make binaries
root@router:swarmkit#pwd
/var/tmp/go/src/github.com/docker/swarmkit
root@router:swarmkit#ls
agent/ ca/ cmd/ design/ identity/ log/ manager/ remotes/
api/ circle.yml codecov.yml doc.go ioutils/ MAINTAINERS protobuf/ vendor/
BUILDING.md cli/ CONTRIBUTING.md Godeps/ LICENSE Makefile README.md version/
root@router:swarmkit#make binaries
🐳 bin/swarmd
🐳 bin/swarmctl
🐳 bin/swarm-bench
🐳 bin/protoc-gen-gogoswarm
🐳 binaries
After the build completes, swarmd, swarmctl and the other binaries are generated in the bin directory:
root@router:swarmkit#ls bin/
protoc-gen-gogoswarm* swarm-bench* swarmctl* swarmd*
root@router:swarmkit#cp -a bin/* /usr/local/bin/
root@router:swarmkit#swarmd -v
swarmd github.com/docker/swarmkit v1.12.0-381-g3be4c3f
Sync these binaries into a directory on each node's PATH; here I put them under /usr/local/bin on every node.
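A minimal sketch of that sync, assuming root SSH access to the three worker IPs from the environment list above:
for host in 192.168.8.101 192.168.8.102 192.168.8.103; do
    scp bin/swarmd bin/swarmctl root@${host}:/usr/local/bin/
done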
III. Configure the SwarmKit cluster
Manager
Manager1:192.168.8.201
swarmd -d /tmp/${HOSTNAME} --listen-control-api /tmp/${HOSTNAME}/swarm.sock --hostname ${HOSTNAME}
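swarmd stays in the foreground, so for anything longer than a quick test you may want to supervise it. A minimal systemd unit sketch for CentOS 7 (the unit name and paths are my own assumptions, not part of SwarmKit; %H expands to the hostname):
# /etc/systemd/system/swarmd.service
[Unit]
Description=SwarmKit manager
After=network.target

[Service]
ExecStart=/usr/local/bin/swarmd -d /tmp/%H --listen-control-api /tmp/%H/swarm.sock --hostname %H
Restart=on-failure

[Install]
WantedBy=multi-user.target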
Confirm the tokens: a node joining as a Worker uses the Worker token, and one joining as a Manager uses the Manager token. In my actual tests, adding multiple Manager nodes to the swarm failed: whether they joined directly or were promoted from Worker, their status ended up UNKNOWN. This needs further testing.
[root@node4 ~]# netstat -tunlp|grep swarmd
tcp6 0 0 :::4242 :::* LISTEN 2617/swarmd
[root@node4 ~]# export SWARM_SOCKET=/tmp/${HOSTNAME}/swarm.sock
[root@node4 ~]# swarmctl cluster inspect default
ID : 7xq6gmnrupvulamznbikk2vu2
Name : default
Orchestration settings:
Task history entries: 5
Dispatcher settings:
Dispatcher heartbeat period: 5s
Certificate Authority settings:
Certificate Validity Duration: 2160h0m0s
Join Tokens:
Worker: SWMTKN-1-2p2zwgpu4v6qxhqugwevbbctaj3ody14cla5pufrggs4fne7wt-0tyu8kedqevjol14z0vl9mjp5
Manager: SWMTKN-1-2p2zwgpu4v6qxhqugwevbbctaj3ody14cla5pufrggs4fne7wt-650ehhda8yzpdb1x3duw6fxia
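In principle a second Manager would join with the Manager token from the output above, mirroring the Worker command in the next section (the join address here is Manager1 from the environment list); as noted, in my tests this kept ending up in UNKNOWN state:
swarmd -d /tmp/${HOSTNAME} --hostname ${HOSTNAME} --join-addr 192.168.8.201:4242 --join-token SWMTKN-1-2p2zwgpu4v6qxhqugwevbbctaj3ody14cla5pufrggs4fne7wt-650ehhda8yzpdb1x3duw6fxia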
Worker
Worker1:192.168.8.101
Worker2:192.168.8.102
Worker3:192.168.8.103
swarmd -d /tmp/${HOSTNAME} --hostname ${HOSTNAME} --join-addr 192.168.8.254:4242 --join-token SWMTKN-1-31hzks4sz09wkqher45qg4zugxfjgenwa4xg2g9kcr59eflgui-4ai7b8o6dlybycd9ke8cx930a
Once the Worker nodes start successfully (use your manager's join address and the Worker token reported by cluster inspect), you can see the status of every node:
[root@node4 ~]# swarmctl node ls
ID Name Membership Status Availability Manager Status
-- ---- ---------- ------ ------------ --------------
03h6uy6tv2mugqq4imwx7jdrw node1.example.com ACCEPTED READY ACTIVE
3xzh1g4pu6fge3r4v4e5d7x8k node4.example.com ACCEPTED READY ACTIVE REACHABLE *
brb7n7u1zs0l7c0iepnp6cr5t node3.example.com ACCEPTED READY ACTIVE
c8ff0a68y545th5zq3bdke70e node2.example.com ACCEPTED READY ACTIVE
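To drill into a single node, swarmctl also offers node inspect; the node name below is taken from the listing above:
swarmctl node inspect node1.example.com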
IV. Manage the swarm
1. Create a service
[root@node4 ~]# swarmctl service create --name redis --image 192.168.8.254:5000/redis
4aaw6z00fp1e31z2ju7bp0r94
[root@node4 ~]# swarmctl service ls
ID Name Image Replicas
-- ---- ----- --------
4aaw6z00fp1e31z2ju7bp0r94 redis 192.168.8.254:5000/redis 1/1
2. Update (scale up or down) a service
[root@node4 ~]# swarmctl service update redis --replicas=3
4aaw6z00fp1e31z2ju7bp0r94
[root@node4 ~]# swarmctl service inspect redis
ID : 4aaw6z00fp1e31z2ju7bp0r94
Name : redis
Replicas : 3/3
Template
Container
Image : 192.168.8.254:5000/redis
Task ID Service Slot Image Desired State Last State Node
------- ------- ---- ----- ------------- ---------- ----
81z5ajocn0icz0qmypt192ohj redis 1 192.168.8.254:5000/redis RUNNING RUNNING 1 minute ago node4.example.com
8l2q9mqc4zu0dzgiv0vga0567 redis 2 192.168.8.254:5000/redis RUNNING RUNNING 3 seconds ago node2.example.com
duvetwbzsw9u9cjsdcz6h23xv redis 3 192.168.8.254:5000/redis RUNNING RUNNING 3 seconds ago node1.example.com
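Scaling is not the only thing update can do; rolling out a different image should work the same way. A sketch, assuming the private registry carries a 3.2 tag (the tag is hypothetical):
swarmctl service update redis --image 192.168.8.254:5000/redis:3.2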
3. Zero-downtime node maintenance
[root@node4 ~]# swarmctl task ls
ID Service Desired State Last State Node
-- ------- ------------- ---------- ----
81z5ajocn0icz0qmypt192ohj redis.1 RUNNING RUNNING 2 minutes ago node4.example.com
8l2q9mqc4zu0dzgiv0vga0567 redis.2 RUNNING RUNNING 1 minute ago node2.example.com
duvetwbzsw9u9cjsdcz6h23xv redis.3 RUNNING RUNNING 1 minute ago node1.example.com
[root@node4 ~]# swarmctl node pause node1.example.com
[root@node4 ~]# swarmctl node drain node1.example.com
pause # marks the node as unavailable for new tasks; newly scheduled containers will not be placed on a paused node
drain # stops accepting new tasks and, in addition, reschedules the node's existing containers onto other available nodes
[root@node4 ~]# swarmctl node ls
ID Name Membership Status Availability Manager Status
-- ---- ---------- ------ ------------ --------------
03h6uy6tv2mugqq4imwx7jdrw node1.example.com ACCEPTED READY DRAIN
3xzh1g4pu6fge3r4v4e5d7x8k node4.example.com ACCEPTED READY ACTIVE REACHABLE *
brb7n7u1zs0l7c0iepnp6cr5t node3.example.com ACCEPTED READY ACTIVE
c8ff0a68y545th5zq3bdke70e node2.example.com ACCEPTED READY ACTIVE
[root@node4 ~]# swarmctl task ls
ID Service Desired State Last State Node
-- ------- ------------- ---------- ----
81z5ajocn0icz0qmypt192ohj redis.1 RUNNING RUNNING 4 minutes ago node4.example.com
8l2q9mqc4zu0dzgiv0vga0567 redis.2 RUNNING RUNNING 2 minutes ago node2.example.com
cc3577waq1n9z4hglyn1k55by redis.3 RUNNING RUNNING 54 seconds ago node3.example.com
As you can see, all the containers on node1 were rescheduled onto other swarm nodes. node activate turns a drained node back into a schedulable cluster node:
[root@node4 ~]# swarmctl node activate node1.example.com
[root@node4 ~]# swarmctl node ls
ID Name Membership Status Availability Manager Status
-- ---- ---------- ------ ------------ --------------
03h6uy6tv2mugqq4imwx7jdrw node1.example.com ACCEPTED READY ACTIVE
3xzh1g4pu6fge3r4v4e5d7x8k node4.example.com ACCEPTED READY ACTIVE REACHABLE *
brb7n7u1zs0l7c0iepnp6cr5t node3.example.com ACCEPTED READY ACTIVE
c8ff0a68y545th5zq3bdke70e node2.example.com ACCEPTED READY ACTIVE
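When you are done testing, the service can be removed again; a minimal cleanup sketch, assuming swarmctl's service remove subcommand:
swarmctl service remove redis
swarmctl service ls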