1. Environment Preparation
The examples in this article use four machines; their hostnames and IP addresses are as follows:

c1 -> 10.0.0.31
c2 -> 10.0.0.32
c3 -> 10.0.0.33
c4 -> 10.0.0.34
The /etc/hosts file is the same on all four machines; taking c1 as an example:

[root@c1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.31 c1
10.0.0.32 c2
10.0.0.33 c3
10.0.0.34 c4
1.1 Installing CentOS 7 64-bit
Taking the installation of c1 as an example: use the English version of the installer, then click Continue.
Under LOCALIZATION, click Date & Time, select the Asia/Shanghai time zone, and click Done.
Under SYSTEM, click INSTALLATION DESTINATION, select your disk, choose the "I will configure partitioning" radio button below, and click Done so we can customize the disk and partitions.
Click "Click here to create them automatically" and the installer will create the recommended partition layout for us.
Delete the /home mount point and add its space to /, with file system type xfs, using all remaining disk space; click Update Settings so that the software installed later has enough room. Finally click the Done button in the top-left corner.
xfs has been provided since CentOS 7.0. The older ext4 is stable, but it supports only about four billion files at most, and a single file can be at most 16 TB (with a 4K block size). XFS manages space with 64-bit addressing, so a file system can scale to the EB range.
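To confirm which file system your root mount actually ended up with after installation, `df -T` prints the type column; this is a generic check, not specific to this tutorial's machines:

```shell
# print the file system type of / ; on a default CentOS 7 install this should be "xfs"
df -T / | awk 'NR==2 {print $2}'
```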
On servers used in production, always put data on its own partition, so that data integrity is preserved if the system breaks. For example, you can create an additional /data partition dedicated to storing data.
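As a minimal sketch of that layout (the device name /dev/sdb1 is an assumption here; adjust it to your actual data disk), the corresponding /etc/fstab entry would look like:

```
# hypothetical /etc/fstab entry: mount the dedicated data partition at /data on every boot
/dev/sdb1  /data  xfs  defaults  0 0
```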
In the pop-up window, click Accept Changes.
Click Network & Host Name to set the machine's Host Name; here the Host Name of the machine being installed is c1.
Finally, click Begin Installation in the bottom-right corner. During installation you can set the root password and also create other users.
1.2 Network Configuration
The following uses c1 as an example:

[root@c1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static          # use a static IP address
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
UUID=e57c6a58-1951-4cfa-b3d1-cf25c4cdebdd
DEVICE=eth0
ONBOOT=yes                # bring the interface up at boot
IPADDR0=10.0.0.31         # IP address
PREFIX0=24                # netmask prefix length
GATEWAY0=10.0.0.1         # gateway
DNS1=10.0.0.1             # DNS
DNS2=8.8.8.8
Restart the network:

[root@c1 ~]# service network restart
Switch the yum repository to the Aliyun mirror:

[root@c1 ~]# yum install -y wget
[root@c1 ~]# cd /etc/yum.repos.d/
[root@c1 yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak
[root@c1 yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@c1 yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
[root@c1 yum.repos.d]# yum clean all
[root@c1 yum.repos.d]# yum makecache
Install the networking tools and basic tool packages:

[root@c1 ~]# yum install net-tools checkpolicy gcc dkms foomatic openssh-server bash-completion -y
1.3 Changing the hostname
Set the hostname on each of the four machines in turn; c1 is shown as an example.

[root@localhost ~]# hostnamectl --static set-hostname c1
[root@localhost ~]# hostnamectl status
   Static hostname: c1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e4ac9d1a9e9b4af1bb67264b83da59e4
           Boot ID: a128517ed6cb41d083da61de5951a109
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-327.36.3.el7.x86_64
      Architecture: x86-64
1.4 Configuring passwordless SSH login
Run the following on each of the four machines in turn; c1 is shown as an example.

[root@c1 ~]# ssh-keygen   # press Enter at every prompt
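If you would rather skip the prompts entirely, the same result can be produced non-interactively. This is a sketch assuming the default key path and an empty passphrase, which is exactly what pressing Enter at every prompt gives you:

```shell
# create ~/.ssh if needed, then generate an RSA key pair without any prompts;
# the existing-key check avoids overwriting a key that is already there
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q
```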
On the machine that should accept the passwordless login, edit the sshd configuration file:

[root@c1 ~]# vi /etc/ssh/sshd_config

Find the following lines and remove the leading # comment markers:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
Copy the key generated by ssh-keygen to each of the other three machines; c1 is shown as an example.

[root@c1 ~]# ssh-copy-id c1
The authenticity of host 'c1 (10.0.0.31)' can't be established.
ECDSA key fingerprint is 22:84:fe:22:c2:e1:81:a6:77:d2:dc:be:7b:b7:bf:b8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c1'"
and check to make sure that only the key(s) you wanted were added.

[root@c1 ~]# ssh-copy-id c2
The authenticity of host 'c2 (10.0.0.32)' can't be established.
ECDSA key fingerprint is 22:84:fe:22:c2:e1:81:a6:77:d2:dc:be:7b:b7:bf:b8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.

[root@c1 ~]# ssh-copy-id c3
The authenticity of host 'c3 (10.0.0.33)' can't be established.
ECDSA key fingerprint is 22:84:fe:22:c2:e1:81:a6:77:d2:dc:be:7b:b7:bf:b8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c3'"
and check to make sure that only the key(s) you wanted were added.

[root@c1 ~]# ssh-copy-id c4
The authenticity of host 'c4 (10.0.0.34)' can't be established.
ECDSA key fingerprint is 22:84:fe:22:c2:e1:81:a6:77:d2:dc:be:7b:b7:bf:b8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c4's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c4'"
and check to make sure that only the key(s) you wanted were added.
Test whether the keys are configured correctly:

[root@c1 ~]# for N in $(seq 1 4); do ssh c$N hostname; done;
c1
c2
c3
c4
Install the ntp time-synchronization tool and git:

[root@c1 ~]# for N in $(seq 1 4); do ssh c$N yum install ntp git -y; done;
2. Installing Docker 1.12.3 and Initial Configuration
2.1 Installing Docker 1.12.3
Installing straight from Docker's official yum repository is not recommended, because yum picks the Docker version based on the system version and you cannot pin a specific release. Run the following commands on each of the four machines in turn; you can paste them directly into the shell:

mkdir -p ~/_src \
  && cd ~/_src \
  && wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-selinux-1.12.3-1.el7.centos.noarch.rpm \
  && wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-1.12.3-1.el7.centos.x86_64.rpm \
  && wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-debuginfo-1.12.3-1.el7.centos.x86_64.rpm \
  && yum localinstall -y docker-engine-selinux-1.12.3-1.el7.centos.noarch.rpm docker-engine-1.12.3-1.el7.centos.x86_64.rpm docker-engine-debuginfo-1.12.3-1.el7.centos.x86_64.rpm
2.2 Verifying the Docker installation
On CentOS 7, Docker 1.12 uses docker as the client program and dockerd as the server program by default.

[root@c1 _src]# docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:
 OS/Arch:      linux/amd64
Cannot connect to the Docker daemon. Is the docker daemon running on this host?

The "Cannot connect" line is expected at this point, because the daemon has not been started yet.
2.3 Starting the Docker daemon
In Docker 1.12 the default daemon program is dockerd; you can run dockerd directly, or let the system's own systemd manage the service. Note that either way the default parameters apply, e.g. the private network defaults to 172.17.0.0/16 and the bridge to docker0.

[root@c1 _src]# dockerd
INFO[0000] libcontainerd: new containerd process, pid: 6469
WARN[0000] containerd: low RLIMIT_NOFILE changing to max  current=1024 max=4096
WARN[0001] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
WARN[0001] devmapper: Base device already exists and has filesystem xfs on it. User specified filesystem will be ignored.
INFO[0001] [graphdriver] using prior storage driver "devicemapper"
INFO[0001] Graph migration to content-addressability took 0.00 seconds
WARN[0001] mountpoint for pids not found
INFO[0001] Loading containers: start.
INFO[0001] Firewalld running: true
INFO[0001] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address
INFO[0001] Loading containers: done.
INFO[0001] Daemon has completed initialization
INFO[0001] Docker daemon commit=6b644ec graphdriver=devicemapper version=1.12.3
INFO[0001] API listen on /var/run/docker.sock
Alternatively, enable and start the docker service with the system's own systemctl:

[root@c1 _src]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Use dockerd --help to view the start-up options:

[root@c1 _src]# dockerd --help
Usage: dockerd [OPTIONS]

A self-sufficient runtime for containers.

Options:
  --add-runtime=[]                 Register an additional OCI compatible runtime
  --api-cors-header                Set CORS headers in the remote API
  --authorization-plugin=[]        Authorization plugins to load
  -b, --bridge                     # network interface containers attach to; defaults to docker0, another interface can be specified
  --bip                            # bridge IP, i.e. the containers' private network
  --cgroup-parent                  # parent cgroup for all containers
  --cluster-advertise              # address or name advertised to the cluster
  --cluster-store                  # URL of the distributed storage backend
  --cluster-store-opt=map[]        # cluster store options
  --config-file=/etc/docker/daemon.json   # configuration file
  -D                               # enable debug mode
  --default-gateway                # default IPv4 gateway for containers (--default-gateway-v6 for IPv6)
  --dns=[]                         # DNS servers
  --dns-opt=[]                     # DNS options
  --dns-search=[]                  # DNS search domains
  --exec-opt=[]                    # extra runtime options
  --exec-root=/var/run/docker      # directory for runtime state files
  --fixed-cidr                     # IPv4 subnet to bind container IPs from
  -G, --group=docker               # group that owns the docker socket
  -g, --graph=/var/lib/docker      # docker runtime home directory
  -H, --host=[]                    # socket address(es) the daemon listens on after start
  --icc=true                       # allow inter-container communication; disable it to isolate containers from each other
  --insecure-registry=[]           # addresses of internal private registries
  --ip=0.0.0.0                     # default IP when mapping container ports (mainly useful with multi-host networking)
  --ip-forward=true                # enable net.ipv4.ip_forward, i.e. forwarding in the kernel
  --ip-masq=true                   # enable IP masquerading (containers do not expose their own IP when reaching outside)
  --iptables=true                  # let containers use iptables rules
  -l, --log-level=info             # log level
  --live-restore                   # keep containers running while the daemon restarts (new in 1.12)
  --log-driver=json-file           # default container log driver
  --max-concurrent-downloads=3     # maximum concurrent downloads per pull
  --max-concurrent-uploads=5       # maximum concurrent uploads per push
  --mtu                            # MTU of the container network
  --oom-score-adjust=-500          # OOM score adjustment (range -1000 to 1000)
  -p, --pidfile=/var/run/docker.pid   # pid file location
  -s, --storage-driver             # docker storage driver
  --selinux-enabled                # enable SELinux support
  --storage-opt=[]                 # storage driver options
  --swarm-default-advertise-addr   # default advertise address for swarm nodes
  --tls                            # use TLS
  --tlscacert=~/.docker/ca.pem     # TLS CA certificate
  --tlscert=~/.docker/cert.pem     # TLS certificate file
  --tlskey=~/.docker/key.pem       # TLS key file
  --userland-proxy=true            # use a userland proxy for loopback traffic
  --userns-remap                   # user or group for user namespaces
2.4 Modifying the Docker configuration file
Taking c1 as an example, append our custom parameters after ExecStart; make the same change on the other three machines.

[root@c1 ~]# vi /lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# OverlayFS is similar to AUFS, but performs better and uses memory more efficiently.
# Also add the Aliyun docker registry mirror for faster pulls.
ExecStart=/usr/bin/dockerd -s=overlay --registry-mirror=https://7rgqloza.mirror.aliyuncs.com --insecure-registry=localhost:5000 -H unix:///var/run/docker.sock --pidfile=/var/run/docker.pid
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
Restart the docker service so the new configuration takes effect:

[root@c1 ~]# systemctl daemon-reload && systemctl restart docker.service
3. Creating the Swarm Cluster
10.0.0.31 (hostname: c1) will be swarm manager1
10.0.0.32 (hostname: c2) will be swarm manager2
10.0.0.33 (hostname: c3) will be swarm agent1
10.0.0.34 (hostname: c4) will be swarm agent2
3.1 Opening the firewall ports
Before configuring the cluster, open the firewall ports. Copy and paste the following into the shell on all four machines:

firewall-cmd --zone=public --add-port=2377/tcp --permanent && \
firewall-cmd --zone=public --add-port=7946/tcp --permanent && \
firewall-cmd --zone=public --add-port=7946/udp --permanent && \
firewall-cmd --zone=public --add-port=4789/tcp --permanent && \
firewall-cmd --zone=public --add-port=4789/udp --permanent && \
firewall-cmd --reload
Taking c1 as an example, check which ports are now open:

[root@c1 ~]# firewall-cmd --list-ports
4789/tcp 4789/udp 7946/tcp 2377/tcp 7946/udp
3.2 Initializing the swarm cluster and adding the other 3 machines
Initialize the swarm cluster on c1; --listen-addr specifies the listening IP and port.

[root@c1 ~]# docker swarm init --listen-addr 0.0.0.0
Swarm initialized: current node (73ju72f6nlyl9kiib7z5r0bsk) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-47dxwelbdopq8915rjfr0hxe6t9cebsm0q30miro4u4qcwbh1c-4f1xl8ici0o32qfyru9y6wepv \
    10.0.0.31:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Use docker swarm join-token manager to view the token for joining the swarm as a manager.
Checking the result, we can see that there is currently only one node:

[root@c1 ~]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
73ju72f6nlyl9kiib7z5r0bsk *  c1        Ready   Active        Leader
Add the other 3 machines to the cluster with the following command; copy and paste it into the shell on c1:

for N in $(seq 2 4); \
  do ssh c$N \
  docker swarm join \
  --token SWMTKN-1-47dxwelbdopq8915rjfr0hxe6t9cebsm0q30miro4u4qcwbh1c-4f1xl8ici0o32qfyru9y6wepv \
  10.0.0.31:2377 \
;done
Check the cluster nodes again; the other machines have been added to the cluster, and c1 has Leader status.

[root@c1 ~]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
2qn7aw9ihbjphtnm1toaoevq8    c4        Ready   Active
4cxm0w5j3x4mqrj8f1kdrgln5 *  c1        Ready   Active        Leader
4wqpz2v3b71q0ohzdifi94ma9    c2        Ready   Active
9t9ceme3w14o4gfnljtfrkpgp    c3        Ready   Active
To make c2 a manager node of the cluster as well, first view the manager join token on c1:

[root@c1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-47dxwelbdopq8915rjfr0hxe6t9cebsm0q30miro4u4qcwbh1c-b7k3agnzez1bjj3nfz2h93xh0 \
    10.0.0.31:2377
Using the token information from c1, first have c2 leave the cluster, then rejoin it as a manager:

[root@c2 ~]# docker swarm leave
Node left the swarm.
[root@c2 ~]# docker swarm join \
>     --token SWMTKN-1-47dxwelbdopq8915rjfr0hxe6t9cebsm0q30miro4u4qcwbh1c-b7k3agnzez1bjj3nfz2h93xh0 \
>     10.0.0.31:2377
This node joined a swarm as a manager.
Now running docker node ls on either c1 or c2 shows the latest cluster node state; c2's MANAGER STATUS has become Reachable.

[root@c1 ~]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
2qn7aw9ihbjphtnm1toaoevq8    c4        Ready   Active
4cxm0w5j3x4mqrj8f1kdrgln5 *  c1        Ready   Active        Leader
4wqpz2v3b71q0ohzdifi94ma9    c2        Down    Active
9t9ceme3w14o4gfnljtfrkpgp    c3        Ready   Active
ai6peof1e9wyovp8uxn5b2ufe    c2        Ready   Active        Reachable
Because we used docker swarm leave earlier, the old c2 entry shows status Down; it can be deleted with the docker node rm <ID> command.
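As a sketch of that cleanup, using the stale entry's ID from the listing above (substitute the ID that docker node ls shows on your own cluster):

```shell
# delete the leftover "Down" record for the old c2 node; run on a manager node
docker node rm 4wqpz2v3b71q0ohzdifi94ma9
```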
3.3 Creating an overlay network
When we had a single server, all of the application's containers ran on one host, so the containers' networks could reach each other. Now that the cluster has 4 hosts, how do we ensure that containers on different hosts can still reach each other?
The swarm cluster has already solved this problem for us, by using an overlay network.
Before Docker 1.12, a swarm cluster needed an extra key-value store (consul, etcd, etc.) to synchronize network configuration and keep all containers in the same subnet. Docker 1.12 has this store built in, with integrated support for overlay networks.
View the existing networks:

[root@c1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
23ee2bb5a2a1        bridge              bridge              local
fd17ed8db4d8        docker_gwbridge     bridge              local
6878c36aa311        host                host                local
08tt2s4pqf96        ingress             overlay             swarm
7c18e57e24f2        none                null                local
Note that the swarm already has a default overlay network named ingress, used by swarm itself; this article creates a new one.
Create an overlay network named idoall-org:

[root@c1 ~]# docker network create --subnet=10.0.9.0/24 --driver overlay idoall-org
e63ca0d7zcbxqpp4svlv5x04v
[root@c1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5e47ba02a985        bridge              bridge              local
fd17ed8db4d8        docker_gwbridge     bridge              local
6878c36aa311        host                host                local
e63ca0d7zcbx        idoall-org          overlay             swarm
08tt2s4pqf96        ingress             overlay             swarm
7c18e57e24f2        none                null                local
The new network (idoall-org) has been created.
--subnet specifies the subnet for the overlay network; this parameter can also be omitted.
Use docker network inspect idoall-org to view the information of the network we just added:

[root@c1 ~]# docker network inspect idoall-org
[
    {
        "Name": "idoall-org",
        "Id": "e63ca0d7zcbxqpp4svlv5x04v",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.9.0/24",
                    "Gateway": "10.0.9.1"
                }
            ]
        },
        "Internal": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "257"
        },
        "Labels": null
    }
]
3.4 Running containers on the network
Start 3 instances of the alpine image on the idoall-org network:

[root@c1 ~]# docker service create --name idoall-org-test-ping --replicas 3 --network=idoall-org alpine ping baidu.com
avcrdsntx8b8ei091lq5cl76y
[root@c1 ~]# docker service ps idoall-org-test-ping
ID                         NAME                    IMAGE   NODE  DESIRED STATE  CURRENT STATE           ERROR
42vigh5lxkvgge9zo27hfah88  idoall-org-test-ping.1  alpine  c4    Running        Starting 1 seconds ago
aovr8r7r7lykzmxqs30e8s4ee  idoall-org-test-ping.2  alpine  c3    Running        Starting 1 seconds ago
c7pv2o597qycsqzqzgjwwtw8b  idoall-org-test-ping.3  alpine  c1    Running        Running 3 seconds ago
The 3 instances are deployed on the three machines c1, c3 and c4.
You can also use --mode global to run the service on every swarm node; this is covered later.
3.5 Scaling the application
Suppose that while the program is running we find resources are insufficient; we can scale it with scale. There are currently 3 instances; change that to 4:

[root@c1 ~]# docker service scale idoall-org-test-ping=4
idoall-org-test-ping scaled to 4
[root@c1 ~]# docker service ps idoall-org-test-ping
ID                         NAME                    IMAGE   NODE  DESIRED STATE  CURRENT STATE           ERROR
42vigh5lxkvgge9zo27hfah88  idoall-org-test-ping.1  alpine  c4    Running        Running 4 minutes ago
aovr8r7r7lykzmxqs30e8s4ee  idoall-org-test-ping.2  alpine  c3    Running        Running 4 minutes ago
c7pv2o597qycsqzqzgjwwtw8b  idoall-org-test-ping.3  alpine  c1    Running        Running 4 minutes ago
72of5dfm67duccxsdyt1e25qd  idoall-org-test-ping.4  alpine  c2    Running        Running 1 seconds ago
3.6 Pinning a service to specific nodes
In the examples above, no matter how many instances there are, swarm's scheduler decides which node each one runs on. We can restrict a service by using the --constraint parameter when creating it, for example to run a service on c4:

[root@c1 ~]# docker service create \
    --network idoall-org \
    --name idoall-org \
    --constraint 'node.hostname==c4' \
    -p 9000:9000 \
    idoall/golang-revel
Once the service is up, browse http://10.0.0.31:9000/, or any IP from .31 to .34, to see the result; Docker Swarm load-balances automatically, and Docker Swarm's load balancing is covered later.
Depending on your network, pulling the image may be slow; you can monitor the service named idoall-org with the following command:

[root@c1 ~]# watch docker service ps idoall-org
Besides hostname, other node attributes can also be used to build constraint expressions; see the table below for the syntax:

Node attribute   | Matches                    | Example
node.id          | node ID                    | node.id == 2ivku8v2gvtg4
node.hostname    | node hostname              | node.hostname != c2
node.role        | node role: manager         | node.role == manager
node.labels      | user-defined node labels   | node.labels.security == high
engine.labels    | Docker Engine labels       | engine.labels.operatingsystem == ubuntu 14.04
We can also add a label to a machine with the docker node update command, for example:

[root@c1 ~]# docker node update --label-add site=idoall-org c1
[root@c2 ~]# docker node inspect c1
[
    {
        "ID": "4cxm0w5j3x4mqrj8f1kdrgln5",
        "Version": {
            "Index": 108
        },
        "CreatedAt": "2016-12-11T11:13:32.495274292Z",
        "UpdatedAt": "2016-12-11T12:00:05.956367412Z",
        "Spec": {
            "Labels": {
                "site": "idoall-org"
...
]
For an existing service, a constraint can be added with docker service update, for example:

[root@c1 ~]# docker service update registry --constraint-add 'node.labels.site==idoall-org'
3.7 Testing connectivity across the docker swarm network
Run on c1:

[root@c1 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
c683692b0153        alpine:latest       "ping baidu.com"    25 minutes ago      Up 25 minutes                           idoall-org-test-ping.4.c7pv2o597qycsqzqzgjwwtw8b
[root@c1 ~]# docker exec -it c683 sh
/ # ping idoall-org.1.9ne6hxjhvneuhsrhllykrg7zm
PING idoall-org.1.9ne6hxjhvneuhsrhllykrg7zm (10.0.9.8): 56 data bytes
64 bytes from 10.0.9.8: seq=0 ttl=64 time=1.080 ms
64 bytes from 10.0.9.8: seq=1 ttl=64 time=1.349 ms
64 bytes from 10.0.9.8: seq=2 ttl=64 time=1.026 ms
idoall-org.1.9ne6hxjhvneuhsrhllykrg7zm is the name of the container running on c4.
When entering a container with exec, you only need to type the first 4 characters of the container id.
Run on c4:

[root@c4 ~]# docker ps -a
CONTAINER ID        IMAGE                                        COMMAND                  CREATED              STATUS              PORTS               NAMES
1ead9bb757a0        idoall/docker-golang1.7.4-revel0.13:latest   "/usr/bin/supervisord"   About a minute ago   Up 58 seconds                           idoall-org.1.9ne6hxjhvneuhsrhllykrg7zm
033531b30b79        alpine:latest                                "ping baidu.com"         About a minute ago   Up About a minute                       idoall-org-test-ping.1.6st5xvehh7c3bwaxsen3r4gpn
[root@c4 ~]# docker exec -it f49c435c94ea sh
bash-4.3# ping idoall-org-test-ping.4.cirnop0kxbuxiyjh87ii6hh4x
PING idoall-org-test-ping.4.cirnop0kxbuxiyjh87ii6hh4x (10.0.9.6): 56 data bytes
64 bytes from 10.0.9.6: seq=0 ttl=64 time=0.531 ms
64 bytes from 10.0.9.6: seq=1 ttl=64 time=0.700 ms
64 bytes from 10.0.9.6: seq=2 ttl=64 time=0.756 ms
3.8 Testing docker swarm's built-in load balancing
Use the --mode global parameter to create a web service on every node:

[root@c1 ~]# docker service create --name whoami --mode global -p 8000:8000 jwilder/whoami
1u87lrzlktgskt4g6ae30xzb8
[root@c1 ~]# docker service ps whoami
ID                         NAME        IMAGE           NODE  DESIRED STATE  CURRENT STATE           ERROR
cjf5w0pv5bbrph2gcvj508rvj  whoami      jwilder/whoami  c2    Running        Running 16 minutes ago
dokh8j4z0iuslye0qa662axqv   \_ whoami  jwilder/whoami  c3    Running        Running 16 minutes ago
dumjwz4oqc5xobvjv9rosom0w   \_ whoami  jwilder/whoami  c1    Running        Running 16 minutes ago
bbzgdau14p5b4puvojf06gn5s   \_ whoami  jwilder/whoami  c4    Running        Running 16 minutes ago
Run the following command on any of the machines; each request returns a different value, and after 4 requests it cycles back around to the first machine:

[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm 8c2eeb5d420f
[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm 0b56c2a5b2a4
[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm 000982389fa0
[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm db8d3e839de5
[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm 8c2eeb5d420f