Docker-05: Cross-Host Overlay Networking

1. Deploying a Multi-Container Application

Application overview: a simple Python program that connects to Redis to read data and also writes data back. Everything in this section runs on a single host, as preparation for the cross-host networking that follows.

 

1.1 Prepare the Python application

Note that the Redis host the program connects to is read from the REDIS_HOST environment variable! A quick way to try this outside Docker is sketched after the listing.

[root@docker01 chapter4]# cat app.py 
from flask import Flask
from redis import Redis
import os
import socket

app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', '127.0.0.1'), port=6379)


@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Container World! I have been seen %s times and my hostname is %s.\n' % (redis.get('hits'),socket.gethostname())


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
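
If you want to try the program outside of a container first, a quick local run also works. This is only a sketch and assumes Python 2.7 with the flask and redis packages installed via pip, plus a Redis server listening on 127.0.0.1:

#REDIS_HOST can be overridden per run; without it the code falls back to 127.0.0.1
pip install flask redis
REDIS_HOST=127.0.0.1 python app.py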

1.2 Write the Dockerfile and build the image

  • Step 1: Write the Dockerfile
FROM python:2.7
LABEL maintainer="this is test message"
COPY . /app/
WORKDIR /app
RUN pip install flask redis
EXPOSE 5000
CMD ["python","app.py"]
  • Step 2: Build the image from the Dockerfile (a quick check of the result is sketched below)
docker build -t flask-redis .
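
A quick optional check that the build produced the expected image (column layout may differ slightly between Docker versions):

#the flask-redis repository should now appear with a latest tag
docker image ls flask-redis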

1.3 Run the two containers

  • Step 1: Run a Redis container
[root@docker01 chapter4]# docker run -d --name redis redis
  • Step 2: Run the app container (what --link provides is sketched after Step 3's output)
#set the REDIS_HOST environment variable with -e
docker run -d -p 5000:5000 --link redis -e REDIS_HOST=redis --name flask-redis flask-redis
  • Step 3: Check the result
#from inside the container, request port 5000
root@a116f14ec6c0:/app# curl 127.0.0.1:5000
Hello Container World! I have been seen 1 times and my hostname is a116f14ec6c0.
root@a116f14ec6c0:/app# curl 127.0.0.1:5000
Hello Container World! I have been seen 2 times and my hostname is a116f14ec6c0.
root@a116f14ec6c0:/app# curl 127.0.0.1:5000
Hello Container World! I have been seen 3 times and my hostname is a116f14ec6c0.
root@a116f14ec6c0:/app# curl 127.0.0.1:5000
Hello Container World! I have been seen 4 times and my hostname is a116f14ec6c0.
root@a116f14ec6c0:/app# curl 127.0.0.1:5000
Hello Container World! I have been seen 5 times and my hostname is a116f14ec6c0.

#from outside, request port 5000 on the Docker host
[root@docker01 chapter4]# curl 192.168.1.38:5000
Hello Container World! I have been seen 6 times and my hostname is a116f14ec6c0.
[root@docker01 chapter4]# curl 192.168.1.38:5000
Hello Container World! I have been seen 7 times and my hostname is a116f14ec6c0.
[root@docker01 chapter4]# curl 192.168.1.38:5000
Hello Container World! I have been seen 8 times and my hostname is a116f14ec6c0.
[root@docker01 chapter4]# curl 192.168.1.38:5000
Hello Container World! I have been seen 9 times and my hostname is a116f14ec6c0.
[root@docker01 chapter4]# curl 192.168.1.38:5000
Hello Container World! I have been seen 10 times and my hostname is a116f14ec6c0.
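
For reference, what --link did in Step 2 can be seen from inside the app container: it adds a hosts entry for the redis container, and -e sets the REDIS_HOST variable. A small sketch (the exact output will vary):

#"redis" resolves through an entry that --link added to /etc/hosts
docker exec flask-redis cat /etc/hosts
#REDIS_HOST=redis was set with -e on docker run
docker exec flask-redis env | grep REDIS_HOST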

 

 2. Docker Cross-Host Communication

Lab environment:

 

No.  Hostname    IP address
1    docker01    192.168.1.38
2    docker02    192.168.1.39


2.1 Overlay network overview

  A Docker overlay network needs a key-value store to hold network state, including Network, Endpoint and IP information. Consul, etcd and ZooKeeper are all key-value stores supported by Docker. This section shows how to set up both etcd and Consul; only one of the two is needed.

2.2 Set up the etcd store

  • Step 1: Run the following commands on docker01
cd /usr/local/src/
wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
tar zxvf etcd-v3.0.12-linux-amd64.tar.gz
cd etcd-v3.0.12-linux-amd64
./etcd --name docker01 --initial-advertise-peer-urls http://192.168.1.38:2380 \
--listen-peer-urls http://192.168.1.38:2380 \
--listen-client-urls http://192.168.1.38:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.1.38:2379 \
--initial-cluster-token etcd-cluster \
--initial-cluster docker01=http://192.168.1.38:2380,docker02=http://192.168.1.39:2380 \
--initial-cluster-state new &
  • Step 2: Run the following commands on docker02
cd /usr/local/src/
wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
tar zxvf etcd-v3.0.12-linux-amd64.tar.gz
cd etcd-v3.0.12-linux-amd64
./etcd --name docker02 --initial-advertise-peer-urls http://192.168.1.39:2380 \
--listen-peer-urls http://192.168.1.39:2380 \
--listen-client-urls http://192.168.1.39:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.1.39:2379 \
--initial-cluster-token etcd-cluster \
--initial-cluster docker01=http://192.168.1.38:2380,docker02=http://192.168.1.39:2380 \
--initial-cluster-state new &
  • Step 3: Check the etcd cluster health on both nodes
#check on docker01
[root@docker01 etcd-v3.0.12-linux-amd64]# cd /usr/local/src/etcd-v3.0.12-linux-amd64
[root@docker01 etcd-v3.0.12-linux-amd64]# ./etcdctl cluster-health
member 54938145269cc13b is healthy: got healthy result from http://192.168.1.39:2379
member d243f77ba7647e92 is healthy: got healthy result from http://192.168.1.38:2379
cluster is healthy

#check on docker02
[root@docker02 etcd-v3.0.12-linux-amd64]# cd /usr/local/src/etcd-v3.0.12-linux-amd64
[root@docker02 etcd-v3.0.12-linux-amd64]# ./etcdctl cluster-health
member 54938145269cc13b is healthy: got healthy result from http://192.168.1.39:2379
member d243f77ba7647e92 is healthy: got healthy result from http://192.168.1.38:2379
cluster is healthy
  • Step 4: Edit /etc/docker/daemon.json
#docker01 after the change
{
  "registry-mirrors": ["https://f0lt06pg.mirror.aliyuncs.com"],
  "dns": ["8.8.8.8","223.5.5.5"],
  "data-root": "/data/docker",
  "cluster-store": "etcd://192.168.1.38:2379",
  "cluster-advertise": "192.168.1.38:2375"
}


#docker02 after the change
{
  "registry-mirrors": ["https://f0lt06pg.mirror.aliyuncs.com"],
  "dns": ["8.8.8.8","223.5.5.5"],
  "data-root": "/data/docker",
  "cluster-store": "etcd://192.168.1.39:2379",
  "cluster-advertise": "192.168.1.39:2375"
}
  • Step 5: Restart the Docker service (a verification sketch follows the commands below)
systemctl daemon-reload
systemctl restart docker.service
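
After the restart, two quick checks can confirm that the daemon picked up the key-value store. This is a sketch: the exact docker info fields depend on the Docker version, and etcdctl here is the v2 client shipped in the tarball.

#the daemon should report the configured cluster store and advertise address
docker info | grep -i cluster
#exercise the etcd cluster itself with a throwaway key
cd /usr/local/src/etcd-v3.0.12-linux-amd64
./etcdctl set /test "hello"
./etcdctl get /test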

2.3 Set up Consul (use either etcd or Consul, not both)

  • Step 1: Edit /etc/docker/daemon.json
#docker01 configuration
[root@docker01 docker]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://f0lt06pg.mirror.aliyuncs.com"],
  "dns": ["8.8.8.8","223.5.5.5"],
  "data-root": "/data/docker",
  "cluster-store": "consul://192.168.1.38:8500"
}

#docker02 configuration
[root@docker02 docker]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://f0lt06pg.mirror.aliyuncs.com"],
  "dns": ["8.8.8.8","223.5.5.5"],
  "data-root": "/data/docker",
  "cluster-store": "consul://192.168.1.38:8500",
  "cluster-advertise": "192.168.1.39:2375"
  • Step 2: Run the Consul container on docker01
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap
  • Step 3: Restart the Docker service
sudo systemctl daemon-reload
sudo systemctl restart docker
  • Step 4: Verify in a browser (a command-line check with curl is sketched below)

URL: http://192.168.1.38:8500
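
The same verification can be done from the command line against Consul's HTTP API. A sketch: /v1/catalog/nodes is a standard Consul endpoint, while the docker/nodes key prefix is an assumption about where the daemons register themselves after the restart.

#list the Consul cluster members
curl http://192.168.1.38:8500/v1/catalog/nodes
#list the keys written by the Docker daemons, if any
curl 'http://192.168.1.38:8500/v1/kv/docker/nodes?keys'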

2.4 Create the overlay network

  • Step 1: Create an overlay network named demo on docker01
[root@docker01 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9c92b0248bc2        bridge              bridge              local
d12ebb4b73d8        host                host                local
c2fb11041077        none                null                local
[root@docker01 ~]# docker network create -d overlay demo
41149db31f6e74074b015c29a234cfda680a882717e4372e5499df175ee3b34d
[root@docker01 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9c92b0248bc2        bridge              bridge              local
41149db31f6e        demo                overlay             global
d12ebb4b73d8        host                host                local
c2fb11041077        none                null                local
  • Step 2: The overlay network can already be seen on docker02
[root@docker02 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b26e09d0d6a9        bridge              bridge              local
41149db31f6e        demo                overlay             global
b111f83b1407        host                host                local
3ae0f95a75f8        none                null                local
  • Step 3: Inspect the demo network (choosing the subnet explicitly instead of the default is sketched at the end of this section)
[root@docker01 ~]# docker network inspect demo
[
    {
        "Name": "demo",
        "Id": "41149db31f6e74074b015c29a234cfda680a882717e4372e5499df175ee3b34d",
        "Created": "2019-03-31T00:32:44.9129614+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
  • Step 4: If the network was created with etcd as the backend, its key-value entries can be inspected
[root@docker01 etcd-v3.0.12-linux-amd64]# ./etcdctl ls
/docker
[root@docker01 etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker
/docker/nodes
/docker/network
[root@docker01 etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/nodes/
/docker/nodes/192.168.1.39:2375
/docker/nodes/192.168.1.38:2375
[root@docker01 etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/network/
/docker/network/v1.0
[root@docker01 etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/network/v1.0/
/docker/network/v1.0/idm
/docker/network/v1.0/overlay
/docker/network/v1.0/network
/docker/network/v1.0/endpoint_count
/docker/network/v1.0/endpoint
/docker/network/v1.0/ipam
[root@docker01 etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/network/v1.0/network/
/docker/network/v1.0/network/41149db31f6e74074b015c29a234cfda680a882717e4372e5499df175ee3b34d
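
The inspect output above shows that demo was given the default 10.0.0.0/24 subnet. If you prefer to pick the address range yourself, docker network create accepts it explicitly. A sketch, where demo2 is just an illustrative name:

#create a second overlay with an explicit subnet and gateway
docker network create -d overlay --subnet 10.0.1.0/24 --gateway 10.0.1.1 demo2
docker network inspect demo2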

2.5 Using the overlay network

This experiment reuses the flask-redis setup from section 1: deploy the redis container on docker01 and the app container on docker02, and make sure everything still works.

  • Step 1: On docker01, run the redis container and attach it to the demo network
docker run -d --name redis --network demo redis
  • Step 2: On docker02, run the flask-redis container, also attached to the demo network
docker run -d -p 5000:5000 --network demo -e REDIS_HOST=redis --name flask-redis flask-redis 
  • Step 3: Request port 5000 on docker02 and check the result. If the output matches section 1, the cross-host network is working (a name-resolution check is sketched after the output).
[root@docker01 etcd-v3.0.12-linux-amd64]# curl 192.168.1.39:5000
Hello Container World! I have been seen 1 times and my hostname is 420016e250d4.
[root@docker01 etcd-v3.0.12-linux-amd64]# curl 192.168.1.39:5000
Hello Container World! I have been seen 2 times and my hostname is 420016e250d4.
[root@docker01 etcd-v3.0.12-linux-amd64]# curl 192.168.1.39:5000
Hello Container World! I have been seen 3 times and my hostname is 420016e250d4.
[root@docker01 etcd-v3.0.12-linux-amd64]# curl 192.168.1.39:5000
Hello Container World! I have been seen 4 times and my hostname is 420016e250d4.
[root@docker01 etcd-v3.0.12-linux-amd64]# curl 192.168.1.39:5000
Hello Container World! I have been seen 5 times and my hostname is 420016e250d4.
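
To see that the overlay network's built-in DNS is doing the cross-host work, you can resolve the redis name from inside the app container on docker02. A sketch: the address should fall in the demo network's 10.0.0.0/24 subnet even though the redis container runs on docker01.

#run on docker02; the python interpreter is already available in the image
docker exec flask-redis python -c "import socket; print(socket.gethostbyname('redis'))"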

 That wraps up this introduction to Docker cross-host communication~~
