Deploying an etcd Distributed Store

1. ETCD Overview

ETCD is a distributed, consistent key-value store, usable for service registration/discovery and shared configuration. It has the following advantages:

  • Simple: instead of the notoriously hard-to-understand Paxos algorithm, etcd reaches consensus with the comparatively simple, easy-to-implement Raft algorithm, and exposes its interface over gRPC
  • Secure: supports TLS communication, and read/write access to keys can be controlled per user
  • High performance: on the order of 10,000 writes per second
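As background on why odd cluster sizes (3, 5, ...) are preferred, Raft needs a majority (quorum) of members to commit a write. A small illustrative Python sketch, not part of the deployment itself:

```python
def quorum(n):
    """Members needed to commit a write in a Raft cluster of size n."""
    return n // 2 + 1

def fault_tolerance(n):
    """Members that can fail while the cluster stays writable."""
    return n - quorum(n)

for n in (1, 3, 5, 7):
    print("size={} quorum={} tolerates={}".format(n, quorum(n), fault_tolerance(n)))
```

Note that a 4-node cluster tolerates no more failures than a 3-node one, which is why even sizes buy nothing.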

2. The overlay Network Mode

When containers on two different hosts communicate, they use the overlay network mode. (The host network mode can also achieve cross-host communication, since containers then talk directly over the hosts' physical IP addresses.) An overlay network gives each container a virtual IP, for example 10.0.9.3. Inside the overlay there is an address that acts like a service gateway; packets are forwarded to the physical server's address and finally reach the other server's IP through routing and switching.

How is overlay implemented for Docker containers?

A service-discovery backend, such as Consul (or etcd, as used here), defines an IP address pool, for example 10.0.9.0/24. Containers obtain their IP addresses from that pool and then communicate through their eth1 interface, which is what makes the cross-host communication work.
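The pool-based allocation described above can be sketched with Python's ipaddress module. The 10.0.9.0/24 pool and the 10.0.9.x addresses come from the text; the allocation logic is only an illustration of what an IPAM driver does:

```python
import ipaddress

pool = ipaddress.ip_network("10.0.9.0/24")
hosts = pool.hosts()          # iterator over usable addresses 10.0.9.1..254
gateway = next(hosts)         # first address plays the gateway role

# hand the next free addresses to containers as they join the network
container_ips = [str(next(hosts)) for _ in range(2)]
print("gateway:", gateway)            # 10.0.9.1
print("containers:", container_ips)   # ['10.0.9.2', '10.0.9.3']
```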

3. Deploying an etcd Cluster

Deploy on node1

[root@node1 ~]# wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
[root@node1 ~]# tar zxf etcd-v3.0.12-linux-amd64.tar.gz
[root@node1 ~]# cd etcd-v3.0.12-linux-amd64/
[root@node1 etcd-v3.0.12-linux-amd64]# nohup ./etcd --name docker-node1 --initial-advertise-peer-urls http://10.211.55.12:2380 \
> --listen-peer-urls http://10.211.55.12:2380 \
> --listen-client-urls http://10.211.55.12:2379,http://127.0.0.1:2379 \
> --advertise-client-urls http://10.211.55.12:2379 \
> --initial-cluster-token etcd-cluster \
> --initial-cluster docker-node1=http://10.211.55.12:2380,docker-node2=http://10.211.55.13:2380 \
> --initial-cluster-state new&
[1] 32505

Deploy on node2

[root@node2 ~]# wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
[root@node2 ~]# tar zxf etcd-v3.0.12-linux-amd64.tar.gz
[root@node2 ~]# cd etcd-v3.0.12-linux-amd64/
[root@node2 etcd-v3.0.12-linux-amd64]# nohup ./etcd --name docker-node2 --initial-advertise-peer-urls http://10.211.55.13:2380 \
> --listen-peer-urls http://10.211.55.13:2380 \
> --listen-client-urls http://10.211.55.13:2379,http://127.0.0.1:2379 \
> --advertise-client-urls http://10.211.55.13:2379 \
> --initial-cluster-token etcd-cluster \
> --initial-cluster docker-node1=http://10.211.55.12:2380,docker-node2=http://10.211.55.13:2380 \
> --initial-cluster-state new&
[1] 19240

Check the cluster's health

[root@node2 etcd-v3.0.12-linux-amd64]# ./etcdctl cluster-health
member 98da03f1eca9d9d is healthy: got healthy result from http://10.211.55.12:2379
member 63a987e985acb514 is healthy: got healthy result from http://10.211.55.13:2379
cluster is healthy

Parameter reference:
● --data-dir  directory for the node's data; defaults to the current directory if unset. It holds the node ID, cluster ID, initial cluster configuration, and snapshot files; if --wal-dir is not set, the WAL files are stored here as well
● --wal-dir  directory for the node's WAL files; when set, WAL files are kept separate from the other data files
● --name  the node's name
● --initial-advertise-peer-urls  the URL advertised to the other cluster members; TCP port 2380 is used for cluster communication
● --listen-peer-urls  the URL to listen on for traffic from other members
● --advertise-client-urls  the URL advertised to clients, i.e. the service URL; TCP port 2379 listens for client requests
● --initial-cluster-token  the cluster token (ID)
● --initial-cluster  all members of the initial cluster
● --initial-cluster-state  cluster state: new for a newly created cluster, existing for an already existing one
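To keep the two nodes' launch commands consistent, the cluster-wide flags can be generated from a single map of member names to IPs. The flag names below are etcd's; the helper itself is hypothetical, not something etcd ships:

```python
# Member names and IPs are those used on node1/node2 above.
members = {
    "docker-node1": "10.211.55.12",
    "docker-node2": "10.211.55.13",
}

initial_cluster = ",".join(
    "{}=http://{}:2380".format(name, ip) for name, ip in members.items()
)

def etcd_args(name):
    """Build the flag list for one member, mirroring the commands above."""
    ip = members[name]
    return [
        "--name", name,
        "--initial-advertise-peer-urls", "http://{}:2380".format(ip),
        "--listen-peer-urls", "http://{}:2380".format(ip),
        "--listen-client-urls", "http://{}:2379,http://127.0.0.1:2379".format(ip),
        "--advertise-client-urls", "http://{}:2379".format(ip),
        "--initial-cluster-token", "etcd-cluster",
        "--initial-cluster", initial_cluster,
        "--initial-cluster-state", "new",
    ]

print(" ".join(etcd_args("docker-node1")))
```

The key point the helper encodes: --initial-cluster is identical on every member, while the listen/advertise URLs are per-member.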

4. Managing the etcd Cluster

1. Check the version

# etcdctl  --version
# etcdctl  --help
2. Check cluster health

# etcdctl cluster-health
3. List cluster members

# etcdctl  member  list
Run this on any node to see the cluster's members, including which one is the leader.
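The member list output can also be consumed mechanically. The sample below is reconstructed from the member IDs shown earlier in the usual v2-era etcdctl line format (an assumption, since the original does not show a full member list), and the parser is a hypothetical sketch:

```python
sample = (
    "98da03f1eca9d9d: name=docker-node1 peerURLs=http://10.211.55.12:2380 "
    "clientURLs=http://10.211.55.12:2379 isLeader=true\n"
    "63a987e985acb514: name=docker-node2 peerURLs=http://10.211.55.13:2380 "
    "clientURLs=http://10.211.55.13:2379 isLeader=false\n"
)

def find_leader(output):
    """Return the name of the member whose isLeader field is true."""
    for line in output.splitlines():
        _, _, rest = line.partition(": ")
        fields = dict(kv.split("=", 1) for kv in rest.split())
        if fields.get("isLeader") == "true":
            return fields["name"]
    raise ValueError("no leader in output")

print(find_leader(sample))  # docker-node1
```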


4. Update a node
If you want to update a node's IP (peerURLs), you first need that node's member ID:

# etcdctl member list
# etcdctl member update memberID http://ip:2380
5. Remove a node (scaling the etcd cluster down)

#  etcdctl  member  list
#  etcdctl  member  remove  memberID
#  etcdctl  member  list
#  ps -ef|grep etcd          // then kill the etcd process on the removed node
6.增長一個新節點(Etcd集羣成員的伸) 
注意:步驟很重要,否則會報集羣ID不匹配

# etcdctl member add --help
a. Register the target node with the cluster

# etcdctl member add etcd3 http://10.1.2.174:2380

Added member named etcd3 with ID 28e0d98e7ec15cd4 to cluster
ETCD_NAME="etcd3"
ETCD_INITIAL_CLUSTER="etcd0=http://10.1.2.61:2380,etcd1=http://10.1.2.172:2380,etcd2=http://10.1.2.173:2380,etcd3=http://10.1.2.174:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
b. List the members again; the new member etcd3 shows as unstarted

# etcdctl member list
d4f257d2b5f99b64[unstarted]:peerURLs=http://10.1.2.174:2380
c. Clear the data-dir on the target node etcd3
After a membership change the cluster's member information is updated, and the new node must join as a completely fresh member. If its data-dir already contains data, etcd will read it at startup and keep using the old member ID, which prevents it from joining the cluster, so be sure to empty the new node's data-dir.

# rm  -rf  /path/to/etcd/data

d. Start the newly added member on the target node
The initial cluster state here must be set to existing. If it is new, etcd generates a fresh member ID that does not match the ID produced by the member add step, and the log will report a member-ID mismatch.

# vim  etcd3.sh
Edit etcd3.sh: point --advertise-client-urls and --initial-advertise-peer-urls at etcd3, and change --initial-cluster-state to existing.

# nohup   ./etcd3.sh  &

# etcdctl  member  list

5. Using etcd as Docker's Distributed Store

On node1

[root@node1 ~]# service docker stop
[root@node1 ~]# /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://10.211.55.12:2379 --cluster-advertise=10.211.55.12:2375&

On node2

[root@node2 ~]# service docker stop
[root@node2 ~]# /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://10.211.55.13:2379 --cluster-advertise=10.211.55.13:2375&
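The same settings can instead live in /etc/docker/daemon.json rather than on the dockerd command line. A config sketch for node1 (cluster-store and cluster-advertise are legacy daemon options, removed in modern Docker; adjust the IPs per node):

```json
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"],
  "cluster-store": "etcd://10.211.55.12:2379",
  "cluster-advertise": "10.211.55.12:2375"
}
```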

6. Creating an overlay Network

Create an overlay network named demo on node1:

[root@node1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
550d5b450fe3        bridge              bridge              local
cca92be73cc6        host                host                local
d21360748bfc        none                null                local
[root@node1 ~]# docker network create -d overlay demo
97e959031044ec634d61d2e721cb0348d7ff852af3f575d75d2988c07e0f9846
[root@node1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
550d5b450fe3        bridge              bridge              local
97e959031044        demo                overlay             global
cca92be73cc6        host                host                local
d21360748bfc        none                null                local
[root@node1 ~]# docker network inspect demo
[
    {
        "Name": "demo",
        "Id": "97e959031044ec634d61d2e721cb0348d7ff852af3f575d75d2988c07e0f9846",
        "Created": "2018-08-01T22:22:01.958142468+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

On node2, we can see that the demo overlay network has been created automatically in sync:

[root@node2 etcd-v3.0.12-linux-amd64]# cd
[root@node2 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c6af37ef6765        bridge              bridge              local
97e959031044        demo                overlay             global
de15cdab46b0        host                host                local
cc3ec612fd29        none                null                local


# This confirms the etcd distributed store is working: the data on both nodes is in sync

Looking at etcd's keys and values, we can see that the demo network was propagated from node1 to node2 through etcd:

[root@node2 etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker
/docker/nodes
/docker/network

[root@node2 etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/nodes
/docker/nodes/10.211.55.12:2375
/docker/nodes/10.211.55.13:2375
[root@node2 etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/network/v1.0/network
/docker/network/v1.0/network/97e959031044ec634d61d2e721cb0348d7ff852af3f575d75d2988c07e0f9846
[root@node2 etcd-v3.0.12-linux-amd64]# ./etcdctl get /docker/network/v1.0/network/97e959031044ec634d61d2e721cb0348d7ff852af3f575d75d2988c07e0f9846
{"addrSpace":"GlobalDefault","attachable":false,"configFrom":"","configOnly":false,"created":"2018-08-01T22:22:01.958142468+08:00","enableIPv6":false,"generic":{"com.docker.network.enable_ipv6":false,"com.docker.network.generic":{}},"id":"97e959031044ec634d61d2e721cb0348d7ff852af3f575d75d2988c07e0f9846","inDelete":false,"ingress":false,"internal":false,"ipamOptions":{},"ipamType":"default","ipamV4Config":"[{\"PreferredPool\":\"\",\"SubPool\":\"\",\"Gateway\":\"\",\"AuxAddresses\":null}]","ipamV4Info":"[{\"IPAMData\":\"{\\\"AddressSpace\\\":\\\"GlobalDefault\\\",\\\"Gateway\\\":\\\"10.0.0.1/24\\\",\\\"Pool\\\":\\\"10.0.0.0/24\\\"}\",\"PoolID\":\"GlobalDefault/10.0.0.0/24\"}]","labels":{},"loadBalancerIP":"","name":"demo","networkType":"overlay","persist":true,"postIPv6":false,"scope":"global"}
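The value stored under /docker/network/... is JSON, with ipamV4Info itself JSON-encoded inside JSON, which is why the dump above is full of escaped quotes. A Python sketch that rebuilds a trimmed version of the record and decodes it layer by layer:

```python
import json

# Rebuild a trimmed version of the record etcd returned above.
ipam_data = json.dumps({"AddressSpace": "GlobalDefault",
                        "Gateway": "10.0.0.1/24",
                        "Pool": "10.0.0.0/24"})
record = json.dumps({
    "name": "demo",
    "networkType": "overlay",
    "scope": "global",
    "ipamV4Info": json.dumps([{"IPAMData": ipam_data,
                               "PoolID": "GlobalDefault/10.0.0.0/24"}]),
})

# Decode layer by layer, mirroring the nesting in the stored value
net = json.loads(record)
pool = json.loads(json.loads(net["ipamV4Info"])[0]["IPAMData"])["Pool"]
print(net["name"], net["networkType"], pool)  # demo overlay 10.0.0.0/24
```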

7. Creating Containers Attached to the demo Network

node1

[root@node1 ~]# docker run -d --name test1 --net demo busybox sh -c "while true; do sleep 3600; done"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
8c5a7da1afbc: Pull complete
Digest: sha256:cb63aa0641a885f54de20f61d152187419e8f6b159ed11a251a09d115fdff9bd
Status: Downloaded newer image for busybox:latest
4bc3ab1cb7d838e8ef314618e6d3d878e744ef7842196a00b3999e6b6fe8402f
ERRO[2018-08-01T22:26:33.105124642+08:00] Failed to deserialize netlink ndmsg: invalid argument
INFO[0379] shim docker-containerd-shim started           address="/containerd-shim/moby/4bc3ab1cb7d838e8ef314618e6d3d878e744ef7842196a00b3999e6b6fe8402f/shim.sock" debug=false pid=6630
[root@node1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
4bc3ab1cb7d8        busybox             "sh -c 'while true; …"   26 seconds ago      Up 24 seconds                           test1
[root@node1 ~]# docker exec test1 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02
          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02
          inet addr:172.18.0.2  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:31 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4045 (3.9 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

node2

[root@node2 ~]# docker run -d --name test1 --net demo busybox sh -c "while true; do sleep 3600; done"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
56bec22e3559: Pull complete
Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
Status: Downloaded newer image for busybox:latest
fad6dc6538a85d3dcc958e8ed7b1ec3810feee3e454c1d3f4e53ba25429b290b
docker: Error response from daemon: service endpoint with name test1 already exists.    # an endpoint named test1 already exists on this network
[root@node2 ~]# docker run -d --name test2 --net demo busybox sh -c "while true; do sleep 3600; done"
9d494a2f66a69e6b861961d0c6af2446265bec9b1d273d7e70d0e46eb2e98d20

Verify connectivity

[root@node2 etcd-v3.0.12-linux-amd64]# docker exec -it test2 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:03
          inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02
          inet addr:172.18.0.2  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:32 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4110 (4.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          
          
          
[root@node1 ~]# docker exec test1 sh -c "ping 10.0.0.3"
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.698 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=1.034 ms
64 bytes from 10.0.0.3: seq=2 ttl=64 time=1.177 ms
64 bytes from 10.0.0.3: seq=3 ttl=64 time=0.708 ms
64 bytes from 10.0.0.3: seq=4 ttl=64 time=0.651 ms