Reposted from: http://www.cnblogs.com/tynia/p/k8s-cluster.html
Docker: an open-source application container engine that packages an application into a lightweight, portable, self-sufficient container.
Kubernetes: a Docker container cluster management system open-sourced by Google; it provides resource scheduling, deployment, service discovery, and scaling for containerized applications.
Etcd: a highly available key-value store developed and maintained by CoreOS, used mainly for shared configuration and service discovery.
Flannel: an overlay-network tool designed by the CoreOS team for Kubernetes; its goal is to give every host in a Kubernetes cluster a complete subnet of its own.
| Host | Services | Role |
|---|---|---|
| 192.168.39.40 (CentOS 7.1) | etcd, docker, flannel, kube-apiserver, kube-controller-manager, kube-scheduler | k8s-master |
| 192.168.39.42 (CentOS 7.1) | etcd, docker, flannel, kubelet, kube-proxy | minion1 |
| 192.168.39.43 (CentOS 7.1) | etcd, docker, flannel, kubelet, kube-proxy | minion2 |
On CentOS, install the etcd, docker, and flannel rpm packages with yum, for example:
# yum install etcd flannel docker -y
etcd and flannel install easily, with no dependencies of their own. docker does have dependencies, so its dependency packages must be installed first for the install to succeed. This is not the focus of this article, so we won't dwell on it.
All three machines must have etcd, docker, and flannel installed.
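To confirm all three packages actually landed on a machine, query rpm directly (package names as in the yum command above):
# rpm -q etcd flannel docker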
Download the Kubernetes 1.5 binary release. Once it's downloaded, run the following (using 192.168.39.40 as the example):
# tar zxvf kubernetes1.5.tar.gz                    # unpack the release tarball
# cd kubernetes/server
# tar zxvf kubernetes-server-linux-amd64.tar.gz    # unpack the server (master) binaries
# cd kubernetes/server/bin/
# cp kube-apiserver kube-controller-manager kubectl kube-scheduler /usr/bin    # copy the binaries the master needs into /usr/bin (adjusting PATH works just as well)
# scp kubelet kube-proxy root@192.168.39.42:~      # send the binaries the minions need to each minion via scp
# scp kubelet kube-proxy root@192.168.39.43:~
Note: the kubernetes/server directory may not contain kubernetes-server-linux-amd64.tar.gz; in that case, download that package separately and extract it in the same directory.
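With the binaries in place on the master, a quick sanity check that they run from /usr/bin (both version flags below exist in the 1.5 binaries):
# kubectl version --client
# kube-apiserver --version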
Next, set up the etcd cluster. Modify /etc/etcd/etcd.conf on all three machines; the listing below is the copy from 192.168.39.42 (etcd-2). On the other two machines, set ETCD_NAME and the two advertise URLs to that host's own name and IP:
# [member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/data/etcd/"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"      # the default, commented out
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"    # the default, commented out
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.39.42:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
ETCD_INITIAL_CLUSTER="etcd-1=http://192.168.39.40:2380,etcd-2=http://192.168.39.42:2380,etcd-3=http://192.168.39.43:2380"
# the line above defines the three-member etcd cluster
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster1"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.39.42:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
ETCD_DEBUG="true"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
ETCD_LOG_PACKAGE_LEVELS="etcdserver=WARNING"
Then modify the etcd service unit on all three machines: /usr/lib/systemd/system/etcd.service. After the changes, the file reads:
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Run on every machine:
# systemctl enable etcd.service
# systemctl start etcd.service
Then pick one of the machines and run:
# etcdctl set /cluster "example-k8s"
and on another machine run:
# etcdctl get /cluster
If it returns "example-k8s", the etcd cluster has been deployed successfully.
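As an extra check (optional; these are the etcd v2 commands shipped with this rpm), ask any member about cluster health and membership:
# etcdctl cluster-health    # each of the three members should report healthy
# etcdctl member list       # should list etcd-1, etcd-2 and etcd-3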
Next, configure docker. On each machine, set the registry options in /etc/sysconfig/docker (the file referenced by the service unit below):
ADD_REGISTRY="--add-registry docker.midea.registry.hub:10050"
DOCKER_OPTS="--insecure-registry docker.midea.registry.hub:10050"
INSECURE_REGISTRY="--insecure-registry docker.midea.registry.hub:10050"
These entries give the address and service port of the local registry; they are picked up by docker's startup options. For how to build the registry itself, see the previous article. Then modify /usr/lib/systemd/system/docker.service; after the changes, the file reads:
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
# Note: this is a pitfall on CentOS. When docker starts, systemd may fail to obtain
# docker's pid, which can keep the flannel service from starting later. The
# `exec -a docker` wrapper below (the part highlighted in the original post) lets
# systemd track docker's pid.
ExecStart=/bin/sh -c 'exec -a docker /usr/bin/docker-current daemon \
          --exec-opt native.cgroupdriver=systemd \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          2>&1 | /usr/bin/forward-journald -tag docker'
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
MountFlags=slave
Restart=on-abnormal
StandardOutput=null
StandardError=null

[Install]
WantedBy=multi-user.target
Then run on each machine:
# systemctl enable docker.service
# systemctl start docker
Checking docker's state is simple; run
# docker ps
and check that it lists the usual metadata columns for running containers (nothing is running yet, so only the header line appears):
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
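Optionally, exercise the daemon end to end by running a throwaway container (this assumes a small image such as busybox is reachable, either from the local registry or Docker Hub):
# docker run --rm busybox echo "docker is working"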
Next, configure flannel on each machine in /etc/sysconfig/flanneld (the config file installed by the flannel rpm):
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://192.168.39.40:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/k8s/network"    # a directory inside etcd

# Any additional options that you want to pass
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-endpoints=http://192.168.39.40:2379"
Then run:
# etcdctl mkdir /k8s/network
# etcdctl set /k8s/network/config '{"Network":"172.100.0.0/16"}'
This key tells flannel that every container docker runs should get an address inside the 172.100.0.0/16 range.
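You can read the key back to confirm it was written:
# etcdctl get /k8s/network/config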
# systemctl enable flanneld.service
# systemctl stop docker               # stop docker for now; starting flanneld will pull it back up
# systemctl start flanneld.service
# systemctl start docker
If nothing went wrong, docker comes back up cleanly. flanneld must be started before docker here, so that docker picks up the subnet flannel allocated.
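To see which subnet flanneld leased on this host (an optional check; the file path is flannel's default and the etcd key is the one configured above), look at flannel's subnet file and the per-host entries in etcd:
# cat /run/flannel/subnet.env         # FLANNEL_SUBNET should fall inside 172.100.0.0/16
# etcdctl ls /k8s/network/subnets     # one entry per host in the overlay
Running ifconfig should now show flannel0 and docker0 on matching subnets: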
# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1472
inet 172.100.28.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:86ff:fe81:6892 prefixlen 64 scopeid 0x20<link>
ether 02:42:86:81:68:92 txqueuelen 0 (Ethernet)
RX packets 29 bytes 2013 (1.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25 bytes 1994 (1.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.39.40 netmask 255.255.255.0 broadcast 192.168.39.255
inet6 fe80::f816:3eff:fe43:21ac prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:43:21:ac txqueuelen 1000 (Ethernet)
RX packets 13790001 bytes 3573763877 (3.3 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13919888 bytes 1320674626 (1.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 172.100.28.0 netmask 255.255.0.0 destination 172.100.28.0
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 120 (120.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 65311 bytes 5768287 (5.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 65311 bytes 5768287 (5.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
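Note that docker0 (172.100.28.1/24) sits inside the flannel0 network (172.100.0.0/16). To confirm the overlay actually spans hosts, ping this docker0 address from one of the other machines; it should answer once flannel's routes are in place:
# ping -c 3 172.100.28.1    # run from 192.168.39.42 or 192.168.39.43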
With that, the basic environment is in place; next, deploy and start the Kubernetes services themselves. On the master (192.168.39.40), create a startup script, start_k8s_master.sh:
#! /bin/sh
# firstly, start etcd
systemctl restart etcd
# secondly, start flanneld
systemctl restart flanneld
# then, start docker
systemctl restart docker
# start the main server of k8s master
nohup kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --cors_allowed_origins=.* --etcd_servers=http://192.168.39.40:2379 --v=1 --logtostderr=false --log_dir=/var/log/k8s/apiserver --service-cluster-ip-range=172.100.0.0/16 &
nohup kube-controller-manager --master=192.168.39.40:8080 --enable-hostpath-provisioner=false --v=1 --logtostderr=false --log_dir=/var/log/k8s/controller-manager &
nohup kube-scheduler --master=192.168.39.40:8080 --v=1 --logtostderr=false --log_dir=/var/log/k8s/scheduler &
Then make it executable:
# chmod u+x start_k8s_master.sh
Since the install step already pushed kubelet and kube-proxy to the minion machines (the cluster was quietly laid out back then), create the corresponding script for the minions, start_k8s_minion.sh:
#! /bin/sh
# firstly, start etcd
systemctl restart etcd
# secondly, start flanneld
systemctl restart flanneld
# then, start docker
systemctl restart docker
# start the minion
# use this minion's own IP for --hostname_override (192.168.39.43 on minion2)
nohup kubelet --address=0.0.0.0 --port=10250 --v=1 --log_dir=/var/log/k8s/kubelet --hostname_override=192.168.39.42 --api_servers=http://192.168.39.40:8080 --logtostderr=false &
nohup kube-proxy --master=192.168.39.40:8080 --log_dir=/var/log/k8s/proxy --v=1 --logtostderr=false &
Then make it executable:
# chmod u+x start_k8s_minion.sh
and send the script to each minion host. On the master, run:
# ./start_k8s_master.sh
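Once it finishes, the apiserver should answer on its insecure port; a quick check:
# curl http://192.168.39.40:8080/version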
On the minion hosts, run:
# ./start_k8s_minion.sh
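On a minion, kubelet's local health endpoint should now respond (10248 is kubelet's default healthz port):
# curl http://localhost:10248/healthz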
Then, on the master, run:
# kubectl get node
NAME            STATUS    AGE
192.168.39.42   Ready     5h
192.168.39.43   Ready     5h
If both minions are listed as Ready, the k8s cluster has been deployed successfully.
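As a final smoke test (optional; the nginx image name assumes the hosts can pull it from the local registry or Docker Hub), start a pod and watch it get scheduled onto a minion with an address from the flannel range:
# kubectl run nginx --image=nginx --port=80
# kubectl get pods -o wide    # the pod should land on one minion with a 172.100.x.x IP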