A more sensible deployment approach is described in 《Centos下Kubernetes+Flannel部署(新)》.
1、Preparation
1. Three CentOS hosts
k8s (i.e. Kubernetes, same below) master: 10.16.42.200
k8s node1: 10.16.42.198
k8s node2: 10.16.42.199
2. Software downloads (Baidu Netdisk): k8s-1.0.1 (the beta release k8s-1.1.2.beta can also be used), Docker-1.8.2, cadvisor-0.14.0, etcd-2.2.1, flannel-0.5.5
2、ETCD Cluster Deployment
Append the following entries to the /etc/hosts file on each of the three hosts:
10.16.42.198 bx-42-198
10.16.42.199 bx-42-199
10.16.42.200 bx-42-200
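A minimal sketch of doing the append on each host (assumes root and that these names are not already present in /etc/hosts):

# Append the three host entries to /etc/hosts
cat >> /etc/hosts <<'EOF'
10.16.42.198 bx-42-198
10.16.42.199 bx-42-199
10.16.42.200 bx-42-200
EOF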
On each of the three hosts, extract etcd.tar and copy the etcd and etcdctl binaries into your working directory (e.g. /openxxs/bin, same below).
On 200, create the script start_etcd.sh under /openxxs/bin and run it:
#!/bin/bash

etcd_token=kb2-etcd-cluster
local_name=kbetcd0
local_ip=10.16.42.200
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd1
node1_ip=10.16.42.198
node1_port=4010
node2_name=kbetcd2
node2_ip=10.16.42.199
node2_port=4010

./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new
On 198, create the script start_etcd.sh under /openxxs/bin and run it:
#!/bin/bash

etcd_token=kb2-etcd-cluster
local_name=kbetcd1
local_ip=10.16.42.198
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd0
node1_ip=10.16.42.200
node1_port=4010
node2_name=kbetcd2
node2_ip=10.16.42.199
node2_port=4010

./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new &
On 199, create the script start_etcd.sh under /openxxs/bin and run it:
#!/bin/bash

etcd_token=kb2-etcd-cluster
local_name=kbetcd2
local_ip=10.16.42.199
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd1
node1_ip=10.16.42.198
node1_port=4010
node2_name=kbetcd0
node2_ip=10.16.42.200
node2_port=4010

./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new &
Run commands like the following on each host to check whether etcd is running properly:
curl -L http://10.16.42.198:4012/version
curl -L http://10.16.42.199:4012/version
curl -L http://10.16.42.200:4012/version
If all three return "{"etcdserver":"2.2.1","etcdcluster":"2.2.0"}", the ETCD deployment succeeded.
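Beyond the version endpoint, etcdctl can also report cluster membership and health; a quick sketch, run from /openxxs/bin against any member's client port:

# List the three members and check that the cluster reports healthy
./etcdctl --peers=http://10.16.42.200:4011 member list
./etcdctl --peers=http://10.16.42.200:4011 cluster-health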
3、Docker Installation and Configuration
yum install docker-engine-1.8.2-1.el7.centos.x86_64.rpm -y
After the install succeeds on each host, change /etc/sysconfig/docker to:
OPTIONS="-g /opt/scs/docker --insecure-registry 10.11.150.76:5000"
The --insecure-registry option tells Docker to use your own private image registry.
Change /lib/systemd/system/docker.service to:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/docker
ExecStart=/usr/bin/docker -d $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
#ExecStart=/usr/bin/docker daemon -H fd://
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
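After editing the unit file, let systemd pick up the change (the daemon itself is started later, in section 5):

systemctl daemon-reload
systemctl enable docker    # optional: start docker automatically on boot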
Note that k8s will take over your Docker daemon; if you previously created or ran containers with Docker on these hosts, back up their data first.
4、Flannel Installation and Configuration
yum localinstall flannel-0.5.5-1.fc24.x86_64.rpm
After the install succeeds on each host, change /etc/sysconfig/flanneld to:
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://10.16.42.200:4012"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
This sets the etcd endpoint flanneld talks to and the etcd key under which flannel's settings are stored.
Change /lib/systemd/system/flanneld.service to:
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
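Reload systemd here as well. The ExecStartPost line uses flannel's mk-docker-opts.sh to write the Docker networking options derived from the leased subnet into /run/flannel/docker; the generated file looks roughly like the commented example below (the exact values depend on the subnet flannel assigns to the host):

systemctl daemon-reload

# /run/flannel/docker -- illustrative contents written by mk-docker-opts.sh
# DOCKER_NETWORK_OPTIONS=" --bip=172.18.0.1/16 --ip-masq=true --mtu=1450"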
5、Deployment Steps
1. Start ETCD
ETCD is the foundation for a working k8s cluster, so run the scripts from section 2 and confirm the deployment is healthy before starting anything else.
2. Start Flannel
Before starting Flannel, stop the docker, iptables and firewalld services:
systemctl stop docker
systemctl disable iptables.service firewalld
systemctl stop iptables.service firewalld
Use ps aux | grep docker to check whether docker is still running as a daemon; if it is, kill that process.
Use ifconfig to check whether a docker0 bridge or flannel-related bridges exist; if so, remove them with ip link delete docker0 (and likewise for the flannel interfaces).
With the preparation done, flannel's configuration still has to be written into ETCD: on host 200, create a file flannel-config.json with the following content:
{ "Network": "172.16.0.0/12", "SubnetLen": 16, "Backend": { "Type": "vxlan", "VNI": 1 } }
This defines the subnet range flannel may allocate from and the packet encapsulation backend. Write it into ETCD (the key used here must match the FLANNEL_ETCD_KEY value used when starting Flannel):
./etcdctl --peers=http://10.16.42.200:4012 set /coreos.com/network/config < flannel-config.json
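The key can be read back to confirm that the configuration actually landed in etcd:

./etcdctl --peers=http://10.16.42.200:4012 get /coreos.com/network/config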
Then start flannel on each host:
systemctl start flanneld
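Once flanneld is running, it records the subnet leased for the host in /run/flannel/subnet.env; checking that file is a quick sanity test (the values differ per host):

systemctl status flanneld
cat /run/flannel/subnet.env
# expect FLANNEL_NETWORK=172.16.0.0/12 plus a host-specific FLANNEL_SUBNET and FLANNEL_MTU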
3. Start Docker
Start the docker service on each host:
systemctl start docker
Then use ifconfig to check the subnets of docker0 and flannel.1; if flannel.1's subnet contains docker0's subnet, flannel has been configured and started correctly.
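The same check with the ip tool instead of ifconfig, as a minimal sketch:

ip -4 addr show flannel.1
ip -4 addr show docker0
# docker0's address should lie inside the flannel network (172.16.0.0/12 in this setup)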
4. Start the k8s services on the master
./kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4012 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=172.16.0.0/12 --insecure-bind-address=0.0.0.0 --insecure-port=8080 &
./kube-controller-manager --logtostderr=true --v=0 --master=http://bx-42-200:8080 --cloud-provider="" &
./kube-scheduler --logtostderr=true --v=0 --master=http://bx-42-200:8080 &
Note that kube-controller-manager may report the following error at startup:
plugins.go:71] No cloud provider specified.
controllermanager.go:290] Failed to start service controller: ServiceController should not be run without a cloudprovider.
This is caused by --cloud-provider being empty or unset; it has little effect on the overall operation of k8s and can be ignored (the bug is discussed on GitHub).
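A quick way to confirm the master components are actually serving is to hit the apiserver's insecure port directly (a sketch using the address configured above):

curl http://10.16.42.200:8080/healthz    # expect: ok
curl http://10.16.42.200:8080/version    # prints the apiserver build information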
5. Start the k8s services on the nodes
./kube-proxy --logtostderr=true --v=0 --master=http://bx-42-200:8080 --proxy-mode=iptables &
./kubelet --logtostderr=true --v=0 --api_servers=http://bx-42-200:8080 --address=0.0.0.0 --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest &
Note the --pod-infra-container-image parameter here. Every pod first starts a /kubernetes/pause:latest container to do some basic initialization, and the default location of that image is gcr.io/google_containers/pause:0.8.0. Because of the GFW that address may be unreachable, so download the image elsewhere, push it to your own local Docker registry, and let kubelet pull it from there.
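A sketch of mirroring the pause image into the private registry, assuming it is run on a machine that can both reach gcr.io and push to 10.11.150.76:5000:

# Pull the upstream pause image, retag it for the private registry, and push it
docker pull gcr.io/google_containers/pause:0.8.0
docker tag gcr.io/google_containers/pause:0.8.0 10.11.150.76:5000/kubernetes/pause:latest
docker push 10.11.150.76:5000/kubernetes/pause:latest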
Also note that the --proxy-mode=iptables parameter only exists in the k8s 1.1 experimental release; the official description is:
--proxy-mode="": Which proxy mode to use: 'userspace' (older, stable) or 'iptables' (experimental). If blank, look at the Node object on the Kubernetes API and respect the 'net.experimental.kubernetes.io/proxy-mode' annotation if provided. Otherwise use the best-available proxy (currently userspace, but may change in future versions). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
If --proxy-mode=iptables is not supported, you will see errors like:
W1119 21:00:12.187930 5595 server.go:200] Failed to start in resource-only container "/kube-proxy": write /sys/fs/cgroup/memory/kube-proxy/memory.swappiness: invalid argument
E1119 21:00:12.198572 5595 proxier.go:197] Error removing userspace rule: error checking rule: exit status 2: iptables v1.4.21: Couldn't load target `KUBE-PORTALS-HOST':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
E1119 21:00:12.200286 5595 proxier.go:201] Error removing userspace rule: error checking rule: exit status 2: iptables v1.4.21: Couldn't load target `KUBE-PORTALS-CONTAINER':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
E1119 21:00:12.202162 5595 proxier.go:207] Error removing userspace rule: error checking rule: exit status 2: iptables v1.4.21: Couldn't load target `KUBE-NODEPORT-HOST':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
E1119 21:00:12.204058 5595 proxier.go:211] Error removing userspace rule: error checking rule: exit status 2: iptables v1.4.21: Couldn't load target `KUBE-NODEPORT-CONTAINER':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
E1119 21:00:12.205848 5595 proxier.go:220] Error flushing userspace chain: error flushing chain "KUBE-PORTALS-CONTAINER": exit status 1: iptables: No chain/target/match by that name.
E1119 21:00:12.207467 5595 proxier.go:220] Error flushing userspace chain: error flushing chain "KUBE-PORTALS-HOST": exit status 1: iptables: No chain/target/match by that name.
E1119 21:00:12.209000 5595 proxier.go:220] Error flushing userspace chain: error flushing chain "KUBE-NODEPORT-HOST": exit status 1: iptables: No chain/target/match by that name.
E1119 21:00:12.210580 5595 proxier.go:220] Error flushing userspace chain: error flushing chain "KUBE-NODEPORT-CONTAINER": exit status 1: iptables: No chain/target/match by that name.
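Since the iptables proxy silently falls back to userspace mode when the node's iptables or kernel is too old, it is worth checking both before debugging further (a minimal sketch):

iptables --version    # the log above shows v1.4.21 on this node
uname -r              # the kernel must provide the netfilter features kube-proxy needs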
6、Testing
With everything above deployed, run the following command on any host to check the node status:
./kubectl -s 10.16.42.200:8080 get nodes
If it returns something like the following, the apiserver is serving properly:
NAME        LABELS                             STATUS    AGE
bx-42-198   kubernetes.io/hostname=bx-42-198   Ready     1d
bx-42-199   kubernetes.io/hostname=bx-42-199   Ready     1d
Create a test.yaml file with the following content:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-1
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: bx-42-198
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-2
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: bx-42-198
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-3
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: bx-42-199
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-4
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-4
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: bx-42-199
This creates the pods test-1 and test-2 on 198 and the pods test-3 and test-4 on 199. Adjust parameters such as image to match your environment.
Create the pods from test.yaml:
./kubectl -s 10.16.42.200:8080 create -f test.yaml
Check the creation and running status of the pods with get pods:
./kubectl -s 10.16.42.200:8080 get pods
If they were created successfully and are running normally, you will see something like:
NAME           READY     STATUS    RESTARTS   AGE
test-1-a9dn3   1/1       Running   0          1d
test-2-64urt   1/1       Running   0          1d
test-3-edt2l   1/1       Running   0          1d
test-4-l6egg   1/1       Running   0          1d
On 198, use docker exec to enter the container behind test-2 and check its IP with ip addr show; do the same on 199 for the container behind test-4. Then ping each container's IP from the other; if the pings succeed, flannel is working as well.
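A sketch of that connectivity test, assuming the iperf image ships ping and the ip tool; the container IDs and pod IPs below are placeholders taken from docker ps and ip addr on each node:

# On 198: locate the test-2 container and note its IP
docker ps | grep test-2
docker exec <test-2-container-id> ip addr show eth0

# On 199: locate the test-4 container and note its IP
docker ps | grep test-4
docker exec <test-4-container-id> ip addr show eth0

# Ping each pod from inside the other
docker exec <test-2-container-id> ping -c 3 <test-4-pod-ip>
docker exec <test-4-container-id> ping -c 3 <test-2-pod-ip>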