Deploying Kubernetes on CentOS 7 Without Network Access

Note: the Kubernetes 1.1 packages used in this article are actually Kubernetes 0.8. For the latest deployment method, see the next article: Kubernetes + Flannel deployment on CentOS.

1. Deployment environment:

1) Master host

IP: 10.11.150.74; hostname: tc_150_74; hostname in the DNS configuration: tc-150-74; kernel: Linux version 3.10.0-229.11.1.el7.x86_64

2) Node host

IP: 10.11.150.73; hostname: tc_150_73; hostname in the DNS configuration: tc-150-73; kernel: Linux version 3.10.0-123.el7.x86_64

The deployment process mainly follows the official Kubernetes documentation (link).

2. Preparation:

1) Download the RPM packages (backed up on Baidu Netdisk): cadvisor-0.14.0, docker-1.7.1, etcd-0.4.6, kubernetes-client-1.1.0, kubernetes-master-1.1.0, kubernetes-node-1.1.0, and etcdctl. The main download source is the Fedora mirrors (links).

2) Install Docker on hosts 73 and 74. Installing Docker on the master is not strictly required, but Docker is a dependency of kubernetes-node, so if kubernetes-node is to be installed, Docker must be installed as well. The official documentation recommends docker-1.6.2 or docker-1.7.1; in practice, installing kubernetes-node on top of any Docker version other than 1.7.1 produces a conflict error. For example, installing kubernetes-node after docker-1.8.2 fails with:

Error: docker-engine conflicts with docker-1.8.2-7.el7.centos.x86_64

Install Docker with:

sudo yum localinstall docker-1.7.1-115.el7.x86_64.rpm -y 

If the Docker installation fails with an error similar to the following:

Error: Package: docker-1.7.1-108.el7.centos.x86_64 (/docker-1.7.1-108.el7.centos.x86_64)
          Requires: docker-selinux >= 1.7.1-108.el7.centos
          Available: docker-selinux-1.7.1-108.el7.x86_64 (7ASU1-updates)
              docker-selinux = 1.7.1-108.el7

then download and install the matching version of docker-selinux first (download link).
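
As a hedged example, assuming the docker-selinux RPM matching the version named in the error above has already been downloaded into the current directory, the install order would be:

sudo yum localinstall docker-selinux-1.7.1-108.el7.x86_64.rpm -y
sudo yum localinstall docker-1.7.1-108.el7.centos.x86_64.rpm -y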

3) Install etcd

sudo yum localinstall etcd-0.4.6-7.el7.centos.x86_64.rpm -y

etcd only needs to be installed on the master host.

4) Install cAdvisor (optional)

sudo yum localinstall cadvisor-0.14.0-1.el7.x86_64.rpm -y

It only needs to be installed on the node host.
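
Assuming the cadvisor RPM ships a systemd unit named cadvisor.service (the Fedora packaging it comes from installs one, but treat the unit name as an assumption), it can be started on the node like any other service:

sudo systemctl enable cadvisor
sudo systemctl start cadvisor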

5) Install Kubernetes

The client package must be installed before the master and node packages.

sudo yum localinstall kubernetes-client-1.1.0-0.17.git388061f.fc23.x86_64.rpm -y
sudo yum localinstall kubernetes-master-1.1.0-0.17.git388061f.fc23.x86_64.rpm -y
sudo yum localinstall kubernetes-node-1.1.0-0.17.git388061f.fc23.x86_64.rpm -y
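
To confirm that all of the offline packages actually installed, a plain rpm query is enough (standard rpm usage, nothing specific to these packages):

rpm -qa | grep -E 'kubernetes|etcd|docker|cadvisor'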

3. Configuring and starting the services

1) Starting etcd

sudo etcd -peer-addr 10.11.150.74:7001 -addr 10.11.150.74:4001 -peer-bind-addr 0.0.0.0:7001 -bind-addr 0.0.0.0:4001 &

First, make sure etcd has started successfully and can be reached; otherwise, starting kube-apiserver fails with errors like the following:

I1111 13:25:42.451759    7611 plugins.go:69] No cloud provider specified.
I1111 13:25:42.452027    7611 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I1111 13:25:42.453609    7611 master.go:295] Will report 10.11.150.74 as public IP address.
[restful] 2015/11/11 13:25:42 log.go:30: [restful/swagger] listing is available at https://10.11.150.74:6443/swaggerapi/
[restful] 2015/11/11 13:25:42 log.go:30: [restful/swagger] https://10.11.150.74:6443/swaggerui/ is mapped to folder /swagger-ui/
F1111 13:25:52.516153    7611 controller.go:80] Unable to perform initial IP allocation check: unable to refresh the service IP block: no kind "RangeAllocation" is registered for version "v1beta3"

On hosts 73 and 74, use etcdctl to check that etcd is running. Execute the following command; if etcd is healthy, it returns the key directories that have already been created.

./etcdctl --peers="http://10.11.150.74:7001" ls
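
Beyond listing keys, a quick write/read round trip is a simple way to confirm that etcd accepts requests; the /test key below is just a throwaway example:

./etcdctl --peers="http://10.11.150.74:7001" set /test ok
./etcdctl --peers="http://10.11.150.74:7001" get /test
./etcdctl --peers="http://10.11.150.74:7001" rm /test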

2) hosts configuration

Edit the /etc/hosts file on hosts 73 and 74 and add the following entries:

10.11.150.73 tc-150-73
10.11.150.74 tc-150-74

Note that the host name must follow DNS naming rules; a name such as "tc_150_73" is not valid. With such a name, creating the node later through node.json fails with the following error:

The Node "tc_150_73" is invalid:metadata.name: invalid value 'tc_150_73': must be a DNS subdomain (at most 253 characters, matching regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*): e.g. "example.com"

3) config file

Edit the /etc/kubernetes/config file on hosts 73 and 74 with the following content:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://tc-150-74:8080"

4) Disable the firewall

systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld

5) Configure the apiserver

Edit the /etc/kubernetes/apiserver file on host 74 with the following content:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--insecure-bind-address=0.0.0.0 --insecure-port=8080"

6) Configure the kubelet

Edit the /etc/kubernetes/kubelet file on host 73 with the following content:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=tc-150-73"

# location of the api-server
KUBELET_API_SERVER="--api_servers=http://tc-150-74:8080"

# Add your own!
KUBELET_ARGS="--pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest"

Note the --pod-infra-container-image setting in KUBELET_ARGS. The help text describes it as follows:

--pod-infra-container-image="gcr.io/google_containers/pause:0.8.0": The image whose network/ipc namespaces containers in each pod will use.

In other words, every pod needs the base pause image, which is normally pulled from Google's container registry when the pod is created. Because of the Great Firewall, you can pull this base image on a machine with outside access, push it into your own registry, and let the nodes download it from there (in this article it is kept in a private registry on host 76); a sketch of the mirroring steps follows.
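
The commands below are a minimal sketch, assuming a machine with outside network access and a private registry already serving at 10.11.150.76:5000; the registry address and image name come from the kubelet configuration above, while the source tag is an assumption:

# run on a machine that can reach gcr.io
docker pull gcr.io/google_containers/pause:latest
docker tag gcr.io/google_containers/pause:latest 10.11.150.76:5000/kubernetes/pause:latest
# the private registry may need to be listed via --insecure-registry in the docker daemon options
docker push 10.11.150.76:5000/kubernetes/pause:latest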

7) Start the master services

Create the following script on host 74 to start kube-apiserver, kube-controller-manager, and kube-scheduler as systemd services:

#!/bin/bash

for SERVICES in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES -l
done

Run the script. A successful start prints output like the following:

   kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled)
   Active: active (running) since 三 2015-11-11 14:21:05 CST; 127ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21719 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─21719 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4001 --address=0.0.0.0 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=10.254.0.0/16 --insecure-bind-address=0.0.0.0 --insecure-port=8080

11月 11 14:21:04 tc_150_74 kube-apiserver[21719]: I1111 14:21:04.506280   21719 plugins.go:69] No cloud provider specified.
11月 11 14:21:04 tc_150_74 kube-apiserver[21719]: I1111 14:21:04.506528   21719 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
11月 11 14:21:04 tc_150_74 kube-apiserver[21719]: I1111 14:21:04.508417   21719 master.go:295] Will report 10.11.150.74 as public IP address.
11月 11 14:21:04 tc_150_74 kube-apiserver[21719]: [restful] 2015/11/11 14:21:04 log.go:30: [restful/swagger] listing is available at https://10.11.150.74:6443/swaggerapi/
11月 11 14:21:04 tc_150_74 kube-apiserver[21719]: [restful] 2015/11/11 14:21:04 log.go:30: [restful/swagger] https://10.11.150.74:6443/swaggerui/ is mapped to folder /swagger-ui/
11月 11 14:21:04 tc_150_74 kube-apiserver[21719]: I1111 14:21:04.566107   21719 server.go:441] Serving securely on 0.0.0.0:6443
11月 11 14:21:04 tc_150_74 kube-apiserver[21719]: I1111 14:21:04.566134   21719 server.go:483] Serving insecurely on 0.0.0.0:8080
11月 11 14:21:05 tc_150_74 kube-apiserver[21719]: I1111 14:21:05.347052   21719 server.go:456] Using self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
11月 11 14:21:05 tc_150_74 systemd[1]: Started Kubernetes API Server.
kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled)
   Active: active (running) since 三 2015-11-11 14:21:05 CST; 140ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21752 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─21752 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://tc_150_74:8080

11月 11 14:21:05 tc_150_74 systemd[1]: Started Kubernetes Controller Manager.
11月 11 14:21:05 tc_150_74 kube-controller-manager[21752]: I1111 14:21:05.516790   21752 plugins.go:69] No cloud provider specified.
11月 11 14:21:05 tc_150_74 kube-controller-manager[21752]: I1111 14:21:05.516928   21752 nodecontroller.go:114] Sending events to api server.
11月 11 14:21:05 tc_150_74 kube-controller-manager[21752]: E1111 14:21:05.517089   21752 controllermanager.go:201] Failed to start service controller: ServiceController should not be run without a cloudprovider.
kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled)
   Active: active (running) since 三 2015-11-11 14:21:05 CST; 140ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21784 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─21784 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://tc_150_74:8080

11月 11 14:21:05 tc_150_74 systemd[1]: Started Kubernetes Scheduler Plugin.
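
With the master services running, two plain HTTP requests against the insecure port give a quick confirmation that the apiserver is answering (run on host 74; /healthz should return "ok" and /api lists the supported API versions):

curl http://tc-150-74:8080/healthz
curl http://tc-150-74:8080/api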

Create a node.json file on host 74 with the following content:

{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "name": "tc-150-73",
        "labels":{ "name": "node-label" }
    },
    "spec": {
        "externalID": "tc-150-73"
    }
}

Use the following command to create the node record (note that this is only a record in the API, not the actual node):

kubectl create -f ./node.json

Running kubectl get nodes now shows the entry below, which confirms that the apiserver is working.

NAME           LABELS                    STATUS
tc-150-73      name=node-label           Unknown

8) Start the node services

Create the following script on host 73 to start kube-proxy, kubelet, and docker as systemd services:

#!/bin/bash

for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

Run the script. A successful start prints output like the following:

kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled)
   Active: active (running) since 四 2015-11-12 13:30:01 CST; 85ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 20164 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           └─20164 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http...

11月 12 13:30:01 tc_150_73 systemd[1]: Started Kubernetes Kube-Proxy Server.
kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled)
   Active: active (running) since 四 2015-11-12 13:30:01 CST; 124ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 20207 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─20207 /usr/bin/kubelet --logtostderr=true --v=0 --api_servers=ht...

11月 12 13:30:01 tc_150_73 systemd[1]: Started Kubernetes Kubelet Server.
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.478089   20207 ma..."
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.479634   20207 fs...r
11月 12 13:30:01 tc_150_73 kubelet[20207]: f48ee5c424bbed5 major:253 minor:...]
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.489689   20207 ma...4
11月 12 13:30:01 tc_150_73 kubelet[20207]: Scheduler:none} 253:15:{Name:dm-... 
11月 12 13:30:01 tc_150_73 kubelet[20207]: :32768 Type:Instruction Level:1} ...
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.529029   20207 ma...}
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.529852   20207 pl....
Hint: Some lines were ellipsized, use -l to show in full.
docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
   Active: active (running) since 四 2015-11-12 13:30:03 CST; 81ms ago
     Docs: http://docs.docker.com
 Main PID: 20264 (docker)
   CGroup: /system.slice/docker.service
           └─20264 /usr/bin/docker -d --selinux-enabled --add-registry regist...

11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.4491017..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.4532426..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.4562807..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6940850..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6944571..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6944879..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6945164...1
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6953038..."
11月 12 13:30:03 tc_150_73 systemd[1]: Started Docker Application Container....
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.7360503..."
Hint: Some lines were ellipsized, use -l to show in full.

Now run kubectl get nodes on host 74; if everything above is configured correctly, the node's status changes to Ready:

NAME        LABELS            STATUS
tc-150-73   name=node-label   Ready
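
To look at the registered node in more detail, the standard kubectl output options can be used on host 74, for example:

kubectl get node tc-150-73 -o yaml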