Two Ubuntu 16.04 servers, with IPs 192.168.56.160 and 192.168.56.161.
Kubernetes version: 1.5.5
Docker version: 1.12.6
etcd version: 2.2.1
flannel version: 0.5.6
The .160 server acts as both the Kubernetes master and a node; the .161 server is a node only.
The master runs the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd services.
Each node runs the kubelet, kube-proxy, docker, and flannel services.
Client binaries: https://dl.k8s.io/v1.5.5/kubernetes-client-linux-amd64.tar.gz
Server binaries: https://dl.k8s.io/v1.5.5/kubernetes-server-linux-amd64.tar.gz
My servers are linux/amd64; for other platforms, download the matching builds from the release page.
From the extracted kubernetes directory, copy the binaries in the server and client directories (kube-apiserver, kube-controller-manager, kubectl, kubelet, kube-proxy, kube-scheduler, and so on) into /usr/bin/.
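A minimal sketch of this copy step, assuming both tarballs were unpacked in the current directory (the exact paths inside the archives may vary by version):

$ sudo cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy} /usr/bin/
$ sudo cp kubernetes/client/bin/kubectl /usr/bin/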
The etcd GitHub release downloads are hosted on AWS S3, which is unreachable or very slow from my network, so I used a download mirror inside China instead.
Alternatively, you can build etcd from source to obtain the binaries.
Copy the etcd and etcdctl binaries into /usr/bin/.
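For example, run from the unpacked etcd release directory (the directory layout is an assumption and depends on how you obtained etcd):

$ sudo cp etcd etcdctl /usr/bin/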
Both flannel and etcd are CoreOS projects, so flannel's GitHub release downloads are also hosted on AWS S3. Fortunately flannel is simple to build: clone it from GitHub and compile it directly. The flanneld binary is produced in flannel's bin or dist directory (the location differs between versions).
$ git clone -b v0.5.6 https://github.com/coreos/flannel.git
$ cd flannel
$ ./build
The exact build steps may differ; see the README.md in the flannel directory.
Copy the flanneld binary into /usr/bin/.
Create the /usr/bin/flannel directory and copy mk-docker-opts.sh from the dist directory into /usr/bin/flannel/.
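A sketch of these copy steps, run from the flannel source directory (whether the binary lands in bin/ or dist/ depends on the version):

$ sudo cp bin/flanneld /usr/bin/        # or dist/flanneld, depending on the version
$ sudo mkdir -p /usr/bin/flannel
$ sudo cp dist/mk-docker-opts.sh /usr/bin/flannel/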
$ sudo mkdir -p /var/lib/etcd/
$ sudo mkdir -p /etc/etcd/
$ sudo vim /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.160:2379"
$ sudo vim /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
$ sudo systemctl daemon-reload
$ sudo systemctl enable etcd
$ sudo systemctl start etcd
$ sudo systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-03-27 11:19:35 CST; 7s ago
...
Then check that the port is open and listening.
$ netstat -apn | grep 2379
tcp6       0      0 :::2379                 :::*                    LISTEN      7211/etcd
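You can also ask etcd itself for a health report (an optional sanity check; the output will vary on your machine):

$ etcdctl cluster-health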
Write the network configuration that flannel will use into etcd:

$ etcdctl set /coreos.com/network/config '{ "Network": "192.168.4.0/24" }'
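To confirm the key was stored, you can read it back (optional):

$ etcdctl get /coreos.com/network/config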
If you are deploying an etcd cluster, the steps above must be repeated on every etcd server. I am only running a standalone instance here, so the etcd setup is done.
$ sudo mkdir /etc/kubernetes
The /etc/kubernetes/config file stores configuration common to all Kubernetes components.
$ sudo vim /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.56.160:8080"
The following services are configured on the Kubernetes master host.
The kube-apiserver-specific configuration file is /etc/kubernetes/apiserver.
$ sudo vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.160:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.4.0/24"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
$ sudo vim /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
Wants=etcd.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
The kube-controller-manager-specific configuration file is /etc/kubernetes/controller-manager.
$ sudo vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
$ sudo vim /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
After=kube-apiserver.service
Requires=etcd.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
The kube-scheduler-specific configuration file is /etc/kubernetes/scheduler.
$ sudo vim /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""
$ sudo vim /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_MASTER
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
$ sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
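Once the three services are up, one quick sanity check of the master components (assuming kubectl was copied to /usr/bin/ earlier; the -s flag points it at the insecure API port):

$ kubectl -s http://192.168.56.160:8080 get componentstatuses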
The Kubernetes nodes also need the /etc/kubernetes/config file, with the same content as on the master.
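One way to do that is simply to copy it over from the master; a sketch, assuming root SSH access between the machines:

$ sudo scp root@192.168.56.160:/etc/kubernetes/config /etc/kubernetes/config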
$ sudo vim /etc/default/flanneld.conf
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.56.160:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
Here, FLANNEL_ETCD_PREFIX is the etcd key prefix under which the network configuration was stored earlier.
$ sudo vim /lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
After=etcd.service
Before=docker.service

[Service]
User=root
EnvironmentFile=/etc/default/flanneld.conf
ExecStart=/usr/bin/flanneld \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
        $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/flannel/mk-docker-opts.sh -k DOCKER_OPTS -d /run/flannel/docker
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable flanneld
$ sudo systemctl start flanneld
$ sudo systemctl status flanneld
● flanneld.service - Flanneld
   Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-03-27 11:59:00 CST; 6min ago
...
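You can also check that flannel generated the Docker options file written by the ExecStartPost step above (the exact variables inside it depend on the flannel version):

$ cat /run/flannel/docker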
Install docker via apt.
$ sudo apt -y install docker.io
Modify docker's systemd configuration so that it picks up the options generated by flannel.
$ sudo mkdir /lib/systemd/system/docker.service.d
$ sudo vim /lib/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker
Restart the docker service.
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Check whether docker is now running with the flannel network options.
$ sudo ps -ef | grep docker
root     11285     1  1 15:14 ?        00:00:01 /usr/bin/dockerd -H fd:// --bip=192.168.4.129/25 --ip-masq=true --mtu=1472
...
$ sudo mkdir /var/lib/kubelet
The kubelet-specific configuration file is /etc/kubernetes/kubelet. Note that --hostname-override should carry the address of the node being configured (192.168.56.161 in the example below; use 192.168.56.160 on the other node).
$ sudo vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=192.168.56.161"
KUBELET_API_SERVER="--api-servers=http://192.168.56.160:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true"
$ sudo vim /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
$ sudo systemctl daemon-reload
$ sudo systemctl enable kubelet
$ sudo systemctl start kubelet
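As with the other services, you can confirm that kubelet came up cleanly before moving on:

$ sudo systemctl status kubelet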
The kube-proxy-specific configuration file is /etc/kubernetes/proxy.
$ sudo vim /etc/kubernetes/proxy
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""
$ sudo vim /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-proxy
$ sudo systemctl start kube-proxy
Run kubectl get node to check the node status. When every node shows Ready, the nodes have successfully connected to the master. If a node is not Ready, log in to it and investigate; journalctl -u kubelet.service shows the kubelet service logs.
$ kubectl get node
NAME             STATUS    AGE
192.168.56.160   Ready     2d
192.168.56.161   Ready     2d
Now test whether Kubernetes was installed successfully.
On the Kubernetes master, create rc_nginx.yaml, which defines an nginx ReplicationController.
$ vim rc_nginx.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Run kubectl create to create the ReplicationController. It requests two replicas, and the environment has two Kubernetes nodes, so one pod should end up running on each node.
Note: this step can take quite a while, since it pulls the nginx image as well as the essential pod-infrastructure image from the network.
$ kubectl create -f rc_nginx.yaml
Run kubectl get rc and kubectl get pod to check the ReplicationController and pod status. The pods may initially show ContainerCreating; once the required images have been downloaded, the containers are created and the pods should reach the Running state.
$ kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
nginx     2         2         2         5m

$ kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP              NODE
nginx-1j5x4   1/1       Running   0          5m        192.168.4.130   192.168.56.160
nginx-6bd28   1/1       Running   0          5m        192.168.4.130   192.168.56.161
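As a final smoke test, you can fetch the nginx default page from one of the pod IPs on a node (using a pod IP from the output above; your addresses will differ):

$ curl http://192.168.4.130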
All done!