Installing Kubernetes on CentOS 7 from Binaries (Offline Installation)

1. Download the Kubernetes (K8S) binaries and the offline Docker packages

Download the offline Docker installation packages:

docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

1) https://github.com/kubernetes/kubernetes/releases
Pick the desired version from the page above; this article uses v1.9.1 as an example and downloads the binaries linked from the CHANGELOG page.

 


2) Component selection: choose kubernetes-server-linux-amd64.tar.gz under Server Binaries.
This archive already contains every component K8S needs; there is no need to download the Client binaries separately.
2. Installation Plan
1) Download and extract the K8S archive, copy each component binary to /usr/bin in turn, create its systemd service file, and then start the component.
2) This example uses three nodes; the components installed on each are listed below:

Node IP         Role                    Components
192.168.137.3   Master (control node)   etcd, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.137.4   Node1 (worker node)     docker, kubelet, kube-proxy
192.168.137.5   Node2 (worker node)     docker, kubelet, kube-proxy

etcd serves as the datastore for K8S.
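Since the three machines are referenced by raw IP throughout this article, it can help to give them resolvable names up front. A small optional sketch (the host names k8s-master, k8s-node1, k8s-node2 are assumptions for illustration and are not used elsewhere in this article):

```shell
# Map the three nodes from the plan above to convenience host names.
cat >> /etc/hosts <<'EOF'
192.168.137.3 k8s-master
192.168.137.4 k8s-node1
192.168.137.5 k8s-node2
EOF
```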

3. Master Node Deployment
Note: when deploying from binaries on CentOS 7, every component is installed with the same four steps:
1) Copy the component's binary to /usr/bin
2) Create a systemd service file for it
3) Create the configuration file that the service file references
4) Enable the service to start at boot
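Because the four steps repeat for every component, they can be captured in a small helper script. A minimal sketch (the install_component function and the DESTDIR dry-run variable are illustrative assumptions, not part of the article; in a real install DESTDIR would be empty, targeting /):

```shell
# Sketch of the four-step pattern as a shell function.
install_component() {
  name=$1; src=$2
  # Step 1: copy the binary to /usr/bin and make it executable.
  cp "$src" "${DESTDIR}/usr/bin/${name}" && chmod +x "${DESTDIR}/usr/bin/${name}"
  # Steps 2-4 look the same for every component:
  #   create /usr/lib/systemd/system/${name}.service
  #   create its config file under /etc/kubernetes/
  #   systemctl daemon-reload && systemctl enable ${name}.service && systemctl start ${name}.service
}

# Dry run in a scratch root so the sketch is safe to execute anywhere.
DESTDIR=$(mktemp -d)
mkdir -p "${DESTDIR}/usr/bin"
touch /tmp/kube-apiserver.demo
install_component kube-apiserver /tmp/kube-apiserver.demo
ls "${DESTDIR}/usr/bin"
```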

1. etcd installation
(1) Download etcd
K8S needs etcd as its datastore. This article uses v3.2.9 as an example, downloaded from:
https://github.com/coreos/etcd/releases/
After downloading and extracting, copy the etcd and etcdctl binaries to /usr/bin.

(2) Create the etcd.service service file
Create etcd.service in /etc/systemd/system/ with the following contents:

[root@k8s-master]# cat /etc/systemd/system/etcd.service
[Unit]
Description=etcd.service
[Service]
Type=notify
TimeoutStartSec=0
Restart=always
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd 
[Install]
WantedBy=multi-user.target

WorkingDirectory is the etcd data directory; it must be created before etcd is started.
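The directory can be created with:

```shell
# Create the etcd data directory before the first start; the unit's
# WorkingDirectory= points here and the service fails if it is missing.
mkdir -p /var/lib/etcd
```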

(3) Create the /etc/etcd/etcd.conf configuration file
[root@k8s-master]# cat /etc/etcd/etcd.conf

ETCD_NAME="ETCD Server"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.137.3:2379"
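The same file can also be written non-interactively, which is handy when provisioning several offline machines from one script; sourcing it afterwards is a quick sanity check that it parses as shell-style assignments. A sketch (it simply rewrites the file shown above):

```shell
# Generate /etc/etcd/etcd.conf from a heredoc and verify it parses.
mkdir -p /etc/etcd
cat > /etc/etcd/etcd.conf <<'EOF'
ETCD_NAME="ETCD Server"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.137.3:2379"
EOF
. /etc/etcd/etcd.conf
echo "advertising on: ${ETCD_ADVERTISE_CLIENT_URLS}"
```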

(4) Enable and start the service

#systemctl daemon-reload
#systemctl enable etcd.service
#systemctl start etcd.service

(5) Verify that etcd is installed correctly

 
# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
 

2. The kube-apiserver service
(1) Copy the binaries to /usr/bin
Copy the kube-apiserver, kube-controller-manager, and kube-scheduler executables to /usr/bin.
(2) Create and edit the /usr/lib/systemd/system/kube-apiserver.service file
[root@k8s-master]#cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver  \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_LOG \
        $KUBE_API_ARGS 
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(3) Create the parameter file /etc/kubernetes/apiserver
[root@k8s-master]#cat /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.137.3:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=169.169.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_LOG="--logtostderr=false --log-dir=/home/k8s-t/log/kubernetes --v=2"
KUBE_API_ARGS=" "

  

3. kube-controller-manager deployment

(1) Create the kube-controller-manager systemd service file
Its contents are as follows:
[root@k8s-master]#cat /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service 
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(2) Create the parameter file /etc/kubernetes/controller-manager with the following contents:
[root@k8s-master]#cat /etc/kubernetes/controller-manager

KUBE_MASTER="--master=http://192.168.137.3:8080"
KUBE_CONTROLLER_MANAGER_ARGS=" "

  

4. kube-scheduler deployment

(1) Create the kube-scheduler systemd service file
[root@k8s-master]#cat /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service 
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(2) Create the /etc/kubernetes/scheduler parameter file
[root@k8s-master]#cat /etc/kubernetes/scheduler

KUBE_MASTER="--master=http://192.168.137.3:8080"
KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/k8s-t/log/kubernetes --v=2"

5. Enable all components at boot
(1) The commands are as follows:

systemctl daemon-reload 
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

  

At this point, the K8S master node installation is complete.

To restart all master services with one command:

for i in etcd kube-apiserver kube-controller-manager kube-scheduler docker;do systemctl restart $i;done

====================================

 

Node Installation:

On each Node, copy kube-proxy and kubelet from kubernetes/server/bin to /usr/bin/,

then install Docker from the offline packages downloaded earlier:

yum localinstall -y docker-ce-*.rpm

1. Install the kube-proxy service

(1) Create the /usr/lib/systemd/system/kube-proxy.service file with the following contents:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(2) Create the /etc/kubernetes directory

mkdir -p /etc/kubernetes

(3) Create the /etc/kubernetes/proxy configuration file

vim /etc/kubernetes/proxy, with the following contents:

KUBE_PROXY_ARGS=""

(4) Create the /etc/kubernetes/config file with the following contents:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.137.3:8080"

(5) Start the kube-proxy service

systemctl daemon-reload

systemctl start kube-proxy.service

(6) Check that kube-proxy is listening

[root@server2 bin]# netstat -lntp | grep kube-proxy
tcp        0      0 127.0.0.1:10249    0.0.0.0:*    LISTEN    11754/kube-proxy
tcp6       0      0 :::10256           :::*         LISTEN    11754/kube-proxy

2. Install the kubelet service

(1) Create the /usr/lib/systemd/system/kubelet.service file with the following contents:

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

(2) Create the directory kubelet requires

mkdir -p /var/lib/kubelet

(3) Create the kubelet configuration file

vim /etc/kubernetes/kubelet, with the following contents:

KUBELET_HOSTNAME="--hostname-override=192.168.137.4"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"

  

(4) Create the /var/lib/kubelet/kubeconfig file

One more configuration file is needed: since 1.9 the kubelet no longer uses KUBELET_API_SERVER to talk to the API server; the connection is configured through a separate kubeconfig YAML file instead.

vim /var/lib/kubelet/kubeconfig, with the following contents:

apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://192.168.137.3:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context

  

(5) Start the kubelet

Turn off swap first: swapoff -a (otherwise the kubelet will fail to start).

systemctl daemon-reload

systemctl start kubelet.service
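Note that swapoff -a only lasts until the next reboot; to make it permanent, comment out the swap entry in /etc/fstab as well. A sketch of that edit, demonstrated on a copy so it is safe to run anywhere (the sample fstab lines are assumptions; in practice point the sed command at /etc/fstab itself):

```shell
# Build a sample fstab, then comment out any line mentioning swap.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF
sed -i '/ swap / s/^#*/#/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```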

(6) Check the kubelet listening ports

[root@server2 ~]# netstat -lntp | grep kubelet
tcp        0      0 127.0.0.1:10248    0.0.0.0:*    LISTEN    15410/kubelet
tcp6       0      0 :::10250           :::*         LISTEN    15410/kubelet
tcp6       0      0 :::10255           :::*         LISTEN    15410/kubelet
tcp6       0      0 :::4194            :::*         LISTEN    15410/kubelet

  

List the registered nodes (run on the master):

kubectl get nodes
