Offline Installation of Kubernetes from Binaries on CentOS 7

1. Download the Kubernetes (K8S) binaries

(1) https://github.com/kubernetes/kubernetes/releases
Pick the desired version from the page above; this article uses v1.9.1 as the example. Download the binaries from the CHANGELOG page into the /root directory.

(2) Component selection: choose kubernetes-server-linux-amd64.tar.gz under Server Binaries.
This archive already contains every component K8S needs; there is no need to download the Client or other packages separately.

2. Installation approach

Unpack the kubernetes-server-linux-amd64.tar.gz archive, copy the executable binaries under server/bin/ to /usr/bin/, and set up the matching systemd unit files and configuration files.

3. Node layout

Node IP          Role      Installed components
192.168.1.10     Master    etcd, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.1.128    Node1     kubelet, kube-proxy, flannel

Here etcd is the database for k8s: it stores the state behind every create/read/update/delete operation in Kubernetes.

Set up the /etc/hosts name bindings in advance:

sed -i '$a 192.168.1.10 master' /etc/hosts

sed -i '$a 192.168.1.128 node1' /etc/hosts
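The two sed lines above append unconditionally, so running them twice duplicates entries. A guarded variant can be sketched as follows (the HOSTS variable and demo file are only for illustration; on a real host set HOSTS=/etc/hosts):

```shell
# Append a host entry only if the name is not already present.
# HOSTS defaults to a demo file here; use HOSTS=/etc/hosts for real.
HOSTS=${HOSTS:-./hosts.demo}
touch "$HOSTS"
add_host() {
  grep -qw "$2" "$HOSTS" || echo "$1 $2" >> "$HOSTS"
}
add_host 192.168.1.10 master
add_host 192.168.1.128 node1
add_host 192.168.1.10 master   # second call is a no-op
```

This keeps the file idempotent no matter how often the setup script is re-run.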

4. Deploy the master node

(1) Copy the needed binaries to /usr/bin
(2) Create the systemd service unit file
(3) Create the configuration file the unit refers to
(4) Enable the service to start on boot
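The four steps above can be sketched as one reusable shell helper. This is only an illustration of the pattern, not the exact unit files used below (directories are parameterized so it can be dry-run; on the master, BIN_DIR is /usr/bin and UNIT_DIR is /usr/lib/systemd/system):

```shell
# Install one k8s component: copy the binary (step 1) and write a
# minimal unit file (step 2). BIN_DIR/UNIT_DIR are parameterized
# for illustration only.
BIN_DIR=${BIN_DIR:-/usr/bin}
UNIT_DIR=${UNIT_DIR:-/usr/lib/systemd/system}

install_component() {
  name=$1 src=$2
  cp "$src/$name" "$BIN_DIR/"
  cat > "$UNIT_DIR/$name.service" <<EOF
[Unit]
Description=Kubernetes $name

[Service]
EnvironmentFile=-/etc/kubernetes/$name
ExecStart=$BIN_DIR/$name
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  # step 3: write /etc/kubernetes/$name by hand for each component
  # step 4: systemctl enable $name
}
```

Each component below follows this same copy/unit/config/enable pattern, with its own EnvironmentFile variables expanded into ExecStart.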

0. Install the docker service offline

Unpack docker.tar.gz, then force-install the RPMs with rpm, ignoring dependencies:

tar zxf docker.tar.gz

cd docker

rpm -ivh *.rpm --nodeps --force

 

Start docker:

systemctl daemon-reload

systemctl start docker

1. Install the etcd database

(1) Download: K8S needs etcd as its database. Using v3.2.11 as the example, the download address is:
https://github.com/coreos/etcd/releases/
Unpack the archive and copy the etcd and etcdctl binaries to /usr/bin:

tar zxf etcd-v3.2.11-linux-amd64.tar.gz

cd etcd-v3.2.11-linux-amd64

cp etcd etcdctl /usr/bin/

(2) Set up the etcd.service unit file
Create etcd.service under the /usr/lib/systemd/system/ directory:
vim /usr/lib/systemd/system/etcd.service, with the following content:

[Unit]

Description=etcd.service

 

[Service]

Type=notify

TimeoutStartSec=0

Restart=always

WorkingDirectory=/var/lib/etcd

EnvironmentFile=-/etc/etcd/etcd.conf

ExecStart=/usr/bin/etcd

 

[Install]

WantedBy=multi-user.target

 

(3) Create the paths referenced by the unit file above (the WorkingDirectory and the EnvironmentFile's directory):

mkdir -p /var/lib/etcd && mkdir -p /etc/etcd/

(4) Create the etcd.conf file:

vim /etc/etcd/etcd.conf

with the following content:

ETCD_NAME="ETCD Server"

ETCD_DATA_DIR="/var/lib/etcd/"

ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.10:2379"

# The 192.168.1.10 in the line above is the IP of the k8s etcd host. In this example etcd and the master share one server; fill in whatever IP matches your environment.

(5) Start etcd:

systemctl daemon-reload

systemctl start etcd.service

(6) Check whether etcd started successfully:

[root@server1 ~]# etcdctl cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://192.168.1.10:2379

cluster is healthy

Output like this means success.

(7) By default etcd listens on the server's TCP port 2379 for clients (and 2380 for peers):

[root@server1 ~]# netstat -lntp | grep etcd

tcp         0      0 127.0.0.1:2380     0.0.0.0:*             LISTEN      11376/etcd

tcp6       0      0 :::2379                 :::*                    LISTEN      11376/etcd

 

2. Install the kube-apiserver service

Note: the server's or VM's network interface must have a default gateway configured, otherwise the services will fail to start!
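A quick pre-flight check for this requirement can be sketched as below (the check_gateway helper is hypothetical and takes the routing-table text as an argument so it can be tested offline; normally you would just feed it the output of ip route):

```shell
# Warn when no default gateway is configured, since the k8s services
# may refuse to start without one.
check_gateway() {
  case "$1" in
    *"default via"*) echo "default gateway present" ;;
    *) echo "WARNING: no default gateway configured" >&2 ;;
  esac
}
check_gateway "$(ip route 2>/dev/null)"
```

Running this before starting kube-apiserver saves a confusing round of service-startup debugging.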

(1) Unpack the kubernetes-server-linux-amd64.tar.gz downloaded earlier and copy kube-apiserver, kube-controller-manager, and kube-scheduler from its server/bin subdirectory to /usr/bin/:

tar zxf kubernetes-server-linux-amd64.tar.gz

cd kubernetes/server/bin/

cp kube-apiserver kube-controller-manager kube-scheduler /usr/bin/

(2) Add the /usr/lib/systemd/system/kube-apiserver.service file

vim /usr/lib/systemd/system/kube-apiserver.service, with the following content:

[Unit]

Description=Kubernetes API Server

After=etcd.service

Wants=etcd.service

 

[Service]

EnvironmentFile=/etc/kubernetes/apiserver

ExecStart=/usr/bin/kube-apiserver  \

        $KUBE_ETCD_SERVERS \

        $KUBE_API_ADDRESS \

        $KUBE_API_PORT \

        $KUBE_SERVICE_ADDRESSES \

        $KUBE_ADMISSION_CONTROL \

        $KUBE_API_LOG \

        $KUBE_API_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

(3) Create the directory kube-apiserver needs:

mkdir -p /etc/kubernetes/

(4) Create the kube-apiserver configuration file /etc/kubernetes/apiserver

vim /etc/kubernetes/apiserver, with the following content:

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--port=8080"

KUBELET_PORT="--kubelet-port=10250"

KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.10:2379"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.18.0.0/24"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

KUBE_API_ARGS=""

(5) Start kube-apiserver:

systemctl daemon-reload

systemctl start kube-apiserver.service

(6) Check that it started:

[root@server1 bin]# netstat -lntp | grep kube

tcp6       0      0 :::6443                 :::*           LISTEN      11471/kube-apiserve

tcp6       0      0 :::8080                 :::*           LISTEN      11471/kube-apiserve

 

3. Deploy kube-controller-manager

(1) Add the /usr/lib/systemd/system/kube-controller-manager.service file

vim /usr/lib/systemd/system/kube-controller-manager.service, with the following content:

[Unit]

Description=Kubernetes Controller Manager

After=kube-apiserver.service

Requires=kube-apiserver.service

 

[Service]

EnvironmentFile=-/etc/kubernetes/controller-manager

ExecStart=/usr/bin/kube-controller-manager \

        $KUBE_MASTER \

        $KUBE_CONTROLLER_MANAGER_ARGS

Restart=on-failure

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

(2) Add the controller-manager configuration file

vim /etc/kubernetes/controller-manager, with the following content:

KUBE_MASTER="--master=http://192.168.1.10:8080"

KUBE_CONTROLLER_MANAGER_ARGS=" "

(3) Start kube-controller-manager:

systemctl daemon-reload

systemctl start kube-controller-manager.service

(4) Verify that kube-controller-manager started:

[root@server1 bin]# netstat -lntp | grep kube-controll

tcp6       0      0 :::10252     :::*    LISTEN      11546/kube-controll

4. Deploy the kube-scheduler service

(1) Edit /usr/lib/systemd/system/kube-scheduler.service

vim /usr/lib/systemd/system/kube-scheduler.service, with the following content:

[Unit]

Description=Kubernetes Scheduler

After=kube-apiserver.service

Requires=kube-apiserver.service

 

[Service]

User=root

EnvironmentFile=-/etc/kubernetes/scheduler

ExecStart=/usr/bin/kube-scheduler \

        $KUBE_MASTER \

        $KUBE_SCHEDULER_ARGS

Restart=on-failure

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

(2) Edit the kube-scheduler configuration file

vim /etc/kubernetes/scheduler, with the following content:

KUBE_MASTER="--master=http://192.168.1.10:8080"

KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/k8s-t/log/kubernetes --v=2"

(3) Start kube-scheduler:

systemctl daemon-reload

systemctl start kube-scheduler.service

(4) Verify it is running:

[root@server1 bin]# netstat -lntp | grep kube-schedule

tcp6       0      0 :::10251        :::*         LISTEN      11605/kube-schedule

5. Add kubernetes/server/bin to the default search path

sed -i '$a export PATH=$PATH:/root/kubernetes/server/bin/' /etc/profile

source /etc/profile

6. Check the status of the control-plane components:

[root@server1 bin]# kubectl get cs

NAME                 STATUS    MESSAGE              ERROR

scheduler            Healthy   ok                  

controller-manager   Healthy   ok                  

etcd-0               Healthy   {"health": "true"}  

 

At this point the k8s master node installation is complete.

One-command restart of all master services:

for i in etcd kube-apiserver kube-controller-manager kube-scheduler docker;do systemctl restart $i;done

====================================

 

Node installation:

Installing a node means copying kube-proxy and kubelet from kubernetes/server/bin to /usr/bin/, plus the flannel binary package.

1. Install the docker service offline

Unpack docker.tar.gz, then force-install the RPMs with rpm, ignoring dependencies:

tar zxf docker.tar.gz

cd docker

rpm -ivh *.rpm --nodeps --force

 

2. Modify the docker unit file:

vi /usr/lib/systemd/system/docker.service, with the following content:

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network-online.target firewalld.service

Wants=network-online.target

 

[Service]

Type=notify

# the default is not to use systemd for cgroups because the delegate issues still

# exists and systemd currently does not support the cgroup feature set required

# for containers run by docker

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead

# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.

# Only systemd 226 and above support this version.

#TasksMax=infinity

TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

# restart the docker process if it exits prematurely

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s

 

[Install]

WantedBy=multi-user.target

 

3. Start docker:

systemctl daemon-reload

systemctl start docker

4. Unpack the k8s binary package:

tar zxf kubernetes-server-linux-amd64.tar.gz

cd /root/kubernetes/server/bin/

cp kube-proxy kubelet /usr/bin/

5. Install the kube-proxy service

(1) Add the /usr/lib/systemd/system/kube-proxy.service file, with the following content:

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

 

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/proxy

ExecStart=/usr/bin/kube-proxy \

            $KUBE_LOGTOSTDERR \

            $KUBE_LOG_LEVEL \

            $KUBE_MASTER \

            $KUBE_PROXY_ARGS

Restart=on-failure

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

(2) Create the /etc/kubernetes directory:

mkdir -p /etc/kubernetes

(3) Add the /etc/kubernetes/proxy configuration file

vim /etc/kubernetes/proxy, with the following content:

KUBE_PROXY_ARGS=""

(4) Add the /etc/kubernetes/config file, with the following content:

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://192.168.1.10:8080"

(5) Start the kube-proxy service:

systemctl daemon-reload

systemctl start kube-proxy.service

(6) Check kube-proxy status:

[root@server2 bin]# netstat -lntp | grep kube-proxy

tcp         0      0 127.0.0.1:10249    0.0.0.0:*        LISTEN      11754/kube-proxy   

tcp6       0      0 :::10256                :::*               LISTEN      11754/kube-proxy  

6. Install the kubelet service

(1) Create the /usr/lib/systemd/system/kubelet.service file

vim /usr/lib/systemd/system/kubelet.service, with the following content:

[Unit]

Description=Kubernetes Kubelet Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service

 

[Service]

WorkingDirectory=/var/lib/kubelet

EnvironmentFile=-/etc/kubernetes/kubelet

ExecStart=/usr/bin/kubelet $KUBELET_ARGS

Restart=on-failure

KillMode=process

 

[Install]

WantedBy=multi-user.target

(2) Create the directory kubelet needs:

mkdir -p /var/lib/kubelet

(3) Create the kubelet configuration file

vim /etc/kubernetes/kubelet, with the following content:

KUBELET_HOSTNAME="--hostname-override=192.168.1.128"

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest"

KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"

(4) Add the /var/lib/kubelet/kubeconfig file

One more configuration file is required: as of 1.9.x the kubelet no longer uses KUBELET_API_SERVER to talk to the API server, but a separate YAML kubeconfig instead.

vim /var/lib/kubelet/kubeconfig, with the following content:

apiVersion: v1

kind: Config

users:

- name: kubelet

clusters:

- name: kubernetes

  cluster:

    server: http://192.168.1.10:8080

contexts:

- context:

    cluster: kubernetes

    user: kubelet

  name: service-account-context

current-context: service-account-context

(5) Start kubelet

Turn off the swap partition first: swapoff -a (otherwise kubelet fails to start).
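The swapoff step can be made a guarded no-op when swap is already off. A small sketch (the swap_off_if_needed helper is hypothetical; it takes the swaps-table path as an argument, normally /proc/swaps, whose first line is a header):

```shell
# Run swapoff -a only when the swaps table lists an active device
# below its header line; report what happened either way.
swap_off_if_needed() {
  if [ "$(wc -l < "$1")" -gt 1 ]; then
    swapoff -a 2>/dev/null && echo "swap disabled" \
      || echo "swap still on (swapoff needs root)"
  else
    echo "swap already off"
  fi
}
swap_off_if_needed /proc/swaps
```

This keeps node-setup scripts re-runnable without spurious failures.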

systemctl daemon-reload

systemctl start kubelet.service

(6) Check kubelet status:

[root@server2 ~]# netstat -lntp | grep kubelet

tcp        0      0 127.0.0.1:10248     0.0.0.0:*            LISTEN      15410/kubelet      

tcp6       0      0 :::10250                :::*                   LISTEN      15410/kubelet      

tcp6       0      0 :::10255                :::*                   LISTEN      15410/kubelet      

tcp6       0      0 :::4194                 :::*                    LISTEN      15410/kubelet      

 

7. Set up the flannel network

Flannel gives every docker container in the cluster a unique internal IP, so the docker0 bridges on different nodes can reach one another.

(1) Flannel only needs to be installed on the node machines, not on the etcd or master hosts. The flannel download address is: https://github.com/coreos/flannel/releases

(2) After downloading, unpack it: tar zxf flannel-v0.10.0-linux-amd64.tar.gz

Copying the flanneld and mk-docker-opts.sh binaries to /usr/bin/ completes the flannel installation:

cp flanneld mk-docker-opts.sh /usr/bin/

(3) Write a systemd unit file for flannel so it can be managed as a service

vi /usr/lib/systemd/system/flanneld.service, with the following content:
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
    
[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

(4) Write the flannel configuration file, at the /etc/sysconfig/flanneld path referenced above

vim /etc/sysconfig/flanneld, with the following content:

# flanneld configuration options

# etcd url location. Point this to the server where etcd runs

FLANNEL_ETCD="http://192.168.1.10:2379"

 

# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_KEY="/atomic.io/network/"

(5) Create /usr/bin/flanneld-start: vi /usr/bin/flanneld-start, with the following content:

#!/bin/sh
exec /usr/bin/flanneld \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \
        "$@"

 

Make it executable:

chmod +x /usr/bin/flanneld-start

(6) Define a flannel network in etcd (on the etcd node):

etcdctl mk /atomic.io/network/config '{"Network":"172.18.0.0/24"}'

(7) Stop docker and take down docker0

Because flannel will take over the docker0 network, it is best to shut down the docker0 interface and the docker service before starting flannel:

systemctl stop docker

After the docker service stops, kubelet stops as well and the master will show the node as unavailable. This is normal; once the flannel network is in place, start kubelet and docker again.

(8) Start the flannel service:

systemctl daemon-reload

systemctl start flanneld

(9) Set the docker0 bridge IP address:

mkdir -p /usr/lib/systemd/system/docker.service.d
cd /usr/lib/systemd/system/docker.service.d
 
mk-docker-opts.sh -i
 
source /run/flannel/subnet.env
 
vi /usr/lib/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker
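What happens in this step: flanneld writes /run/flannel/subnet.env, and mk-docker-opts.sh turns it into the options dockerd is started with (--bip sets docker0's address, --mtu matches the overlay). A self-contained illustration with hypothetical values (the subnet and MTU shown are examples, not what your cluster will assign):

```shell
# Example subnet.env as written by flanneld (values are hypothetical).
cat > subnet.env.demo <<'EOF'
FLANNEL_NETWORK=172.18.0.0/24
FLANNEL_SUBNET=172.18.46.1/28
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF
. ./subnet.env.demo
# Roughly the options docker ends up started with:
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```

So the drop-in file above simply makes docker pick up the flannel-assigned subnet on every start.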

(10) Restart the docker and kubelet services:

systemctl restart docker

systemctl restart kubelet

(11) Confirm that docker0 and flannel are on the same subnet:

ifconfig

 

This completes the flannel overlay network setup.

The docker0 bridges on the different nodes can now reach each other.

 

Etcd database operations

Deleting a key:

For example, if etcdctl mk /atomic.io/network/config '{"Network":"172.18.0.0/24"}' was written with a mistake, the value can be deleted and set again:

etcdctl rm /atomic.io/network/config

 

Then just set the value again. After that, delete the /run/flannel/subnet.env file on each node and restart flanneld to obtain the new IP subnet.
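The whole reset sequence, gathered in one place as a dry-run sketch (ETCDCTL defaults to echo here so the commands are only printed; unset it on a real cluster, and run the commented node-side lines on each node):

```shell
# Re-create the flannel network key, then reset each node.
# Dry-run by default: ETCDCTL prefixes every command with echo.
ETCDCTL=${ETCDCTL:-echo etcdctl}
$ETCDCTL rm /atomic.io/network/config
$ETCDCTL mk /atomic.io/network/config '{"Network":"172.18.0.0/24"}'
# Afterwards, on every node:
#   rm -f /run/flannel/subnet.env
#   systemctl restart flanneld
```

The dry-run prints the exact etcdctl commands, which is handy for reviewing the key path before touching a live cluster.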
