Kubernetes Installation and Deployment - Day 01

1. Preparing the Base Environment:

1.1. Installing Docker:

Docker's official site: https://www.docker.com/

1.1.1. Installing from the RPM package:

Official download address: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

Version 18.03 is used as the example here:

1.1.2. Running the installation:

[root@k8s-master1 k8s]# yum install -y docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm

1.1.3. Verifying the Docker version:

[root@k8s-master1 ~]# docker -v
Docker version 18.03.1-ce, build 9ee9f40

1.1.4. Starting the Docker service:

Start Docker: systemctl start docker
Enable start on boot: systemctl enable docker
Check the Docker service status: systemctl status docker

2. Deploying the etcd Cluster:

    etcd, developed by CoreOS, is a distributed key-value store based on the Raft algorithm. It is commonly used to hold the core data of distributed systems, guaranteeing consistency and high availability. Kubernetes uses etcd to store all of its runtime data, such as cluster IP allocations, master and node state, pod state, and service-discovery data.

    etcd的四個核心特色是:簡單:基於HTTP+JSON的API讓你用curl命令就能夠輕鬆使用。安全:可選SSL客戶認證機制。快速:每一個實例每秒支持一千次寫操做。可信:使用Raft算法充分實現了分佈式。算法

    This deployment uses three nodes. Testing shows that a three-server cluster tolerates at most one node failure; if two nodes go down, the entire etcd cluster becomes unavailable.
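This follows from Raft's majority-quorum rule: a cluster of n members needs floor(n/2)+1 members alive to commit writes, so it tolerates n minus that quorum in failures. A small sketch tabulating this for common cluster sizes:

```shell
# Raft fault tolerance: a cluster of n members needs a quorum of
# floor(n/2)+1, so it tolerates n - quorum = floor((n-1)/2) failures.
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

For n=3 this prints `tolerated_failures=1`, matching the behavior observed above; note that even-sized clusters add no tolerance over the next smaller odd size.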

2.1. Installing and deploying the etcd service on each node:

Download address: https://github.com/etcd-io/etcd/releases

2.1.1. Node deployment:

Upload the installation package to /usr/local/src on the server and extract the binary tarball.

2.1.2. Copying the executables into /usr/bin/:

etcd is the command that starts the server; etcdctl is the client command-line tool.

Create the Kubernetes directory layout:

[root@k8s-etcd1 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
[root@k8s-etcd1 ~]# cp etcdctl etcd /opt/kubernetes/bin/
[root@k8s-etcd1 ~]# ln -sv /opt/kubernetes/bin/* /usr/bin/

2.1.3. Creating the etcd service startup script:

[root@k8s-etcd1 ~]# vim /etc/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
# data directory
WorkingDirectory=/var/lib/etcd
# configuration file path (the leading "-" lets the unit start even if the file is missing)
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to the number of processors; points at the etcd server binary
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"

[Install]
WantedBy=multi-user.target

Note: comments in systemd unit files must be on their own lines, and only one Type= directive should be set (notify is the correct type for etcd).

 

2.1.4. Creating the etcd user and granting directory permissions:

[root@k8s-etcd1 ~]# mkdir /var/lib/etcd
[root@k8s-etcd1 ~]# useradd etcd -s /sbin/nologin
[root@k8s-etcd1 ~]# chown etcd:etcd /var/lib/etcd/
[root@k8s-etcd1 ~]# mkdir /etc/etcd

2.1.5. Editing the main configuration file etcd.conf:

This configuration uses HTTPS connections.

[root@k8s-etcd1 ~]# cat /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://10.172.160.250:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.172.160.250:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.172.160.250:2380"
# if you use a different ETCD_NAME (e.g. test),
# set the ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node1=https://10.172.160.250:2380,etcd-node2=https://10.51.50.234:2380,etcd-node3=https://10.170.185.97:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.172.160.250:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
[root@k8s-etcd1 ~]#

# Notes
ETCD_DATA_DIR                    # local data directory for this node
ETCD_LISTEN_PEER_URLS            # peer communication URL between cluster members; use this node's own IP
ETCD_LISTEN_CLIENT_URLS          # client access URL; use this node's own IP
ETCD_NAME                        # this node's name; must be unique within the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS # peer URL advertised to the cluster (static or dynamic discovery); this node's own IP
ETCD_ADVERTISE_CLIENT_URLS       # client URL advertised to the cluster; this node's own IP
ETCD_INITIAL_CLUSTER             # "node1-name=https://IP:port,node2-name=https://IP:port,node3-name=https://IP:port"; lists every node in the cluster
ETCD_INITIAL_CLUSTER_TOKEN       # token used when creating the cluster; must be identical on every node in the cluster
ETCD_INITIAL_CLUSTER_STATE       # "new" when creating a cluster; "existing" when joining one that already exists

# If ETCD_DATA_DIR is not set in the config file, a *.etcd directory is created under /var/lib by default.
# Once the etcd cluster has been created successfully, the initial state can be changed to "existing".
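Because each node's etcd.conf differs only in ETCD_NAME and the node IP, it can be templated rather than hand-edited per node. A minimal sketch (gen_etcd_conf is a hypothetical helper; it renders to a temporary file here rather than /opt/kubernetes/cfg, and omits the TLS settings for brevity):

```shell
# Hypothetical helper: render a per-node etcd.conf from the node name and IP.
# The member list mirrors the example above; adjust to your environment.
gen_etcd_conf() {
  local name="$1" ip="$2" out="$3"
  cat > "$out" <<EOF
ETCD_NAME="${name}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ip}:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
ETCD_INITIAL_CLUSTER="etcd-node1=https://10.172.160.250:2380,etcd-node2=https://10.51.50.234:2380,etcd-node3=https://10.170.185.97:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
EOF
}

# Example: render the config for node 2 into a temporary file.
tmp=$(mktemp)
gen_etcd_conf etcd-node2 10.51.50.234 "$tmp"
grep '^ETCD_NAME' "$tmp"
```

Only ETCD_INITIAL_CLUSTER and ETCD_INITIAL_CLUSTER_TOKEN stay identical across nodes; everything else is derived from the node's own identity.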

2.1.6. Starting etcd and verifying the cluster:

Start etcd, enable it at boot, and check its status:

systemctl start etcd && systemctl enable etcd && systemctl status etcd

List all etcd cluster members:

[root@k8s-etcd1 ~]# etcdctl --endpoints=https://10.172.160.250:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem member list

Check the cluster health:

[root@k8s-etcd1 ~]# etcdctl --endpoints=https://10.172.160.250:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health

2.1.7. Due to environment-specific constraints, my setup needs to be switched to HTTP mode:

Edit the configuration file, changing every https setting to http and deleting the certificate settings; remove the files under /var/lib/etcd/, then restart the service.

To test failover, stop the service on the 250 node. After it stops, traffic fails over to the 234 node, which shows the cluster is working correctly.

3. Kubernetes Cluster Deployment: the Master:

3.1. Deploying the kubernetes apiserver:

kube-apiserver exposes an HTTP REST interface and is the single entry point for create, read, update, and delete operations on every Kubernetes resource. kube-apiserver is stateless; the apiserver instances need no leader election, because all data the apiserver produces is stored directly in etcd, so the API can be called straight through a load balancer.

3.2. Downloading the Kubernetes packages:

Place the packages under /usr/local/src/:

[root@k8s-master1 src]# pwd
/usr/local/src
[root@k8s-master1 src]# tar xvf kubernetes-1.11.0-client-linux-amd64.tar.gz
[root@k8s-master1 src]# tar xvf  kubernetes-1.11.0-node-linux-amd64.tar.gz
[root@k8s-master1 src]# tar xvf  kubernetes-1.11.0-server-linux-amd64.tar.gz
[root@k8s-master1 src]# tar xvf kubernetes-1.11.0.tar.gz

Copy the binaries to the appropriate directories:

[root@k8s-master1 src]# cp kubernetes/server/bin/kube-apiserver /opt/kubernetes/bin/
[root@k8s-master1 src]# cp kubernetes/server/bin/kube-scheduler /usr/bin/
[root@k8s-master1 src]# cp kubernetes/server/bin/kube-controller-manager  /usr/bin/

3.三、準備工做

3.3.1. Creating the JSON file used to generate the CSR:

[root@k8s-master1 src]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.1.0.1",
    "192.168.100.101",
    "192.168.100.102",
    "192.168.100.112",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
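One entry worth noting: 10.1.0.1 is the ClusterIP that Kubernetes assigns to the built-in `kubernetes` service, i.e. the first usable address of the `--service-cluster-ip-range` (10.1.0.0/16) configured later for the apiserver, so it must appear in the certificate's hosts list. A quick sketch deriving it (valid for ranges whose network address ends below 255):

```shell
# Derive the "kubernetes" service ClusterIP: the first usable address
# of the service CIDR, which must be listed in the certificate's SANs.
SERVICE_CIDR="10.1.0.0/16"
network=$(echo "$SERVICE_CIDR" | cut -d/ -f1)   # 10.1.0.0
api_svc_ip=$(echo "$network" | awk -F. '{ printf "%d.%d.%d.%d", $1, $2, $3, $4 + 1 }')
echo "kubernetes service ClusterIP: $api_svc_ip"
```

If the apiserver certificate omits this IP, in-cluster clients that talk to `kubernetes.default` over HTTPS will fail TLS verification.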

Generate the certificate files:

[root@k8s-master1 src]#  cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Copy the certificates to each server:

[root@k8s-master1 /usr/local/src/ssl/master]# cp kubernetes*.pem /opt/kubernetes/ssl/
[root@k8s-master1 src]# bash /root/ssh.sh

Note: ssh.sh is a synchronization script that syncs files between the servers.

3.3.2. Creating the client token file used by the apiserver:

[root@k8s-master1 /usr/local/src/ssl/master]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9077bdc74eaffb83f672fe4c530af0d6
[root@k8s-master1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv   # on every master server
7b7918630245ac1b5221b26be11e6b85,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

(The token in the file above comes from an earlier run; write the token you just generated.)
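The two steps can be combined into one. A sketch that generates the token and writes the CSV (it writes to a local file for illustration; the guide's path is /opt/kubernetes/ssl/bootstrap-token.csv):

```shell
# Generate a random bootstrap token and write the CSV in one step.
# The kubelet-bootstrap user name and uid 10001 follow the example above.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
CSV=${CSV:-bootstrap-token.csv}   # illustrative local path
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > "$CSV"
cat "$CSV"
```

Generating and writing in one step avoids accidentally pasting a stale token into the file, which would make kubelet bootstrap authentication fail silently.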

3.3.3. Configuring authentication users and passwords:

vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2
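The file read by `--basic-auth-file` expects one `password,username,uid` entry per line. A small sketch that writes the same two users to a local file (the path here is illustrative; the guide uses /opt/kubernetes/ssl/basic-auth.csv) and sanity-checks the format:

```shell
# Write a basic-auth file (format: password,username,uid) and validate it.
AUTH=${AUTH:-basic-auth.csv}   # illustrative local path
cat > "$AUTH" <<'EOF'
admin,admin,1
readonly,readonly,2
EOF
# Every line must have exactly three comma-separated fields.
awk -F, 'NF != 3 { bad = 1 } END { exit bad }' "$AUTH" && echo "basic-auth.csv format OK"
```

Here the password and username happen to be identical ("admin"/"admin"); in a real deployment the first field should be a strong password.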

3.4. Deployment:

3.4.1. Creating the api-server startup script:

[root@k8s-master1 src]# cat  /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=192.168.100.101 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.100.105:2379,https://192.168.100.106:2379,https://192.168.100.107:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
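The `--service-cluster-ip-range` above (10.1.0.0/16) must not overlap the pod network (10.2.0.0/16, set via `--cluster-cidr` on the controller manager below), or service ClusterIPs would collide with pod IPs. A minimal sketch of an overlap check, hard-coded for the two /16 ranges used in this guide (a general tool such as ipcalc would handle arbitrary prefixes):

```shell
# Sanity-check that the service CIDR and the pod CIDR do not overlap.
# For two /16 ranges, comparing the first two octets is sufficient.
cidr_net16() {   # print the first two octets of a CIDR, e.g. "10.1"
  echo "$1" | cut -d/ -f1 | cut -d. -f1,2
}
SERVICE_CIDR="10.1.0.0/16"
POD_CIDR="10.2.0.0/16"
if [ "$(cidr_net16 "$SERVICE_CIDR")" = "$(cidr_net16 "$POD_CIDR")" ]; then
  echo "ERROR: service and pod CIDRs overlap"
else
  echo "CIDRs OK: ${SERVICE_CIDR} vs ${POD_CIDR}"
fi
```

Both ranges must also avoid the host network (192.168.100.0/24 in this guide) for the same reason.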

3.4.2. Starting and verifying the api-server:

[root@k8s-master1 src]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status  kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-05-29 11:05:15 CST; 19s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 1053 (kube-apiserver)
    Tasks: 14
   Memory: 258.0M
   CGroup: /system.slice/kube-apiserver.service
           └─1053 /opt/kubernetes/bin/kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestr...

May 29 11:05:03 k8s-master1.example.com systemd[1]: Starting Kubernetes API Server...
May 29 11:05:03 k8s-master1.example.com kube-apiserver[1053]: Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-ad...version.
May 29 11:05:03 k8s-master1.example.com kube-apiserver[1053]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.

Copy the startup script to the second server, change --bind-address to server 2's IP address, then restart server 2's api-server and verify it.

3.4.3. Configuring the Controller Manager service:

[root@k8s-master1 src]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Copy the startup binary:

[root@k8s-master1 src]# cp kubernetes/server/bin/kube-controller-manager /opt/kubernetes/bin/

scp the binary and the startup script to master2, then start and verify the service there.

3.4.4. Starting and verifying kube-controller-manager:

[root@k8s-master1 src]# systemctl restart kube-controller-manager &&  systemctl status  kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-05-29 11:14:24 CST; 7s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 1790 (kube-controller)
    Tasks: 8
   Memory: 11.0M
   CGroup: /system.slice/kube-controller-manager.service
           └─1790 /opt/kubernetes/bin/kube-controller-manager --address=127.0.0.1 --master=http://127.0.0.1:8080 --allocate-node-cidrs=true --service-cluster-ip-r...

May 29 11:14:24 k8s-master1.example.com systemd[1]: Started Kubernetes Controller Manager.

3.4.5. Deploying the Kubernetes Scheduler:

[root@k8s-master1 src]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

3.4.6. Preparing the startup binary:

[root@k8s-master1 src]# cp kubernetes/server/bin/kube-scheduler  /opt/kubernetes/bin/
[root@k8s-master1 src]# scp /opt/kubernetes/bin/kube-scheduler  192.168.100.102:/opt/kubernetes/bin/

3.4.7. Starting and verifying the service:

systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status  kube-scheduler
[root@k8s-master1 kube-master]# systemctl status  kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-05-29 11:14:05 CST; 8min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 1732 (kube-scheduler)
    Tasks: 13
   Memory: 8.5M
   CGroup: /system.slice/kube-scheduler.service
           └─1732 /opt/kubernetes/bin/kube-scheduler --address=127.0.0.1 --master=http://127.0.0.1:8080 --leader-elect=true --v=2 --logtostderr=false --log-dir=/o...

3.5. Deploying the kubectl command-line tool:

[root@k8s-master1 src]# cp kubernetes/client/bin/kubectl  /opt/kubernetes/bin/
[root@k8s-master1 src]# scp /opt/kubernetes/bin/kubectl  192.168.100.102:/opt/kubernetes/bin/
[root@k8s-master1 src]# ln -sv /opt/kubernetes/bin/kubectl  /usr/bin/

3.5.1. Creating the admin certificate signing request:

vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

3.5.2. Generating the admin certificate and private key:

[root@k8s-master1 src]#  cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem    -ca-key=/opt/kubernetes/ssl/ca-key.pem     -config=/opt/kubernetes/ssl/ca-config.json    -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@k8s-master1 src]# ll admin*
-rw-r--r-- 1 root root 1009 Jul 11 22:51 admin.csr
-rw-r--r-- 1 root root  229 Jul 11 22:50 admin-csr.json
-rw------- 1 root root 1679 Jul 11 22:51 admin-key.pem
-rw-r--r-- 1 root root 1399 Jul 11 22:51 admin.pem
[root@k8s-master1 src]# cp admin*.pem /opt/kubernetes/ssl/

3.5.3. Setting the cluster parameters:

master1:
kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://192.168.100.112:6443

master2:
kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://192.168.100.112:6443

3.5.4. Setting the client authentication parameters:

[root@k8s-master1 src]# kubectl config set-credentials admin \
    --client-certificate=/opt/kubernetes/ssl/admin.pem \
    --embed-certs=true \
    --client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.

[root@k8s-master2 src]# kubectl config set-credentials admin \
    --client-certificate=/opt/kubernetes/ssl/admin.pem \
    --embed-certs=true \
    --client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.

3.5.5. Setting the context parameters:

[root@k8s-master1 src]# kubectl config set-context kubernetes \
    --cluster=kubernetes --user=admin
Context "kubernetes" created.

[root@k8s-master2 src]#  kubectl config set-context kubernetes     --cluster=kubernetes     --user=admin
Context "kubernetes" created.

3.5.6. Setting the default context:

[root@k8s-master2 src]# kubectl config use-context kubernetes
Switched to context "kubernetes".

[root@k8s-master1 src]# kubectl config use-context kubernetes
Switched to context "kubernetes".

The master deployment is now complete; run a couple of commands to test it:

[root@k8s-master1 ~]# kubectl get pods
No resources found
[root@k8s-master1 ~]# kubectl get nodes
No resources found.