Manually Deploying a Kubernetes Cluster on CentOS

This article walks through every step of deploying a Kubernetes cluster from binaries, with TLS authentication enabled for the cluster.

Environment

In the steps below we will deploy a three-node Kubernetes 1.7.0 cluster on three physical machines running CentOS.

Roles are assigned as follows:

Image registry: 172.16.138.100, domain harbor.suixingpay.com. This is a private registry; replace it with a public registry or your own registry address.

Master: 172.16.138.171

Nodes: 172.16.138.172, 172.16.138.173

Note: the host 172.16.138.171 serves as both master and node. All certificate generation and all kubectl operations are performed on this node. Once a node has joined the Kubernetes cluster there is no need to log in to it again.

Preparation

1. Install Docker 1.17.03.2.ce on the node machines

2. Disable SELinux on all nodes

Permanent method – requires a server reboot

Edit /etc/selinux/config, set SELINUX=disabled, then reboot the server.
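A one-line equivalent (a sketch assuming the stock config file; reboot afterwards):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config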

Temporary method – set a runtime parameter

Run the command setenforce 0.

3. Prepare the Harbor private image registry

Reference: https://github.com/vmware/harbor

Tip

Because strict security mechanisms such as mutual TLS authentication and RBAC authorization are enabled, it is recommended to deploy from the beginning rather than jumping in partway; otherwise authentication or authorization may fail.

1. Create TLS certificates and keys

The Kubernetes components use TLS certificates to encrypt their communication. This document uses CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) and the other certificates.

The generated CA certificate and key files are:

  • ca-key.pem
  • ca.pem
  • kubernetes-key.pem
  • kubernetes.pem
  • kube-proxy.pem
  • kube-proxy-key.pem
  • admin.pem
  • admin-key.pem

The components that use these certificates are:

  • etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
  • kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
  • kubelet: uses ca.pem;
  • kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;
  • kubectl: uses ca.pem, admin-key.pem, admin.pem;
  • kube-controller-manager: uses ca-key.pem, ca.pem;

Note: all of the following operations are performed on the master node, i.e. the host 172.16.138.171. The certificates only need to be created once; when adding new nodes to the cluster later, simply copy the certificates under /etc/kubernetes/ to the new node.

Install CFSSL

Install directly from the binary release packages

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

Create the CA (Certificate Authority)

Create the CA config file

mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
# Create the following ca-config.json file based on the format of config.json
# The expiry is set to 87600h
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

Field descriptions

  • ca-config.json: multiple profiles can be defined, each specifying different expiry times, usage scenarios and other parameters; a particular profile is selected later when signing certificates;
  • signing: indicates the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
  • server auth: indicates a client can use this CA to verify certificates presented by a server;
  • client auth: indicates a server can use this CA to verify certificates presented by a client;

Create the CA certificate signing request

Create the ca-csr.json file with the following content:

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
    "ca": {
       "expiry": "87600h"
    }
}
  • "CN": Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name); browsers use this field to verify whether a website is legitimate;
  • "O": Organization. kube-apiserver extracts this field from the certificate and uses it as the group (Group) the requesting user belongs to;

Generate the CA certificate and private key

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Create the kubernetes certificate

Create the kubernetes certificate signing request file kubernetes-csr.json:

{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "172.16.138.100",
      "172.16.138.171",
      "172.16.138.172",
      "172.16.138.173",
      "10.254.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
  • If the hosts field is not empty, it must list the IPs or domain names authorized to use this certificate. Since this certificate is later used by the etcd cluster and the Kubernetes master, the list above includes the etcd cluster hosts, the Kubernetes master hosts, and the Kubernetes service IP (normally the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.254.0.1).
  • This is a minimal Kubernetes installation: one private image registry and a three-node cluster. The physical node IPs above can also be replaced with hostnames.

Generate the kubernetes certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

Create the admin certificate

Create the admin certificate signing request file admin-csr.json:

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
  • Later, kube-apiserver uses RBAC to authorize client requests (e.g. from kubelet, kube-proxy, Pods);
  • kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs;
  • O specifies that the certificate's Group is system:masters. When kubectl uses this certificate to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, access to all APIs is granted;

Note: this admin certificate will later be used to generate the administrator's kubeconfig file. Nowadays RBAC is generally recommended for role-based access control in Kubernetes; Kubernetes uses the CN field of the certificate as the User and the O field as the Group.

After the cluster has been set up, running kubectl get clusterrolebinding cluster-admin -o yaml shows that the subjects of the clusterrolebinding cluster-admin have kind Group and name system:masters, and that its roleRef object is the ClusterRole cluster-admin. In other words, any user or ServiceAccount in the system:masters Group has the cluster-admin role, which is why our kubectl commands have full administrative rights over the cluster.
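For reference, the relevant part of that binding typically looks like this (abbreviated; metadata such as resourceVersion and uid omitted):

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters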

Generate the admin certificate and private key:

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

Create the kube-proxy certificate

Create the kube-proxy certificate signing request file kube-proxy-csr.json:

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • CN specifies that the certificate's User is system:kube-proxy;
  • a predefined kube-apiserver RoleBinding binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs;

Generate the kube-proxy client certificate and private key
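Following the same pattern as the kubernetes and admin certificates above:

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem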

Verify the certificates

Using the kubernetes certificate as an example

$ openssl x509  -noout -text -in  kubernetes.pem
.......
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
        Validity
            Not Before: May  8 07:32:00 2018 GMT
            Not After : May  5 07:32:00 2028 GMT
        Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
.........   
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier: 
                E8:56:76:B4:91:C6:E2:62:BA:9D:31:48:30:B8:EA:B8:11:C9:24:A8
            X509v3 Authority Key Identifier: 
.........
   
  • Confirm that the Issuer field matches ca-csr.json;
  • Confirm that the Subject field matches kubernetes-csr.json;
  • Confirm that the X509v3 Subject Alternative Name field matches kubernetes-csr.json;
  • Confirm that the X509v3 Key Usage and Extended Key Usage fields match the kubernetes profile in ca-config.json;

Using the cfssl-certinfo command

cfssl-certinfo -cert kubernetes.pem
{
  "subject": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "System",
    "locality": "BeiJing",
    "province": "BeiJing",
    "names": [
      "CN",
      "BeiJing",
      "BeiJing",
      "k8s",
      "System",
      "kubernetes"
    ]
  },
  "issuer": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "System",
    "locality": "BeiJing",
    "province": "BeiJing",
    "names": [
      "CN",
      "BeiJing",
      "BeiJing",
      "k8s",
      "System",
      "kubernetes"
    ]
  },
  "serial_number": "149579304534773967289155011801948025930902452922",
  "sans": [
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "127.0.0.1",
    "172.16.138.100",
    "172.16.138.171",
    "172.16.138.172",
    "172.16.138.173",
    "10.254.0.1"
  ],
  "not_before": "2018-05-08T07:32:00Z",
  "not_after": "2028-05-05T07:32:00Z",
  "sigalg": "SHA256WithRSA",

Distribute the certificates

Copy the generated certificate and key files (with the .pem suffix) to the /etc/kubernetes/ssl directory on all machines for later use:

 mkdir -p /etc/kubernetes/ssl
 cp *.pem /etc/kubernetes/ssl

 

2. Install the kubectl command-line tool

Download kubectl

Note: download the package matching your Kubernetes version.

wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kube* /usr/bin/
chmod a+x /usr/bin/kube*

3. Create kubeconfig files

Create the TLS Bootstrapping Token

Token auth file

The token can be any 128-bit string and can be generated with a secure random number generator.

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Note: before proceeding, check the token.csv file and confirm that the ${BOOTSTRAP_TOKEN} environment variable has been replaced by its actual value.

BOOTSTRAP_TOKEN is written into the token.csv file used by kube-apiserver and into the bootstrap.kubeconfig file used by kubelet. If you regenerate BOOTSTRAP_TOKEN later, you need to:

  1. Update token.csv and distribute it to the /etc/kubernetes/ directory on all machines (master and nodes); distributing it to the nodes is not strictly required;
  2. Regenerate bootstrap.kubeconfig and distribute it to the /etc/kubernetes/ directory on all node machines;
  3. Restart the kube-apiserver and kubelet processes;
  4. Re-approve the kubelet CSR requests;
cp token.csv /etc/kubernetes/

Create the kubelet bootstrapping kubeconfig file

 cd /etc/kubernetes
 export KUBE_APISERVER="https://172.16.138.171:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
  • When --embed-certs is true, the certificate-authority certificate is embedded into the generated bootstrap.kubeconfig file;
  • No key or certificate is specified when setting the client authentication parameters; they are generated later by kube-apiserver;

Create the kube-proxy kubeconfig file

export KUBE_APISERVER="https://172.16.138.171:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • --embed-certs is true for both the cluster parameters and the client authentication parameters, so the contents of the files referenced by certificate-authority, client-certificate and client-key are written into the generated kube-proxy.kubeconfig file;
  • The CN of the kube-proxy.pem certificate is system:kube-proxy; a predefined kube-apiserver RoleBinding binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs;

Create the kubectl kubeconfig file

export KUBE_APISERVER="https://172.16.138.171:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

# Set the default context
kubectl config use-context kubernetes
  • The O field of the admin.pem certificate is system:masters; the predefined kube-apiserver RoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call the kube-apiserver APIs;
  • The generated kubeconfig is saved to the ~/.kube/config file;

Note: the ~/.kube/config file grants the highest level of access to the cluster; keep it safe.

 

Distribute the kubeconfig files

Distribute the two kubeconfig files to the /etc/kubernetes/ directory on all node machines:

cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/

4. Create a highly available etcd cluster

TLS authentication files

TLS certificates are needed to encrypt communication within the etcd cluster; here we reuse the kubernetes certificate created earlier:

cp ca.pem kubernetes-key.pem kubernetes.pem /etc/kubernetes/ssl
  • the hosts field of the kubernetes certificate must include the IPs of the three machines above, otherwise certificate verification will fail later;

Download the binaries

https://github.com/coreos/etcd/releases 頁面下載最新版本的二進制文件

wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
mv etcd-v3.1.5-linux-amd64/etcd* /usr/local/bin

Create the etcd systemd unit file

Create the file etcd.service in /usr/lib/systemd/system/ with the following content. Remember to replace the IP addresses with the host IPs of your own etcd cluster.

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://172.16.138.171:2380,infra2=https://172.16.138.172:2380,infra3=https://172.16.138.173:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • etcd's working directory and data directory are both /var/lib/etcd; this directory must be created before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
  • To secure communication, specify etcd's own certificate and key (cert-file and key-file), the peer-communication certificate, key and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file);
  • The hosts field of the kubernetes-csr.json used to create kubernetes.pem must include the IPs of all etcd nodes, otherwise certificate verification fails;
  • When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;

Environment variable file /etc/etcd/etcd.conf:

# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.138.171:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.138.171:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.138.171:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.138.171:2379"

This is the configuration for the 172.16.138.171 node. For the other two etcd nodes, simply change the IP addresses above to the corresponding node's IP and change ETCD_NAME to that node's name (infra1/2/3). An example for the second node follows.
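For example, on 172.16.138.172 the file would be:

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.138.172:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.138.172:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.138.172:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.138.172:2379"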

Start the etcd service

mv etcd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

Repeat the steps above on all etcd nodes until the etcd service is running on every machine.

Verify the service

Run the following command on any of the etcd nodes:

$ etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
2018-05-08 05:55:53.668852 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-05-08 05:55:53.670937 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member ab044f0f6d623edf is healthy: got healthy result from https://172.16.138.173:2379
member cf3528b42907470b is healthy: got healthy result from https://172.16.138.172:2379
member eab584ea44e13ad4 is healthy: got healthy result from https://172.16.138.171:2379
cluster is healthy

5. Deploy the master node

The Kubernetes master node comprises the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

For now these three components need to be deployed on the same machine.

  • The functions of kube-scheduler, kube-controller-manager and kube-apiserver are closely related;
  • Only one kube-scheduler and one kube-controller-manager process can be active at a time; if multiple instances run, a leader must be elected among them;

TLS certificate files

$ ls /etc/kubernetes/ssl
admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  kubernetes-key.pem  kubernetes.pem

Download the latest binaries

changelog下載 clientserver tar包 文件

The server tarball kubernetes-server-linux-amd64.tar.gz already contains the client (kubectl) binary, so there is no need to download kubernetes-client-linux-amd64.tar.gz separately.

wget https://dl.k8s.io/v1.7.16/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz

Copy the binaries to the target path

cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

Configure and start kube-apiserver

Create the kube-apiserver service unit file

The service unit file /usr/lib/systemd/system/kube-apiserver.service:

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The content of /etc/kubernetes/config is:

# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://172.16.138.171:8080"

This config file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy.

The apiserver config file /etc/kubernetes/apiserver:

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=test-001.jimmysong.io"
KUBE_API_ADDRESS="--advertise-address=172.16.138.171 --bind-address=172.16.138.171 --insecure-bind-address=172.16.138.171"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --experimental-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"

 

  • --experimental-bootstrap-token-auth: Bootstrap Token Authentication became a stable feature in 1.9 and the flag was renamed --enable-bootstrap-token-auth;
  • If you change --service-cluster-ip-range midway, you must delete the kubernetes service in the default namespace with kubectl delete service kubernetes; the system then recreates the service with an IP from the new range. Otherwise the apiserver log reports "the cluster IP x.x.x.x for service kubernetes/default is not within the service CIDR x.x.x.x/16; please recreate";
  • --authorization-mode=RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
  • kube-scheduler and kube-controller-manager are normally deployed on the same machine as kube-apiserver and talk to it over the insecure port;
  • kubelet, kube-proxy and kubectl are deployed on the other node machines; when they access kube-apiserver through the secure port they must first pass TLS certificate authentication and then RBAC authorization;
  • kube-proxy and kubectl pass RBAC authorization by carrying the relevant User and Group in the certificates they use;
  • If the kubelet TLS Bootstrap mechanism is used, the --kubelet-certificate-authority, --kubelet-client-certificate and --kubelet-client-key options must not be specified, otherwise kube-apiserver later fails to verify the kubelet certificate with "x509: certificate signed by unknown authority";
  • The --admission-control value must include ServiceAccount;
  • --bind-address must not be 127.0.0.1;
  • runtime-config is set to rbac.authorization.k8s.io/v1beta1, the apiVersion used at runtime;
  • --service-cluster-ip-range specifies the Service Cluster IP range; this range must not be routable;
  • By default Kubernetes objects are stored under the /registry path in etcd; this can be adjusted with the --etcd-prefix flag;
  • To expose an unauthenticated HTTP endpoint, add --insecure-port=8080 --insecure-bind-address=127.0.0.1. In production, never bind it to an address other than 127.0.0.1.

Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Configure and start kube-controller-manager

Create the kube-controller-manager service unit file

File path: /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Config file /etc/kubernetes/controller-manager:

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
  • --service-cluster-ip-range specifies the CIDR range for Services in the cluster; this network must not be routable between the nodes, and the value must match the same flag on kube-apiserver;
  • --cluster-signing-* specify the certificate and key used to sign the certificates and keys created for TLS BootStrap;
  • --root-ca-file is used to verify the kube-apiserver certificate; only when this flag is set is the CA certificate placed in the ServiceAccount of Pod containers;
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine;

Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

After starting each component, you can check the status of all components with kubectl get componentstatuses:

$  kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused   
controller-manager   Healthy     ok                                                                                             
etcd-2               Healthy     {"health": "true"}                                                                             
etcd-0               Healthy     {"health": "true"}                                                                             
etcd-1               Healthy     {"health": "true"}                                 

Configure and start kube-scheduler

Create the kube-scheduler service unit file

File path: /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Config file /etc/kubernetes/scheduler:

###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
  • --address must be 127.0.0.1, because kube-apiserver currently expects scheduler and controller-manager to run on the same machine;

Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

Verify master node functionality

$  kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   

6. Install the flannel network plugin

All node machines need a network plugin so that all Pods can join the same flat network; this section covers installing the flannel network plugin.

Unless you have specific version requirements, it is recommended to install flanneld directly with yum; the default installed version is flannel 0.7.1.

yum install -y flannel

Service unit file /usr/lib/systemd/system/flanneld.service:

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

The /etc/sysconfig/flanneld config file:

# Flanneld configuration options  
#
# # etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379"
#
# # etcd config key.  This is the configuration key that flannel queries
# # For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
#
# # Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

If the host has multiple NICs (e.g. in a Vagrant environment), you need to add the NIC used for external traffic to FLANNEL_OPTIONS, e.g. -iface=eth2, as in the sketch below.
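A minimal sketch, assuming eth2 is the external interface:

FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem -iface=eth2"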

Create the network configuration in etcd

Run the following commands to allocate the IP address range for Docker.

etcdctl --endpoints=https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network

etcdctl --endpoints=https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

If you want to use host-gw mode, simply change vxlan to host-gw; see the example below.
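That is, the same mk command with only the backend type changed:

etcdctl --endpoints=https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'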

Start flannel

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

Querying etcd now shows the following (ETCD_ENDPOINTS below stands for the three etcd client URLs):

$  etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets

/kube-centos/network/subnets/172.30.71.0-24
/kube-centos/network/subnets/172.30.16.0-24
/kube-centos/network/subnets/172.30.58.0-24

$  etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/config

{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

$ etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/subnets/172.30.14.0-24

{"PublicIP":"172.16.138.171","BackendType":"vxlan","BackendData":{"VtepMAC":"7e:0e:49:74:de:b3"}}

$  etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/subnets/172.30.16.0-24

{"PublicIP":"172.16.138.172","BackendType":"vxlan","BackendData":{"VtepMAC":"5a:ab:55:02:7f:96"}}

$ etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/subnets/172.30.58.0-24

{"PublicIP":"172.16.138.173","BackendType":"vxlan","BackendData":{"VtepMAC":"3a:37:7d:55:b7:77"}}

If you can see the output above, flannel has been installed successfully. The next step is to install and configure docker, kubelet and kube-proxy on the node machines.

7. Deploy the node machines

A Kubernetes node comprises the following components:

  • Flanneld: see the previous section
  • Docker 1.17.03: installing docker is straightforward and not covered here, but pay attention to the docker configuration.
  • kubelet: installed directly from the binary
  • kube-proxy: installed directly from the binary

Note: flannel must be installed on every node; it is optional on the master.

Overview of the steps

  1. Confirm that the flannel network plugin installed and configured in the previous step is running correctly
  2. Install and configure docker, then start it
  3. Install and configure kubelet and kube-proxy, then start them
  4. Verify

Directories and files

Double-check that the previous steps have created the following certificates and config files on all three nodes.

$ ls /etc/kubernetes/ssl
admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  kubernetes-key.pem  kubernetes.pem
$ ls /etc/kubernetes/
apiserver  bootstrap.kubeconfig  config  controller-manager  kubelet  kube-proxy.kubeconfig  proxy  scheduler  ssl  token.csv

Configure Docker

For flannel installed via yum

Edit the docker unit file /usr/lib/systemd/system/docker.service and add an environment-file entry:

EnvironmentFile=-/run/flannel/docker

The /run/flannel/docker file is generated automatically after flannel starts and contains the parameters docker needs at startup; it typically looks like the sketch below.
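A rough illustration of its content as written by mk-docker-opts.sh (the actual --bip subnet and MTU depend on what flannel leased on the node):

DOCKER_OPT_BIP="--bip=172.30.14.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.14.1/24 --ip-masq=true --mtu=1450"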

Start docker

After restarting docker you also need to restart kubelet. At this point you may hit a problem: kubelet fails to start with the error:

Mar 31 16:44:41 k8s-master kubelet[81047]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

This is caused by a mismatch between the cgroup drivers of kubelet and docker. kubelet has a --cgroup-driver flag that can be set to "cgroupfs" or "systemd".

--cgroup-driver string                                    Driver that the kubelet uses to manipulate cgroups on the host.  Possible values: 'cgroupfs', 'systemd' (default "cgroupfs")

In the docker unit file /usr/lib/systemd/system/docker.service, add --exec-opt native.cgroupdriver=systemd to ExecStart, roughly as shown below.
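A sketch of the relevant lines after the edits described above (the dockerd path is the package default and is an assumption here; $DOCKER_NETWORK_OPTIONS comes from the /run/flannel/docker file referenced earlier):

EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS --exec-opt native.cgroupdriver=systemd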

Install and configure kubelet

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. You must first grant the kubelet-bootstrap user from the bootstrap token file the system:node-bootstrapper cluster role, so that kubelet has permission to create certificate signing requests:

cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
  • --user=kubelet-bootstrap is the user name specified in the /etc/kubernetes/token.csv file; it is also written into /etc/kubernetes/bootstrap.kubeconfig;

Download the latest kubelet and kube-proxy binaries

Note: download the package matching your Kubernetes version.

 

wget https://dl.k8s.io/v1.7.16/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz
cp -r ./server/bin/{kube-proxy,kubelet} /usr/local/bin/

Create the kubelet service unit file

File location: /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

The kubelet config file is /etc/kubernetes/kubelet. Change the IP addresses in it to the IP address of each of your node machines.

Note: before starting kubelet, manually create the /var/lib/kubelet directory.

Below is the kubelet config file /etc/kubernetes/kubelet:

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=172.16.138.171"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.138.171"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
KUBELET_API_SERVER="--api-servers=http://172.16.138.171:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=harbor.suixingpay.com/kube/pause-amd64:3.0"
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
  • If kubelet is started via systemd, two extra flags are needed: --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
  • --experimental-bootstrap-kubeconfig was renamed --bootstrap-kubeconfig in 1.9
  • --address must not be 127.0.0.1, otherwise Pods later fail when accessing the kubelet API, because 127.0.0.1 from inside a Pod points to the Pod itself rather than to the kubelet;
  • If --hostname-override is set, kube-proxy must set it as well, otherwise the Node will not be found;
  • Configure --cgroup-driver as systemd rather than cgroupfs, otherwise kubelet will fail to start on CentOS (what actually matters is that the docker and kubelet cgroup driver settings match; it does not strictly have to be systemd).
  • --experimental-bootstrap-kubeconfig points to the bootstrap kubeconfig file; kubelet uses the user name and token in it to send a TLS Bootstrapping request to kube-apiserver;
  • After the administrator approves the CSR request, kubelet automatically creates the certificate and key files (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory and writes them into the --kubeconfig file;
  • It is recommended to specify the kube-apiserver address in the --kubeconfig file. If --api-servers is not specified, --require-kubeconfig must be set so that the kube-apiserver address is read from the config file; otherwise kubelet cannot find kube-apiserver after startup (the log reports that no API Server was found) and kubectl get nodes does not return the corresponding Node;
  • --cluster-dns specifies the Service IP of kubedns (it can be allocated in advance and assigned when the kubedns service is created later); --cluster-domain specifies the domain suffix; both parameters must be set together for them to take effect;
  • --cluster-domain sets the search domain in /etc/resolv.conf when Pods start. We initially configured it as cluster.local., which resolved service DNS names correctly but failed when resolving the FQDN pod names of headless services; changing it to cluster.local (dropping the trailing dot) fixed the problem.
  • The kubelet.kubeconfig file referenced by --kubeconfig=/etc/kubernetes/kubelet.kubeconfig does not exist before kubelet starts for the first time; as described below, it is generated automatically once the CSR request has been approved. If the ~/.kube/config file has already been generated on the node, you can copy it to this path and rename it kubelet.kubeconfig. All node machines can share the same kubelet.kubeconfig file, so newly added nodes join the cluster automatically without creating a CSR request. Likewise, on any host that can reach the cluster, kubectl --kubeconfig with the ~/.kube/config file passes authentication, because it already contains credentials identifying you as the admin user with full cluster permissions.
  • KUBELET_POD_INFRA_CONTAINER is the pause (infra) container image. I use my private registry address here; change it to your own image when deploying.

Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Approve the kubelet TLS certificate requests

When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; only after the request is approved will Kubernetes add the Node to the cluster.

View the pending CSR requests

$ kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-0bi8ZxaLgRc4fUV1sGSsG6II84MMlEg-4ttACLGq3AE   21s       kubelet-bootstrap   Pending
$ kubectl get nodes
No resources found.

Approve the CSR request

$ kubectl certificate approve node-csr-0bi8ZxaLgRc4fUV1sGSsG6II84MMlEg-4ttACLGq3AE
certificatesigningrequest "node-csr-0bi8ZxaLgRc4fUV1sGSsG6II84MMlEg-4ttACLGq3AE" approved
$ kubectl get nodes
NAME             STATUS    AGE       VERSION
172.16.138.171   Ready     6s        v1.7.16

The kubelet kubeconfig file and key pair are generated automatically

$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2284 Apr  7 02:07 /etc/kubernetes/kubelet.kubeconfig
$ ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1046 Apr  7 02:07 /etc/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root  227 Apr  7 02:04 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 1103 Apr  7 02:07 /etc/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1675 Apr  7 02:07 /etc/kubernetes/ssl/kubelet.key

If you renew the Kubernetes certificates but do not change token.csv, the node automatically rejoins the cluster after kubelet restarts, without sending a new certificate request and without running kubectl certificate approve on the master. The precondition is that you do not delete the /etc/kubernetes/ssl/kubelet* files or /etc/kubernetes/kubelet.kubeconfig on the node; otherwise kubelet will fail to start because it cannot find its certificates.

Note: if you see certificate-related errors when starting kubelet, there is a trick that can work around the problem: copy the master's ~/.kube/config file (generated automatically in the "Install the kubectl command-line tool" step) to /etc/kubernetes/kubelet.kubeconfig on the node. The kubelet then joins the cluster automatically after starting, without going through the CSR process.

Configure kube-proxy

Install conntrack

yum install -y conntrack-tools

Create the kube-proxy service unit file

File path: /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The kube-proxy config file /etc/kubernetes/proxy:

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=172.16.138.171 --hostname-override=172.16.138.171 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

 

  • The value of --hostname-override must match kubelet's, otherwise kube-proxy will not find the Node after starting and will not create any iptables rules;
  • kube-proxy uses --cluster-cidr to distinguish traffic inside the cluster from traffic outside; only when --cluster-cidr or --masquerade-all is specified does kube-proxy SNAT requests to Service IPs;
  • The config file specified by --kubeconfig embeds the kube-apiserver address, user name, certificate and key used for requests and authentication;
  • A predefined RoleBinding binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs;

Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

Verification

Create an nginx service to test whether the cluster works.

$  kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=index.tenxcloud.com/xjimmy/nginx:1.9.4  --port=80
deployment "nginx" created
$  kubectl expose deployment nginx --type=NodePort --name=example-service
service "example-service" exposed
$  kubectl describe svc example-service
Name:                   example-service
Namespace:              default
Labels:                 run=load-balancer-example
Annotations:            <none>
Selector:               run=load-balancer-example
Type:                   NodePort
IP:                     10.254.173.196
Port:                   <unset> 80/TCP
NodePort:               <unset> 31498/TCP
Endpoints:              172.17.0.2:80,172.17.0.3:80
Session Affinity:       None
Events:                 <none>
$  curl  10.254.173.196:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

8. Install the kubedns add-on

The official YAML files are in the kubernetes/cluster/addons/dns directory.

This add-on is deployed directly with Kubernetes; the official config files reference the following images:

gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1

I mirrored the images above and pushed them to my private registry:

harbor.suixingpay.com/kube1.6/k8s-dns-kube-dns-amd64:1.14.1
harbor.suixingpay.com/kube1.6/k8s-dns-sidecar-amd64:1.14.1
harbor.suixingpay.com/kube1.6/k8s-dns-dnsmasq-nanny-amd64:1.14.1

The YAML files below use the images from the private registry.

kubedns-cm.yaml  
kubedns-sa.yaml  
kubedns-controller.yaml  
kubedns-svc.yaml

The system-predefined RoleBinding

The predefined RoleBinding system:kube-dns binds the kube-dns ServiceAccount in the kube-system namespace to the system:kube-dns Role, which has permission to access the DNS-related kube-apiserver APIs:

$ kubectl get clusterrolebindings system:kube-dns -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-05-10T02:17:04Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-dns
  resourceVersion: "91"
  selfLink: /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings/system%3Akube-dns
  uid: 3b753e98-53f8-11e8-9a54-00505693535c
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-dns
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system

The Pods defined in kubedns-controller.yaml use the kube-dns ServiceAccount defined in kubedns-sa.yaml and therefore have permission to access the DNS-related kube-apiserver APIs.

Configure the kube-dns ServiceAccount

No configuration needed; for reference, the file is sketched below.
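The upstream kubedns-sa.yaml amounts to a single ServiceAccount definition, roughly:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile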

Configure the kube-dns Service

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  • spec.clusterIP = 10.254.0.2 explicitly sets the kube-dns Service IP; this IP must match the kubelet --cluster-dns value;

Configure the kube-dns Deployment

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# __MACHINE_GENERATED_WARNING__

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: harbor.suixingpay.com/kube1.6/k8s-dns-kube-dns-amd64:1.14.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        #__PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: harbor.suixingpay.com/kube1.6/k8s-dns-dnsmasq-nanny-amd64:1.14.1
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: harbor.suixingpay.com/kube1.6/k8s-dns-sidecar-amd64:1.14.1
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
  • It uses the kube-dns ServiceAccount, which already has a system RoleBinding and therefore has permission to access the DNS-related kube-apiserver APIs;

Apply all the definition files

$ pwd
/root/kubedns
$ ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml
$ kubectl create -f .
configmap "kube-dns" created
deployment "kube-dns" created
serviceaccount "kube-dns" created
service "kube-dns" created

Check kubedns functionality

Create a new Deployment

$ cat  my-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: index.tenxcloud.com/xjimmy/nginx:1.9.4
        ports:
        - containerPort: 80

Expose the Deployment to create the my-nginx service

$ kubectl expose deploy my-nginx
$ kubectl get services --all-namespaces |grep my-nginx
default       my-nginx          10.254.89.137    <none>        80/TCP          5s

Enter another Pod and check whether its /etc/resolv.conf contains the --cluster-dns and --cluster-domain configured for kubelet, and whether the service my-nginx resolves to its Cluster IP 10.254.89.137.

 $  kubectl get pods --all-namespaces
NAMESPACE     NAME                        READY     STATUS    RESTARTS   AGE
default       my-nginx-3466650801-bngns   1/1       Running   0          1h
default       my-nginx-3466650801-q8gmv   1/1       Running   0          1h
default       nginx-608366207-621q4       1/1       Running   0          22h
default       nginx-608366207-84z2w       1/1       Running   0          22h
kube-system   kube-dns-1041264494-l5lkl   3/3       Running   0          1h

$ kubectl get services --all-namespaces
NAMESPACE     NAME              CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
default       example-service   10.254.173.196   <nodes>       80:31498/TCP    20h
default       kubernetes        10.254.0.1       <none>        443/TCP         1d
default       my-nginx          10.254.89.137    <none>        80/TCP          3m
kube-system   kube-dns          10.254.0.2       <none>        53/UDP,53/TCP   6m
$ kubectl exec my-nginx-3466650801-bngns -i -t -- /bin/bash
root@my-nginx-3466650801-bngns:~# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@my-nginx-3466650801-bngns:~# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.89.137): 56 data bytes
^C--- my-nginx.default.svc.cluster.local ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
root@my-nginx-3466650801-bngns:~# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 56 data bytes
^C--- kubernetes.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@my-nginx-3466650801-bngns:~# ping example-service
PING example-service.default.svc.cluster.local (10.254.173.196): 56 data bytes
^C--- example-service.default.svc.cluster.local ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

The results show that service names resolve correctly.

Note: you cannot ping a ClusterIP directly. A ClusterIP is routed to the service endpoints via iptables, so the service can only be reached by combining the ClusterIP with a port, as the sketch below illustrates.
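A quick illustration (a sketch, run from inside one of the Pods above against the my-nginx ClusterIP shown earlier):

# ping 10.254.89.137 times out, but ClusterIP + port works:
curl -I http://10.254.89.137:80    # returns an HTTP 200 response from nginx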

9. Install the dashboard add-on

Official files: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

The files we use are:

$ ls *.yaml
dashboard-controller.yaml  dashboard-service.yaml dashboard-rbac.yaml

Because kube-apiserver has RBAC authorization enabled and the dashboard-controller.yaml in the official source tree does not define an authorized ServiceAccount, subsequent requests to the API server are rejected and the web UI shows:

Forbidden (403)

User "system:serviceaccount:kube-system:default" cannot list jobs.batch in the namespace "default". (get jobs.batch)

增長了一個dashboard-rbac.yaml文件,定義一個名爲 dashboard 的 ServiceAccount,而後將它和 Cluster Role view 綁定,以下:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard
subjects:
  - kind: ServiceAccount
    name: dashboard
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Then create it with kubectl apply -f dashboard-rbac.yaml.

Configure dashboard-service

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

Configure dashboard-controller

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: dashboard
      containers:
      - name: kubernetes-dashboard
        image: harbor.suixingpay.com/kube1.6/kubernetes-dashboard-amd64:v1.6.0
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

Apply all the definition files

$ pwd
/root/kubedashboard
$ ls *.yaml
dashboard-controller.yaml  dashboard-service.yaml
$ kubectl create -f  .
service "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created

Check the results

View the allocated NodePort

$ kubectl get services kubernetes-dashboard -n kube-system
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.254.166.88   <nodes>       80:31304/TCP   14s
  • NodePort 31304 is mapped to port 80 of the dashboard pod;

Check the controller

$ kubectl get deployment kubernetes-dashboard  -n kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            1           20s
$ kubectl get pods  -n kube-system | grep dashboard
kubernetes-dashboard-2428500324-34tbz   1/1       Running   0          25s

Access the dashboard

There are three ways:

  • The kubernetes-dashboard service exposes a NodePort, so the dashboard can be reached at http://NodeIP:nodePort
  • Through the API server (https on port 6443 or http on port 8080)
  • Through kubectl proxy

Access the dashboard through kubectl proxy

Start the proxy

$  kubectl proxy --address='172.16.138.171' --port=8086 --accept-hosts='^*$' 
Starting to serve on 172.16.138.171:8086
  • --accept-hosts must be specified, otherwise the browser shows "Unauthorized" when accessing the dashboard page;

Open http://172.16.138.171:8086/ui in a browser; it redirects automatically to http://172.16.138.171:8086/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/overview?namespace=default

Access the dashboard through the API server

Get the list of cluster service addresses

$  kubectl cluster-info
Kubernetes master is running at https://172.16.138.171:6443
KubeDNS is running at https://172.16.138.171:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://172.16.138.171:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Open https://172.16.138.171:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard in a browser (the browser prompts for certificate verification; since this goes over the encrypted channel, you need to import the certificate into your computer beforehand).

If you do not want to use https, you can access the insecure port 8080 directly: http://172.16.138.171:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

 

10. Install the heapster add-on

Prepare the YAML files

wget https://github.com/kubernetes/heapster/archive/v1.3.0.zip
unzip v1.3.0.zip
mv v1.3.0.zip heapster-1.3.0

Directory: heapster-1.3.0/deploy/kube-config/influxdb

$ cd heapster-1.3.0/deploy/kube-config/influxdb
$ ls *.yaml
grafana-deployment.yaml  grafana-service.yaml  heapster-deployment.yaml  heapster-service.yaml  influxdb-deployment.yaml  influxdb-service.yaml heapster-rbac.yaml

We created heapster's RBAC configuration heapster-rbac.yaml ourselves.

Configure grafana-deployment

grafana-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: harbor.suixingpay.com/kube1.6/heapster-grafana-amd64:v4.0.2
        ports:
          - containerPort: 3000
            protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GRAFANA_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}
  • If you will later access the grafana dashboard through kube-apiserver or kubectl proxy, GF_SERVER_ROOT_URL must be set to /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/, otherwise grafana later reports that the page http://172.16.138.171:8086/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/api/dashboards/home cannot be found;

Configure heapster-deployment

heapster-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      containers:
      - name: heapster
        image: harbor.suixingpay.com/kube1.6/heapster-amd64:v1.3.0-beta.1
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086

Configure influxdb-deployment

$ # Export the influxdb config file from the image
$ docker run --rm --entrypoint 'cat'  -ti lvanneo/heapster-influxdb-amd64:v1.1.1 /etc/config.toml >config.toml.orig
$ cp config.toml.orig config.toml
$ # Edit: enable the admin interface
$ vim config.toml
$ diff config.toml.orig config.toml
35c35
<   enabled = false
---
>   enabled = true
$ # Write the modified config into a ConfigMap object
$ kubectl create configmap influxdb-config --from-file=config.toml  -n kube-system
configmap "influxdb-config" created
$ # Mount the config file from the ConfigMap into the Pod to override the original config
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: harbor.suixingpay.com/kube1.6/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
        - mountPath: /etc/config.toml
          name: influxdb-config
      volumes:
      - name: influxdb-storage
        emptyDir: {}
      - name: influxdb-config
        configMap:
          name: influxdb-config

Configure the monitoring-influxdb Service

apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 8086
    targetPort: 8086
    name: http
  - port: 8083
    targetPort: 8083
    name: admin
  selector:
    k8s-app: influxdb
  • The Service type is NodePort, with an extra mapping for the admin port so that the influxdb admin UI can be accessed from a browser later;

Configure heapster-rbac

$  vim heapster-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
subjects:
  - kind: ServiceAccount
    name: heapster
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

 

Apply all the definition files

$ pwd
/root/heapster-1.3.0/deploy/kube-config/influxdb
$ ls *.yaml
grafana-service.yaml  heapster-rbac.yaml  influxdb-cm.yaml  influxdb-service.yaml
grafana-deployment.yaml  heapster-deployment.yaml  heapster-service.yaml  influxdb-deployment.yaml
$ kubectl create -f .
deployment "monitoring-grafana" created
service "monitoring-grafana" created
deployment "heapster" created
serviceaccount "heapster" created
clusterrolebinding "heapster" created
service "heapster" created
configmap "influxdb-config" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created

Check the results

Check the Deployments

$ kubectl get deployments -n kube-system | grep -E 'heapster|monitoring'
heapster               1         1         1            1           2m
monitoring-grafana     1         1         1            1           2m
monitoring-influxdb    1         1         1            1           2m

Check the Pods

$ kubectl get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-110704576-gpg8v                1/1       Running   0          2m
monitoring-grafana-2861879979-9z89f     1/1       Running   0          2m
monitoring-influxdb-1411048194-lzrpc    1/1       Running   0          2m