Installing a highly available Kubernetes 1.6.0 cluster on CentOS 7.4 (haproxy + keepalived, with TLS authentication enabled)

Preface

This document deploys the cluster in high-availability mode with haproxy + keepalived; lvs + keepalived would also work, and in principle the setup can be carried over to production as-is. The document was originally written for Kubernetes 1.6, but it applies equally to Kubernetes 1.8 and later; only a few places differ, and those places note the version requirement explicitly.

This series describes every step of deploying a Kubernetes cluster from binaries, rather than with automation tools such as kubeadm, with the cluster's TLS authentication enabled throughout. The installation was done on VMware virtual machines, but in principle it applies to any bare-metal, on-premise or public-cloud environment.

Throughout the deployment, the startup parameters of each component are listed in detail, configuration files are given, and their meaning and the problems you may run into are explained.

After finishing the deployment you will understand how the components interact, which lets you solve real problems quickly.

This document is therefore aimed at people who already have some Kubernetes background and want to learn the system's configuration and inner workings by deploying it step by step.

Cluster details

  • OS: CentOS Linux release 7.4.1708 (Core) 3.10.0-693.el7.x86_64
  • Kubernetes 1.6.0+ (the minimum required version is 1.6)
  • haproxy (installed with yum)
  • keepalived (installed with yum)
  • docker-1.13.1 or docker-ce-17.12.1 (installed with yum or from rpm)
  • etcd-3.2.15 (installed with yum)
  • flannel-0.7.1, vxlan or host-gw networking (installed with yum)
  • TLS-encrypted communication (all components: etcd, the Kubernetes masters and nodes)
  • RBAC authorization
  • kubelet TLS bootstrapping
  • kubedns, dashboard, heapster (influxdb, grafana) and EFK (elasticsearch, fluentd, kibana) cluster add-ons
  • Harbor private docker registry (deploy it yourself; Harbor ships an offline installer that is started with docker-compose)

Environment

The following steps deploy the highly available cluster on eight CentOS virtual machines.

Roles are assigned as follows:
keepalived1+haproxy1+etcd1: 192.168.223.201
keepalived2+haproxy2+etcd2: 192.168.223.202
keepalived3+haproxy3+etcd3: 192.168.223.203
Master1: 192.168.223.204
Master2: 192.168.223.205
Node1: 192.168.223.206
Node2: 192.168.223.207
docker hub (harbor): 192.168.223.208

VIP: 192.168.223.200
The cluster reaches kube-apiserver through this address

Note: here etcd and keepalived+haproxy share the same three hosts. In production it is better to run keepalived+haproxy on two dedicated hosts and etcd on three dedicated hosts.

Pre-installation preparation

  1. Disable SELinux on all nodes
Set SELINUX=disabled in /etc/selinux/config, then run setenforce 0
  2. Disable the firewalld firewall on all nodes
systemctl disable firewalld; systemctl stop firewalld;
  3. Install the Harbor private image registry on 192.168.223.208

Reference: https://github.com/vmware/harbor. All the docker images needed: https://pan.baidu.com/s/1YH6OCpmz8EiO1OlmmxLtfg password: k2mr. A sketch of the offline installation follows.
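
Harbor's offline installer bundles a docker-compose setup; a minimal sketch of installing it on 192.168.223.208, assuming the offline tarball has already been downloaded there (the file name and version below are placeholders, adjust to the release you use):

tar xzvf harbor-offline-installer-v1.x.x.tgz
cd harbor
# set "hostname = 192.168.223.208" in harbor.cfg before installing
./install.sh     # wraps docker-compose up -d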

Notes

  1. Because strict security mechanisms such as mutual TLS and RBAC authorization are enabled, it is best to deploy from the very beginning rather than jumping in halfway, otherwise authentication or authorization may fail!
  2. The deployment involves a lot of certificate handling; work through it patiently, and consult the explanations in the other chapters of this book for anything unclear.
  3. This procedure only builds a usable Kubernetes cluster; many things still need tuning. The heapster and EFK add-ons will not necessarily be used in a real production environment, but deploying them shows how applications are deployed onto the cluster

The actual deployment starts below


1. Creating TLS certificates and keys

Of all the steps in installing and configuring Kubernetes, this is the easiest one to get wrong and the hardest to troubleshoot, and it happens to come first. The first step is always the hardest; do not let it put you off.

The Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) and the other certificates;

The generated CA certificate and key files are:

  • ca-key.pem
  • ca.pem
  • kubernetes-key.pem
  • kubernetes.pem
  • kube-proxy.pem
  • kube-proxy-key.pem
  • admin.pem
  • admin-key.pem

The components use the certificates as follows:

  • etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
  • kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
  • kubelet: uses ca.pem;
  • kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;
  • kubectl: uses ca.pem, admin-key.pem, admin.pem;
  • kube-controller-manager: uses ca-key.pem, ca.pem

Note: all of the following is done on host 192.168.223.201 and the results are then distributed to every host in the cluster. The certificates only need to be created once; when adding a new node later, just copy the certificates under /etc/kubernetes/ to the new node.

Installing CFSSL

Install directly from the binary packages

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Creating the CA (Certificate Authority)

Create the CA config file

mkdir /root/ssl
cd /root/ssl
cat > ca-config.json << EOF
{
  "signing": { "default": { "expiry": "87600h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "87600h" } } } } EOF 
  • ca-config.json: multiple profiles can be defined, each with its own expiry, intended usage and so on; a specific profile is then selected when signing a certificate;
  • signing: means the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
  • server auth: means a client can use this CA to verify certificates presented by a server;
  • client auth: means a server can use this CA to verify certificates presented by a client;

Creating the CA certificate signing request

Create ca-csr.json with the following content:

cat > ca-csr.json << EOF
{
  "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" } ] } EOF 
  • "CN":Common Name,kube-apiserver 從證書中提取該字段做爲請求的用戶名 (User Name);瀏覽器使用該字段驗證網站是否合法;
  • "O":Organization,kube-apiserver 從證書中提取該字段做爲請求用戶所屬的組 (Group);

Generating the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Creating the kubernetes certificate

Create the kubernetes certificate signing request file kubernetes-csr.json:

cat > kubernetes-csr.json << EOF
{
    "CN": "kubernetes", "hosts": [ "127.0.0.1", "192.168.223.200", "192.168.223.201", "192.168.223.202", "192.168.223.203", "192.168.223.204", "192.168.223.205", "192.168.223.206", "192.168.223.207", "192.168.223.208", "10.254.0.1", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" } ] } EOF 
  • If the hosts field is not empty it must list the IPs or domain names that are authorized to use the certificate. Because this certificate is later used by both the etcd cluster and the Kubernetes masters, it lists the etcd host IPs, the Kubernetes master IPs and the kubernetes service IP (normally the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.254.0.1).
  • The node IPs above can also be replaced by host names.

Generating the kubernetes certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

Creating the admin certificate

Create the admin certificate signing request file admin-csr.json:

cat > admin-csr.json << EOF
{
  "CN": "admin", "hosts": [], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "system:masters", "OU": "System" } ] } EOF 
  • kube-apiserver later uses RBAC to authorize requests from clients (kubelet, kube-proxy, Pods, ...);
  • kube-apiserver predefines a number of RBAC RoleBindings; for example cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API;
  • O sets the certificate's Group to system:masters. When this certificate is used to access kube-apiserver, authentication succeeds because it is signed by the CA, and because its group is the pre-authorized system:masters it is granted access to all APIs;

Note: this admin certificate is later used to generate the administrator's kubeconfig file. RBAC is now the recommended way to control roles and permissions in Kubernetes; Kubernetes takes the certificate's CN field as the User and the O field as the Group.

After the cluster is up, running kubectl get clusterrolebinding cluster-admin -o yaml shows that the subjects of the clusterrolebinding cluster-admin have kind Group and name system:masters, and that the roleRef is the ClusterRole cluster-admin. In other words, any user or serviceAccount in the system:masters Group holds the cluster-admin role, which is why the kubectl command has full administrative rights over the whole cluster.

kubectl get clusterrolebinding cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-04-11T11:20:42Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "52"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: e61b97b2-1ea8-11e7-8cd7-f4e9d49f8ed0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters

Generating the admin certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

Creating the kube-proxy certificate

Create the kube-proxy certificate signing request file kube-proxy-csr.json:

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy", "hosts": [], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" } ] } EOF 
  • CN sets the certificate's User to system:kube-proxy;
  • The predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver Proxy-related APIs;

Generating the kube-proxy client certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

Verifying the certificates

Using the openssl command

openssl x509  -noout -text -in  kubernetes.pem
... Signature Algorithm: sha256WithRSAEncryption Issuer: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=Kubernetes Validity Not Before: Apr 5 05:36:00 2017 GMT Not After : Apr 5 05:36:00 2018 GMT Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes ... X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: DD:52:04:43:10:13:A9:29:24:17:3A:0E:D7:14:DB:36:F8:6C:E0:E0 X509v3 Authority Key Identifier: keyid:44:04:3B:60:BD:69:78:14:68:AF:A0:41:13:F6:17:07:13:63:58:CD X509v3 Subject Alternative Name: DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:192.168.223.200, IP Address:192.168.223.201, IP Address:192.168.223.202, IP Address:192.168.223.203, IP Address:192.168.223.204, IP Address:192.168.223.205, IP Address:192.168.223.206, IP Address:192.168.223.207, IP Address:192.168.223.208, IP Address:10.254.0.1 ... 
  • Confirm that the Issuer field matches ca-csr.json;
  • Confirm that the Subject field matches kubernetes-csr.json;
  • Confirm that the X509v3 Subject Alternative Name field matches kubernetes-csr.json;
  • Confirm that the X509v3 Key Usage and Extended Key Usage fields match the kubernetes profile in ca-config.json;

Using the cfssl-certinfo command

cfssl-certinfo -cert kubernetes.pem
...
{
  "subject": { "common_name": "kubernetes", "country": "CN", "organization": "k8s", "organizational_unit": "System", "locality": "BeiJing", "province": "BeiJing", "names": [ "CN", "BeiJing", "BeiJing", "k8s", "System", "kubernetes" ] }, "issuer": { "common_name": "Kubernetes", "country": "CN", "organization": "k8s", "organizational_unit": "System", "locality": "BeiJing", "province": "BeiJing", "names": [ "CN", "BeiJing", "BeiJing", "k8s", "System", "Kubernetes" ] }, "serial_number": "174360492872423263473151971632292895707129022309", "sans": [ "127.0.0.1", "192.168.223.200", "192.168.223.201", "192.168.223.202", "192.168.223.203", "192.168.223.204", "192.168.223.205", "192.168.223.206", "192.168.223.207", "192.168.223.208", "10.254.0.1", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "not_before": "2017-04-05T05:36:00Z", "not_after": "2018-04-05T05:36:00Z", "sigalg": "SHA256WithRSA", ... 

Distributing the certificates

Copy the generated certificate and key files (the *.pem files) to /etc/kubernetes/ssl on all machines for later use;

mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
ssh 192.168.223.202 "mkdir -p /etc/kubernetes/ssl"
ssh 192.168.223.203 "mkdir -p /etc/kubernetes/ssl"
ssh 192.168.223.204 "mkdir -p /etc/kubernetes/ssl"
ssh 192.168.223.205 "mkdir -p /etc/kubernetes/ssl"
ssh 192.168.223.206 "mkdir -p /etc/kubernetes/ssl"
ssh 192.168.223.207 "mkdir -p /etc/kubernetes/ssl"
scp *.pem 192.168.223.202:/etc/kubernetes/ssl
scp *.pem 192.168.223.203:/etc/kubernetes/ssl
scp *.pem 192.168.223.204:/etc/kubernetes/ssl
scp *.pem 192.168.223.205:/etc/kubernetes/ssl
scp *.pem 192.168.223.206:/etc/kubernetes/ssl
scp *.pem 192.168.223.207:/etc/kubernetes/ssl

2. Installing the kubectl command-line tool

In general it only needs to be installed on the two master hosts.

Download kubectl

Make sure you download the package matching your Kubernetes version.

wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kube* /usr/bin/
chmod a+x /usr/bin/kube*

Creating the kubectl kubeconfig file

export KUBE_APISERVER="https://192.168.223.200:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
# set client credentials
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
# set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
# set the default context
kubectl config use-context kubernetes
  • The admin.pem certificate's O field is system:masters; the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call the kube-apiserver APIs;
  • The generated kubeconfig is saved to ~/.kube/config;

Note: ~/.kube/config carries the highest privileges on the cluster, so keep it safe. If you need to use kubectl on a node machine, just copy this file over.

3. Creating kubeconfig files

kubelet, kube-proxy and the other processes on the Node machines must authenticate and be authorized when they talk to kube-apiserver on the masters;

Since version 1.4, kube-apiserver supports TLS bootstrapping, i.e. generating TLS client certificates on behalf of clients, so a certificate no longer has to be created for every client by hand; currently this only works for kubelet;

The following only needs to be run on master1 (192.168.223.204); the generated *.kubeconfig files can be copied straight into /etc/kubernetes on the other nodes.

Creating the TLS bootstrapping token

The token can be any string containing 128 bits of entropy, generated with a secure random number generator.

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Note: check token.csv and make sure the ${BOOTSTRAP_TOKEN} variable has been replaced by its real value. BOOTSTRAP_TOKEN is written into the token.csv file used by kube-apiserver and into the bootstrap.kubeconfig file used by kubelet. If you later regenerate BOOTSTRAP_TOKEN, you must:

update token.csv and distribute it to /etc/kubernetes/ on all machines (masters and nodes; the nodes are optional); regenerate bootstrap.kubeconfig and distribute it to /etc/kubernetes/ on all node machines; restart kube-apiserver and kubelet; and approve the kubelets' CSR requests again.

cp token.csv /etc/kubernetes/
scp token.csv 192.168.223.205:/etc/kubernetes/
scp token.csv 192.168.223.206:/etc/kubernetes/
scp token.csv 192.168.223.207:/etc/kubernetes/

Creating the kubelet bootstrapping kubeconfig file

cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.223.200:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
  • --embed-certs set to true embeds the certificate-authority certificate into the generated bootstrap.kubeconfig;
  • No key or certificate is given when setting the client credentials; they are generated later by kube-apiserver;

Creating the kube-proxy kubeconfig file

export KUBE_APISERVER="https://192.168.223.200:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# set client credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • --embed-certs is true for both the cluster and the credential parameters, so the contents of the files referenced by certificate-authority, client-certificate and client-key are embedded into the generated kube-proxy.kubeconfig;
  • The CN of kube-proxy.pem is system:kube-proxy; the predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver Proxy-related APIs;

Distributing the kubeconfig files

Copy the two kubeconfig files into /etc/kubernetes/ on all node machines

scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.223.205:/etc/kubernetes/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.223.206:/etc/kubernetes/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.223.207:/etc/kubernetes/

4. Creating the highly available etcd cluster

Kubernetes stores all of its data in etcd. This section deploys a three-node highly available etcd cluster on 192.168.223.201, 192.168.223.202 and 192.168.223.203.

TLS files

The etcd cluster needs TLS certificates for encrypted communication; here we reuse the kubernetes certificates created earlier

ls /etc/kubernetes/ssl/*.pem
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem 
  • The hosts field of the kubernetes certificate contains the IPs of these three machines; otherwise later certificate verification would fail;

Installing etcd

Download the latest binary release from the https://github.com/coreos/etcd/releases page

wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
mv etcd-v3.1.5-linux-amd64/etcd* /usr/sbin

Or install it directly with yum:

yum install etcd -y 
  • Installing with yum is recommended

Creating the etcd systemd unit file

vi /usr/lib/systemd/system/etcd.service with the content below. Replace the IP addresses with your own etcd hosts' IPs.

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/sbin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://192.168.223.201:2380,infra2=https://192.168.223.202:2380,infra3=https://192.168.223.203:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • etcd's working directory and data directory are /var/lib/etcd; create it before starting the service (mkdir -p /var/lib/etcd), otherwise the service fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
  • For secure communication you must specify etcd's certificate and key (cert-file, key-file), the peer certificate, key and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the client CA certificate (trusted-ca-file);
  • The hosts field of the kubernetes-csr.json used to create kubernetes.pem must contain all etcd node IPs, otherwise certificate verification fails;
  • When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;

Environment file vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.223.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.223.201:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.223.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.223.201:2379"

This is the configuration for node 192.168.223.201; for the other two etcd nodes simply change the IP addresses to the respective node's IP and set ETCD_NAME to infra2 or infra3 accordingly. A sketch for node 2 follows.
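
For example, a sketch of /etc/etcd/etcd.conf on the second node (192.168.223.202), changing only the IPs and ETCD_NAME relative to the file above:

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.223.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.223.202:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.223.202:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.223.202:2379"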

Starting the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

Repeat the steps above on the other etcd nodes until the etcd service is running on all three machines.

Verifying the service

etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
member 9a2ec640d25672e5 is healthy: got healthy result from https://192.168.223.201:2379
member bc6f27ae3be34308 is healthy: got healthy result from https://192.168.223.202:2379
member e5c92ea26c4edba0 is healthy: got healthy result from https://192.168.223.203:2379
cluster is healthy

When the last line reads cluster is healthy, the cluster service is working correctly.

5. Deploying haproxy + keepalived

This deploys a three-node highly available haproxy + keepalived cluster on 192.168.223.201, 192.168.223.202 and 192.168.223.203, with VIP 192.168.223.200

Installing haproxy + keepalived

yum install -y haproxy keepalived 

Note: install on all three haproxy + keepalived nodes

Configuring keepalived

Node 1, 192.168.223.201, configuration file vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        test@sina.com
    }
    notification_email_from admin@test.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}
vrrp_instance VI_1 {
    state MASTER           # in a master/backup setup, the backup servers use BACKUP
    interface ens33
    virtual_router_id 60
    priority 100           # backup servers use a value below 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.223.200/24
    }
    track_script {
        check_haproxy
    }
}

Node 2, 192.168.223.202, configuration file vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        test@sina.com
    }
    notification_email_from admin@test.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}
vrrp_instance VI_1 {
    state BACKUP           # backup server
    interface ens33
    virtual_router_id 60
    priority 90            # backup servers use a value below 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.223.200/24
    }
    track_script {
        check_haproxy
    }
}

Node 3, 192.168.223.203, configuration file vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        test@sina.com
    }
    notification_email_from admin@test.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}
vrrp_instance VI_1 {
    state BACKUP           # backup server
    interface ens33
    virtual_router_id 60
    priority 80            # backup servers use a value below 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.223.200/24
    }
    track_script {
        check_haproxy
    }
}

Health-check script vi /etc/keepalived/check_haproxy.sh

#!/bin/bash
flag=$(systemctl status haproxy &> /dev/null; echo $?)
if [[ $flag != 0 ]]; then
    echo "haproxy is down, close the keepalived"
    systemctl stop keepalived
fi

Modify the following part of the keepalived unit file vi /usr/lib/systemd/system/keepalived.service:

[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target haproxy.service
Requires=haproxy.service
  • The keepalived configuration is almost identical on the three hosts; only state differs (MASTER on the master node, BACKUP on the backups) and priority, which is highest on the master and decreases on each backup
  • The custom check script monitors the local haproxy service; if haproxy is down, keepalived on this host is stopped so that the VIP is released
  • Keepalived split-brain is not handled here; additional checks can be added to the script later

Configuring haproxy

The configuration is identical on all three nodes; configuration file vim /etc/haproxy/haproxy.cfg

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
    maxconn                 3000

listen stats
    mode http
    bind :10086
    stats enable
    stats uri /admin?stats
    stats auth admin:admin
    stats admin if TRUE

frontend k8s_http *:8080
    mode tcp
    maxconn 2000
    default_backend http_sri

backend http_sri
    balance roundrobin
    server s1 192.168.223.204:8080 check inter 10000 fall 2 rise 2 weight 1
    server s2 192.168.223.205:8080 check inter 10000 fall 2 rise 2 weight 1

frontend k8s_https *:6443
    mode tcp
    maxconn 2000
    default_backend https_sri

backend https_sri
    balance roundrobin
    server s1 192.168.223.204:6443 check inter 10000 fall 2 rise 2 weight 1
    server s2 192.168.223.205:6443 check inter 10000 fall 2 rise 2 weight 1
  • listen stats defines haproxy's own status page, where you can inspect haproxy's current state
  • frontend defines the ports etc. on which the proxy accepts traffic
  • backend defines the real backend servers

Starting haproxy + keepalived

Start them on all three nodes

systemctl daemon-reload
systemctl enable haproxy
systemctl enable keepalived
systemctl start haproxy
systemctl start keepalived

If there are no errors, the VIP 192.168.223.200 should now be bound to the ens33 interface on the master node 192.168.223.201

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:2b:74:46 brd ff:ff:ff:ff:ff:ff inet 192.168.223.201/24 brd 192.168.223.255 scope global ens33 valid_lft forever preferred_lft forever inet 192.168.223.200/24 scope global secondary ens33 valid_lft forever preferred_lft forever inet6 fe80::435e:5e98:6d14:6c40/64 scope link valid_lft forever preferred_lft forever 

6. Installing the flannel network plugin

All node machines need the network plugin so that all Pods join the same flat network; install it on the masters too if you also want to reach Pod IPs from the masters.

Installing flannel

Installing flanneld with yum is recommended unless you need a particular version; the version installed by default is flannel 0.7.1.

yum install -y flannel 

Service file vi /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Configuration file vi /etc/sysconfig/flanneld:

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.223.201:2379,https://192.168.223.202:2379,https://192.168.223.203:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

Note: if the host has multiple NICs, add the flag that selects the outbound interface, e.g. -iface=eth2, to FLANNEL_OPTIONS; a sketch follows.
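
A minimal sketch, assuming the outbound interface is called eth2 (adjust to your own NIC name):

FLANNEL_OPTIONS="-iface=eth2 -etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"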

Creating the network configuration in etcd

Run the following commands to allocate the IP range used by docker.

etcdctl --endpoints=https://192.168.223.201:2379,https://192.168.223.202:2379,https://192.168.223.203:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network
etcdctl --endpoints=https://192.168.223.201:2379,https://192.168.223.202:2379,https://192.168.223.203:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

To use host-gw mode, simply change vxlan to host-gw. In the original author's tests, network performance is somewhat better with host-gw. A sketch follows.
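
For reference, the same command with the host-gw backend (only the Backend.Type value changes):

etcdctl --endpoints=https://192.168.223.201:2379,https://192.168.223.202:2379,https://192.168.223.203:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'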

Starting the flannel service

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

Note: stop docker before starting flannel, and start docker again once flannel is up; a sketch of the ordering follows.
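
A sketch of that ordering on a node where docker is already running:

systemctl stop docker
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl start docker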

Querying etcd now shows the following:

etcdctl --endpoints=${ETCD_ENDPOINTS} \ --ca-file=/etc/kubernetes/ssl/ca.pem \ --cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ ls /kube-centos/network/subnets /kube-centos/network/subnets/172.30.14.0-24 /kube-centos/network/subnets/172.30.38.0-24 /kube-centos/network/subnets/172.30.46.0-24 /kube-centos/network/subnets/172.30.91.0-24 etcdctl --endpoints=${ETCD_ENDPOINTS} \ --ca-file=/etc/kubernetes/ssl/ca.pem \ --cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ get /kube-centos/network/config { "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } } etcdctl --endpoints=${ETCD_ENDPOINTS} \ --ca-file=/etc/kubernetes/ssl/ca.pem \ --cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ get /kube-centos/network/subnets/172.30.14.0-24 {"PublicIP":"192.168.223.204","BackendType":"vxlan","BackendData":{"VtepMAC":"56:27:7d:1c:08:22"}} etcdctl --endpoints=${ETCD_ENDPOINTS} \ --ca-file=/etc/kubernetes/ssl/ca.pem \ --cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ get /kube-centos/network/subnets/172.30.38.0-24 {"PublicIP":"192.168.223.205","BackendType":"vxlan","BackendData":{"VtepMAC":"12:82:83:59:cf:b8"}} etcdctl --endpoints=${ETCD_ENDPOINTS} \ --ca-file=/etc/kubernetes/ssl/ca.pem \ --cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ get /kube-centos/network/subnets/172.30.46.0-24 {"PublicIP":"192.168.223.206","BackendType":"vxlan","BackendData":{"VtepMAC":"e6:b2:fd:f6:66:96"}} etcdctl --endpoints=${ETCD_ENDPOINTS} \ --ca-file=/etc/kubernetes/ssl/ca.pem \ --cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ get /kube-centos/network/subnets/172.30.91.0-24 {"PublicIP":"192.168.223.207","BackendType":"vxlan","BackendData":{"VtepMAC":"e3:b1:43:f6:34:67"}} 

If you can see content like the above, flannel is installed and has allocated the Kubernetes subnets correctly

7. Deploying the master nodes

A Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
  • kube-scheduler, kube-controller-manager and kube-apiserver are tightly coupled;
  • Only one kube-scheduler and one kube-controller-manager process may be active at a time; when several run, a leader is chosen by election;
  • kube-apiserver is stateless and is made highly available with haproxy + keepalived

TLS certificate files

The pem certificate files below were already created in the "Creating TLS certificates and keys" step, and token.csv was created when the kubeconfig files were made. Let's check them again.

cd /etc/kubernetes/ssl
ls 
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem 

Downloading the binaries

There are two ways to download them; make sure you pick the matching Kubernetes version.

Option 1

Download the release tarball from the github release page, extract it, and run the download script

wget https://github.com/kubernetes/kubernetes/releases/download/v1.6.0/kubernetes.tar.gz
tar -xzvf kubernetes.tar.gz
cd kubernetes
./cluster/get-kube-binaries.sh

Option 2

Download the client or server tarball from the CHANGELOG page. The server tarball kubernetes-server-linux-amd64.tar.gz already contains the client (kubectl) binary, so there is no need to download kubernetes-client-linux-amd64.tar.gz separately;

# wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/bin/
chmod +x /usr/bin/kube*

Configuring and starting kube-apiserver

Create the kube-apiserver service unit file

Unit file vi /usr/lib/systemd/system/kube-apiserver.service:

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The content of vi /etc/kubernetes/config is:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://192.168.223.200:8080"

Note: this file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy. KUBE_MASTER is set to the VIP address

The apiserver configuration file vi /etc/kubernetes/apiserver contains:

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=sz-pg-oam-docker-test-001.tendcloud.com"
KUBE_API_ADDRESS="--advertise-address=0.0.0.0 --bind-address=0.0.0.0 --insecure-bind-address=0.0.0.0"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.223.201:2379,https://192.168.223.202:2379,https://192.168.223.203:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --experimental-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
  • --experimental-bootstrap-token-auth: bootstrap token authentication became a stable feature in 1.9 and the flag was renamed to --enable-bootstrap-token-auth
  • If you change --service-cluster-ip-range later, you must delete the kubernetes service in the default namespace (kubectl delete service kubernetes); the system then recreates it with an IP from the new range. Otherwise the apiserver logs the error "the cluster IP x.x.x.x for service kubernetes/default is not within the service CIDR x.x.x.x/16; please recreate"
  • --authorization-mode=RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
  • kube-scheduler and kube-controller-manager usually run on the same machine as kube-apiserver and talk to it over the insecure port;
  • kubelet, kube-proxy and kubectl run on other nodes; when they access kube-apiserver over the secure port they must first pass TLS authentication and then RBAC authorization;
  • kube-proxy and kubectl obtain RBAC authorization through the User and Group embedded in the certificates they use;
  • If kubelet TLS bootstrapping is used, --kubelet-certificate-authority, --kubelet-client-certificate and --kubelet-client-key must not be set, otherwise kube-apiserver later fails to verify the kubelet certificate with "x509: certificate signed by unknown authority";
  • --admission-control must include ServiceAccount;
  • runtime-config is set to rbac.authorization.k8s.io/v1beta1, the apiVersion used at runtime;
  • --service-cluster-ip-range specifies the Service cluster IP range, which must not be routable;
  • By default Kubernetes objects are stored under /registry in etcd; this can be changed with --etcd-prefix;
  • To expose an unauthenticated HTTP endpoint, add the two flags --insecure-port=8080 --insecure-bind-address=0.0.0.0.

Differences in Kubernetes 1.9

  • For a Kubernetes 1.9 cluster, set --authorization-mode=Node,RBAC in KUBE_API_ARGS to add the Node authorization mode, otherwise nodes cannot register.
  • --experimental-bootstrap-token-auth was removed in Kubernetes 1.9; the flag is now --enable-bootstrap-token-auth (a sketch of the change follows this list)
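A sketch of the corresponding edit to /etc/kubernetes/apiserver for 1.9 (everything else stays as in the 1.6 example above):

# Kubernetes 1.9 only: inside KUBE_API_ARGS replace
#   --authorization-mode=RBAC             with  --authorization-mode=Node,RBAC
#   --experimental-bootstrap-token-auth   with  --enable-bootstrap-token-auth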

Starting kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Configuring and starting kube-controller-manager

Create the kube-controller-manager service unit file

File path vi /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration file vi /etc/kubernetes/controller-manager

###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
  • --service-cluster-ip-range specifies the CIDR of cluster Services; it must not be routable between the nodes and must match the value passed to kube-apiserver;
  • --cluster-signing-* specify the certificate and key used to sign the certificates created for TLS bootstrapping;
  • --root-ca-file is used to verify the kube-apiserver certificate; when set, this CA certificate is placed into the Pods' ServiceAccounts;
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine;

Starting kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

After starting each component you can check the component states with kubectl get componentstatuses;

kubectl get componentstatuses
NAME                 STATUS      MESSAGE              ERROR 
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused   
controller-manager   Healthy     ok
etcd-2               Healthy     {"health": "true"}
etcd-0               Healthy     {"health": "true"}
etcd-1               Healthy     {"health": "true"}

Note: the scheduler has not been started yet, so this error is expected

Configuring and starting kube-scheduler

Create the kube-scheduler service unit file

File path vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration file vi /etc/kubernetes/scheduler

###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine;

Starting kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

Verifying the master node

kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

Note: the two master nodes are installed and configured in exactly the same way

8. Deploying the node machines

A Kubernetes node runs the following components:

  • Flanneld: see my earlier article on Flannel-based Kubernetes networking. TLS was not configured there, so the TLS options now have to be added to the service file; for the installation see the flannel section above.
  • Docker: installing docker itself is simple and is not covered in depth here, but pay attention to its configuration (the versions used below are docker 1.13.1 from yum or docker-ce 17.12.1 from rpm).
  • kubelet: installed directly from the binary
  • kube-proxy: installed directly from the binary

Note: flannel must be installed on every node; on the masters it is optional.

Step overview

  1. Confirm that the flannel network plugin installed in the previous section is up and running
  2. Install and configure docker, then start it
  3. Install and configure kubelet and kube-proxy, then start them
  4. Verify

Directories and files

Check once more that the previous steps have created the following certificates and configuration files on the node machines.

cd  /etc/kubernetes/ssl
ls
admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  kubernetes-key.pem  kubernetes.pem

ls /etc/kubernetes/
apiserver  bootstrap.kubeconfig  config controller-manager kubelet kube-proxy.kubeconfig  proxy  scheduler ssl token.csv 

Installing and configuring Docker

If you installed flannel with yum, you do not need the mk-docker-opts.sh step; see Docker Integration in the official Flannel documentation.

If you did not install flannel with yum, download the tar package for your version from the flannel github releases; it contains the mk-docker-opts.sh script. Because we installed with yum, this step is not needed here. The script generates Docker daemon options based on the flannel env file. After flanneld is started via systemctl, ./mk-docker-opts.sh -i runs automatically and produces the following two environment-variable files:

  • /run/flannel/subnet.env
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.46.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
  • /run/docker_opts.env
DOCKER_OPT_BIP="--bip=172.30.46.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"

Docker reads these environment files and uses their contents as daemon startup parameters.

Note: when installing the docker-ce-17.12.1.ce rpm, docker.service additionally needs $DOCKER_NETWORK_OPTIONS --exec-opt native.cgroupdriver=systemd

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS --exec-opt native.cgroupdriver=systemd 

Note: whichever way flannel was installed, the following step is mandatory.

flannel installed with yum

Edit the docker unit file vi /usr/lib/systemd/system/docker.service and add one environment-file line

EnvironmentFile=-/run/flannel/docker 

/run/flannel/docker is generated automatically when flannel starts and contains the parameters docker needs at startup; a sketch of its typical content follows.
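
For reference, a sketch of what /run/flannel/docker typically contains on this setup (the subnet and MTU values depend on the lease flannel received; compare the subnet.env example above):

DOCKER_NETWORK_OPTIONS=" --bip=172.30.46.1/24 --ip-masq=true --mtu=1450"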

flannel installed from binaries

Edit the docker unit file vi /usr/lib/systemd/system/docker.service and add the following environment-file lines:

EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env

These two files are the default locations where the mk-docker-opts.sh script writes the environment variables; docker must load them at startup in order to join the virtual network created by flannel.

So whichever way flannel was installed, adding all of the following to docker.service covers every case.

EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
EnvironmentFile=-/run/docker_opts.env

Docker itself can also be installed either with yum or from an rpm package

Option 1: install with yum

Version 1.13.1-53

yum install docker -y 

Then edit the OPTIONS parameter in vi /etc/sysconfig/docker as follows:

OPTIONS='--log-driver=json-file --signature-verification=false --insecure-registry 192.168.223.208:80'
# note: 192.168.223.208:80 is the harbor private registry

Edit vi /etc/sysconfig/docker-storage as follows:

DOCKER_STORAGE_OPTIONS="--storage-driver overlay " 

Set the docker pull mirror in vi /etc/docker/daemon.json

{ 
	"registry-mirrors":["https://registry.docker-cn.com"] } 

Edit vi /usr/lib/systemd/system/docker.service:

[Unit]
Description=Docker Application Container Engine Documentation=http://docs.docker.com After=network.target rhel-push-plugin.socket registries.service Wants=docker-storage-setup.service Requires=docker-cleanup.timer [Service] Type=notify NotifyAccess=all EnvironmentFile=-/run/containers/registries.conf EnvironmentFile=-/run/docker_opts.env EnvironmentFile=-/etc/sysconfig/docker-network EnvironmentFile=-/etc/sysconfig/docker-storage EnvironmentFile=-/etc/sysconfig/docker EnvironmentFile=-/run/flannel/subnet.env EnvironmentFile=-/run/docker_opts.env EnvironmentFile=-/run/flannel/docker Environment=GOTRACEBACK=crash Environment=DOCKER_HTTP_HOST_COMPAT=1 Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin ExecStart=/usr/bin/dockerd-current \ --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \ --default-runtime=docker-runc \ --exec-opt native.cgroupdriver=systemd \ --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \ --seccomp-profile=/etc/docker/seccomp.json \ $OPTIONS \ $DOCKER_STORAGE_OPTIONS \ $DOCKER_NETWORK_OPTIONS \ $ADD_REGISTRY \ $BLOCK_REGISTRY \ $INSECURE_REGISTRY \ $REGISTRIES ExecReload=/bin/kill -s HUP $MAINPID LimitNOFILE=1048576 LimitNPROC=1048576 LimitCORE=infinity TimeoutStartSec=0 Restart=on-abnormal MountFlags=slave KillMode=process [Install] WantedBy=multi-user.target 

Option 2: install from rpm

Version: ce-17.12.1

rpm -ivh docker-ce-17.12.1.ce-1.el7.centos.x86_64.rpm 

Then edit the OPTIONS parameter in vi /etc/sysconfig/docker as follows:

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--log-driver=json-file --insecure-registry 192.168.223.208:80'
# note: 192.168.223.208:80 is the harbor private registry; the -signature-verification=false option no longer exists in this version
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

# Do not add registries in this file anymore. Use /etc/containers/registries.conf
# from the atomic-registries package.
#
# On an SELinux system, if you remove the --selinux-enabled option, you
# also need to turn on the docker_transition_unconfined boolean.
# setsebool -P docker_transition_unconfined 1

# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp

# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.
# LOGROTATE=false

# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKERDBINARY=/usr/bin/dockerd-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest

Edit vi /etc/sysconfig/docker-storage as follows:

DOCKER_STORAGE_OPTIONS="--storage-driver overlay "

Set the docker pull mirror in vi /etc/docker/daemon.json

{ 
	"registry-mirrors":["https://registry.docker-cn.com"] } 

Edit vi /usr/lib/systemd/system/docker.service:

[Unit]
Description=Docker Application Container Engine Documentation=http://docs.docker.com After=network-online.target firewalld.service Wants=network-online.target [Service] Type=notify EnvironmentFile=-/etc/sysconfig/docker EnvironmentFile=-/etc/sysconfig/docker-storage EnvironmentFile=-/etc/sysconfig/docker-network EnvironmentFile=-/run/docker_opts.env EnvironmentFile=-/run/flannel/subnet.env EnvironmentFile=-/run/docker_opts.env EnvironmentFile=-/run/flannel/docker Environment=GOTRACEBACK=crash ExecStart=/usr/bin/dockerd $OPTIONS \ --exec-opt native.cgroupdriver=systemd \ $DOCKER_STORAGE_OPTIONS \ $DOCKER_NETWORK_OPTIONS \ $ADD_REGISTRY \ $BLOCK_REGISTRY \ $INSECURE_REGISTRY ExecReload=/bin/kill -s HUP $MAINPID LimitNOFILE=1048576 LimitNPROC=1048576 LimitCORE=infinity MountFlags=slave TimeoutStartSec=1min Delegate=yes KillMode=process Restart=on-failure StartLimitBurst=3 StartLimitInterval=60s [Install] WantedBy=multi-user.target 

Starting docker

systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker

Note: after restarting docker you must also restart kubelet. If kubelet then fails to start with the following error:

Mar 31 16:44:41 k8s_node1 kubelet[81047]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd" 

this means kubelet and docker are using different cgroup drivers; kubelet's --cgroup-driver flag can be set to either "cgroupfs" or "systemd".

--cgroup-driver string       Driver that the kubelet uses to manipulate cgroups on the host. Possible values: 'cgroupfs', 'systemd' (default "cgroupfs") 

Set --exec-opt native.cgroupdriver=systemd in the ExecStart of docker's unit file vi /usr/lib/systemd/system/docker.service, then restart docker.

Installing and configuring kubelet

Differences in Kubernetes 1.8

The mandatory change relative to a 1.6 cluster: for Kubernetes 1.8, swap must be disabled, otherwise kubelet fails to start. Comment out the swap entry in /etc/fstab; a sketch follows.
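
A minimal sketch of disabling swap, assuming the swap entry in /etc/fstab contains the word "swap":

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap line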

When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then is kubelet allowed to create certificate signing requests:

cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
  • --user=kubelet-bootstrap is the user name specified in /etc/kubernetes/token.csv, which was also written into /etc/kubernetes/bootstrap.kubeconfig;

Downloading the kubelet and kube-proxy binaries

Make sure you download the package matching your Kubernetes version.

wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-proxy,kubelet} /usr/bin/
chmod +x /usr/bin/kube*

Creating the kubelet service unit file

File location vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

The kubelet configuration file is /etc/kubernetes/kubelet; change the IP addresses in it to each node's own IP. Note: create /var/lib/kubelet before starting kubelet: mkdir -p /var/lib/kubelet

Kubelet configuration file vi /etc/kubernetes/kubelet:

Differences in Kubernetes 1.8

Changes relative to the 1.6 configuration:

  • For kubelet in a Kubernetes 1.8 cluster the KUBELET_API_SERVER setting is gone; the master address is taken from a kubeconfig file instead, so comment out KUBELET_API_SERVER.
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.223.206"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.223.206"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
KUBELET_API_SERVER="--api-servers=http://192.168.223.200:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod_infra_container_image=192.168.223.208:80/k8s/pause-amd64:v3.0"
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
  • When starting with the systemd cgroup driver, two extra flags are needed: --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
  • --experimental-bootstrap-kubeconfig became --bootstrap-kubeconfig in 1.9
  • --address must not be 127.0.0.1, otherwise Pods fail when calling kubelet's API, because 127.0.0.1 from inside a Pod points at the Pod itself rather than at kubelet;
  • If --hostname-override is set, kube-proxy must use the same value, otherwise the Node will not be found;
  • --cgroup-driver is set to systemd here rather than cgroupfs, otherwise kubelet fails to start on CentOS (what matters is that docker and kubelet use the same cgroup driver; it does not have to be systemd).
  • --experimental-bootstrap-kubeconfig points at the bootstrap kubeconfig file; kubelet uses the user name and token in it to send the TLS bootstrapping request to kube-apiserver;
  • After the administrator approves the CSR, kubelet creates the certificate and key (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory and writes them into the --kubeconfig file;
  • It is recommended to put the kube-apiserver address into the --kubeconfig file; if --api-servers is not set, --require-kubeconfig must be set so that the kube-apiserver address is read from the kubeconfig file, otherwise kubelet starts but cannot find the API server (the log says the API Server was not found) and kubectl get nodes returns no Node;
  • --cluster-dns is the kubedns Service IP (it can be allocated now and used later when the kubedns service is created); --cluster-domain is the domain suffix; both flags must be set for either to take effect;
  • --cluster-domain sets the search domain in a pod's /etc/resolv.conf. Originally we set it to cluster.local., which resolved Service DNS names fine but failed when resolving the FQDN pod names of headless services; changing it to cluster.local, i.e. dropping the trailing dot, solved the problem. See my other article on name and service resolution in Kubernetes.
  • The kubelet.kubeconfig file referenced by --kubeconfig=/etc/kubernetes/kubelet.kubeconfig does not exist before kubelet is started for the first time; as described below, it is generated automatically once the CSR is approved. If the node already has a ~/.kube/config file you can copy it to this path and rename it kubelet.kubeconfig; all nodes can share the same kubelet.kubeconfig, so newly added nodes join the cluster without another CSR request. Likewise, on any host that can reach the cluster, kubectl --kubeconfig with this ~/.kube/config file passes authentication, because the file carries credentials that identify you as the admin user with full cluster permissions.
  • KUBELET_POD_INFRA_CONTAINER is the pod infrastructure (pause) image; here I use my private registry address, so change it to your own image when deploying. Either the pod-infrastructure image or pause can be used; pod-infrastructure is built by Red Hat and is close to 80 MB, slow to pull, and it does not actually run anything of substance, so Google's pause image gcr.io/google_containers/pause-amd64:3.0, only a few hundred KB, is recommended.

Starting kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Approving the kubelet TLS certificate request

When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the Node only joins the cluster after the request has been approved.

Check the pending CSR requests

kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-2b308   4m        kubelet-bootstrap   Pending
kubectl get nodes
No resources found.

Approve the CSR request

kubectl certificate approve csr-2b308
certificatesigningrequest "csr-2b308" approved
kubectl get nodes
NAME              STATUS    AGE       VERSION
192.168.223.206   Ready     1m        v1.6.0

The kubelet kubeconfig file and key pair are generated automatically

ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2284 Apr  7 02:07 /etc/kubernetes/kubelet.kubeconfig
ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1046 Apr  7 02:07 /etc/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root  227 Apr  7 02:04 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 1103 Apr  7 02:07 /etc/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1675 Apr  7 02:07 /etc/kubernetes/ssl/kubelet.key

If you renew the Kubernetes certificates but do not change token.csv, the node rejoins the cluster automatically when kubelet restarts, without sending a new certificate request and without another kubectl certificate approve on the master. The precondition is that /etc/kubernetes/ssl/kubelet* and /etc/kubernetes/kubelet.kubeconfig on the node are not deleted; otherwise kubelet fails at startup because it cannot find its certificates.

Note: if kubelet reports certificate errors at startup, one trick is to copy the master's ~/.kube/config file (generated automatically in the "Installing the kubectl command-line tool" step) to /etc/kubernetes/kubelet.kubeconfig on the node; then no CSR is needed and the node joins the cluster as soon as kubelet starts. A sketch follows.
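
A sketch of that trick, run from a master (the node address is just an example, adjust to your own):

scp ~/.kube/config 192.168.223.206:/etc/kubernetes/kubelet.kubeconfig
# then restart kubelet on the node:
# systemctl restart kubelet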

Configuring kube-proxy

Install conntrack

yum install -y conntrack-tools 

Create the kube-proxy service unit file
File path vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

kube-proxy configuration file vi /etc/kubernetes/proxy

###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.223.206 --hostname-override=192.168.223.206 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"
  • --hostname-override must match kubelet's value, otherwise kube-proxy cannot find the Node after starting and will not create any iptables rules;
  • kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; it only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is set;
  • --kubeconfig points at a configuration file with the kube-apiserver address, user name, certificate and key embedded;
  • The predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver Proxy-related APIs;

Starting kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
  • Node2, 192.168.223.207, is installed the same way; just change the IPs in the corresponding configuration files to 192.168.223.207
  • To add a new node, copy the certificates from a master to the new host (/etc/kubernetes/bootstrap.kubeconfig, /etc/kubernetes/kube-proxy.kubeconfig and /etc/kubernetes/ssl/*.pem), install flanneld first, and then follow this chapter to join it to the cluster; a sketch follows this list.
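
A sketch of preparing such a new node from a master (the new node IP 192.168.223.209 is hypothetical, adjust to your environment):

ssh 192.168.223.209 "mkdir -p /etc/kubernetes/ssl"
scp /etc/kubernetes/ssl/*.pem 192.168.223.209:/etc/kubernetes/ssl/
scp /etc/kubernetes/bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig 192.168.223.209:/etc/kubernetes/
# then install flanneld, docker, kubelet and kube-proxy on the new node as described in this chapter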

Verification

Let's create an nginx service to check that the cluster works

kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=192.168.223.208:80/k8s/nginx:v1.9.4  --port=80
deployment "nginx" created

kubectl expose deployment nginx --type=NodePort --name=example-service
service "example-service" exposed

kubectl describe svc example-service
Name:            example-service
Namespace:        default
Labels:            run=load-balancer-example
Annotations:        <none> Selector: run=load-balancer-example Type: NodePort IP: 10.254.62.207 Port: <unset> 80/TCP NodePort: <unset> 32724/TCP Endpoints: 172.30.60.2:80,172.30.94.2:80 Session Affinity: None Events: <none> curl "10.254.62.207:80" <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 

Note: at this point pods on different nodes may not be able to reach each other; fix it as follows

# set iptables on all nodes
yum install iptables-services -y
systemctl disable iptables
systemctl stop iptables
modprobe ip_tables
iptables -P FORWARD ACCEPT
  • The nginx used in the test above, 192.168.223.208:80/k8s/nginx:v1.9.4, is an image from my private registry; replace it with your own nginx image when testing.
  • 10.254.62.207 is a cluster-internal address, reachable only from hosts running kube-proxy; requests to it are load-balanced
  • Both 192.168.223.206:32724 and 192.168.223.207:32724 serve the nginx page.
  • To delete the test service: kubectl delete deployment nginx; kubectl delete svc example-service

The base Kubernetes 1.6.0 cluster is now complete; the sections below install some common add-ons

9. Installing the kubedns add-on

This add-on needs the following official images:

gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1

Because these images cannot be pulled from behind the Great Firewall, I use copies from my own private registry instead

192.168.223.208:80/k8s/k8s-dns-kube-dns-amd64:v1.14.1
192.168.223.208:80/k8s/k8s-dns-dnsmasq-nanny-amd64:v1.14.1
192.168.223.208:80/k8s/k8s-dns-sidecar-amd64:v1.14.1

The yaml configuration files needed

kubedns-cm.yaml kubedns-sa.yaml kubedns-controller.yaml kubedns-svc.yaml 

The predefined system RoleBinding

The predefined RoleBinding system:kube-dns binds the kube-dns ServiceAccount in the kube-system namespace to the system:kube-dns Role, which has access to the DNS-related kube-apiserver APIs;

kubectl get clusterrolebindings system:kube-dns -o yaml apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata:  annotations: rbac.authorization.kubernetes.io/autoupdate: "true"  creationTimestamp: 2017-04-11T11:20:42Z  labels: kubernetes.io/bootstrapping: rbac-defaults  name: system:kube-dns  resourceVersion: "58"  selfLink: /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindingssystem%3Akube-dns  uid: e61f4d92-1ea8-11e7-8cd7-f4e9d49f8ed0 roleRef:  apiGroup: rbac.authorization.k8s.io  kind: ClusterRole  name: system:kube-dns subjects: - kind: ServiceAccount  name: kube-dns  namespace: kube-system 
  • The Pods defined in kubedns-controller.yaml use the kube-dns ServiceAccount defined in kubedns-sa.yaml and therefore have access to the DNS-related kube-apiserver APIs.

Configuring the kube-dns ServiceAccount

yaml file vi kubedns-sa.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

YAML file: vi kubedns-cm.yaml

# Copyright 2016 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. apiVersion: v1 kind: ConfigMap metadata:  name: kube-dns  namespace: kube-system  labels: addonmanager.kubernetes.io/mode: EnsureExists 

Configure the kube-dns Deployment

YAML file: vi kubedns-controller.yaml

# Copyright 2016 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml # in sync with this file. # __MACHINE_GENERATED_WARNING__ apiVersion: extensions/v1beta1 kind: Deployment metadata:  name: kube-dns  namespace: kube-system  labels:  k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: # replicas: not specified here: # 1. In order to make Addon Manager do not reconcile this replicas parameter. # 2. Default is 1. # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.  strategy:  rollingUpdate:  maxSurge: 10%  maxUnavailable: 0  selector:  matchLabels:  k8s-app: kube-dns  template:  metadata:  labels:  k8s-app: kube-dns  annotations: scheduler.alpha.kubernetes.io/critical-pod: ''  spec:  tolerations:  - key: "CriticalAddonsOnly"  operator: "Exists"  volumes:  - name: kube-dns-config  configMap:  name: kube-dns  optional: true  containers:  - name: kubedns  image: 192.168.223.208:80/k8s/k8s-dns-kube-dns-amd64:v1.14.1  resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it.  limits:  memory: 170Mi  requests:  cpu: 100m  memory: 70Mi  livenessProbe:  httpGet:  path: /healthcheck/kubedns  port: 10054  scheme: HTTP  initialDelaySeconds: 60  timeoutSeconds: 5  successThreshold: 1  failureThreshold: 5  readinessProbe:  httpGet:  path: /readiness  port: 8081  scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available.  initialDelaySeconds: 3  timeoutSeconds: 5  args:  - --domain=cluster.local.  
- --dns-port=10053  - --config-dir=/kube-dns-config  - --v=2 #__PILLAR__FEDERATIONS__DOMAIN__MAP__  env:  - name: PROMETHEUS_PORT  value: "10055"  ports:  - containerPort: 10053  name: dns-local  protocol: UDP  - containerPort: 10053  name: dns-tcp-local  protocol: TCP  - containerPort: 10055  name: metrics  protocol: TCP  volumeMounts:  - name: kube-dns-config  mountPath: /kube-dns-config  - name: dnsmasq  image: 192.168.223.208:80/k8s/k8s-dns-dnsmasq-nanny-amd64:v1.14.1  livenessProbe:  httpGet:  path: /healthcheck/dnsmasq  port: 10054  scheme: HTTP  initialDelaySeconds: 60  timeoutSeconds: 5  successThreshold: 1  failureThreshold: 5  args:  - -v=2  - -logtostderr  - -configDir=/etc/k8s/dns/dnsmasq-nanny  - -restartDnsmasq=true  - --  - -k  - --cache-size=1000  - --log-facility=-  - --server=/cluster.local./127.0.0.1#10053  - --server=/in-addr.arpa/127.0.0.1#10053  - --server=/ip6.arpa/127.0.0.1#10053  ports:  - containerPort: 53  name: dns  protocol: UDP  - containerPort: 53  name: dns-tcp  protocol: TCP # see: https://github.com/kubernetes/kubernetes/issues/29055 for details  resources:  requests:  cpu: 150m  memory: 20Mi  volumeMounts:  - name: kube-dns-config  mountPath: /etc/k8s/dns/dnsmasq-nanny  - name: sidecar  image: 192.168.223.208:80/k8s/k8s-dns-sidecar-amd64:v1.14.1  livenessProbe:  httpGet:  path: /metrics  port: 10054  scheme: HTTP  initialDelaySeconds: 60  timeoutSeconds: 5  successThreshold: 1  failureThreshold: 5  args:  - --v=2  - --logtostderr  - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A  - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A  ports:  - containerPort: 10054  name: metrics  protocol: TCP  resources:  requests:  memory: 20Mi  cpu: 10m  dnsPolicy: Default # Don't use cluster DNS.  serviceAccountName: kube-dns 
  • It uses the kube-dns ServiceAccount for which the system has already created the RoleBinding, so it has permission to access the DNS-related APIs of kube-apiserver.

Configure the kube-dns Service

YAML file: vi kubedns-svc.yaml

# Copyright 2016 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # __MACHINE_GENERATED_WARNING__ apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.254.0.2 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP 
  • spec.clusterIP = 10.254.0.2 explicitly sets the kube-dns Service IP; this IP must match the value of the kubelet --cluster-dns parameter.

Apply all the definition files

ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml

kubectl create -f .

Check that kubedns works

Create a new Deployment

cat > my-nginx.yaml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: 192.168.223.208:80/k8s/nginx:v1.9.4
        ports:
        - containerPort: 80
EOF

kubectl create -f my-nginx.yaml

Expose the Deployment to create the my-nginx service

kubectl expose deploy my-nginx

kubectl get services --all-namespaces |grep my-nginx
default       my-nginx   10.254.179.239   <none>    80/TCP    42m

Enter one of the pods behind the my-nginx service created by kubernetes

kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
my-nginx-1108742923-1bpml   1/1       Running   0          1m
my-nginx-1108742923-44dp8   1/1       Running   0          1m

kubectl exec -it my-nginx-1108742923-1bpml /bin/bash
root@my-nginx-1108742923-1bpml:~# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@my-nginx-1108742923-1bpml:~# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.54.162): 56 data bytes
^C--- my-nginx.default.svc.cluster.local ping statistics ---
11 packets transmitted, 0 packets received, 100% packet loss
root@my-nginx-1108742923-1bpml:~# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 56 data bytes
^C--- kubernetes.default.svc.cluster.local ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
root@my-nginx-1108742923-1bpml:~# ping kube-dns.kube-system.svc.cluster.local
PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 56 data bytes
^C--- kube-dns.kube-system.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@my-nginx-1108742923-1bpml:~#

The output shows that service names resolve correctly.

Note: pinging a ClusterIP directly does not work; iptables routes the ClusterIP to the service endpoints, so a service can only be reached through the ClusterIP together with the service port.
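Another quick DNS check, a minimal sketch assuming a busybox image is available in your registry (the image name and tag below are hypothetical; substitute your own):

# run a one-off pod and resolve the kubernetes service name through kube-dns
kubectl run -it dns-test --rm --restart=Never \
  --image=192.168.223.208:80/k8s/busybox:latest -- nslookup kubernetes.default
# expected: the name resolves via nameserver 10.254.0.2 to the ClusterIP 10.254.0.1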

10. Install the dashboard add-on

Note: this document installs kubernetes dashboard v1.6.0. I ran into a problem while installing the dashboard: if you install v1.6.3 directly, the CPU/memory metric graphs provided by the Heapster add-on installed later do not work; if you install v1.6.0 first, then install the Heapster add-on, and finally upgrade to v1.6.3, the problem does not occur.

Official files directory: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

Image required:

192.168.223.208:80/k8s/kubernetes-dashboard-amd64:v1.6.0 

The YAML files used are:

ls *.yaml
dashboard-controller.yaml  dashboard-service.yaml  dashboard-rbac.yaml

Configure the dashboard ServiceAccount

File: vi dashboard-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard
subjects:
  - kind: ServiceAccount
    name: dashboard
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Configure dashboard-controller

File: vi dashboard-controller.yaml

apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: serviceAccountName: dashboard containers: - name: kubernetes-dashboard image: 192.168.223.208:80/k8s/kubernetes-dashboard-amd64:v1.6.0 resources: limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi ports: - containerPort: 9090 livenessProbe: httpGet: path: / port: 9090 initialDelaySeconds: 30 timeoutSeconds: 30 tolerations: - key: "CriticalAddonsOnly" operator: "Exists" 

Configure dashboard-service

File: vi dashboard-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
  • The service type is NodePort, so the dashboard can be reached from outside the cluster at nodeIP:nodePort.

Apply all the definition files

kubectl create -f .
service "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created

Check the results

Check the allocated NodePort

kubectl get services kubernetes-dashboard -n kube-system
NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.254.224.130   <nodes>       80:30312/TCP   25s
  • NodePort 30312 maps to port 80 of the dashboard pod; see the quick check below.
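A quick sanity check from the command line, a minimal sketch assuming the NodePort shown above (30312) and one of the node IPs used in this document:

# the dashboard should answer on any node's IP at the allocated NodePort
curl -I http://192.168.223.206:30312/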

Check the controller

kubectl get deployment kubernetes-dashboard  -n kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            1           3m

kubectl get pods -n kube-system | grep dashboard
kubernetes-dashboard-1339745653-pmn6z   1/1       Running   0          4m

Access the dashboard

There are three ways:

  • The kubernetes-dashboard service exposes a NodePort, so the dashboard can be accessed at http://NodeIP:nodePort
  • Through the API server (https on port 6443 or http on port 8080)
  • Through kubectl proxy

Access the dashboard through kubectl proxy

Start the proxy

kubectl proxy --address='192.168.223.204' --port=8086 --accept-hosts='^*$'
Starting to serve on 192.168.223.204:8086
  • The --accept-hosts option is required; otherwise the browser gets an "Unauthorized" message when opening the dashboard page.

Open http://192.168.223.204:8086/ui in a browser; it automatically redirects to http://192.168.223.204:8086/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default

Access the dashboard through the API server

Get the list of cluster service URLs

kubectl cluster-info
Kubernetes master is running at https://192.168.223.200:6443
KubeDNS is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Open https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard in a browser (the browser will prompt for certificate verification; because access goes through an encrypted channel, you need to import a certificate into your computer beforehand when using this method). This is a pitfall I hit here: accessing the dashboard through kube-apiserver returned User "system:anonymous" cannot proxy services in the namespace "kube-system". #5. The fix is as follows: import a certificate. Convert the generated admin.pem certificate:

cd /etc/kubernetes/ssl
openssl pkcs12 -export -in admin.pem -out admin.p12 -inkey admin-key.pem 

Import the generated admin.p12 certificate into your computer; remember the password you set when exporting it, because you will need it again when importing.
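If you prefer to verify access from the command line instead of a browser, a minimal sketch using the admin client certificate generated earlier (run from /etc/kubernetes/ssl):

# authenticate to kube-apiserver with the admin client certificate and fetch the dashboard page
curl --cacert ca.pem --cert admin.pem --key admin-key.pem \
  https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/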

If you do not want to use https, you can access the insecure port 8080 directly: http://192.168.223.200:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Because the Heapster add-on is not installed yet, the dashboard cannot display CPU, memory and other metric graphs for Pods and Nodes.

Update the dashboard to v1.6.3

The dashboard image for Kubernetes 1.6 has reached v1.6.3, and it can be updated as follows. Change the image version in dashboard-controller.yaml from v1.6.0 to v1.6.3:

image: 192.168.223.208:80/k8s/kubernetes-dashboard-amd64:v1.6.3 

Then run:

kubectl apply -f dashboard-controller.yaml 

Watching the dashboard Pod's status shows:

kubectl get pods --all-namespaces|grep dashboard
kubernetes-dashboard-215087767-2jsgd    0/1       Pending             0         0s
kubernetes-dashboard-3966630548-0jj1j   1/1       Terminating         0         1d
kubernetes-dashboard-215087767-2jsgd    0/1       Pending             0         0s
kubernetes-dashboard-3966630548-0jj1j   1/1       Terminating         0         1d
kubernetes-dashboard-215087767-2jsgd    0/1       ContainerCreating   0         0s
kubernetes-dashboard-3966630548-0jj1j   0/1       Terminating         0         1d
kubernetes-dashboard-3966630548-0jj1j   0/1       Terminating         0         1d
kubernetes-dashboard-215087767-2jsgd    1/1       Running             0         6s
kubernetes-dashboard-3966630548-0jj1j   0/1       Terminating         0         1d
kubernetes-dashboard-3966630548-0jj1j   0/1       Terminating         0         1d
kubernetes-dashboard-3966630548-0jj1j   0/1       Terminating         0         1d

The new Pod has started and the old Pod has been terminated.
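Alternatively, a minimal sketch that waits for the rollout to finish instead of watching the pod list by hand:

# blocks until the new dashboard replica set is fully rolled out
kubectl rollout status deployment/kubernetes-dashboard -n kube-system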

The dashboard URL does not change. Reopen http://192.168.223.200:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard and you can see that the new UI supports Chinese. The biggest change in the new version is an entry point into containers (similar to an ssh terminal), so you can work inside a container directly from the page; a search box has also been added.

11. Install the heapster add-on

This add-on requires the following images:

192.168.223.208:80/k8s/heapster-amd64:v1.4.3
192.168.223.208:80/k8s/heapster-influxdb-amd64:v1.1.1
192.168.223.208:80/k8s/heapster-grafana-amd64:v4.0.2

YAML files required:

ls *.yaml
grafana-deployment.yaml  grafana-service.yaml  heapster-deployment.yaml  heapster-rbac.yaml
heapster-service.yaml    influxdb-cm.yaml      influxdb-deployment.yaml  influxdb-service.yaml

Configure heapster-deployment

File: vi heapster-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
subjects:
  - kind: ServiceAccount
    name: heapster
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

File: vi heapster-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: 192.168.223.208:80/k8s/heapster-amd64:v1.4.3
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086

File: vi heapster-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

Configure grafana-deployment

File: vi grafana-deployment.yaml

apiVersion: extensions/v1beta1 kind: Deployment metadata:  name: monitoring-grafana  namespace: kube-system spec:  replicas: 1  template:  metadata:  labels:  task: monitoring  k8s-app: grafana  spec:  containers:  - name: grafana  image: 192.168.223.208:80/k8s/heapster-grafana-amd64:v4.0.2  ports:  - containerPort: 3000  protocol: TCP  volumeMounts:  - mountPath: /var  name: grafana-storage  env:  - name: INFLUXDB_HOST  value: monitoring-influxdb  - name: GRAFANA_PORT  value: "3000" # The following env variables are required to make Grafana accessible via # the kubernetes api-server proxy. On production clusters, we recommend # removing these env variables, setup auth for grafana, and expose the grafana # service using a LoadBalancer or a public IP.  - name: GF_AUTH_BASIC_ENABLED  value: "false"  - name: GF_AUTH_ANONYMOUS_ENABLED  value: "true"  - name: GF_AUTH_ANONYMOUS_ORG_ROLE  value: Admin  - name: GF_SERVER_ROOT_URL # If you're only using the API Server proxy, set this value instead:  value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ #value: /  volumes:  - name: grafana-storage  emptyDir: {} 
  • If you will later access the grafana dashboard through kube-apiserver or kubectl proxy, GF_SERVER_ROOT_URL must be set to /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/; otherwise grafana complains that the page http://192.168.223.200:8086/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/api/dashboards/home cannot be found.

File: vi grafana-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

Configure influxdb-deployment

File: vi influxdb-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: influxdb-config
  namespace: kube-system
data:
  config.toml: |
    reporting-disabled = true bind-address = ":8088" [meta] dir = "/data/meta" retention-autocreate = true logging-enabled = true [data] dir = "/data/data" wal-dir = "/data/wal" query-log-enabled = true cache-max-memory-size = 1073741824 cache-snapshot-memory-size = 26214400 cache-snapshot-write-cold-duration = "10m0s" compact-full-write-cold-duration = "4h0m0s" max-series-per-database = 1000000 max-values-per-tag = 100000 trace-logging-enabled = false [coordinator] write-timeout = "10s" max-concurrent-queries = 0 query-timeout = "0s" log-queries-after = "0s" max-select-point = 0 max-select-series = 0 max-select-buckets = 0 [retention] enabled = true check-interval = "30m0s" [admin] enabled = true bind-address = ":8083" https-enabled = false https-certificate = "/etc/ssl/influxdb.pem" [shard-precreation] enabled = true check-interval = "10m0s" advance-period = "30m0s" [monitor] store-enabled = true store-database = "_internal" store-interval = "10s" [subscriber] enabled = true http-timeout = "30s" insecure-skip-verify = false ca-certs = "" write-concurrency = 40 write-buffer-size = 1000 [http] enabled = true bind-address = ":8086" auth-enabled = false log-enabled = true write-tracing = false pprof-enabled = false https-enabled = false https-certificate = "/etc/ssl/influxdb.pem" https-private-key = "" max-row-limit = 10000 max-connection-limit = 0 shared-secret = "" realm = "InfluxDB" unix-socket-enabled = false bind-socket = "/var/run/influxdb.sock" [[graphite]] enabled = false bind-address = ":2003" database = "graphite" retention-policy = "" protocol = "tcp" batch-size = 5000 batch-pending = 10 batch-timeout = "1s" consistency-level = "one" separator = "." udp-read-buffer = 0 [[collectd]] enabled = false bind-address = ":25826" database = "collectd" retention-policy = "" batch-size = 5000 batch-pending = 10 batch-timeout = "10s" read-buffer = 0 typesdb = "/usr/share/collectd/types.db" [[opentsdb]] enabled = false bind-address = ":4242" database = "opentsdb" retention-policy = "" consistency-level = "one" tls-enabled = false certificate = "/etc/ssl/influxdb.pem" batch-size = 1000 batch-pending = 5 batch-timeout = "1s" log-point-errors = true [[udp]] enabled = false bind-address = ":8089" database = "udp" retention-policy = "" batch-size = 5000 batch-pending = 10 read-buffer = 0 batch-timeout = "1s" precision = "" [continuous_queries] log-enabled = true enabled = true run-interval = "1s" 

File: vi influxdb-deployment.yaml

apiVersion: extensions/v1beta1 kind: Deployment metadata:  name: monitoring-influxdb  namespace: kube-system spec:  replicas: 1  template:  metadata:  labels:  task: monitoring  k8s-app: influxdb  spec:  containers:  - name: influxdb  image: 192.168.223.208:80/k8s/heapster-influxdb-amd64:v1.1.1  volumeMounts:  - mountPath: /data  name: influxdb-storage  - mountPath: /etc/  name: influxdb-config  volumes:  - name: influxdb-storage  emptyDir: {}  - name: influxdb-config  configMap:  name: influxdb-config 

File: vi influxdb-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 8086
    targetPort: 8086
    name: http
  - port: 8083
    targetPort: 8083
    name: admin
  selector:
    k8s-app: influxdb
  • The service type is NodePort, and an extra admin port mapping is added so the influxdb admin UI can be opened in a browser later.

Apply all the definition files

ls *.yaml
grafana-service.yaml     heapster-rbac.yaml        influxdb-cm.yaml       influxdb-service.yaml
grafana-deployment.yaml  heapster-deployment.yaml  heapster-service.yaml  influxdb-deployment.yaml

kubectl create -f  .
deployment "monitoring-grafana" created service "monitoring-grafana" created deployment "heapster" created serviceaccount "heapster" created clusterrolebinding "heapster" created service "heapster" created configmap "influxdb-config" created deployment "monitoring-influxdb" created service "monitoring-influxdb" created 

Check the results

Check the Deployments

kubectl get deployments -n kube-system | grep -E 'heapster|monitoring'
heapster              1         1         1            1           2m
monitoring-grafana    1         1         1            1           2m
monitoring-influxdb   1         1         1            1           2m

Check the Pods

kubectl get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-110704576-gpg8v               1/1       Running   0          2m
monitoring-grafana-2861879979-9z89f    1/1       Running   0          2m
monitoring-influxdb-1411048194-lzrpc   1/1       Running   0          2m

Now the kubernetes dashboard UI can display CPU, memory, load and other utilization graphs for Nodes and Pods; a quick command-line check follows.
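As an additional check, once the pods above are Running, kubectl should be able to read resource metrics through heapster (a minimal sketch; the output values will differ in your cluster):

# resource usage served by heapster via the apiserver
kubectl top node
kubectl top pod -n kube-system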

Access grafana

  1. Access through kube-apiserver: get the monitoring-grafana service URL
kubectl cluster-info
Kubernetes master is running at https://192.168.223.200:6443
Heapster is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Open in a browser: http://192.168.223.200:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana

  2. Access through kubectl proxy: create a proxy
kubectl proxy --address='192.168.223.204' --port=8086 --accept-hosts='^*$'
Starting to serve on 192.168.223.204:8086

Open in a browser: http://192.168.223.204:8086/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana

Note: after Grafana is installed we use the default template configuration, so the namespace dropdown on the page only lists default and kube-system. This does not mean the other namespaces are not being monitored; they are simply not shown in Grafana yet. In Templating, set the Data source of the namespace variable to influxdb-datasource and Refresh to "On Dashboard Load", save the settings and refresh the browser, and the other namespace options will appear.

Access the influxdb admin UI

Get the NodePort mapped to influxdb http port 8086

kubectl get svc -n kube-system|grep influxdb
monitoring-influxdb    10.254.22.46    <nodes>       8086:32299/TCP,8083:30269/TCP   9m

Access the influxdb admin UI through the insecure port of kube-apiserver: http://192.168.223.200:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:8083/. In the page's "Connection Settings", enter a node IP in Host and the nodePort mapped to 8086 in Port (32299 in the example above), then click "Save" (in my cluster the address is 192.168.223.206:32299). Enter show stats in the Query box to see basic information.
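The same data can also be queried without the admin UI, a minimal sketch against the influxdb HTTP API on the NodePort shown above:

# list the databases heapster writes into (typically k8s, plus the _internal database)
curl -G 'http://192.168.223.206:32299/query' --data-urlencode 'q=SHOW DATABASES'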

12. Install the EFK add-on

We collect logs on every node by running fluentd as a DaemonSet. Fluentd mounts the docker log directory /var/lib/docker/containers and /var/log into the Pod; the node's /var/log/pods directory then contains per-container directories so the output of different containers can be told apart, and each directory holds a log file that is a symlink to the container log under /var/lib/docker/containers.
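You can see this layout directly on a node, a minimal sketch (the actual file names will differ per cluster):

# the per-pod log files are symlinks into the docker container log directory
ls -l /var/log/containers/ | head
ls -l /var/log/pods/ | head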

This add-on requires the following images:

192.168.223.208:80/k8s/elasticsearch:v2.4.1
192.168.223.208:80/k8s/fluentd-elasticsearch:v1.22
192.168.223.208:80/k8s/kibana:v4.6.1

YAML configuration files required:

ls *.yaml
efk-rbac.yaml  es-controller.yaml  es-service.yaml  fluentd-es-ds.yaml  kibana-controller.yaml  kibana-service.yaml

Configure es

File: vi efk-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
  - kind: ServiceAccount
    name: efk
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

File: vi es-controller.yaml

apiVersion: v1 kind: ReplicationController metadata: name: elasticsearch-logging-v1 namespace: kube-system labels: k8s-app: elasticsearch-logging version: v1 kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: replicas: 2 selector: k8s-app: elasticsearch-logging version: v1 template: metadata: labels: k8s-app: elasticsearch-logging version: v1 kubernetes.io/cluster-service: "true" spec: serviceAccountName: efk containers: - image: 192.168.223.208:80/k8s/elasticsearch:v2.4.1 name: elasticsearch-logging resources: # need more cpu upon initialization, therefore burstable class limits: cpu: 1000m requests: cpu: 100m ports: - containerPort: 9200 name: db protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: es-persistent-storage mountPath: /data env: - name: "NAMESPACE" valueFrom: fieldRef: fieldPath: metadata.namespace volumes: - name: es-persistent-storage emptyDir: {} 

File: vi es-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging

Configure fluentd-es

File: vi fluentd-es-ds.yaml

apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: fluentd-es-v1.22 namespace: kube-system labels: k8s-app: fluentd-es kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile version: v1.22 spec: template: metadata: labels: k8s-app: fluentd-es kubernetes.io/cluster-service: "true" version: v1.22 # This annotation ensures that fluentd does not get evicted if the node # supports critical pod annotation based priority scheme. # Note that this does not guarantee admission on the nodes (#40573). annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: serviceAccountName: efk containers: - name: fluentd-es image: 192.168.223.208:80/k8s/fluentd-elasticsearch:v1.22 command: - '/bin/sh' - '-c' - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log' resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi volumeMounts: - name: varlog mountPath: /var/log - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true nodeSelector: beta.kubernetes.io/fluentd-ds-ready: "true" tolerations: - key : "node.alpha.kubernetes.io/ismaster" effect: "NoSchedule" terminationGracePeriodSeconds: 30 volumes: - name: varlog hostPath: path: /var/log - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers 

Configure kibana

File: vi kibana-controller.yaml

apiVersion: extensions/v1beta1 kind: Deployment metadata:  name: kibana-logging  namespace: kube-system  labels:  k8s-app: kibana-logging kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec:  replicas: 1  selector:  matchLabels:  k8s-app: kibana-logging  template:  metadata:  labels:  k8s-app: kibana-logging  spec:  serviceAccountName: efk  containers:  - name: kibana-logging  image: 192.168.223.208:80/k8s/kibana:v4.6.1  resources: # keep request = limit to keep this container in guaranteed class  limits:  cpu: 100m  requests:  cpu: 100m  env:  - name: "ELASTICSEARCH_URL"  value: "http://elasticsearch-logging:9200"  - name: "KIBANA_BASE_URL"  value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"  ports:  - containerPort: 5601  name: ui  protocol: TCP 

File: vi kibana-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging

Label the Nodes

The DaemonSet fluentd-es-v1.22 is defined with the nodeSelector beta.kubernetes.io/fluentd-ds-ready=true, so this label must be set on every Node that should run fluentd.

kubectl get nodes
NAME        STATUS    AGE       VERSION
192.168.223.206   Ready     1d        v1.6.0
192.168.223.207   Ready     1d        v1.6.0

kubectl label nodes 192.168.223.206 beta.kubernetes.io/fluentd-ds-ready=true
node "192.168.223.206" labeled
kubectl label nodes 192.168.223.207 beta.kubernetes.io/fluentd-ds-ready=true
node "192.168.223.207" labeled
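To confirm the label is in place before applying the DaemonSet, a minimal sketch:

# should list both nodes that will run the fluentd DaemonSet
kubectl get nodes -l beta.kubernetes.io/fluentd-ds-ready=true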

Apply the definition files

kubectl create -f .
serviceaccount "efk" created clusterrolebinding "efk" created replicationcontroller "elasticsearch-logging-v1" created service "elasticsearch-logging" created daemonset "fluentd-es-v1.22" created deployment "kibana-logging" created service "kibana-logging" created 

Check the results

kubectl get deployment -n kube-system|grep kibana
kibana-logging         1         1         1            1           2m

kubectl get pods -n kube-system|grep -E 'elasticsearch|fluentd|kibana'
elasticsearch-logging-v1-mlstp    1/1       Running   0          1m
elasticsearch-logging-v1-nfbbf    1/1       Running   0          1m
fluentd-es-v1.22-31sm0            1/1       Running   0          1m
fluentd-es-v1.22-bpgqs            1/1       Running   0          1m
fluentd-es-v1.22-qmn7h            1/1       Running   0          1m
kibana-logging-1432287342-0gdng   1/1       Running   0          1m

kubectl get service -n kube-system|grep -E 'elasticsearch|kibana'
elasticsearch-logging   10.254.77.62   <none>    9200/TCP   2m
kibana-logging          10.254.8.113   <none>    5601/TCP   2m

The kibana Pod takes a fairly long time (10-20 minutes) on first start to optimize and cache the status page; you can tail the Pod's log to watch the progress:

kubectl logs kibana-logging-1432287342-0gdng -n kube-system -f ELASTICSEARCH_URL=http://elasticsearch-logging:9200 server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging {"type":"log","@timestamp":"2017-04-12T13:08:06Z","tags":["info","optimize"],"pid":7,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"} {"type":"log","@timestamp":"2017-04-12T13:18:17Z","tags":["info","optimize"],"pid":7,"message":"Optimization of bundles for kibana and statusPage complete in 610.40 seconds"} {"type":"log","@timestamp":"2017-04-12T13:18:17Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:18Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["listening","info"],"pid":7,"message":"Server running at http://0.0.0.0:5601"} {"type":"log","@timestamp":"2017-04-12T13:18:24Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} {"type":"log","@timestamp":"2017-04-12T13:18:29Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"} 

Access kibana

  1. Access through kube-apiserver: get the kibana-logging service URL
kubectl cluster-info
Kubernetes master is running at https://192.168.223.200:6443
Elasticsearch is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

Open in a browser: https://192.168.223.200:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana or, over the insecure connection: http://192.168.223.200:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana

  2. Access through kubectl proxy: create a proxy
kubectl proxy --address='192.168.223.204' --port=8086 --accept-hosts='^*$'
Starting to serve on 192.168.223.204:8086

Open in a browser: http://192.168.223.204:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging

On the Settings -> Indices page create an index (roughly equivalent to a database in mysql): tick "Index contains time-based events", use the default logstash-* pattern, and click Create. Once the index is created, the logs aggregated in ElasticSearch can be browsed under Discover.

Possible problem: if the Create button is greyed out and the Time-field name dropdown has no options, note that fluentd reads the logs under /var/log/containers/, which are symlinks to /var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log. Check your docker configuration: --log-driver must be set to json-file; the default may be journald. See the check below.
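A minimal sketch for checking and, if necessary, switching the docker logging driver on a node. Writing /etc/docker/daemon.json is one common way to set it; if that file already exists, merge the key instead of overwriting it:

# show the current logging driver; it must be json-file for fluentd to pick up the logs
docker info --format '{{.LoggingDriver}}'

# one way to set it, then restart docker (existing containers must be recreated to pick up the new driver)
cat > /etc/docker/daemon.json << EOF
{
  "log-driver": "json-file"
}
EOF
systemctl restart docker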
