Kubernetes (19): Resource Metrics and Cluster Monitoring

Resource Metrics and Resource Monitoring

No cluster can be managed without monitoring, and Kubernetes is no different: it needs to collect data against metric definitions in order to keep watch over the state of the cluster. These metrics fall broadly into two groups: monitoring the cluster itself and monitoring Pod objects. The health of a cluster is usually measured along the following dimensions:

  • Node resource status: mainly network bandwidth, disk space, CPU, and memory utilization.
  • Node count: knowing the number of available nodes in real time gives users a basis for estimating server costs.
  • Running Pod objects: the number of running Pods indicates whether the available nodes are sufficient, and whether they can absorb the load when a node fails.

From another angle, the monitoring requirements for Pod objects fall roughly into three categories:

  • Kubernetes metrics: the deployment process, replica counts, status, health, networking, and so on of the Pods behind a specific application.
  • Container metrics: a container's resource requests and limits, and its actual usage of CPU, memory, disk space, and network bandwidth.
  • Application metrics: metrics built into the application itself, tied to its business logic.

metrics-server

The new-generation Kubernetes metrics and monitoring architecture consists of a core metrics pipeline and a monitoring metrics pipeline:

  • Core metrics pipeline: composed of the kubelet, metrics-server, and the API they expose through the API server. Together they supply the core metrics of the Kubernetes system, making it possible to understand and operate the cluster's internal components and workloads. The metrics involved include cumulative CPU usage, real-time memory usage, Pod resource usage, and container disk usage. Core metrics were originally collected by Heapster, which was deprecated after version 1.11; the new-generation metrics-server now aggregates the core metrics in its place. Collecting core metrics is mandatory. See the figure below:

  • Monitoring metrics pipeline: collects all kinds of metric data from the system and serves it to end users, storage systems, and the HPA. It covers the core metrics plus many non-core metrics; since non-core metrics cannot be interpreted by Kubernetes itself, a third-party solution of the user's choosing is required. See the figure below:

HPA v2 is a component that can consume both the resource metrics API and the custom metrics API, implementing automatic scale-out and scale-in based on the metrics it observes. The mainstream implementation of the resource metrics API today is metrics-server.
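As a concrete illustration, below is a minimal sketch of an HPA v2 manifest that scales on CPU through the resource metrics API (autoscaling/v2beta1 matches this Kubernetes generation; the Deployment name myapp and the thresholds are hypothetical):

cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                        # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 60     # add replicas when average CPU exceeds 60%
EOF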

Since version 1.8, container CPU and memory utilization can be fetched by clients directly through the metrics API. Note that the API itself stores no metric data; it only provides real-time readings of resource usage.

Resource metrics are no different from other API metrics: they are accessed through the API server under the URL path /apis/metrics.k8s.io/, and the API is only usable once metrics-server has been deployed in the cluster. A simple structural diagram follows:

metrics-server stores its data in memory, so everything is lost on restart, and it only retains the most recently collected metrics. Users who want access to historical data therefore have to rely on a third-party monitoring system (such as Prometheus).

As a rule, only one metrics-server instance runs per cluster. At startup it automatically initializes connections to each node, so for security reasons it should run on a regular node rather than on a Master host. It can be deployed easily using the resource manifests shipped with the project.

Deploy metrics-server

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
Download the YAML files

[root@master metrics-server]# for n in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/metrics-server/$n;done

[root@master metrics-server]# ll
total 24
-rw-r--r-- 1 root root  398 Apr 10 10:31 auth-delegator.yaml
-rw-r--r-- 1 root root  419 Apr 10 10:31 auth-reader.yaml
-rw-r--r-- 1 root root  393 Apr 10 10:32 metrics-apiservice.yaml
-rw-r--r-- 1 root root 3156 Apr 10 10:32 metrics-server-deployment.yaml
-rw-r--r-- 1 root root  336 Apr 10 10:32 metrics-server-service.yaml
-rw-r--r-- 1 root root  801 Apr 10 10:32 resource-reader.yaml

Deploy

#Because of image pull problems and a few settings, parts of the following file must be modified
#In the metrics-server container change the image address and the command field; in the metrics-server-nanny container change the cpu and memory values
[root@master metrics-server]# vim metrics-server-deployment.yaml 
......
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        #image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        image: xiaobai20201/metrics-server:v0.3.1
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        #image: k8s.gcr.io/addon-resizer:1.8.4
        image: xiaobai20201/addon-resizer:1.8.4
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --cpu=100m
          - --extra-cpu=0.5m
          - --memory=100Mi
          - --extra-memory=50Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.1
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          # Specifies the smallest cluster (defined in number of nodes)
          # resources will be scaled to.
          - --minClusterSize=10
          
[root@master metrics-server]# vim resource-reader.yaml 
#The running container also needs permission to fetch node data, so add nodes/stats in resource-reader.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch

Deploy

[root@master metrics-server]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.apps/metrics-server-v0.3.1 created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

[root@master metrics-server]# kubectl api-versions |grep metrics
metrics.k8s.io/v1beta1

#Check that the resource metrics API is available
[root@master metrics-server]# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[]}
#An empty items list is normal right after deployment; it fills in once metrics-server completes its first scrape cycle
#After a successful deployment, kubectl proxy --port=8080 can be used to proxy out a local port
[root@master metrics-server]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
#With curl you can then read node and other status through the API
[root@master mainfest]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes",
      "singularName": "",
      "namespaced": false,
      "kind": "NodeMetrics",
      "verbs": [
        "get",
        "list"
      ]
    },
    {
      "name": "pods",
      "singularName": "",
      "namespaced": true,
      "kind": "PodMetrics",
      "verbs": [
        "get",
        "list"
      ]
    }
  ]
}

#This group mainly serves data about nodes and pods

[root@master mainfest]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "node02",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node02",
        "creationTimestamp": "2019-04-10T02:57:21Z"
      },
      "timestamp": "2019-04-10T02:57:14Z",
      "window": "30s",
      "usage": {
        "cpu": "41332743n",
        "memory": "702124Ki"
      }
    },
    {
      "metadata": {
        "name": "master",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master",
        "creationTimestamp": "2019-04-10T02:57:21Z"
      },
      "timestamp": "2019-04-10T02:57:15Z",
      "window": "30s",
      "usage": {
        "cpu": "156316878n",
        "memory": "1209616Ki"
      }
    },
    {
      "metadata": {
        "name": "node01",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node01",
        "creationTimestamp": "2019-04-10T02:57:21Z"
      },
      "timestamp": "2019-04-10T02:57:09Z",
      "window": "30s",
      "usage": {
        "cpu": "47843790n",
        "memory": "800144Ki"
      }
    }
  ]
}
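Individual objects can be fetched the same way; both endpoints follow the get/list verbs shown in the APIResourceList above, for example:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/node01"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"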

Now use the kubectl top command to view resource information:

[root@master metrics-server]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   146m         7%     1187Mi          68%       
node01   45m          4%     782Mi           45%       
node02   36m          3%     683Mi           39%   



[root@master mainfest]# kubectl top pods -n kube-system
NAME                                     CPU(cores)   MEMORY(bytes)   
canal-nbspn                              21m          52Mi            
canal-pj6rx                              13m          43Mi            
canal-rgsnp                              12m          43Mi            
coredns-78d4cf999f-6cb69                 2m           10Mi            
coredns-78d4cf999f-tflpn                 2m           10Mi            
etcd-master                              16m          121Mi           
kube-apiserver-master                    31m          517Mi           
kube-controller-manager-master           39m          82Mi            
kube-flannel-ds-amd64-5zrk7              2m           14Mi            
kube-flannel-ds-amd64-pql5n              2m           12Mi            
kube-flannel-ds-amd64-ssd29              2m           14Mi            
kube-proxy-ch4vp                         2m           15Mi            
kube-proxy-cz2rf                         2m           23Mi            
kube-proxy-kdp7d                         4m           21Mi            
kube-scheduler-master                    10m          21Mi            
kubernetes-dashboard-6f9998798-klf4t     1m           15Mi            
metrics-server-v0.3.1-65bd5d59b9-xvmns   1m           20Mi 



[root@master metrics-server]# kubectl top pod -l k8s-app=kube-dns --containers=true -n kube-system
POD                        NAME      CPU(cores)   MEMORY(bytes)   
coredns-78d4cf999f-6cb69   coredns   2m           10Mi            
coredns-78d4cf999f-tflpn   coredns   2m           10Mi

Prometheus

Overview

Beyond the resource metrics above (CPU, memory), users and administrators need much more metric data: Kubernetes metrics, container metrics, node resource metrics, application metrics, and so on. The custom metrics API allows arbitrary metrics to be requested, but an implementation of that API must be backed by a specific monitoring system. Prometheus was the first monitoring system for which such an adapter was developed; this Kubernetes Custom Metrics Adapter for Prometheus is provided by the k8s-prometheus-adapter project on GitHub. Its schematic is as follows:

Prometheus is itself a monitoring system, split into a server side and an agent side: the server pulls data from the monitored hosts, while the agent side requires a node_exporter to be deployed, which collects and exposes node data. Likewise, gathering Pod-level data or data from applications such as MySQL requires deploying the corresponding exporters. Data can be queried with PromQL, but because Prometheus is a third-party solution, native Kubernetes cannot interpret Prometheus's custom metrics; k8s-prometheus-adapter is needed to convert these metric query interfaces into standard Kubernetes custom metrics.
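As an illustration of PromQL, once the Prometheus server deployed below is reachable (it is exposed on NodePort 30090 in this walkthrough), per-Pod CPU usage can be queried over the HTTP API; note that the metric and label names here (container_cpu_usage_seconds_total, pod_name) follow the cAdvisor conventions of this Kubernetes generation and may differ in newer versions:

#sum of each Pod's CPU usage over the last 5 minutes, grouped by Pod
curl -sG 'http://10.0.0.10:30090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod_name)'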

Prometheus is an open-source service monitoring system and time-series database that offers a generic data model with fast interfaces for data collection, storage, and querying. Its core component, the Prometheus server, periodically pulls data from statically configured monitoring targets or from targets discovered automatically via service discovery; when newly pulled data exceeds the configured in-memory buffer, it is persisted to storage. The Prometheus component architecture is shown below:

Every monitored host can expose a metrics endpoint through a dedicated exporter program and wait for the Prometheus server to scrape it periodically. If alerting rules exist, the scraped data is evaluated against them; when an alert condition is met, an alert is generated and sent to Alertmanager for aggregation and routing. When a monitored target needs to push data proactively, the Pushgateway component can receive and temporarily store the data until the Prometheus server collects it.

Any monitored target must first be registered with the monitoring system before time-series collection, storage, alerting, and display can take place. Targets can be specified statically in configuration, or Prometheus can manage them dynamically through its service-discovery mechanism. The components break down as follows:

  • Monitoring agents, e.g. node_exporter: collect host metrics such as load average, CPU, memory, disk, and network across many dimensions.
  • kubelet (cAdvisor): collects container metrics, which are also the core Kubernetes metrics; per-container data includes CPU usage and limits, filesystem read/write limits, memory usage and limits, and network packet send, receive, and drop rates.
  • API Server: collects API server performance metrics, including control-queue performance, request rates, and latencies.
  • etcd: collects metrics about the etcd storage cluster.
  • kube-state-metrics: derives a number of Kubernetes metrics, mainly counters and metadata about resource types, including totals of objects of a given type, resource quotas, container status, and Pod resource labels.

Prometheus can use the Kubernetes API server directly as a service-discovery system, dynamically discovering and monitoring every monitorable object in the cluster. Note in particular that a Pod must carry the following annotations for Prometheus to discover it automatically and scrape its built-in metrics (a minimal sketch follows the list).

  • prometheus.io/scrape: whether the target should be scraped; boolean, true or false.
  • prometheus.io/path: the URL path to use when scraping metrics, usually /metrics.
  • prometheus.io/port: the port to use when scraping metrics, e.g. 8080.
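Here is the minimal sketch referenced above: a Pod that opts in to scraping via these annotations (the name, image, and port are hypothetical; the image must actually serve /metrics on port 8080):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: myapp                          # hypothetical Pod
  annotations:
    prometheus.io/scrape: "true"       # opt in to scraping
    prometheus.io/path: "/metrics"     # scrape path, usually the default
    prometheus.io/port: "8080"         # port the app serves metrics on
spec:
  containers:
  - name: myapp
    image: myrepo/myapp:latest         # hypothetical image exposing /metrics
    ports:
    - containerPort: 8080
EOF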

Also, if Prometheus is only expected to serve as a backend for generating custom metrics, deploying just the Prometheus server is enough; it does not even need data persistence. For a fully featured monitoring system, however, the administrator must also deploy node_exporter on every host, other specialized exporters as needed, and Alertmanager.

Deploy Prometheus

Official address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus
Because the official YAML deployment requires PVCs, this walkthrough uses the learning-oriented deployment provided by 馬哥 (MageEdu); for production, follow the official recommendations.

[root@master metrics]# git clone https://github.com/iKubernetes/k8s-prom.git 
Cloning into 'k8s-prom'...
remote: Enumerating objects: 49, done.
remote: Total 49 (delta 0), reused 0 (delta 0), pack-reused 49
Unpacking objects: 100% (49/49), done.

Create the prom namespace

[root@master metrics]# cd k8s-prom/
[root@master k8s-prom]# ls
k8s-prometheus-adapter  kube-state-metrics  namespace.yaml  node_exporter  podinfo  prometheus  README.md
[root@master k8s-prom]# kubectl apply -f namespace.yaml
namespace/prom created

Deploy node_exporter

[root@master k8s-prom]# cd node_exporter/
[root@master node_exporter]# kubectl apply -f .
daemonset.apps/prometheus-node-exporter created
service/prometheus-node-exporter created

[root@master node_exporter]# kubectl get ds -n prom
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
prometheus-node-exporter   3         3         3       3            3           <none>          100s
[root@master node_exporter]# kubectl get pods -n prom  
NAME                             READY   STATUS    RESTARTS   AGE
prometheus-node-exporter-b2lk5   1/1     Running   0          104s
prometheus-node-exporter-d4l6v   1/1     Running   0          104s
prometheus-node-exporter-swngp   1/1     Running   0          104s
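As a quick sanity check, every node now exposes plain-text metrics on node_exporter's default port 9100 (assuming that port is reachable from where you run this):

curl -s http://localhost:9100/metrics | head -n 5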

Deploy prometheus-server

[root@master node_exporter]# cd ../prometheus/
[root@master prometheus]# ll
total 24
-rw-r--r-- 1 root root 10132 Apr 10 11:20 prometheus-cfg.yaml
-rw-r--r-- 1 root root  1481 Apr 10 11:20 prometheus-deploy.yaml
-rw-r--r-- 1 root root   716 Apr 10 11:20 prometheus-rbac.yaml
-rw-r--r-- 1 root root   278 Apr 10 11:20 prometheus-svc.yaml
[root@master prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus-server created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
#The Prometheus YAML sets a 2Gi memory limit; none of the node VMs here can satisfy it, so the Pod would stay Pending forever. Comment the limit out:
[root@master prometheus]# vim prometheus-deploy.yaml
        #resources:
         # limits:
          #  memory: 2Gi
[root@master prometheus]# kubectl apply -f prometheus-deploy.yaml
deployment.apps/prometheus-server configured

[root@master prometheus]# kubectl get pods -n prom -w
NAME                                 READY   STATUS    RESTARTS   AGE
prometheus-node-exporter-b2lk5       1/1     Running   0          9m30s
prometheus-node-exporter-d4l6v       1/1     Running   0          9m30s
prometheus-node-exporter-swngp       1/1     Running   0          9m30s
prometheus-server-556b8896d6-ld7xj   1/1     Running   0          35s

Check the logs after deployment

[root@master prometheus]# kubectl logs prometheus-server-556b8896d6-ld7xj -n prom
level=info ts=2019-04-10T03:33:57.752158604Z caller=main.go:220 msg="Starting Prometheus" version="(version=2.2.1, branch=HEAD, revision=bc6058c81272a8d938c05e75607371284236aadc)"
level=info ts=2019-04-10T03:33:57.752221598Z caller=main.go:221 build_context="(go=go1.10, user=root@149e5b3f0829, date=20180314-14:15:45)"
level=info ts=2019-04-10T03:33:57.752240032Z caller=main.go:222 host_details="(Linux 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 prometheus-server-556b8896d6-ld7xj (none))"
level=info ts=2019-04-10T03:33:57.752255713Z caller=main.go:223 fd_limits="(soft=65536, hard=65536)"
level=info ts=2019-04-10T03:33:57.755420653Z caller=main.go:504 msg="Starting TSDB ..."
level=info ts=2019-04-10T03:33:57.7620657Z caller=web.go:382 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-04-10T03:33:57.7632425Z caller=main.go:514 msg="TSDB started"
level=info ts=2019-04-10T03:33:57.764611774Z caller=main.go:588 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2019-04-10T03:33:57.765669001Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.76626263Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.76668914Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.767331363Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.768433541Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.768948262Z caller=main.go:491 msg="Server is ready to receive web requests."

You can now browse to NodeIP:30090 to reach the web UI and view the monitoring data; a number of metrics are already built in.
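The HTTP API is served on the same port, so for example the scrape targets that service discovery has registered can be listed from outside the cluster (10.0.0.10 is the node IP used later in this walkthrough):

curl -s http://10.0.0.10:30090/api/v1/targets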


Deploy kube-state-metrics

[root@master prometheus]# cd ../kube-state-metrics/
#Change the image address in kube-state-metrics-deploy.yaml
        image: xiaobai20201/kube-state-metrics-amd64:v1.3.1

[root@master kube-state-metrics]# kubectl apply -f .
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created

[root@master kube-state-metrics]# kubectl get pods -n prom          
NAME                                 READY   STATUS    RESTARTS   AGE
kube-state-metrics-84c69bb8-87l7n    1/1     Running   0          19s
prometheus-node-exporter-b2lk5       1/1     Running   0          21m
prometheus-node-exporter-d4l6v       1/1     Running   0          21m
prometheus-node-exporter-swngp       1/1     Running   0          21m
prometheus-server-556b8896d6-ld7xj   1/1     Running   0          12m

製做證書
因爲默認狀況下K8S集羣都是基於https提供服務,而默認狀況k8s-prometheus-adapter是基於http服務,須要提供該K8S服務器CA簽署承認的證書,因此須要自制證書

[root@master kube-state-metrics]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out serving.key)
Generating RSA private key, 2048 bit long modulus
.........+++
..+++
e is 65537 (0x10001)

[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"    #create the CSR (CN=serving); required before signing
[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key


[root@master pki]#  kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key -n prom
secret/cm-adapter-serving-certs created
[root@master pki]# kubectl get secret -n prom
NAME                             TYPE                                  DATA   AGE
cm-adapter-serving-certs         Opaque                                2      13s
default-token-r88nt              kubernetes.io/service-account-token   3      37m
kube-state-metrics-token-4rrqw   kubernetes.io/service-account-token   3      14m
prometheus-token-jdm5f           kubernetes.io/service-account-token   3      31m

Deploy k8s-prometheus-adapter
The bundled custom-metrics-apiserver-deployment.yaml and custom-metrics-config-map.yaml are somewhat broken; download these two files from the k8s-prometheus-adapter project instead

[root@master k8s-prometheus-adapter]#  wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml
[root@master k8s-prometheus-adapter]#  wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml 
#Change the namespace in the downloaded files to prom
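One way to make that change, assuming the upstream manifests place everything in the custom-metrics namespace (verify in the files before running):

sed -i 's/namespace: custom-metrics/namespace: prom/g' custom-metrics-apiserver-deployment.yaml custom-metrics-config-map.yaml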

Apply

[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
configmap/adapter-config created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created

[root@master k8s-prometheus-adapter]# kubectl get pods -n prom
NAME                                      READY   STATUS    RESTARTS   AGE
custom-metrics-apiserver-c86bfc77-dtkcn   1/1     Running   0          58s
kube-state-metrics-84c69bb8-87l7n         1/1     Running   0          140m
prometheus-node-exporter-b2lk5            1/1     Running   0          161m
prometheus-node-exporter-d4l6v            1/1     Running   0          161m
prometheus-node-exporter-swngp            1/1     Running   0          161m
prometheus-server-556b8896d6-ld7xj        1/1     Running   0          152m

[root@master k8s-prometheus-adapter]# kubectl api-versions |grep custom
custom.metrics.k8s.io/v1beta1
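The new group can be browsed exactly like the resource metrics API; which metric names it serves depends on the rules in the adapter's config map:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"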

Displaying the Data with Grafana

[root@master metrics]# vim grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: prom    # change the namespace to prom
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:    # this reuses the old Heapster Grafana manifest; comment out the InfluxDB variable
        #- name: INFLUXDB_HOST
        #  value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: prom
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
    
[root@master metrics]# kubectl apply -f grafana.yaml
deployment.apps/monitoring-grafana created
service/monitoring-grafana created

[root@master metrics]# kubectl get pods -n prom 
NAME                                      READY   STATUS    RESTARTS   AGE
custom-metrics-apiserver-c86bfc77-dtkcn   1/1     Running   0          8m56s
kube-state-metrics-84c69bb8-87l7n         1/1     Running   0          148m
monitoring-grafana-dcf785fd8-f7q4g        1/1     Running   0          2m4s
prometheus-node-exporter-b2lk5            1/1     Running   0          169m
prometheus-node-exporter-d4l6v            1/1     Running   0          169m
prometheus-node-exporter-swngp            1/1     Running   0          169m
prometheus-server-556b8896d6-ld7xj        1/1     Running   0          160m

[root@master metrics]# kubectl get svc -n prom
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
custom-metrics-apiserver   ClusterIP   10.107.119.218   <none>        443/TCP          9m16s
kube-state-metrics         ClusterIP   10.103.206.116   <none>        8080/TCP         149m
monitoring-grafana         NodePort    10.109.0.252     <none>        80:30215/TCP     2m23s
prometheus                 NodePort    10.101.97.208    <none>        9090:30090/TCP   166m
prometheus-node-exporter   ClusterIP   None             <none>        9100/TCP         169m

monitoring-grafana is exposed on NodePort 30215
Browse to http://10.0.0.10:30215

No Kubernetes dashboards are included by default; suitable Kubernetes templates can be downloaded from grafana.com.
https://grafana.com/dashboards

References

  • https://www.cnblogs.com/linuxk
  • 馬永亮. Kubernetes進階實戰 (雲計算與虛擬化技術叢書)
  • Kubernetes-handbook, jimmysong, 2018-12-18
