There are several options for monitoring containers. On a single Docker host you can use `docker stats` or the cAdvisor web UI. For a container orchestrator like Kubernetes, however, single-host Docker monitoring is no longer sufficient, and the Kubernetes ecosystem has produced a number of monitoring solutions of its own, such as the commonly used dashboard, the cAdvisor+Heapster+InfluxDB+Grafana stack, and the Prometheus+Grafana stack. This article focuses on the cAdvisor+Heapster approach.
# docker stats
Google's cAdvisor is another well-known open-source container monitoring tool. cAdvisor runs as a container process that starts automatically once Docker containers are created; through its web interface, users can view very detailed performance data for the current node and its containers (CPU, memory, network, disk, file system, and so on). cAdvisor can be reached directly on port 4194 of the Docker host.
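As a quick sketch of what that access looks like (the node IP below is a placeholder; substitute one of your own nodes), cAdvisor serves both a web UI and a REST API on port 4194:

```shell
# Hypothetical node address, used for illustration only.
NODE_IP=192.168.1.10

# cAdvisor web UI, served on port 4194 of each node:
CADVISOR_UI="http://${NODE_IP}:4194/"
# Raw node stats in JSON via cAdvisor's REST API:
CADVISOR_API="http://${NODE_IP}:4194/api/v1.3/machine"

echo "$CADVISOR_UI"
echo "$CADVISOR_API"
# curl -s "$CADVISOR_API"   # run against a live node to fetch the JSON
```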
1. How cluster monitoring works
cAdvisor: collects container metrics on each node.
Heapster: cluster-level collector that aggregates the monitoring data from all nodes.
InfluxDB: time-series database that persists the monitoring data.
Grafana: visualization front end.
As the architecture shows, cAdvisor gathers the container data on each Kubernetes node (memory, CPU, disk usage, network traffic, and so on). cAdvisor only keeps data in memory in real time and does not support persistent storage; Heapster aggregates the data from all nodes and hands it to InfluxDB for persistence, and Grafana finally serves as the web front end for display.
2. Deploying cAdvisor+Heapster+InfluxDB+Grafana
① Download the release package from the official repository
Get the v1.5.2 heapster+influxdb+grafana installation yaml files; download the latest version from the heapster release page:
# wget https://github.com/kubernetes/heapster/archive/v1.5.2.zip
# unzip v1.5.2.zip
② Modify the yaml files, e.g. the image paths (the default images are hosted abroad and cannot be pulled without a proxy)
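For example, if the upstream manifests reference images on k8s.gcr.io (the exact names vary by release), a sed one-liner can rewrite them to the Aliyun mirror. This is only a sketch against a sample line, not the literal contents of the v1.5.2 files:

```shell
# Rewrite a gcr.io image reference to the Aliyun mirror (sample input line).
ORIG='image: k8s.gcr.io/heapster-influxdb-amd64:v1.1.1'
NEW=$(echo "$ORIG" | sed 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google-containers#')
echo "$NEW"

# Applied in place against the real files it would look like:
# sed -i 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google-containers#g' *.yaml
```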
Modify influxdb.yaml and deploy it
Note: the database must be started first, so that the data collected afterwards can be stored.
[root@node-1 monitor]# cat influxdb.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - nodePort: 31001
    port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
# kubectl create -f influxdb.yaml
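Once the pod is running, the NodePort service above (31001 forwarded to 8086) lets you query InfluxDB over HTTP from outside the cluster. Heapster writes into a database named k8s by default; the node IP below is a placeholder for one of your own nodes:

```shell
NODE_IP=192.168.1.10   # placeholder: any node's address
# List the measurements Heapster stores in the default "k8s" database:
INFLUX_QUERY="http://${NODE_IP}:31001/query?db=k8s&q=SHOW+MEASUREMENTS"
echo "$INFLUX_QUERY"
# curl -s "$INFLUX_QUERY"   # run against a live cluster to see the data
```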
Modify heapster.yaml
[root@node-1 monitor]# cat heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
# kubectl create -f heapster.yaml
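Note how the --sink flag refers to InfluxDB by its Service name: because Heapster runs in the same namespace, the short name monitoring-influxdb resolves through cluster DNS. Assuming the default cluster.local cluster domain (yours may differ), the fully qualified name works out to:

```shell
SVC=monitoring-influxdb
NS=kube-system
PORT=8086
# Fully qualified in-cluster DNS name of the InfluxDB service
# (default cluster domain assumed):
SINK_HOST="${SVC}.${NS}.svc.cluster.local:${PORT}"
echo "$SINK_HOST"
```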
Modify grafana.yaml
[root@node-1 monitor]# cat grafana.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/heapster-grafana-amd64:v4.4.1
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  type: NodePort
  ports:
  - nodePort: 30108
    port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
# kubectl create -f grafana.yaml
Access from a browser:
Open the NodePort assigned to the grafana service, and check which node the pod is running on.
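Putting the pieces together (the node IP is again a placeholder), the Grafana UI is reachable on NodePort 30108; kubectl get pods -o wide shows which node is hosting the pod:

```shell
# kubectl get pods -n kube-system -o wide | grep grafana   # find the node
NODE_IP=192.168.1.10   # placeholder: the node hosting the grafana pod
GRAFANA_URL="http://${NODE_IP}:30108"
echo "$GRAFANA_URL"
```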
Viewing the page:
Checking the monitoring content:
Under the Home menu, the Cluster option shows the monitoring status of the nodes.
Under the Home menu, the Pods option shows the metrics collected from the nodes for the pods in each namespace.
As shown in the figure below, select a node to monitor its CPU, memory, and other status metrics; a specific time range can be chosen for viewing.
As shown in the figure below, select a pod under a given namespace to monitor that pod's status metrics.