1. Create the Namespace
Create a new YAML file named monitor-namespace.yaml with the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
Run the following command to create the monitoring namespace:
kubectl create -f monitor-namespace.yaml
2. Create the ClusterRole
You need to grant read access to the cluster so that Prometheus can fetch the cluster's scrape targets through the Kubernetes API from the namespace created above.
Create a new YAML file named cluster-role.yaml with the following content:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
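An editorial note, not part of the original walkthrough: the `rbac.authorization.k8s.io/v1beta1` API was removed in Kubernetes 1.22. On newer clusters the same manifest should work with only the apiVersion line changed:

```yaml
# On Kubernetes 1.22+ use the GA RBAC API group; the rest of the
# ClusterRole/ClusterRoleBinding spec above stays the same.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
```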
Run the following command to create it:
kubectl create -f cluster-role.yaml
3. Create the ConfigMap
We need a ConfigMap to hold the configuration used by the Prometheus container created later; it contains the rules for dynamically discovering pods and running services in the Kubernetes cluster.
Create a new YAML file named config-map.yaml with the following content:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
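The pod and service-endpoint jobs in the config rewrite `__address__` with the regex `([^:]+)(?::\d+)?;(\d+)`: Prometheus joins the source labels with `;`, then substitutes `$1:$2`, so the port from the `prometheus.io/port` annotation replaces whatever port the discovered address had. A rough sketch of that substitution in Python (the sample addresses below are made up for illustration):

```python
import re

# Prometheus joins source_labels with ';' before matching, so the input
# looks like "<__address__>;<prometheus.io/port annotation value>".
pattern = re.compile(r'([^:]+)(?::\d+)?;(\d+)')

def relabel_address(address: str, annotation_port: str) -> str:
    joined = f"{address};{annotation_port}"
    # Prometheus replacement "$1:$2" corresponds to r"\1:\2" in Python.
    return pattern.sub(r'\1:\2', joined)

# Address without a port: the annotated port is appended.
print(relabel_address("10.244.1.12", "8080"))       # 10.244.1.12:8080
# Address that already has a port: it is swapped for the annotated one.
print(relabel_address("10.244.1.12:9100", "8080"))  # 10.244.1.12:8080
```

This is why annotating a pod with `prometheus.io/scrape: "true"` and `prometheus.io/port` is enough for it to be scraped on the right port.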
Run the following command to create it:
kubectl create -f config-map.yaml -n monitoring
4. Create the Prometheus Deployment
Create a new YAML file named prometheus-deployment.yaml with the following content:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:v2.3.2
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus/"
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config-volume
          mountPath: /etc/prometheus/
        - name: prometheus-storage-volume
          mountPath: /prometheus/
      volumes:
      - name: prometheus-config-volume
        configMap:
          defaultMode: 420
          name: prometheus-server-conf
      - name: prometheus-storage-volume
        emptyDir: {}
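Another editorial note: `extensions/v1beta1` Deployments were removed in Kubernetes 1.16. On current clusters the same Deployment can be written against `apps/v1`, which additionally requires an explicit selector; a sketch showing only the changed fields:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server   # apps/v1 requires an explicit selector
  template:
    metadata:
      labels:
        app: prometheus-server
    # ...the container and volume spec above is unchanged
```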
Deploy it with the following command:
kubectl create -f prometheus-deployment.yaml --namespace=monitoring
After the deployment completes, you can see the following in the Kubernetes Dashboard:
There are two ways to access Prometheus:
1. Port proxying through kubectl
2. Exposing the Prometheus Pod through a Service (recommended)
First, create a new YAML file named prometheus-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30909
Run the following command to create the Service:
kubectl create -f prometheus-service.yaml --namespace=monitoring
Check the Service status with the following command; you can see that the exposed port is 30909:
kubectl get svc -n monitoring
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
prometheus-service   NodePort   10.101.186.82   <none>        9090:30909/TCP   100m
Now open http://<VM IP>:30909 in a browser and you will see the Prometheus UI. Click Status -> Targets, and you can see that all the endpoints in the Kubernetes cluster have been discovered automatically and connected to Prometheus:
We can also view metrics such as memory usage through the graph UI:
OK, at this point the Prometheus deployment is complete, but the raw statistics are not very intuitive, so we will use Grafana to build a friendlier monitoring view.
Create the following YAML files. First, grafana-dashboard-provider.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-provider
  namespace: monitoring
data:
  default-dashboard.yaml: |
    - name: 'default'
      org_id: 1
      folder: ''
      type: file
      options:
        folder: /var/lib/grafana/dashboards
grafana.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:5.0.0
        name: grafana
        ports:
        - containerPort: 3000
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var
        - name: grafana-dashboard-provider
          mountPath: /etc/grafana/provisioning/dashboards
      volumes:
      - name: grafana-dashboard-provider
        configMap:
          name: grafana-dashboard-provider
      - name: grafana-persistent-storage
        emptyDir: {}
grafana-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30300
Run the following commands to create them:
kubectl apply -f grafana-dashboard-provider.yaml
kubectl apply -f grafana.yaml
kubectl apply -f grafana-service.yaml
After the deployment finishes, you can see the resources in the Kubernetes Dashboard:
Using port 30300 exposed by the Service, open http://<VM IP>:30300 in a browser and you will see the following login page:
Log in with the default username and password (admin/admin).
Next, configure the data source:
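If you prefer not to click through the UI, Grafana 5.0+ can also provision the data source from a file mounted under /etc/grafana/provisioning/datasources. A minimal sketch; the URL assumes the prometheus-service created above in the monitoring namespace:

```yaml
# Hypothetical grafana-datasource.yaml provisioning file (Grafana 5.0+).
apiVersion: 1
datasources:
- name: prometheus
  type: prometheus
  access: proxy
  # In-cluster DNS name of the Service created in the step above.
  url: http://prometheus-service.monitoring.svc:9090
  isDefault: true
```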
Then import dashboards:
Upload the JSON file and click Import:
Then you can see the monitoring data for the Kubernetes cluster:
There is also a resource-usage statistics dashboard:
kubernetes-resources-usage-dashboard.json
OK, that completes the Prometheus monitoring setup.