Most of the posts and articles you find online use Prometheus to monitor the standard Kubernetes resources, such as the API server, namespaces, pods, and nodes.
But what about custom metrics exposed by your own business pods?
Say a business pod exposes /xxx/metrics; how do you get Prometheus to scrape it?
This is where the kubernetes-pods job comes in.
You then add annotations to the business Deployment that work together with the scrape configuration.
For example:
prometheus-configmap-pod.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: ns-monitor
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name
```
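The `__address__` relabel rule above is the one that swaps in the port from the pod annotation. A minimal Python sketch of what that regex and replacement do (the regex is copied from the config; the sample addresses are hypothetical):

```python
import re

# The regex from the relabel rule: host, optional discovered port, ';',
# then the port taken from the prometheus.io/port annotation.
addr_regex = re.compile(r"([^:]+)(?::\d+)?;(\d+)")

def relabel_address(address, annotation_port):
    # Prometheus joins multiple source_labels with ';' before matching.
    joined = f"{address};{annotation_port}"
    m = addr_regex.fullmatch(joined)
    if m:
        # replacement $1:$2 keeps the host and swaps in the annotated port
        return f"{m.group(1)}:{m.group(2)}"
    return address

# Whatever port (if any) was discovered, the annotated port wins.
print(relabel_address("10.244.1.7:8080", "32456"))  # 10.244.1.7:32456
print(relabel_address("10.244.1.7", "32456"))       # 10.244.1.7:32456
```

This is why the annotation port, not the container port discovered by `kubernetes_sd_configs`, ends up in the scrape target.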
In the YAML above, the relabel rule with source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] means:
if a business pod defines the annotation prometheus.io/path, Prometheus will scrape its custom metrics from that path instead of the default /metrics.
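The annotation name and the `__meta_*` label name look different because Prometheus sanitizes annotation keys when it turns them into labels. A rough sketch of that mapping (not the Prometheus source, just the rule: characters outside [a-zA-Z0-9_] become underscores, then the prefix is added):

```python
import re

def annotation_to_meta_label(name):
    # Replace characters that are invalid in label names ('.', '/', '-', ...)
    # with underscores, then prepend the pod-annotation meta prefix.
    sanitized = re.sub(r"[^a-zA-Z0-9_]", "_", name)
    return f"__meta_kubernetes_pod_annotation_{sanitized}"

print(annotation_to_meta_label("prometheus.io/path"))
# __meta_kubernetes_pod_annotation_prometheus_io_path
```

So the annotation prometheus.io/path in the Deployment is what the relabel rule sees as __meta_kubernetes_pod_annotation_prometheus_io_path.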
For example, a business Deployment defined as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gw
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      name: gw
  template:
    metadata:
      labels:
        name: gw
      annotations:
        prometheus.io/path: /xxx/metrics
        prometheus.io/port: "32456"
        prometheus.io/scrape: "true"
    spec:
      imagePullSecrets:
        - name: dockersecret
      containers:
        - name: gw
          ......
```
Once the Prometheus server loads this prometheus.yml,
it will scrape monitoring data from each business pod at <pod-ip>:32456/xxx/metrics.
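Putting the pieces together, here is a small sketch of the URL Prometheus ends up scraping for the example Deployment, given its annotations and a hypothetical pod IP from service discovery:

```python
# Annotations copied from the example Deployment above.
annotations = {
    "prometheus.io/scrape": "true",
    "prometheus.io/path": "/xxx/metrics",
    "prometheus.io/port": "32456",
}
pod_ip = "10.244.2.15"  # hypothetical pod IP discovered via role: pod

# The 'keep' rule drops pods without prometheus.io/scrape: "true";
# the remaining rules rewrite the path and port from the annotations.
if annotations.get("prometheus.io/scrape") == "true":
    url = (f"http://{pod_ip}:{annotations['prometheus.io/port']}"
           f"{annotations['prometheus.io/path']}")
    print(url)  # http://10.244.2.15:32456/xxx/metrics
```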