The redis Deployment here is a slave of a Redis master running outside Kubernetes. Prometheus monitors Redis through redis-exporter, and redis-exporter is deployed in the same pod as Redis. The Prometheus Operator CustomResources used in this article are Prometheus and PodMonitor, as shown below:
$ k get prometheus
NAME   AGE
k8s    3d5h

$ k get podmonitors.monitoring.coreos.com --all-namespaces
NAMESPACE   NAME          AGE
default     example-app   4h3m
How the monitoring works: the Prometheus CustomResource selects PodMonitors, and each PodMonitor in turn selects the target pods through a label selector in its spec; everything else is handled by the Prometheus Operator.
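In other words there are two levels of label selection. A minimal sketch, using the same labels that appear in the manifests later in this article:

# Prometheus CR (namespace monitoring): picks up PodMonitors via podMonitorSelector
spec:
  podMonitorSelector:
    matchLabels:
      team: frontend

# PodMonitor (namespace default): picks up pods via its own selector
spec:
  selector:
    matchLabels:
      app: redis2s
  podMetricsEndpoints:
  - targetPort: 9121    # the redis-exporter container port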
You need a Kubernetes cluster that is already up and running. I use zsh with the kubectl plugin and the kubens plugin installed; both can be found on GitHub.
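For reference, a quick sketch of how I use them, assuming the oh-my-zsh kubectl plugin and the kubens tool from github.com/ahmetb/kubectx:

# the oh-my-zsh kubectl plugin provides the short alias used throughout this article
alias k=kubectl

# kubens switches the default namespace of the current context
kubens default       # work in the default namespace
kubens monitoring    # switch to the monitoring namespace
kubens -             # switch back to the previous namespace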
~$ k version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-aliyun.1", GitCommit:"51888f5", GitTreeState:"", BuildDate:"2019-10-16T08:29:13Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
For installation details see the project's README on GitHub; I am using the latest version at the time of writing.
git clone https://github.com/coreos/kube-prometheus.git
kubectl apply -f manifests/setup
kubectl create -f manifests/
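After applying the manifests, you can check that the monitoring stack is up before continuing (a sketch; exact pod names vary between kube-prometheus versions):

kubectl -n monitoring get pods
kubectl -n monitoring get svc prometheus-k8s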
This time we deploy the redis slave into Kubernetes. Everything below is in the default namespace:
# redis master configuration:
$ ls
master_redis.yaml  redis2s.yaml  podmonitor.yaml  redis_config.yaml

$ cat master_redis.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: redis2
spec:
  ports:
  - port: 9911
---
apiVersion: v1
kind: Endpoints
metadata:
  name: redis2
subsets:
- addresses:
  - ip: 192.168.10.11
  ports:
  - port: 9911

$ cat redis2s.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis2s
spec:
  replicas: 2
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9121"
      labels:
        app: redis2s
        kind: redis
    spec:
      containers:
      - name: redis
        image: redis:2.6
        command: ["redis-server"]
        args: ["/etc/redis/redis2s.conf"]
        ports:
        - containerPort: 9911
        volumeMounts:
        - name: v3redis-config
          mountPath: /etc/redis/
      - name: redis-exporter
        image: oliver006/redis_exporter:latest
        args: ["--redis.addr", "redis://localhost:9911", "--redis.password", "redis2s"]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9121
      volumes:
      - name: v3redis-config
        configMap:
          name: v3redis-config
---
apiVersion: v1
kind: Service
metadata:
  name: redis2s
spec:
  ports:
  - port: 9911
  selector:
    app: redis2s
    kind: redis

$ cat redis_config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: v3redis-config
  namespace: default
data:
  redis1.conf: |
    daemonize no
    pidfile /usr/local/redis/redis1.pid
    timeout 0
    dir ./
  redis2s.conf: |
    daemonize no
    pidfile /usr/local/redis/redis2s.pid
    port 9911
    timeout 0
    tcp-keepalive 60
    loglevel notice
    logfile redis2s.log
    databases 16
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename redis2s.rdb
    dir ./
    slaveof redis2 9911
    slave-serve-stale-data yes
    slave-read-only yes

$ cat podmonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: redis2s
  podMetricsEndpoints:
  - targetPort: 9121
Then run the following commands:
k label namespaces default prometheus.monitor=true
k apply -f .
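Once the pods are running, you can sanity-check the slave and the exporter. A sketch; replace <redis2s-pod> with an actual pod name, and pass -a <password> to redis-cli only if requirepass is set in your redis config:

# both containers (redis and redis-exporter) should be ready
k get pods -l app=redis2s

# check that replication to the external master is established
k exec -it <redis2s-pod> -c redis -- redis-cli -p 9911 info replication

# confirm the exporter exposes metrics on port 9121
k port-forward <redis2s-pod> 9121:9121 &
curl -s localhost:9121/metrics | grep redis_up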
The following is done in the monitoring namespace. The PodMonitor (a prometheus-operator CustomResource) has been configured above; next, configure the prometheus-operator CR (CustomResource) Prometheus. Only the following part of its spec needs to be changed:
podMonitorNamespaceSelector:
  matchLabels:
    prometheus.monitor: true
podMonitorSelector:
  matchLabels:
    team: frontend
The complete configuration:
k get prometheus k8s -o yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"k8s"},"name":"k8s","namespace":"monitoring"},"spec":{"alerting":{"alertmanagers":[{"name":"alertmanager-main","namespace":"monitoring","port":"web"}]},"baseImage":"quay.io/prometheus/prometheus","nodeSelector":{"kubernetes.io/os":"linux"},"podMonitorNamespaceSelector":{},"podMonitorSelector":{},"replicas":2,"resources":{"requests":{"memory":"400Mi"}},"ruleSelector":{"matchLabels":{"prometheus":"k8s","role":"alert-rules"}},"securityContext":{"fsGroup":2000,"runAsNonRoot":true,"runAsUser":1000},"serviceAccountName":"prometheus-k8s","serviceMonitorNamespaceSelector":{},"serviceMonitorSelector":{},"version":"v2.11.0"}}
    project.cattle.io/namespaces: '["catalog","default","monitoring"]'
  creationTimestamp: "2019-12-17T01:41:43Z"
  generation: 4
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
  resourceVersion: "37485079"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/k8s
  uid: 60f7b6aa-206e-11ea-9e1d-064ec46212f4
spec:
  additionalScrapeConfigs:
    key: prometheus-additional.yaml
    name: additional-scrape-configs
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorNamespaceSelector:
    matchLabels:
      prometheus.monitor: true
  podMonitorSelector:
    matchLabels:
      team: frontend
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  rules:
    alert: {}
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.11.0
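The change can be made with kubectl edit (or by re-applying the manifest), after which the operator regenerates the Prometheus configuration. To verify that the redis pods were picked up, port-forward the Prometheus service and look at the targets page (a sketch):

kubectl -n monitoring edit prometheus k8s

kubectl -n monitoring port-forward svc/prometheus-k8s 9090
# open http://localhost:9090/targets and look for a job like podMonitor/default/example-app/0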
This Prometheus also carries some custom additional scrape configuration (additionalScrapeConfigs); if you want to add your own, refer to this link.
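The spec above references a Secret named additional-scrape-configs whose key is prometheus-additional.yaml. A minimal sketch of creating such a Secret; the job below is only an illustrative example, not the extra configuration actually used here:

cat <<EOF > prometheus-additional.yaml
- job_name: external-example
  static_configs:
  - targets:
    - 192.168.10.11:9100    # hypothetical target outside the cluster
EOF

kubectl -n monitoring create secret generic additional-scrape-configs \
  --from-file=prometheus-additional.yaml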
For displaying the redis-exporter metrics in Grafana, refer to this dashboard.
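For reference, a few of the metrics exposed by redis_exporter that such dashboards typically build on (PromQL):

redis_up                                    # 1 when the exporter can reach the redis instance
redis_connected_clients                     # current client connections
redis_memory_used_bytes                     # memory usage reported by INFO
rate(redis_commands_processed_total[5m])    # commands processed per second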