Although the Prometheus dashboard claims to offer a wide variety of charts, it is really quite rudimentary, so the dedicated Grafana tool is normally used for visualisation instead.
Grafana official Docker Hub page
Grafana official GitHub repository
Grafana official website
docker pull grafana/grafana:5.4.2
docker tag 6f18ddf9e552 harbor.zq.com/infra/grafana:v5.4.2
docker push harbor.zq.com/infra/grafana:v5.4.2
Prepare the directory:
mkdir /data/k8s-yaml/grafana
cd /data/k8s-yaml/grafana
cat >rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
rules:
- apiGroups:
  - "*"
  resources:
  - namespaces
  - deployments
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana
subjects:
- kind: User
  name: k8s-node
EOF
cat >dp.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: grafana
    name: grafana
  name: grafana
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: grafana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: grafana
        name: grafana
    spec:
      containers:
      - name: grafana
        image: harbor.zq.com/infra/grafana:v5.4.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: data
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - nfs:
          server: hdss7-200
          path: /data/nfs-volume/grafana
        name: data
EOF
Create the Grafana data directory:
mkdir /data/nfs-volume/grafana
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: infra
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
EOF
cat >ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: infra
spec:
  rules:
  - host: grafana.zq.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
EOF
vi /var/named/zq.com.zone
grafana            A    10.4.7.10
systemctl restart named
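Before creating the ingress it is worth confirming that the new record resolves; a quick check from any host that uses this bind server as its resolver (the expected answer is the ingress VIP 10.4.7.10):

dig grafana.zq.com +short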
kubectl apply -f http://k8s-yaml.zq.com/grafana/rbac.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/svc.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/ingress.yaml
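A quick way to confirm the Deployment, Service and Ingress came up (resource names as created above):

kubectl -n infra get pods -l app=grafana
kubectl -n infra get svc grafana
kubectl -n infra get ingress grafana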
Visit http://grafana.zq.com; the default username/password is admin/admin.
If the page loads successfully, the installation worked.
After logging in, immediately change the administrator password to admin123.
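If you prefer to do this from the command line instead of the UI, the password can also be reset with grafana-cli; this is only a sketch, run inside the Grafana pod (the pod name is illustrative, and the --homepath flag may or may not be needed depending on the image):

kubectl -n infra exec -it grafana-d6588db94-xr4s6 -- \
    grafana-cli admin reset-admin-password --homepath "/usr/share/grafana" admin123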
Once Grafana is confirmed to be running, exec into the Grafana container and install the following plugins:
kubectl -n infra exec -it grafana-d6588db94-xr4s6 /bin/bash
# run the following commands inside the container
grafana-cli plugins install grafana-kubernetes-app
grafana-cli plugins install grafana-clock-panel
grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install briangann-gauge-panel
grafana-cli plugins install natel-discrete-panel
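To confirm the plugins were installed, list them from inside the same container; note that newly installed plugins only take effect after Grafana restarts, which is why the pod is deleted further below:

grafana-cli plugins ls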
Add the data source by clicking, in order: the gear icon on the left --> add data source --> Prometheus.
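The same data source can also be added through the Grafana HTTP API instead of the UI; a minimal sketch, assuming the admin password set earlier and that Prometheus is reachable at http://prometheus.zq.com (adjust the URL to however Prometheus is exposed in your environment):

curl -s -u admin:admin123 -X POST http://grafana.zq.com/api/datasources \
    -H 'Content-Type: application/json' \
    -d '{"name":"prometheus","type":"prometheus","url":"http://prometheus.zq.com","access":"proxy","isDefault":true}'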
After the data source is added, restart Grafana:
kubectl -n infra delete pod grafana-7dd95b4c8d-nj5cx
Enable the Kubernetes plugin by clicking, in order: the gear icon on the left --> Plugins --> kubernetes --> Enable.
Create a new cluster by clicking, in order: the K8S icon on the left --> New Cluster.
After adding the cluster you need to wait a few minutes; until data has been scraped it will report "http forbidden". That is fine, it clears up on its own after roughly 2-5 minutes.
Click Cluster Dashboard.
Next, deploy Alertmanager, which will handle notification delivery for Prometheus alerts. Prepare the image:
docker pull docker.io/prom/alertmanager:v0.14.0
docker tag 23744b2d645c harbor.zq.com/infra/alertmanager:v0.14.0
docker push harbor.zq.com/infra/alertmanager:v0.14.0
Prepare the directory:
mkdir /data/k8s-yaml/alertmanager
cd /data/k8s-yaml/alertmanager
cat >cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: infra
data:
  config.yml: |-
    global:
      # time after which an alert is declared resolved if it is no longer firing
      resolve_timeout: 5m
      # email (SMTP) delivery settings
      smtp_smarthost: 'smtp.163.com:25'
      smtp_from: 'xxx@163.com'
      smtp_auth_username: 'xxx@163.com'
      smtp_auth_password: 'xxxxxx'
      smtp_require_tls: false

    templates:
    - '/etc/alertmanager/*.tmpl'

    # root route that every incoming alert enters; defines how alerts are dispatched
    route:
      # labels used to regroup incoming alerts; e.g. many alerts carrying
      # cluster=A and alertname=LatencyHigh will be aggregated into one group
      group_by: ['alertname', 'cluster']
      # after a new alert group is created, wait at least group_wait before the first
      # notification, so several alerts of the same group can be sent together
      group_wait: 30s
      # after the first notification, wait group_interval before sending a new batch
      # of alerts for the same group
      group_interval: 5m
      # once a notification has been sent successfully, wait repeat_interval before
      # re-sending it
      repeat_interval: 5m
      # default receiver: alerts that match no route are sent here
      receiver: default

    receivers:
    - name: 'default'
      email_configs:
      - to: 'xxxx@qq.com'
        send_resolved: true
        html: '{{ template "email.to.html" . }}'
        headers: { Subject: " {{ .CommonLabels.instance }} {{ .CommonAnnotations.summary }}" }

  email.tmpl: |
    {{ define "email.to.html" }}
    {{- if gt (len .Alerts.Firing) 0 -}}
    {{ range .Alerts }}
    告警程序: prometheus_alert <br>
    告警級別: {{ .Labels.severity }} <br>
    告警類型: {{ .Labels.alertname }} <br>
    故障主機: {{ .Labels.instance }} <br>
    告警主題: {{ .Annotations.summary }} <br>
    觸發時間: {{ .StartsAt.Format "2006-01-02 15:04:05" }} <br>
    {{ end }}{{ end -}}
    {{- if gt (len .Alerts.Resolved) 0 -}}
    {{ range .Alerts }}
    告警程序: prometheus_alert <br>
    告警級別: {{ .Labels.severity }} <br>
    告警類型: {{ .Labels.alertname }} <br>
    故障主機: {{ .Labels.instance }} <br>
    告警主題: {{ .Annotations.summary }} <br>
    觸發時間: {{ .StartsAt.Format "2006-01-02 15:04:05" }} <br>
    恢復時間: {{ .EndsAt.Format "2006-01-02 15:04:05" }} <br>
    {{ end }}{{ end -}}
    {{- end }}
EOF
cat >dp.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: harbor.zq.com/infra/alertmanager:v0.14.0
        args:
        - "--config.file=/etc/alertmanager/config.yml"
        - "--storage.path=/alertmanager"
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: alertmanager-cm
          mountPath: /etc/alertmanager
      volumes:
      - name: alertmanager-cm
        configMap:
          name: alertmanager-config
      imagePullSecrets:
      - name: harbor
EOF
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: infra
spec:
  selector:
    app: alertmanager
  ports:
  - port: 80
    targetPort: 9093
EOF
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/cm.yaml
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/svc.yaml
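A quick sanity check that the Alertmanager pod started and loaded the ConfigMap without errors (the label comes from the Deployment above; the exact log wording depends on the version):

kubectl -n infra get pods -l app=alertmanager
kubectl -n infra logs -l app=alertmanager --tail=20   # should show it listening on :9093 with no config errors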
cat >/data/nfs-volume/prometheus/etc/rules.yml <<'EOF'
groups:
- name: hostStatsAlert
  rules:
  - alert: hostCpuUsageAlert
    expr: sum(avg without (cpu)(irate(node_cpu{mode!='idle'}[5m]))) by (instance) > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} CPU usage above 85% (current value: {{ $value }}%)"
  - alert: hostMemUsageAlert
    expr: (node_memory_MemTotal - node_memory_MemAvailable)/node_memory_MemTotal > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} MEM usage above 85% (current value: {{ $value }}%)"
  - alert: OutOfInodes
    expr: node_filesystem_free{fstype="overlay",mountpoint="/"} / node_filesystem_size{fstype="overlay",mountpoint="/"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of inodes (instance {{ $labels.instance }})"
      description: "Disk is almost running out of available inodes (< 10% left) (current value: {{ $value }})"
  - alert: OutOfDiskSpace
    expr: node_filesystem_free{fstype="overlay",mountpoint="/rootfs"} / node_filesystem_size{fstype="overlay",mountpoint="/rootfs"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of disk space (instance {{ $labels.instance }})"
      description: "Disk is almost full (< 10% left) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputIn
    expr: sum by (instance) (irate(node_network_receive_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput in (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably receiving too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputOut
    expr: sum by (instance) (irate(node_network_transmit_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput out (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably sending too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadRate
    expr: sum by (instance) (irate(node_disk_bytes_read[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read rate (instance {{ $labels.instance }})"
      description: "Disk is probably reading too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskWriteRate
    expr: sum by (instance) (irate(node_disk_bytes_written[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write rate (instance {{ $labels.instance }})"
      description: "Disk is probably writing too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadLatency
    expr: rate(node_disk_read_time_ms[1m]) / rate(node_disk_reads_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (read operations > 100ms) (current value: {{ $value }})"
  - alert: UnusualDiskWriteLatency
    expr: rate(node_disk_write_time_ms[1m]) / rate(node_disk_writes_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (write operations > 100ms) (current value: {{ $value }})"
- name: http_status
  rules:
  - alert: ProbeFailed
    expr: probe_success == 0
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Probe failed (instance {{ $labels.instance }})"
      description: "Probe failed (current value: {{ $value }})"
  - alert: StatusCode
    expr: probe_http_status_code <= 199 OR probe_http_status_code >= 400
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Status Code (instance {{ $labels.instance }})"
      description: "HTTP status code is not 200-399 (current value: {{ $value }})"
  - alert: SslCertificateWillExpireSoon
    expr: probe_ssl_earliest_cert_expiry - time() < 86400 * 30
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "SSL certificate will expire soon (instance {{ $labels.instance }})"
      description: "SSL certificate expires in 30 days (current value: {{ $value }})"
  - alert: SslCertificateHasExpired
    expr: probe_ssl_earliest_cert_expiry - time() <= 0
    for: 5m
    labels:
      severity: error
    annotations:
      summary: "SSL certificate has expired (instance {{ $labels.instance }})"
      description: "SSL certificate has expired already (current value: {{ $value }})"
  - alert: BlackboxSlowPing
    expr: probe_icmp_duration_seconds > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow ping (instance {{ $labels.instance }})"
      description: "Blackbox ping took more than 2s (current value: {{ $value }})"
  - alert: BlackboxSlowRequests
    expr: probe_http_duration_seconds > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow requests (instance {{ $labels.instance }})"
      description: "Blackbox request took more than 2s (current value: {{ $value }})"
  - alert: PodCpuUsagePercent
    expr: sum(sum(label_replace(irate(container_cpu_usage_seconds_total[1m]),"pod","$1","container_label_io_kubernetes_pod_name", "(.*)"))by(pod) / on(pod) group_right kube_pod_container_resource_limits_cpu_cores *100 )by(container,namespace,node,pod,severity) > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Pod cpu usage percent has exceeded 80% (current value: {{ $value }}%)"
EOF
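Before reloading Prometheus, the rule file can be validated; a minimal sketch, assuming promtool (it ships with Prometheus) is available on this host:

promtool check rules /data/nfs-volume/prometheus/etc/rules.yml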
Append the following configuration to the Prometheus configuration file:
cat >>/data/nfs-volume/prometheus/etc/prometheus.yml <<'EOF'
alerting:
  alertmanagers:
  - static_configs:
    - targets: ["alertmanager"]
rule_files:
- "/data/etc/rules.yml"
EOF
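Two notes on the snippet above: the bare target "alertmanager" resolves through cluster DNS because the Service created earlier lives in the same namespace (infra) as Prometheus and listens on port 80, and rule_files points to /data/etc/rules.yml because that is the path at which the Prometheus container mounts the NFS etc directory. To syntax-check the amended config, run promtool inside the Prometheus pod so the container paths resolve (a sketch only; substitute the real pod name, and note the official prom/prometheus image ships promtool):

kubectl -n infra exec -it <prometheus-pod-name> -- promtool check config /data/etc/prometheus.yml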
Reload the configuration:
curl -X POST http://prometheus.zq.com/-/reload
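After the reload, Prometheus should list the Alertmanager instance as active; a quick check against the Prometheus HTTP API (assuming Prometheus is exposed at prometheus.zq.com as in the earlier chapters):

curl -s http://prometheus.zq.com/api/v1/alertmanagers
# the alertmanager pod IP should appear under "activeAlertmanagers"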
The rules above make up our alerting rules.
To test them, stop the dubbo-demo-service in the test namespace.
The blackbox exporter now reports an error, and the corresponding item on the Prometheus Alerts page turns yellow (pending).
Once the item on the Alerts page turns red (firing), the email alert is sent.
If you need to tailor the alerting rules or the alert content yourself, spend some time on PromQL and edit the configuration files accordingly.
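As an illustration only, here is a hypothetical custom rule that fires when a node-exporter target has been unreachable for five minutes; the job label is an assumption and must match whatever job name your prometheus.yml actually uses. Append it under an existing group in rules.yml and reload Prometheus as shown above.

  # hypothetical example rule -- adjust job="node-exporter" to your scrape config
  - alert: NodeExporterDown
    expr: up{job="node-exporter"} == 0
    for: 5m
    labels:
      severity: error
    annotations:
      summary: "node-exporter down (instance {{ $labels.instance }})"
      description: "Prometheus has not scraped {{ $labels.instance }} for 5 minutes (current value: {{ $value }})"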