Configuring Federation in Prometheus 2.0

https://blog.51cto.com/lee90/2062252

Federation has a number of use cases. It is commonly used either to build a scalable Prometheus setup, or to pull selected metrics from one service's Prometheus into another Prometheus for display.

 

Hierarchical federation:

Hierarchical federation allows Prometheus to scale to environments with dozens of data centers and millions of nodes. In this use case, the federation topology resembles a tree: higher-level Prometheus servers collect aggregated time series data from a large number of subordinate servers.

Cross-service federation:

In cross-service federation, a Prometheus server of one service is configured to scrape selected data from another service's Prometheus server to enable alerting and queries against both datasets within a single server.


For example, a cluster scheduler running multiple services might expose resource usage information (like memory and CPU usage) about service instances running on the cluster. On the other hand, a service running on that cluster will only expose application-specific service metrics. Often, these two sets of metrics are scraped by separate Prometheus servers. Using federation, the Prometheus server containing service-level metrics may pull in the cluster resource usage metrics about its specific service from the cluster Prometheus, so that both sets of metrics can be used within that server.


[For example: to monitor the health of mysqld, we could run one main Prometheus plus two shard Prometheus instances (one scraping node_exporter metrics, the other scraping mysqld_exporter metrics), and then aggregate everything on the main Prometheus.]

 

 

I won't go through installing Prometheus, mysqld_exporter, and postgres_exporter here; it is straightforward, and the exporters are usually rolled out in bulk with a tool such as Ansible or SaltStack.

 

 

For this experiment I ran 3 shard nodes and 1 global node on a single machine.

Node1: 10.0.20.25 (running the older Prometheus 1.7, mysqld_exporter, and postgres_exporter; this environment was left over from an earlier experiment)

Node2: 10.0.20.26 (running Prometheus 2.0, mysqld_exporter, and postgres_exporter)

 

 

Now let's configure federation on Node2.

Note: my cross-service federation setup in this experiment is not entirely by the book. The official recommendation is to use several independent Prometheus nodes to scrape the node_exporter, mysqld_exporter, and postgres_exporter metrics separately, and then aggregate them on the GLOBAL node.

 

cd /usr/local/prometheus

 

Write the files that list the hosts to be scraped:

cat mysqld.json 

 

 [ { "targets": [ "10.0.20.26:9104", "10.0.20.26:9100", "10.0.20.25:9104", "10.0.20.25:9100" ], "labels": { "services": "dba_test", } } ]

cat pgsql.json

[ { "targets": [ "10.0.20.26:9187", "10.0.20.25:9187" ], "labels": { "services": "dba_test_pgsql", } } ]

 

 

The configuration files for the 3 shard nodes are as follows:

 

Node 1 collects the MySQL metrics:

cat prometheus1.yml

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'mysql'
    file_sd_configs:
    - files: ['./mysqld.json']

 

Node 2 collects the PostgreSQL metrics:

cat prometheus2.yml

 

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'pgsql'
    file_sd_configs:
    - files: ['./pgsql.json']

 

Node 3 collects the Prometheus servers' own metrics:

cat prometheus3.yml

 

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus25'
    static_configs:
    - targets: ['10.0.20.25:9090']
  - job_name: 'prometheus26'
    static_configs:
    - targets: ['10.0.20.26:9090']

 

Then start the three shard nodes:

 

./prometheus --web.listen-address="0.0.0.0:9091" --storage.tsdb.path="data1/" --config.file="prometheus1.yml" --web.enable-admin-api
./prometheus --web.listen-address="0.0.0.0:9092" --storage.tsdb.path="data2/" --config.file="prometheus2.yml" --web.enable-admin-api
./prometheus --web.listen-address="0.0.0.0:9093" --storage.tsdb.path="data3/" --config.file="prometheus3.yml" --web.enable-admin-api
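Before wiring up the GLOBAL node it is worth confirming that all three shards actually came up. A minimal check, assuming the shards are reachable on the ports above (Prometheus 2.x exposes a /-/healthy endpoint that simply answers with a short healthy message when the server is up):

curl -s http://10.0.20.26:9091/-/healthy
curl -s http://10.0.20.26:9092/-/healthy
curl -s http://10.0.20.26:9093/-/healthy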

 

Next, configure the GLOBAL node:

cat prometheus.yml shows the following:

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~"prometheus.*"}'
        - '{job="mysql"}'
        - '{job="pgsql"}'
    static_configs:
      - targets:
        - '10.0.20.26:9091'
        - '10.0.20.26:9092'
        - '10.0.20.26:9093'
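You can preview exactly what the GLOBAL node will pull by querying a shard's /federate endpoint directly with the same match[] selector; for example, against the MySQL shard (curl's --data-urlencode handles the URL encoding):

curl -sG 'http://10.0.20.26:9091/federate' --data-urlencode 'match[]={job="mysql"}'

The response is plain text-exposition output containing every series that matches the selector.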

 

Start the GLOBAL node:

./prometheus --web.listen-address="0.0.0.0:9090" --storage.tsdb.path="data_global/" --config.file="prometheus.yml" --web.enable-admin-api

 

Once everything is configured, the whole directory looks like the screenshot below:

[Screenshot: directory layout after configuration]

 

 

Then open http://10.0.20.26:9090/targets (the GLOBAL node) in a browser; it looks like the screenshot below:

[Screenshot: targets page of the GLOBAL node]

You can also visit the three original shard nodes directly to see the mysql, pgsql, and prometheus scrape data they each hold:

http://10.0.20.26:9091/graph

http://10.0.20.26:9092/graph

http://10.0.20.26:9093/graph

 

As a quick self-test, the GLOBAL node at http://10.0.20.26:9090/graph should return the full data set collected by the three backend shard nodes.
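The same check can be done from the command line against the GLOBAL node's query API. Because honor_labels: true is set on the federate job, the up series scraped by the shards keep their original job labels, so every backend job should show up in a query like this (a sketch, assuming the GLOBAL node listens on 9090 as above):

curl -sG 'http://10.0.20.26:9090/api/v1/query' --data-urlencode 'query=count by (job) (up)'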

With that, Prometheus federation is configured. Pretty simple, isn't it?

 

Note: when alerts need to be especially timely, it is better not to point the alerting/dashboard data source (Grafana, for example) at the federated GLOBAL node. Connect it directly to the corresponding backend shard node instead, so that a delay in the GLOBAL node's federation scrape does not delay the alerts.

 

Integrating with Grafana:

The data can now be visualized in Grafana; see the screenshot:

[Screenshot: Grafana dashboard]

 

When filling in the instance variable here I used a regular expression, so a single panel lists every matching host at once. The one drawback is that Grafana's built-in alerting can no longer be used, because Grafana alerts do not yet support template variables; the expression has to be hard-coded to something like node_load5{instance=~'10.0.20.26:9100'} for alerting to work.
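To make the difference concrete, here are the two flavours of the query run against the GLOBAL node's API (the regex below is only illustrative; adjust it to your own instance labels):

# regex matcher: one expression covers every matching node_exporter host
curl -sG 'http://10.0.20.26:9090/api/v1/query' --data-urlencode 'query=node_load5{instance=~"10.0.20.2.*:9100"}'
# pinned matcher: the form Grafana's built-in alerting currently requires
curl -sG 'http://10.0.20.26:9090/api/v1/query' --data-urlencode 'query=node_load5{instance="10.0.20.26:9100"}'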


 

Here are some screenshots of the finished setup:

 

[Screenshot: finished Grafana dashboards]
