InfluxDB is a time-series database developed by InfluxData (www.influxdata.com) for storing data ordered by time. In the TIG (Telegraf + InfluxDB + Grafana) stack, InfluxDB acts as the middle layer: it stores the raw data and indexes it by time, exposing a time-series query interface. It is therefore the first component of the TIG stack you should set up.
InfluxDB introduction:
Uses the TSM (Time Structured Merge) storage engine, which allows high ingest rates and data compression;
Written in Go, with no external dependencies;
Simple, high-performance HTTP API for writes and queries;
Plugin support for other ingestion protocols such as Graphite, collectd, and OpenTSDB;
High availability can be built with influxdb-relay: https://docs.influxdata.com/influxdb/v1.0/high_availability/relay/;
An extended SQL-like language that makes it easy to query aggregated data;
Tag support, which makes queries more efficient and faster;
Retention policies that automatically expire stale data;
Continuous queries that precompute data, making frequent queries more efficient;
A built-in web admin UI.
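The retention-policy and continuous-query features listed above can be sketched in InfluxQL. This is an illustration only: the names mydb, one_week, cpu and cpu_load_5m are made-up examples, and it assumes a running 1.x server on localhost:8086.

```shell
# Build the two statements as strings (names are hypothetical examples):
RP_Q="CREATE RETENTION POLICY one_week ON mydb DURATION 7d REPLICATION 1 DEFAULT"
CQ_Q="CREATE CONTINUOUS QUERY cpu_mean_5m ON mydb BEGIN SELECT mean(load) INTO cpu_load_5m FROM cpu GROUP BY time(5m) END"
echo "$RP_Q"
echo "$CQ_Q"
# Against a live server, each would be submitted through the query endpoint:
#   curl -G "http://localhost:8086/query" --data-urlencode "q=$RP_Q"
```

With the retention policy in place, points older than 7 days are dropped automatically; the continuous query keeps a 5-minute mean precomputed so dashboards do not have to scan raw points.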
Download and install:
GitHub (build from source): https://github.com/influxdata/influxdb
Official downloads:
CentOS family: wget https://dl.influxdata.com/influxdb/releases/influxdb-1.0.0.x86_64.rpm && sudo yum localinstall influxdb-1.0.0.x86_64.rpm
Tarball: wget https://dl.influxdata.com/influxdb/releases/influxdb-1.0.0_linux_amd64.tar.gz && tar xvfz influxdb-1.0.0_linux_amd64.tar.gz
Docker: docker pull influxdb
Installation manual: https://docs.influxdata.com/influxdb/v0.9/introduction/installation/
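For the Docker route, pulling the image is only half the job; the container still has to be started with the right ports and a persistent data directory. A minimal sketch (the port mappings and host volume path are assumptions based on the defaults discussed below):

```shell
# Assemble the run command (container name, ports and volume are assumptions):
RUN_CMD="docker run -d --name influxdb -p 8083:8083 -p 8086:8086 -v /var/lib/influxdb:/var/lib/influxdb influxdb"
echo "$RUN_CMD"
# On a host with Docker installed, execute it with:
#   eval "$RUN_CMD"
```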
Configuration:
#cat /etc/influxdb/influxdb.conf
reporting-disabled = false
[registration]
[meta]
  dir = "/var/lib/influxdb/meta"
  hostname = "10.0.0.2" # must be this machine's address, otherwise the data API cannot be reached
  bind-address = ":8088"
  retention-autocreate = true
  election-timeout = "1s"
  heartbeat-timeout = "1s"
  leader-lease-timeout = "500ms"
  commit-timeout = "50ms"
  cluster-tracing = false
[data]
  dir = "/var/lib/influxdb/data"
  max-wal-size = 104857600 # Maximum size the WAL can reach before a flush. Defaults to 100MB.
  wal-flush-interval = "10m" # Maximum time data can sit in WAL before a flush.
  wal-partition-flush-delay = "2s" # The delay time between each WAL partition being flushed.
  wal-dir = "/var/lib/influxdb/wal"
  wal-logging-enabled = true
[hinted-handoff]
  enabled = true
  dir = "/var/lib/influxdb/hh"
  max-size = 1073741824
  max-age = "168h"
  retry-rate-limit = 0
  retry-interval = "1s"
  retry-max-interval = "1m"
  purge-interval = "1h"
[admin]
  enabled = true
  bind-address = ":8083"
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb.pem"
[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = false
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb.pem"
[opentsdb]
  enabled = false
[collectd]
  enabled = false
Note:
The InfluxDB service listens on three ports: 8086 is the default data-handling port, used for database operations and exposing the HTTP API; 8083 serves the admin web UI, giving users a friendly interface for queries and data management; 8088 is used for metadata management. Also note that InfluxDB runs as the influxdb user by default and stores its data under /var/lib/influxdb/; keep both in mind for production.
Startup:
As with Telegraf, InfluxDB can be managed via init.d or systemd.
After starting, verify that the relevant ports are listening and check the logs to make sure the service came up cleanly.
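The post-start checks above can be sketched as follows. This assumes a systemd host, the default unit name influxdb, and the default ports; run the commented commands on the InfluxDB host itself:

```shell
# Post-start verification (unit name and ports are assumptions):
#   sudo systemctl start influxdb && sudo systemctl status influxdb
#   ss -lntp | grep -E ':(8083|8086|8088)'   # all three listeners should appear
#   curl -sS -i http://localhost:8086/ping   # HTTP API answers 204 No Content when healthy
#   sudo journalctl -u influxdb -n 50        # recent log lines
PORTS="8083 8086 8088"
for p in $PORTS; do echo "expect a listener on port $p"; done
```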
Usage:
If the most important part of Telegraf is its configuration, then the most important part of InfluxDB is its SQL-like query language. InfluxDB supports three ways of working with it:
From the InfluxDB shell:
Create a database:
  create database mydb
Create a user:
  create user "bigdata" with password 'bigdata' with all privileges
List databases:
  show databases
Insert data:
  insert bigdata,host=server001,region=HC load=88
Switch database:
  use mydb
List the measurements in the database (similar to tables in a relational database):
  show measurements
Query:
  select * from cpu limit 2
From one hour ago until now:
  select load from cpu where time > now() - 1h
From the epoch up to 1000 days from now:
  select load from cpu where time < now() + 1000d
A time range:
  select load from cpu where time > '2016-08-18' and time < '2016-09-19'
A small window, e.g. the 6 minutes after September 18, 2016 21:24:00:
  select load from cpu where time > '2016-09-18T21:24:00Z' + 6m
Regex queries across measurements:
  select * from /.*/ limit 1
  select * from /^docker/ limit 3
  select * from /.*mem.*/ limit 3
Regex match on a tag (=~ and !~):
  select * from cpu where "host" !~ /.*HC.*/ limit 4
  SELECT * FROM "h2o_feet" WHERE ("location" =~ /.*y.*/ OR "location" =~ /.*m.*/) AND "water_level" > 0 LIMIT 4
Grouping: group by must be used together with an aggregate function:
  select count(type) from events group by time(10s)
  select count(type) from events group by time(10s),type
Aliasing a queried field:
  select count(type) as number_of_types from events group by time(10m)
  select count(type) from events where time > now() - 3h group by time(1h)
Using fill (fill(0), fill(-1) or fill(null)):
  select count(type) from events where time > now() - 3h group by time(1h) fill(0)
Merging series:
  select count(type) from user_events merge admin_events group by time(10m)
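The statements above are typed interactively after running the influx CLI. The same CLI can also run one-off statements non-interactively, which is handy in scripts; the sketch below only echoes the command it would run (flags are the standard 1.x ones, mydb and cpu are the example names used in this section):

```shell
# Build a one-off query and show the non-interactive CLI invocation for it:
Q="select * from cpu limit 2"
echo "influx -database mydb -execute '$Q'"
# Run the echoed command on a host where both the influx CLI and the server are available.
```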
Operating on data through the API:
Create a database:
  curl -G "http://localhost:8086/query" --data-urlencode "q=create database mydb"
Insert data:
  curl -XPOST 'http://localhost:8086/write?db=mydb' -d 'biaoge,name=xxbandy,xingqu=coding age=2'
  curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
  curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server02 value=0.67
cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257
cpu_load_short,direction=in,host=server01,region=us-west value=2.0 1422568543702900257'
Write the points to a file and insert them through the API:
  #cat cpu_data.txt
  cpu_load_short,host=server02 value=0.67
  cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257
  cpu_load_short,direction=in,host=server01,region=us-west value=2.0 1422568543702900257
  #curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary @cpu_data.txt
Query data (--data-urlencode "epoch=s" selects the timestamp precision; "chunk_size=20000" sets the query chunk size):
  curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=select * from biaoge where xingqu='coding'"
Data analysis:
  curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=select mean(load) from cpu"
  curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=select load from cpu"
  If the load values are 42, 78 and 15.4, mean(load) returns 45.13.
  curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=select mean(load) from cpu where host='server01'"
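The write bodies above all follow InfluxDB's line protocol: a measurement name, optional comma-separated tags, a space, the fields, and an optional nanosecond timestamp. The sketch below assembles one point from its parts (all names are the examples used above) and only echoes it; the actual write, shown commented, needs a running server:

```shell
# Line protocol shape: measurement[,tag=value...] field=value[,...] [timestamp_ns]
measurement="cpu_load_short"
tags="host=server01,region=us-west"
fields="value=0.64"
ts="1434055562000000000"   # nanoseconds since the epoch; omit to let the server timestamp the point
point="$measurement,$tags $fields $ts"
echo "$point"
# To write it against a live server:
#   curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary "$point"
```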
Using the web UI that InfluxDB provides:
This is only a brief introduction to InfluxDB. To aggregate data and present it nicely in Grafana later, you will need to get familiar with InfluxDB's query syntax (essentially SQL usage techniques: aggregate functions, subqueries, and so on).
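The kind of query Grafana typically issues is exactly the pattern covered above: an aggregate over time buckets with fill(), restricted to a time range. A sketch, reusing the example names mydb, cpu and load from this section; it only echoes the request that would be sent:

```shell
# A Grafana-style query: 5-minute means over the last 3 hours, gaps filled with null.
Q="SELECT mean(load) FROM cpu WHERE time > now() - 3h GROUP BY time(5m) fill(null)"
echo "curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode 'db=mydb' --data-urlencode \"q=$Q\""
```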
Note: this is original work; please contact the author before reprinting.