ElasticSearch and LogStash are both Java programs, so a JDK is required.
Note that for multi-node communication the JDK version must be identical on all nodes; otherwise connections may fail.
Download jdk-7u71-linux-x64.rpm from:
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
rpm -ivh jdk-7u71-linux-x64.rpm
Configure the JDK
Edit /etc/profile and add at the top:
export JAVA_HOME=/usr/java/jdk1.7.0_71
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
Check the JDK environment
Run source /etc/profile to make the environment variables take effect immediately.
Check the installed JDK version: java -version
Check the environment variables: echo $PATH
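The same checks can be made scriptable so they are easy to repeat on every node (a small sketch; `check_jdk` is a hypothetical helper, and the path is whatever JAVA_HOME resolves to from /etc/profile above):

```shell
# Verify that JAVA_HOME points at a usable JDK; returns non-zero on failure.
check_jdk() {
  [ -n "$JAVA_HOME" ] || { echo "JAVA_HOME not set"; return 1; }
  [ -x "$JAVA_HOME/bin/java" ] || { echo "no java binary under $JAVA_HOME"; return 1; }
  echo "JDK OK: $JAVA_HOME"
}
```

Running `check_jdk` on each node makes it easy to spot the version-mismatch problem mentioned at the top before it shows up as a connection failure.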
elasticsearch is the search engine, responsible for storing the log content.
Download and install
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.4.4.tar.gz
tar zxvf elasticsearch-1.4.4.tar.gz
Edit the config/elasticsearch.yml configuration file:
bootstrap.mlockall: true
index.number_of_shards: 1
index.number_of_replicas: 0
#index.translog.flush_threshold_ops: 100000
#index.refresh_interval: -1
index.translog.flush_threshold_ops: 5000
index.refresh_interval: 1
network.bind_host: 172.16.18.114
# IP address announced to the other nodes for inter-node communication.
# If unset, ES picks an address on its own; the other nodes may not be able
# to reach it, and inter-node communication will fail.
network.publish_host: 172.16.18.114
# Security: allow all http requests
http.cors.enabled: true
http.cors.allow-origin: "/.*/"
# Let the JVM use the OS max-open-files limit
es_parms="-Delasticsearch -Des.max-open-files=true"
# Start up the service
# Raise the OS limits for open files and locked memory
ulimit -n 1000000
ulimit -l unlimited
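ulimit changes made in a startup script do not survive reboots or apply to other login sessions; the usual persistent place on Linux is /etc/security/limits.conf (a sketch matching the values above; exact PAM configuration varies by distribution):

```
# /etc/security/limits.conf -- persistent equivalents of the ulimit calls above
*    soft    nofile     1000000
*    hard    nofile     1000000
*    soft    memlock    unlimited
*    hard    memlock    unlimited
```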
launch_service "$pidfile" "$daemonized" "$properties"
......
if [ "x$ES_MIN_MEM" = "x" ]; then
ES_MIN_MEM=256m
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
ES_MAX_MEM=1g
fi
if [ "x$ES_HEAP_SIZE" != "x" ]; then
ES_MIN_MEM=$ES_HEAP_SIZE
ES_MAX_MEM=$ES_HEAP_SIZE
fi
#set min memory as 2g
ES_MIN_MEM=2g
#set max memory as 2g
ES_MAX_MEM=2g
......
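An alternative to hard-coding ES_MIN_MEM and ES_MAX_MEM: the ES_HEAP_SIZE branch shown in the script above sets both values at once, so exporting that variable before start has the same effect without editing the script:

```shell
# ES_HEAP_SIZE overrides both ES_MIN_MEM and ES_MAX_MEM (see the
# if-branch in the startup script above), so this is equivalent to
# hard-coding 2g/2g.
export ES_HEAP_SIZE=2g
# then start as usual:
# ./bin/elasticsearch -d
```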
Run
./bin/elasticsearch -d
Log files are written under ./logs
Check node status
curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'
{
"cluster_name" : "elasticsearch",
"nodes" : {
"7PEaZbvxToCL2O2KuMGRYQ" : {
"name" : "Gertrude Yorkes",
"transport_address" : "inet[/172.16.18.116:9300]",
"host" : "casimbak",
"ip" : "172.16.18.116",
"version" : "1.4.4",
"build" : "c88f77f",
"http_address" : "inet[/172.16.18.116:9200]",
"settings" : {
"index": {
"number_of_replicas": "0",
"translog": {
"flush_threshold_ops": "5000"
},
"number_of_shards": "1",
"refresh_interval": "1"
},
"path" : {
"logs" : "/home/jfy/soft/elasticsearch-1.4.4/logs",
"home" : "/home/jfy/soft/elasticsearch-1.4.4"
},
"cluster" : {
"name" : "elasticsearch"
},
"bootstrap" : {
"mlockall" : "true"
},
"client" : {
"type" : "node"
},
"http" : {
"cors" : {
"enabled" : "true",
"allow-origin" : "/.*/"
}
},
"foreground" : "yes",
"name" : "Gertrude Yorkes",
"max-open-files" : "ture"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 13896,
"max_file_descriptors" : 1000000,
"mlockall" : true
},
...
}
}
}
This shows ElasticSearch is running and its status matches the configuration:
"index": {
"number_of_replicas": "0",
"translog": {
"flush_threshold_ops": "5000"
},
"number_of_shards": "1",
"refresh_interval": "1"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 13896,
"max_file_descriptors" : 1000000,
"mlockall" : true
},
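A quicker liveness check than the full _nodes dump is the cluster-health endpoint (a hedged sketch; it assumes elasticsearch answers on localhost:9200, and `health_ok` is a hypothetical helper, not part of elasticsearch):

```shell
# Fetch cluster health; prints JSON with a "status" field.
curl -s 'http://localhost:9200/_cluster/health?pretty=true' || echo "elasticsearch not reachable"

# Hypothetical helper: map the status string to an exit code.
# With number_of_replicas: 0 on a single node, green is the expected status.
health_ok() {
  case "$1" in
    green|yellow) return 0 ;;
    *) return 1 ;;
  esac
}
```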
Install the head plugin to browse and operate elasticsearch
elasticsearch/bin/plugin -install mobz/elasticsearch-head
http://172.16.18.116:9200/_plugin/head/
Install the marvel plugin to monitor elasticsearch status
elasticsearch/bin/plugin -i elasticsearch/marvel/latest
http://172.16.18.116:9200/_plugin/marvel/
logstash is a log collection, processing, and filtering program.
LogStash is split into a log-shipping process and a log-indexing process. The shipper collects multiple log files and writes their contents in real time into a redis list used as a buffer queue; the indexer reads from that redis queue and writes the contents into ElasticSearch for storage.
The shipper runs on the servers that produce the log files; the indexer runs on the same server as redis and elasticsearch.
下載
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
Install and configure redis
make
make PREFIX=/usr/local/redis install
Monitor the length of the redis queue: if entries pile up for a long time, something is wrong with elasticsearch.
Check the length of the list in redis every 2 seconds, 100 times:
redis-cli -r 100 -i 2 llen logstash:redis
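The check above can be wrapped into a small watchdog (a sketch; `backlog_alert` and the 100000 threshold are illustrative choices, not part of logstash or redis):

```shell
# Print a warning when the redis list is longer than a threshold.
backlog_alert() {  # $1 = current list length, $2 = threshold
  if [ "$1" -gt "$2" ]; then
    echo "redis backlog: $1 entries - check the indexer/elasticsearch"
  fi
}

LEN=$(redis-cli llen logstash:redis 2>/dev/null || echo 0)
backlog_alert "${LEN:-0}" 100000
```

Run from cron, this turns the manual `llen` polling into an alert when the indexer falls behind.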
Configure the Logstash shipper (log collection) process
vi ./lib/logstash/config/shipper.conf
input {
#file {
# type => "mysql_log"
# path => "/usr/local/mysql/data/localhost.log"
# codec => plain{
# charset => "GBK"
# }
#}
file {
type => "hostapd_log"
path => "/root/hostapd/hostapd.log"
sincedb_path => "/home/jfy/soft/logstash-1.4.2/sincedb_hostapd.access"
#start_position => "beginning"
#http://logstash.net/docs/1.4.2/codecs/plain
codec => plain{
charset => "GBK"
}
}
file {
type => "hkt_log"
path => "/usr1/app/log/bsapp.tr"
sincedb_path => "/home/jfy/soft/logstash-1.4.2/sincedb_hkt.access"
start_position => "beginning"
codec => plain{
charset => "GBK"
}
}
# stdin {
# type => "hostapd_log"
# }
}
#filter {
# grep {
# match => [ "@message", "mysql|GET|error" ]
# }
#}
output {
redis {
host => '172.16.18.116'
data_type => 'list'
key => 'logstash:redis'
# codec => plain{
# charset => "UTF-8"
# }
}
# elasticsearch {
# #embedded => true
# host => "172.16.18.116"
# }
}
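Before starting the shipper it is worth validating the config file; the logstash 1.4 documentation describes a --configtest flag that parses the config and exits without processing events (hedged sketch; run it from the logstash-1.4.2 directory):

```shell
# Validate shipper.conf without starting the pipeline.
if [ -x ./bin/logstash ]; then
  ./bin/logstash agent -f ./lib/logstash/config/shipper.conf --configtest
else
  echo "run from the logstash-1.4.2 directory"
fi
```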
Run the shipper process
./bin/logstash agent -f ./lib/logstash/config/shipper.conf
Configure the Logstash indexer (log processing) process
vi ./lib/logstash/config/indexer.conf
input {
redis {
host => '127.0.0.1'
data_type => 'list'
key => 'logstash:redis'
#threads => 10
#batch_count => 1000
}
}
output {
elasticsearch {
#embedded => true
host => localhost
#workers => 10
}
}
Run the indexer process
./bin/logstash agent -f ./lib/logstash/config/indexer.conf
The indexer reads the buffered log entries from redis and writes them into ElasticSearch for storage.
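To confirm that entries are actually arriving, you can count documents in the current day's index (hedged: logstash-YYYY.MM.DD is logstash's default index name pattern; adjust the host if elasticsearch is remote):

```shell
# Today's index, following logstash's default logstash-%{+YYYY.MM.dd} naming.
INDEX="logstash-$(date +%Y.%m.%d)"
curl -s "http://localhost:9200/$INDEX/_count?pretty=true" || echo "elasticsearch not reachable"
```

A count that grows while the shipper is running, together with a draining redis list, shows the whole pipeline is working end to end.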
kibana is the web front end for the elasticsearch search engine: a set of js scripts served by a web server. It lets you build complex query and filter conditions against elasticsearch and display the results in multiple forms (tables, charts).
Download
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz
After extracting, place the kibana folder somewhere the web server can serve it.
Configuration
Edit kibana/config.js:
If kibana and elasticsearch are not on the same machine, change:
elasticsearch: "http://192.168.91.128:9200",
# the browser connects to elasticsearch directly at this address
Otherwise keep the default; do not change it.
If "connection failed" appears, edit elasticsearch/config/elasticsearch.yml and add:
http.cors.enabled: true
http.cors.allow-origin: "/.*/"
For details see:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html
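Whether the CORS settings took effect can be checked from the command line by sending an Origin header and looking for the echoed Access-Control-Allow-Origin response header (a sketch; the hosts are the ones used in this setup, adjust as needed):

```shell
# Expect an Access-Control-Allow-Origin header in the response headers
# when http.cors.enabled is true.
curl -s -i --connect-timeout 3 -H 'Origin: http://172.16.18.114:6090' 'http://172.16.18.116:9200/' \
  | grep -i 'Access-Control-Allow-Origin' || echo "no CORS header - check http.cors settings"
```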
Access kibana
http://172.16.18.114:6090/kibana/index.html#/dashboard/file/logstash.json
Configure the kibana interface
In filtering you can configure the log type to query, e.g. _type=voip_log, corresponding to the type field set in logstash's shipper.conf above.
You can also click save in the top-right corner to save the current interface configuration into elasticsearch; by default it is stored in the kibana-int index.