Three VMs: 193, 194, 195; local machine: 78
Pipeline: python server -> nginx -> logstash_shipper -> kafka -> logstash_index -> es -> kibana
The python server sends logs to kafka over TCP/UDP
Logs generated by nginx are collected by logstash_shipper and sent to kafka
logstash_index pulls data from kafka, filters it, and ships it to the es cluster
kibana visualizes the es data
Installation and configuration notes
1 Copy the lib files from the source machine
mkdir -p /home/eamon/elk
cd /home/eamon/elk/
scp -r eamon@192.168.6.78:/home/eamon/study/elk/lib .
2 Configure JDK (JAVA_HOME)
vim /etc/environment
JAVA_HOME="/home/eamon/elk/lib/jdk1.8.0_60"
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:$JAVA_HOME/bin"
Verify: source /etc/environment, then check java, javac, and echo $JAVA_HOME
3 Configure nginx
Install
# sudo apt-get install nginx
Strong recommendation: turn on the buffer parameter of Nginx's access_log directive; it greatly improves peak response performance. (???TODO)
# sudo vim nginx.conf
Log format definition
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" ';
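The fields this log_format emits are exactly what the grok pattern further down extracts. As a sanity check, here is a Python sketch of an equivalent regex; the sample log line is fabricated for illustration:

```python
import re

# Regex mirroring the grok pattern used later in the logstash filters;
# the sample access-log line below is made up for illustration.
LOG_RE = re.compile(
    r'(?P<source_ip>\S+) - (?P<remote_user>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"'
)

sample = ('192.168.6.78 - - [05/Oct/2015:12:00:00 +0800] '
          '"GET / HTTP/1.1" 200 612 "-" "curl/7.35.0"')

fields = LOG_RE.match(sample).groupdict()
print(fields["status"], fields["request"])  # 200 GET / HTTP/1.1
```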
Reverse proxy
# sudo vim sites-enabled/hello.conf
upstream t {
server 127.0.0.1:8005 weight=5;
}
server {
listen 80;
server_name 192.168.6.194;
location / {
proxy_pass http://t;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_cache_valid all 1m;
}
}
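With weight=5 the upstream above receives five shares of traffic per one share for any weight=1 peer. A simplified Python sketch of weighted round-robin (nginx actually uses a "smooth" variant, but the traffic proportions come out the same; the second server on port 8006 is hypothetical):

```python
from itertools import cycle, islice

# Naive weighted round-robin: each server appears 'weight' times in the rotation.
servers = [("127.0.0.1:8005", 5), ("127.0.0.1:8006", 1)]  # 8006 is hypothetical
rotation = [addr for addr, w in servers for _ in range(w)]

picks = list(islice(cycle(rotation), 12))
print(picks.count("127.0.0.1:8005"), picks.count("127.0.0.1:8006"))  # 10 2
```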
4 Configure logstash (logstash-1.5.4; requires JDK 8)
Test
# bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
index.conf------------------------------
input {
kafka {
zk_connect => "localhost:2181"
group_id => "logstash"
topic_id => "test"
codec => plain
reset_beginning => false
consumer_threads => 5
decorate_events => true
}
}
filter {
grok {
match => [ "message", "%{IPORHOST:source_ip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]
}
}
output {
elasticsearch {
host => "192.168.6.194"
protocol => "http"
workers => 5
index => "logstash-%{type}-%{+YYYY.MM.dd}"
document_type => "%{type}"
template_overwrite => false
}
}
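The index option above uses logstash's sprintf notation: one index per event type per day. A small sketch of how the name resolves (logstash formats the date in UTC by default):

```python
from datetime import datetime, timezone

# Mimics index => "logstash-%{type}-%{+YYYY.MM.dd}":
# event type plus the event's date, dot-separated.
def index_name(event_type, ts):
    return "logstash-%s-%s" % (event_type, ts.strftime("%Y.%m.%d"))

print(index_name("access", datetime(2015, 10, 5, tzinfo=timezone.utc)))
# logstash-access-2015.10.05
```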
shipper.conf-------------------------------
input {
file{
path => "/var/log/nginx/*.log"
#start_position => beginning
}
}
filter {
if [path] =~ "access" {
mutate { replace => { type => "access" } }
} else if [path] =~ "error" {
mutate { replace => { type => "error" } }
} else {
mutate { replace => { type => "random_logs" } }
}
grok {
match => [ "message", "%{IPORHOST:source_ip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]
}
}
output {
stdout {
codec => rubydebug
}
kafka {
broker_list => "localhost:9092"
topic_id => "test"
compression_codec => "snappy"
workers => 1
}
}
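The shipper's conditional tags each event by file path before shipping. The same classification logic in Python, for reference (logstash's =~ is a regex match, which for these bare words is a substring check):

```python
# Python sketch of the shipper's path-based type assignment.
def classify(path):
    if "access" in path:
        return "access"
    if "error" in path:
        return "error"
    return "random_logs"

print(classify("/var/log/nginx/access.log"))  # access
```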
udp tcp -----------------------------------------
input {
tcp {
port => 5000
type => syslog
}
udp {
port => 5000
type => syslog
}
}
Test: telnet localhost 5000
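telnet exercises the TCP input; for the UDP input you can push a test line with a short script. A self-contained round-trip demo (it binds its own listener so it runs without logstash; against the real input you would send to ("localhost", 5000) instead):

```python
import socket

# Self-contained UDP round-trip; the ephemeral port stands in for logstash's 5000.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
port = listener.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello syslog\n", ("127.0.0.1", port))

data, _ = listener.recvfrom(1024)
print(data.decode().strip())  # hello syslog
listener.close()
sender.close()
```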
5 Configure es (elasticsearch-1.7.2)
Change the cluster name; it must be unique within the subnet. All nodes on the same subnet with the same cluster name automatically join the same cluster.
# vim config/elasticsearch.yml
cluster.name: eamones
Test
http://192.168.6.194:9200/_count?pretty
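_count returns a small JSON document. A sketch of checking it programmatically; the response below is canned with made-up numbers (in practice you would fetch the URL above with curl or urllib):

```python
import json

# Canned _count-style response (numbers are made up); a healthy reply
# has failed == 0 and a numeric document count.
raw = '{"count": 42, "_shards": {"total": 5, "successful": 5, "failed": 0}}'
resp = json.loads(raw)
assert resp["_shards"]["failed"] == 0
print(resp["count"])  # 42
```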
6 Configure kafka (kafka_2.10-0.8.2.2)
6.1 Configure zookeeper
$ vim config/zookeeper.properties
tickTime=2000
dataDir=/data/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.0.10:2888:3888
server.2=192.168.0.11:2888:3888
server.3=192.168.0.12:2888:3888
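With a multi-node server.N list like the one above, ZooKeeper also requires a myid file in dataDir on each node whose content matches that node's server.N number (a standard ZooKeeper requirement not shown in the original notes):

```shell
# On each node, write its own id into dataDir/myid
# (run the matching line on the matching host):
echo 1 > /data/zookeeper/myid   # on 192.168.0.10 (server.1)
echo 2 > /data/zookeeper/myid   # on 192.168.0.11 (server.2)
echo 3 > /data/zookeeper/myid   # on 192.168.0.12 (server.3)
```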
# vim server.properties
broker.id=1
Set up and test kafka
# create "logstash_logs" topic
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic logstash_logs
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
$ bin/logstash -e "input { stdin {} } output { kafka { topic_id => 'logstash_logs' } }"
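To confirm the stdin-to-kafka pipe above actually lands messages in the topic, you can read it back with the console consumer that ships with kafka (the zookeeper-based flags below are the 0.8-era form):

```shell
# Read back whatever logstash published to the topic
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
    --topic logstash_logs --from-beginning
```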
7 Configure kibana
Edit kibana.yml
url
Test: http://192.168.6.194:5601/