ELK Installation Guide

Official installation docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/zip-targz.html

Official hardware and configuration recommendations: https://www.elastic.co/guide/en/elasticsearch/guide/master/hardware.html

Event ---> input ---> codec ---> filter ---> codec ---> output

2. Environment Setup

OS: CentOS 7.4       IP address: 11.11.11.30
JDK: 1.8
Elasticsearch-6.4.3
Logstash-6.4.0
Kibana-6.4.0
Note: as of November 8, 2018 the latest version was 6.4.3

iptables -F    #flush firewall rules (lab setup; configure proper rules in production)

2.1 Preparation

1. Install the JDK

JDK download: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

[root@localhost ~]# tar zxvf jdk-8u152-linux-x64.tar.gz -C /usr/local/
After extracting, configure the environment variables
[root@localhost ~]# vi /etc/profile.d/jdk.sh
JAVA_HOME=/usr/local/jdk1.8.0_152   #must match the extracted directory name
CLASSPATH=.:${JAVA_HOME}/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
[root@localhost ~]# source /etc/profile.d/jdk.sh

[root@localhost ~]# java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)

Note: installing Java directly with yum also works.

 

2. Adjust system-wide parameters

vi /etc/sysctl.conf
vm.max_map_count=655360    #raise the mmap count limit required by elasticsearch
sysctl -p

vi /etc/security/limits.conf
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
#The following two lines are not always required; they were added because on the second ELK node elasticsearch failed to start with a JVM "unable to allocate memory" error
* soft memlock unlimited
* hard memlock unlimited
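After logging back in as the target user, the new limits can be verified with shell builtins; a quick sketch:

```shell
#!/bin/sh
# Max open files and max processes for the current session;
# these should reflect the values set in limits.conf after re-login
nofile=$(ulimit -n)
nproc=$(ulimit -u)
echo "nofile=$nofile nproc=$nproc"
```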

3. Create the elk user and download the required files

#Create the user that runs ELK
useradd elk
passwd elk

#Create directories for the software
[root@elk ~]# su - elk
[elk@elk ~]$ mkdir /home/elk/{Application,Data,Log}

Note: software install dir: /home/elk/Application
      data dir:             /home/elk/Data
      log dir:              /home/elk/Log

#Download kibana, elasticsearch, and logstash and extract them under /home/elk/Application
#Download page: https://www.elastic.co/start
#Because the latest versions download slowly, files previously downloaded by a colleague were used

#Extract the files
[elk@elk ~]$ cd /usr/local/src/
[elk@elk src]$ ll
total 530732
-rw-r--r--. 1 root root  97872736 Nov  7 14:55 elasticsearch-6.4.3.tar.gz
-rw-r--r--. 1 root root  10813704 Nov  7 15:41 filebeat-6.4.0-x86_64.rpm
-rw-r--r--. 1 root root  55751827 Nov  7 15:41 kafka_2.11-2.0.0.tgz
-rw-r--r--. 1 root root 187936225 Nov  7 15:39 kibana-6.4.0-linux-x86_64.tar.gz
-rw-r--r--. 1 root root 153887188 Nov  7 15:41 logstash-6.4.0.tar.gz
-rw-r--r--. 1 root root  37191810 Nov  7 15:41 zookeeper-3.4.13.tar.gz
[elk@elk src]$ tar xf elasticsearch-6.4.3.tar.gz -C /home/elk/Application/
[elk@elk src]$ tar xf kibana-6.4.0-linux-x86_64.tar.gz -C /home/elk/Application/
[elk@elk src]$ tar xf logstash-6.4.0.tar.gz -C /home/elk/Application/

[elk@elk src]$ ll /home/elk/Application/
total 0
drwxr-xr-x.  8 elk elk 143 Oct 31 07:22 elasticsearch-6.4.3
drwxrwxr-x. 11 elk elk 229 Aug 18 07:50 kibana-6.4.0-linux-x86_64
drwxrwxr-x. 12 elk elk 255 Nov  7 15:46 logstash-6.4.0
[elk@elk src]$

3. Service Configuration

3.1 Configure elasticsearch

#Adjust according to available memory
[elk@elk Application]$ vi elasticsearch/config/jvm.options
-Xms8g
-Xmx8g

[elk@elk Application]$ vi elasticsearch/config/elasticsearch.yml
#Cluster name; must be the same on every node in the cluster
cluster.name: my-es
#Name of this node
node.name: node-1
#Data directory
#path.data: /data/es-data  #multiple directories can be comma-separated
path.data: /home/elk/Data/elasticsearch
#Log directory
path.logs: /home/elk/Log/elasticsearch
#Do not let the data be swapped out
bootstrap.memory_lock: true
#Local listen address
network.host: 11.11.11.30
#Initial list of master-eligible nodes, used to discover new cluster nodes
discovery.zen.ping.unicast.hosts: ["11.11.11.30", "11.11.11.31"]
#HTTP listen port
http.port: 9200
#TCP port for inter-node (cluster) communication, default 9300
transport.tcp.port: 9300
#Extra settings so the head plugin can access es
http.cors.enabled: true
http.cors.allow-origin: "*"
#Review the modified settings
[elk@elk Application]$ grep '^[a-Z]' elasticsearch/config/elasticsearch.yml 

Start elasticsearch (must be run as the elk user)
nohup /home/elk/Application/elasticsearch/bin/elasticsearch >> /home/elk/Log/elasticsearch/elasticsearch.log 2>&1 &

Log file:
[elk@elk Application]$ tail /home/elk/Log/elasticsearch/my-es.log

 

3.1.1 Deploy the second ELK node; only the settings below change, everything else matches the first node

[elk@elk02 Application]$ grep '^[a-Z]' elasticsearch/config/elasticsearch.yml
cluster.name: my-es   #must be identical to the first node
node.name: node-2    #must differ from the first node
path.data: /home/elk/Data/elasticsearch
path.logs: /home/elk/Log/elasticsearch
bootstrap.memory_lock: true
network.host: 11.11.11.31
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true 
http.cors.allow-origin: "*"

mkdir /home/elk/Log/elasticsearch/

Start elasticsearch
nohup /home/elk/Application/elasticsearch/bin/elasticsearch >> /home/elk/Log/elasticsearch/elasticsearch.log 2>&1 &

 

3.1.2 After installing the head plugin per Appendix 1, the result looks like the screenshot below

 

3.2 Configure logstash (see Appendix 3 for more)

Logstash collects data, forwards it, and filters logs.

Official Logstash introduction: https://www.elastic.co/guide/en/logstash/current/introduction.html

Input plugins: https://www.elastic.co/guide/en/logstash/current/input-plugins.html

Output plugins: https://www.elastic.co/guide/en/logstash/current/output-plugins.html

 

[elk@elk Application]$ vi logstash/config/jvm.options
-Xms4g
-Xmx4g
[elk@elk Application]$ vi logstash/config/logstash.yml
#Number of pipeline worker threads; defaults to the number of CPU cores
pipeline.workers: 8
#Number of events the batcher collects per batch
pipeline.batch.size: 1000
#How long the batcher waits for a batch, default 50 ms
pipeline.batch.delay: 50

Example:

#Configuration file read by logstash at startup
[elk@elk config]$ cat logstash-sample.conf
#Where to read data from
input {
  file {
    path => ["/var/log/messages","/var/log/secure"]
    type => "system-log"
    start_position => "beginning"
  }
}
#Filter the incoming events
filter {
}
#Where to send the filtered data
output {
  elasticsearch {
    hosts => ["http://11.11.11.30:9200"]
    index => "system-log-%{+YYYY.MM}"
    #user => "elastic"
    #password => "changeme"
  }
}
Note: if log volume is small, daily indices (+YYYY.MM.dd) are not recommended; they make index patterns awkward to add in Kibana!
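The `%{+YYYY.MM}` sprintf reference in the index name expands from the event's timestamp, so one index is created per month. A quick sketch of the resulting names, using `date` to stand in for the event timestamp:

```shell
#!/bin/sh
# Monthly index name, as produced by "system-log-%{+YYYY.MM}"
monthly="system-log-$(date +%Y.%m)"
# Daily variant "%{+YYYY.MM.dd}" creates one index per day instead
daily="system-log-$(date +%Y.%m.%d)"
echo "$monthly"
echo "$daily"
```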

#Start
[elk@elk ~]$ mkdir /home/elk/Log/logstash -p
nohup /home/elk/Application/logstash/bin/logstash -f /home/elk/Application/logstash/config/logstash-sample.conf --config.reload.automatic >> /home/elk/Log/logstash/logstash.log 2>&1 &
#Check that it is running

[elk@elk config]$ jobs -l
[1]+ 16289 Running  nohup /home/elk/Application/logstash/bin/logstash -f /home/elk/Application/logstash/config/logstash-sample.conf --config.reload.automatic >> /home/elk/Log/logstash/logstash.log 2>&1 &


 

 

[elk@elk Application]$ vi logstash/config/logstash.conf
input {
  kafka {
    bootstrap_servers => "192.168.2.6:9090"
    topics => ["test"]
  }
}
filter {
  json {
    source => "message"
  }
  ruby {
    code => "event.set('index_day_hour', event.get('[@timestamp]').time.localtime.strftime('%Y.%m.%d-%H'))"
  }
  mutate {
    rename => { "[host][name]" => "host" }
  }
  mutate {
    lowercase => ["host"]
  }
}
output {
  elasticsearch {
    index => "%{host}-%{app_id}-%{index_day_hour}"
    hosts => ["http://192.168.2.7:9200"]
  }
}
Reference configuration

 

3.3 Configure Kibana

[elk@elk Application]$ vi kibana/config/kibana.yml
server.port: 15045    #web UI port
server.host: "11.11.11.30"   #listen address
elasticsearch.url: "http://11.11.11.30:9200"  
elasticsearch.pingTimeout: 1500

 

[root@elk ~]# grep '^[a-Z]' /home/elk/Application/kibana/config/kibana.yml
#Port the service listens on
server.port: 15045
#Listen address
server.host: "11.11.11.30"
#elasticsearch address; the port must match
elasticsearch.url: "http://11.11.11.30:9200"
#Kibana's own data is written into elasticsearch under this index name; it shows up in head
kibana.index: ".kibana"
elasticsearch.pingTimeout: 1500
[root@elk ~]#

[root@elk ~]# mkdir /home/elk/Log/kibana -p
[root@elk ~]# nohup /home/elk/Application/kibana/bin/kibana >> /home/elk/Log/kibana/kibana.log 2>&1 &

[root@elk ~]# netstat -luntp
tcp 0 0 11.11.11.30:15045 0.0.0.0:* LISTEN 16437/node
tcp6 0 0 :::9100 :::* LISTEN 14477/grunt
tcp6 0 0 11.11.11.30:9200 :::* LISTEN 15451/java
tcp6 0 0 11.11.11.30:9300 :::* LISTEN 15451/java
tcp6 0 0 127.0.0.1:9600 :::* LISTEN 16289/java

Note: in head, a star marks a primary shard and a circle marks a replica

After the node-2 host was rebooted, the restarted node did not show up in head; restarting elasticsearch on node-1 made it visible again.

 

4. Usage

4.1 Add an index pattern

 

4.2 Filtering java logs with logstash

[root@elk ~]# cat /home/elk/Application/logstash/config/logstash.conf 
input {
  file {
      path => "/var/log/messages"
      type => "system-log"
      start_position => "beginning"
    }

  file {
     path => "/home/elk/Log/elasticsearch/my-es.log"
     type => "es-log"
     }
  file {
     path => "/home/elk/Log/logstash/logstash.log"
     type => "logstash-log"
    } 
}

output {
  if [type] == "system-log" {
    elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "system-log-%{+YYYY.MM}"
      #user => "elastic"
      #password => "changeme"
     }
  }
  if [type] == "es-log" {
    elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "es-log-%{+YYYY.MM}"
     }
   }

  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "logstash-log-%{+YYYY.MM}"
     }
   }
}

[root@elk ~]# 
[root@elk ~]# nohup /home/elk/Application/logstash/bin/logstash -f /home/elk/Application/logstash/config/logstash.conf --config.reload.automatic >> /home/elk/Log/logstash/logstash.log 2>&1 &

[root@elk ~]# netstat -luntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      874/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1112/master         
tcp        0      0 11.11.11.30:15045       0.0.0.0:*               LISTEN      16437/node          
tcp6       0      0 :::9100                 :::*                    LISTEN      14477/grunt         
tcp6       0      0 11.11.11.30:9200        :::*                    LISTEN      15451/java          
tcp6       0      0 11.11.11.30:9300        :::*                    LISTEN      15451/java          
tcp6       0      0 :::22                   :::*                    LISTEN      874/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1112/master         
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      20746/java          
udp        0      0 127.0.0.1:323           0.0.0.0:*                           526/chronyd         
udp6       0      0 ::1:323                 :::*                                526/chronyd         
[root@elk ~]# 

 

 

 

 

4.2.1 Handling java logs (merging multiple lines into one event)

Merging multiple lines into one event; official docs: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

Test that the pattern works

[elk@elk config]$ cat codec.conf 
input {
    stdin {
      codec => multiline{
      pattern => "^\["
      negate => true
      what => "previous"
      }
   }
}

filter {

}

output {
     stdout {
       codec => rubydebug
    }
}
[elk@elk config]$ 
[root@elk logstash]# /home/elk/Application/logstash/bin/logstash -f /home/elk/Application/logstash/config/codec.conf
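The multiline codec with `pattern => "^\["`, `negate => true`, `what => "previous"` appends every line that does not start with `[` to the previous event, which is how a Java stack trace stays attached to its log line. The grouping logic can be sketched with awk (a rough stand-in for illustration, not the codec itself):

```shell
#!/bin/sh
# Sample elasticsearch-style log: the stack trace lines do not start with "["
log='[2018-11-08T16:00:00,000][ERROR][o.e.b.Bootstrap] fatal error
java.lang.OutOfMemoryError: Java heap space
    at org.example.Foo.bar(Foo.java:42)
[2018-11-08T16:00:01,000][INFO ][o.e.n.Node] started'

# Lines not beginning with "[" are appended to the previous event,
# mimicking pattern => "^\[", negate => true, what => "previous"
events=$(printf '%s\n' "$log" | awk '
  /^\[/ { if (buf != "") print buf; buf = $0; next }
        { buf = buf " | " $0 }
  END   { if (buf != "") print buf }')

echo "$events"
```

Four input lines collapse into two events: the error line carries its stack trace, and the started line stands alone.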

 

 

The logstash configuration actually used:

#Delete the existing sincedb cache files; after deletion, logs are collected again from the beginning
[root@elk ~]# find / -name .sincedb*
/home/elk/Application/logstash/data/plugins/inputs/file/.sincedb_97cbda73a2aaa9193a01a3b39eb761f3
/home/elk/Application/logstash/data/plugins/inputs/file/.sincedb_5227a954e2d5a4a3f157592cbe63c166
/home/elk/Application/logstash/data/plugins/inputs/file/.sincedb_452905a167cf4509fd08acb964fdb20c
[root@elk ~]# rm -rf /home/elk/Application/logstash/data/plugins/inputs/file/.sincedb*


#The modified file:
[elk@elk config]$ cat logstash.conf 
input {
  file {
      path => "/var/log/messages"
      type => "system-log"
      start_position => "beginning"
    }

  file {
     path => "/home/elk/Log/elasticsearch/my-es.log"
     type => "es-log"
     start_position => "beginning"
     codec => multiline {
       pattern => "^\["
       negate => true
       what => "previous"
     }
    }
}

output {
  if [type] == "system-log" {
    elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "system-log-%{+YYYY.MM}"
      #user => "elastic"
      #password => "changeme"
     }
  }
  if [type] == "es-log" {
    elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "es-log-%{+YYYY.MM}"
     }
   }

  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "logstash-log-%{+YYYY.MM}"
     }
   }
}

[elk@elk config]$ 

#Start as root
[root@elk config]# nohup /home/elk/Application/logstash/bin/logstash -f /home/elk/Application/logstash/config/logstash.conf  >> /home/elk/Log/logstash/logstash.log 2>&1 &

#Refresh head

 

#View in the elasticsearch UI

 

 

5. Collecting nginx logs (converted to JSON)

The key point is that the logs logstash collects must be in JSON format.

Method 1: configure nginx's log format, then ship to elasticsearch

    log_format  access_log_json '{"user_ip":"$http_x_real_ip","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sent":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';

    access_log  /var/log/nginx/access_json.log access_log_json;

 

Method 2: read the file directly and write it into Redis; then a Python script reads Redis, converts to JSON, and writes into elasticsearch

5.1 Install nginx and generate access logs

[root@elk02 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@elk02 ~]# yum -y install nginx
[root@elk02 ~]# systemctl start nginx.service
[root@elk02 ~]# netstat -luntp
[root@elk02 ~]# ab -n 1000 -c 1 http://11.11.11.31/   #1000 requests, concurrency 1
[root@elk02 ~]# tail -5 /var/log/nginx/access.log 
11.11.11.31 - - [09/Nov/2018:18:09:08 +0800] "GET / HTTP/1.0" 200 3700 "-" "ApacheBench/2.3" "-"
11.11.11.31 - - [09/Nov/2018:18:09:08 +0800] "GET / HTTP/1.0" 200 3700 "-" "ApacheBench/2.3" "-"
11.11.11.31 - - [09/Nov/2018:18:09:08 +0800] "GET / HTTP/1.0" 200 3700 "-" "ApacheBench/2.3" "-"
11.11.11.31 - - [09/Nov/2018:18:09:08 +0800] "GET / HTTP/1.0" 200 3700 "-" "ApacheBench/2.3" "-"
11.11.11.31 - - [09/Nov/2018:18:09:08 +0800] "GET / HTTP/1.0" 200 3700 "-" "ApacheBench/2.3" "-"
[root@elk02 ~]# 
Note: the fields are: real client IP, remote user, time, request, status code, bytes sent, referer, ****, proxied client IP
[root@elk02 nginx]# vim nginx.conf
#Add the following log_format rule

log_format access_log_json '{"user_ip":"$http_x_real_ip","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sent":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';

access_log  /var/log/nginx/access_json.log access_log_json;

 

 

[root@elk02 nginx]# systemctl reload nginx
[root@elk02 ~]# ab -n 1000 -c 1 http://11.11.11.31/

[root@elk02 nginx]# tailf /var/log/nginx/access_json.log
{"user_ip":"-","lan_ip":"11.11.11.31","log_time":"2018-11-09T20:18:12+08:00","user_req":"GET / HTTP/1.0","http_code":"200","body_bytes_sent":"3700","req_time":"0.000","user_ua":"ApacheBench/2.3"}
{"user_ip":"-","lan_ip":"11.11.11.31","log_time":"2018-11-09T20:18:12+08:00","user_req":"GET / HTTP/1.0","http_code":"200","body_bytes_sent":"3700","req_time":"0.000","user_ua":"ApacheBench/2.3"}
{"user_ip":"-","lan_ip":"11.11.11.31","log_time":"2018-11-09T20:18:12+08:00","user_req":"GET / HTTP/1.0","http_code":"200","body_bytes_sent":"3700","req_time":"0.000","user_ua":"ApacheBench/2.3"}
{"user_ip":"-","lan_ip":"11.11.11.31","log_time":"2018-11-09T20:18:12+08:00","user_req":"GET / HTTP/1.0","http_code":"200","body_bytes_sent":"3700","req_time":"0.000","user_ua":"ApacheBench/2.3"}
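Every line of access_json.log should parse as standalone JSON, otherwise the `json` codec will fail downstream. This can be spot-checked by piping one line through a JSON parser (python3 is used here purely as an illustration):

```shell
#!/bin/sh
# One line copied from the access_json.log output above
line='{"user_ip":"-","lan_ip":"11.11.11.31","log_time":"2018-11-09T20:18:12+08:00","user_req":"GET / HTTP/1.0","http_code":"200","body_bytes_sent":"3700","req_time":"0.000","user_ua":"ApacheBench/2.3"}'
# Parse it and pull out one field; a parse error here means the log_format is broken
code=$(printf '%s' "$line" | python3 -c 'import json,sys; print(json.load(sys.stdin)["http_code"])')
echo "http_code=$code"
```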

 

5.2 Start collecting and analyzing

5.2.1 Collection test

#logstash's API service listens on port 9600; if an instance is already running, kill it, then start with this nginx config file
[root@elk02 config]# pwd
/home/elk/Application/logstash/config
[root@elk02 config]# vim nginx.conf 

input {
  file {
      path => "/var/log/nginx/access_json.log"
      codec => "json"
    }
}

output {
  stdout {
      codec => rubydebug
     }
}


[root@elk02 config]# ../bin/logstash -f nginx.conf

Then run an ab load test from the 11.30 server
[root@elk config]# ab -n 10 -c 1 http://11.11.11.31/

On elk02 (11.31) you can see output like the following

5.2.2 Write into elasticsearch

For convenience, nginx.conf is modified directly here; in practice this is better placed in logstash.conf

[root@elk02 config]# pwd
/home/elk/Application/logstash/config
[root@elk02 config]# cat nginx.conf 
input {
  file {
      type => "nginx-access-log"
      path => "/var/log/nginx/access_json.log"
      codec => "json"      
    }
}

output {
  elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "nginx-access-log-%{+YYYY.MM}"
     }


}
[root@elk02 config]# 
[root@elk02 config]# ../bin/logstash -f nginx.conf   #run logstash with the nginx.conf config file

[root@elk ~]# ab -n 10 -c 1 http://11.11.11.31/   #logs must be generated to be picked up, since collection starts from the end of the file


If nothing shows up, ways to troubleshoot:

1. Add a stdout output to the existing nginx.conf:
input {
  file {
      type => "nginx-access-log"
      path => "/var/log/nginx/access_json.log"
      codec => "json"
    }
}

output {
  elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "nginx-access-log-%{+YYYY.MM}"
     } 
 stdout { codec => rubydebug }
             
}    

[root@elk02 config]# ../bin/logstash -f nginx.conf

 

   [root@elk ~]# ab -n 10 -c 1 http://11.11.11.31/

  

2. Find the sincedb files and delete them to re-read the data
[root@elk02 config]# find / -name .sincedb*

 

6. Using the Kibana web UI

6.1 Searching for key values

6.2 Visualizations

6.2.1 Using the markdown widget

6.2.2 Counting the values of a field

6.2.3 Counting visits per IP

6.2.4 Dashboard display

 

7. Log collection

官方參考地址:https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html

7.1 rsyslog system logs

1. Test whether syslog messages can be collected (on 11.31)

[root@elk02 config]# cat syslog.conf 
input {
  syslog {
    type => "system-syslog"
    port => 514       
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
[root@elk02 config]# 

 

2. Run the config and check that the ports are open (on 11.31)

[root@elk02 config]# pwd
/home/elk/Application/logstash/config
[root@elk02 config]# ../bin/logstash -f systlog.conf 
Sending Logstash logs to /home/elk/Application/logstash/logs which is now configured via log4j2.properties
[2018-11-13T11:19:49,206][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-13T11:19:53,753][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-11-13T11:20:08,746][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-13T11:20:10,815][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x64bb4375 run>"}
[2018-11-13T11:20:12,403][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:514"}
[2018-11-13T11:20:12,499][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:514"}
[2018-11-13T11:20:12,749][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-13T11:20:14,078][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

#Check that ports 9600 and 514 are listening
[root@elk02 config]# netstat -luntp

 

3. Edit /etc/rsyslog.conf (on 11.30)

#Append the following line at the end
[root@elk ~]# tail -2 /etc/rsyslog.conf 
*.* @@11.11.11.31:514
# ### end of the forwarding rule ###
[root@elk ~]# 
Note: some destinations in the file carry a hyphen, which means new messages are buffered in memory before being written to the target file. In the line above, `@@` forwards over TCP; a single `@` would forward over UDP.

#Restart the rsyslog service
[root@elk ~]# systemctl restart rsyslog.service

#After the restart, new messages appear in the output on 11.31

 

Check that messages arrive in real time

7.1.1 Production syslog deployment

[root@elk02 config]# cat systlog.conf 
input {
  syslog {
    type => "system-syslog"
    port => 514       
  }
}

output {
  elasticsearch {
      hosts => ["http://11.11.11.31:9200"]
      index => "system-syslog-%{+YYYY.MM}"
    }  
}

[root@elk02 config]# pwd
/home/elk/Application/logstash/config
[root@elk02 config]# 

[root@elk02 config]# ../bin/logstash -f systlog.conf

Note: then check that 11.31 has port 514 open to receive the data.

7.2 TCP logs

Test feasibility

[root@elk02 config]# cat tcp.conf 
input {
  tcp {
      type => "tcp"
      port => "6666"
      mode => "server"
  }
}

output {
  stdout {
      codec => rubydebug
  }
}
[root@elk02 config]# pwd
/home/elk/Application/logstash/config
[root@elk02 config]# 
[root@elk02 config]# ../bin/logstash -f tcp.conf
Note: now check whether port 6666 is listening; if it is, the pipeline has started.

 

 

Write into ES

[root@elk02 config]# cat tcp.conf 
input {
  tcp {
      type => "tcp"
      port => "6666"
      mode => "server"
  }
}

output {
  elasticsearch {
      hosts => ["http://11.11.11.31:9200"]
      index => "system-syslog-%{+YYYY.MM}"
  }  
}
[root@elk02 config]# pwd
/home/elk/Application/logstash/config
[root@elk02 config]# 

 

 

8. grok (httpd logs)

參考地址:https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

#Check httpd's log format
[root@elk ~]# yum -y install httpd
[root@elk ~]# systemctl start httpd.service
[root@elk conf]# pwd
/etc/httpd/conf
[root@elk conf]# vim httpd.conf
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
Note: client IP address - remote user - timestamp ...
http://httpd.apache.org/docs/current/mod/mod_log_config.html

8.1 Location of the bundled grok pattern templates:

under /home/elk/Application/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns:
[root@elk patterns]# ll
total 112
-rw-r--r--. 1 elk elk  1831 Aug 18 08:23 aws
-rw-r--r--. 1 elk elk  4831 Aug 18 08:23 bacula
-rw-r--r--. 1 elk elk   260 Aug 18 08:23 bind
-rw-r--r--. 1 elk elk  2154 Aug 18 08:23 bro
-rw-r--r--. 1 elk elk   879 Aug 18 08:23 exim
-rw-r--r--. 1 elk elk 10095 Aug 18 08:23 firewalls
-rw-r--r--. 1 elk elk  5338 Aug 18 08:23 grok-patterns
-rw-r--r--. 1 elk elk  3251 Aug 18 08:23 haproxy
-rw-r--r--. 1 elk elk   987 Aug 18 08:23 httpd
-rw-r--r--. 1 elk elk  1265 Aug 18 08:23 java
-rw-r--r--. 1 elk elk  1087 Aug 18 08:23 junos
-rw-r--r--. 1 elk elk  1037 Aug 18 08:23 linux-syslog
-rw-r--r--. 1 elk elk    74 Aug 18 08:23 maven
-rw-r--r--. 1 elk elk    49 Aug 18 08:23 mcollective
-rw-r--r--. 1 elk elk   190 Aug 18 08:23 mcollective-patterns
-rw-r--r--. 1 elk elk   614 Aug 18 08:23 mongodb
-rw-r--r--. 1 elk elk  9597 Aug 18 08:23 nagios
-rw-r--r--. 1 elk elk   142 Aug 18 08:23 postgresql
-rw-r--r--. 1 elk elk   845 Aug 18 08:23 rails
-rw-r--r--. 1 elk elk   224 Aug 18 08:23 redis
-rw-r--r--. 1 elk elk   188 Aug 18 08:23 ruby
-rw-r--r--. 1 elk elk   404 Aug 18 08:23 squid
[root@elk patterns]# 

 

 

8.2 Test splitting the format (see the httpd pattern file in the directory above)

This config is based on the example in the official grok documentation:

[root@elk config]# pwd
/home/elk/Application/logstash/config
[root@elk config]# cat grok.conf 
input {
  stdin {}
}

filter {
  grok {
      match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}

output {
  stdout {
      codec => rubydebug
  }
}
[root@elk config]# 
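Fed a line such as `55.3.244.1 GET /index.html 15824 0.043` (the sample used in the official grok docs), the pattern above produces the fields client, method, request, bytes, and duration. Since that sample is whitespace-delimited, the extraction can be approximated with awk (a rough sketch of what grok does internally, not grok itself):

```shell
#!/bin/sh
line='55.3.244.1 GET /index.html 15824 0.043'
# Positional stand-ins for %{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
client=$(echo "$line" | awk '{print $1}')
method=$(echo "$line" | awk '{print $2}')
request=$(echo "$line" | awk '{print $3}')
bytes=$(echo "$line" | awk '{print $4}')
echo "client=$client method=$method request=$request bytes=$bytes"
```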

 

8.3 Real log collection test

httpd uses its default log format

[root@elk config]# cat apache.conf 
input {
  file {
      path => "/var/log/httpd/access_log"
      start_position => "beginning"
  }
}

filter {
  grok {
      match => { "message" => "%{HTTPD_COMMONLOG}"}     
  }

}

output {
  stdout {
      codec => rubydebug
  }

}
[root@elk config]# pwd
/home/elk/Application/logstash/config
[root@elk config]# 
Note: HTTPD_COMMONLOG comes from the bundled httpd pattern file; the variable is already defined there and can be referenced directly!

 

8.4 Final version (apache)

[root@elk config]# cat apache.conf 
input {
  file {
      path => "/var/log/httpd/access_log"
      start_position => "beginning"
      type => "apache-access-log"
  }
}

filter {
  grok {
      match => { "message" => "%{HTTPD_COMMONLOG}"}     
  }

}

output {
  elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "apache-access-log-%{+YYYY.MM}"
      }
}

[root@elk config]# pwd
/home/elk/Application/logstash/config
[root@elk config]# 



 

 

9. Message queue (decoupling)

Official docs on redundant architectures: https://www.elastic.co/guide/en/logstash/2.3/deploying-and-scaling.html#deploying-scaling

 

 

The redis output plugin: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-redis.html

 

9.1 Test writing from logstash to redis

[root@elk02 ~]# yum -y install redis
[root@elk02 ~]# vim /etc/redis.conf
bind 11.11.11.31   #listen address
daemonize yes      #run in the background

systemctl start redis
systemctl enable redis

[root@elk02 config]# netstat -luntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 11.11.11.31:6379        0.0.0.0:*               LISTEN      130702/redis-server 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      871/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1107/master         
tcp6       0      0 11.11.11.31:9200        :::*                    LISTEN      1309/java           
tcp6       0      0 11.11.11.31:9300        :::*                    LISTEN      1309/java           
tcp6       0      0 :::22                   :::*                    LISTEN      871/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1107/master         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           533/chronyd         
udp6       0      0 ::1:323                 :::*                                533/chronyd         



[root@elk02 ~]# cd /home/elk/Application/logstash/
[root@elk02 config]# vim radis.conf 
input {
  stdin {}
}

output {
  redis {
      host => "11.11.11.31"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "demo"
  }
}

[root@elk02 config]# ../bin/logstash -f radis.conf 
 Sending Logstash logs to /home/elk/Application/logstash/logs which is now configured via log4j2.properties
[2018-11-14T19:05:53,506][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-14T19:05:57,477][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-11-14T19:06:14,992][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-14T19:06:16,366][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6c86dc5a run>"}
The stdin plugin is now waiting for input:
[2018-11-14T19:06:16,772][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-14T19:06:18,719][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
heheh01
hehe02


[root@elk02 config]# redis-cli -h 11.11.11.31
11.11.11.31:6379> set name huangyanqi
OK
11.11.11.31:6379> info
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db6:keys=1,expires=0,avg_ttl=0
11.11.11.31:6379> SELECT 6   #switch to db 6
OK
11.11.11.31:6379[6]> keys *   #list the keys
1) "demo"
11.11.11.31:6379[6]> type demo   #type of the demo key
list
11.11.11.31:6379[6]> llen demo  #queue length
(integer) 2
11.11.11.31:6379[6]> lindex demo -1   #view the last entry
"{\"message\":\" heheh01\",\"@timestamp\":\"2018-11-14T11:06:46.034Z\",\"@version\":\"1\",\"host\":\"elk02\"}"
11.11.11.31:6379[6]>

 

9.2 Writing apache logs into redis

Since elk (11.30) already has httpd installed, it is used for this experiment; redis is installed on elk02 (11.31)

[root@elk ~]# cd /home/elk/Application/logstash/config/
[root@elk config]# vim apache.conf 

input {
  file {
      path => "/var/log/httpd/access_log"
      start_position => "beginning"
  }
}

output {
  redis {
      host => "11.11.11.31"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "apache-access-log"
  }
}

#Now access the http service on 11.30 so that new log lines are generated for redis to receive
#Then inspect redis
[root@elk02 ~]# redis-cli -h 11.11.11.31   #log in to redis
11.11.11.31:6379> select 6   #switch to db 6
OK
11.11.11.31:6379[6]> keys *   #list keys; never run KEYS against all keys in production
1) "apache-access-log"
2) "demo"
11.11.11.31:6379[6]> type apache-access-log   #type of the key
list
11.11.11.31:6379[6]> llen apache-access-log   #length of the list, i.e. number of messages
(integer) 3
11.11.11.31:6379[6]> lindex  apache-access-log -1   #last entry in this key
"{\"message\":\"11.11.11.1 - - [16/Nov/2018:04:45:07 +0800] \\\"GET / HTTP/1.1\\\" 200 688 \\\"-\\\" \\\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36\\\"\",\"@timestamp\":\"2018-11-15T20:45:08.705Z\",\"@version\":\"1\",\"path\":\"/var/log/httpd/access_log\",\"host\":\"elk\"}"
11.11.11.31:6379[6]> 

 

9.3 Reading the logs from redis into logstash

The redis input plugin: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html

 

[root@elk02 ~]# cat redis_port.conf 
input {
  redis {
      host => "11.11.11.31"
      port => "6379"
      db =>"6"
      data_type => "list"
      key => "apache-access-log"
  }
}

filter {
  grok {
      match => { "message" => "%{HTTPD_COMMONLOG}"}     
  }
}

output {
  stdout {
      codec => rubydebug
  }
}
[root@elk02 ~]# 
[root@elk02 ~]# /home/elk/Application/logstash/bin/logstash -f redis_port.conf

 

 

9.4 End-to-end test

elk (11.30) runs httpd, logstash, elasticsearch, kibana, and head

elk02 (11.31) runs logstash, elasticsearch, and redis

Process: on 11.30, logstash collects the httpd logs and ships them to redis on 11.31; logstash on 11.31 then reads them back from redis and writes into elasticsearch on 11.30.

 

[root@elk config]# cat apache.conf 
input {
  file {
      path => "/var/log/httpd/access_log"
      start_position => "beginning"
  }
}

output {
  redis {
      host => "11.11.11.31"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "apache-access-log"
  }
}

[root@elk config]# pwd
/home/elk/Application/logstash/config
[root@elk config]# 
[root@elk config]# ../bin/logstash -f apache.conf


[root@elk02 ~]# cat redis_port.conf 
input {
  redis {
      host => "11.11.11.31"
      port => "6379"
      db =>"6"
      data_type => "list"
      key => "apache-access-log"
  }
}

output {
elasticsearch {
      hosts => ["http://11.11.11.30:9200"]
      index => "access01-log-%{+YYYY.MM.dd}"
  }
}

[root@elk02 ~]# 
[root@elk02 ~]# /home/elk/Application/logstash/bin/logstash -f redis_port.conf

 

10. Production recommendations

Requirements analysis:
    Access logs: apache, nginx, and tomcat access logs     file --> filter
    Error logs: error log, java logs                       collect directly; java exceptions need multiline handling
    System logs: /var/log/*                                syslog via rsyslog
    Runtime logs: written by applications                  file, JSON format
    Network logs: firewall, switch, and router logs        syslog

Standardization: where logs go (/data/logs/), what format (JSON), naming rules (access_log, error_log, runtime_log directories)
    How logs are rotated (daily or hourly; access/error rotated via crontab)
    rsync all raw text elsewhere, then delete logs older than three days

Tooling: design the logstash collection scheme

If redis lists are used as the ELK stack's message queue, monitor the length of every list key:
llen key_name
Alert when it exceeds a threshold appropriate to your load, e.g. 100,000
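A minimal cron-able sketch of that check. The redis host, db, key names, and sample lengths below are placeholders; in a real script the length would come from `redis-cli ... llen`:

```shell
#!/bin/sh
# Alert when a redis list used as the log queue grows past a threshold
THRESHOLD=100000

check_queue() {
    key=$1
    len=$2   # real script: len=$(redis-cli -h 11.11.11.31 -n 6 llen "$key")
    if [ "$len" -gt "$THRESHOLD" ]; then
        echo "ALERT: $key backlog is $len"
    else
        echo "OK: $key backlog is $len"
    fi
}

# Sample lengths for illustration
check_queue apache-access-log 3
check_queue es-2-log 250000
```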

 

 

 

#Configured on 11.11.11.30; output goes to redis on 11.31.
[root@elk config]# pwd
/home/elk/Application/logstash/config
[root@elk config]# cat apache1.conf
input {
  file {
    path => "/var/log/httpd/access_log"
    start_position => "beginning"
    type => "apache-2-access-log"
  }
  file {
    path => "/home/elk/Log/elasticsearch/my-es.log"
    type => "es-2-log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => { "message" => "%{HTTPD_COMMONLOG}"}
  }
}
output {
  if [type] == "apache-2-access-log" {
    redis {
      host => "11.11.11.31"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "apache-2-access-log"
    }
  }
  if [type] == "es-2-log" {
    redis {
      host => "11.11.11.31"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "es-2-log"
    }
  }
}
[root@elk config]#
[root@elk config]# ../bin/logstash -f apache1.conf

 

 

[root@elk02 ~]# cat redis_port.conf 
input {
  syslog {
      type => "system-2-syslog"
      port=> 514
    }
  redis {
      type => "apache-2-access-log"
      host => "11.11.11.31"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "apache-2-access-log"
  }
  redis {
      type => "es-2-log"
      host => "11.11.11.31"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "es-2-log"

  }  
}

filter {
  if [type] == "apache-2-access-log" {
      grok {
      match => { "message" => "%{HTTPD_COMMONLOG}"}     
    }
  }
}


output {
  if [type] == "apache-2-access-log" {
    elasticsearch {
        hosts => ["http://11.11.11.30:9200"]
        index => "apache-2-access-log-%{+YYYY.MM}"
    }
  }
  if [type] == "es-2-log" {
    elasticsearch {
        hosts => ["http://11.11.11.30:9200"]
        index => "es-2-log-%{+YYYY.MM.dd}"
    }   
  }
   if [type] == "system-2-syslog" {
     elasticsearch {
        hosts => ["http://11.11.11.30:9200"]
        index => "system-2-syslog-%{+YYYY.MM}"
    }
  }

}
[root@elk02 ~]# 

Appendix:

1. Plugins installed after starting elasticsearch in 3.1 (method)

https://www.cnblogs.com/Onlywjy/p/Elasticsearch.html

1.1 Install the head plugin (after starting ./elasticsearch)

• Download the head plugin
wget -O /usr/local/src/master.zip   https://github.com/mobz/elasticsearch-head/archive/master.zip
• Install node
wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz
#node download page: http://nodejs.org/dist/v4.4.7/
tar -zxvf node-v4.4.7-linux-x64.tar.gz
• Configure environment variables: edit /etc/profile and add
#set node environment
export NODE_HOME=/usr/local/src/node-v4.4.7-linux-x64
export PATH=$PATH:$NODE_HOME/bin
export NODE_PATH=$NODE_HOME/lib/node_modules

Then run source /etc/profile

 

• Install grunt

grunt is a Node.js-based build tool for packaging, testing, running tasks, and so on; the head plugin is started through grunt.

unzip master.zip
[root@elk src]# cd elasticsearch-head-master/
npm install -g grunt-cli   #generates the node_modules directory
[root@elk elasticsearch-head-master]# npm install -g grunt-cli
npm WARN engine atob@2.1.2: wanted: {"node":">= 4.5.0"} (current: {"node":"4.4.7","npm":"2.15.8"})
/usr/local/src/node-v4.4.7-linux-x64/bin/grunt -> /usr/local/src/node-v4.4.7-linux-x64/lib/node_modules/grunt-cli/bin/grunt
grunt-cli@1.3.2 /usr/local/src/node-v4.4.7-linux-x64/lib/node_modules/grunt-cli
├── grunt-known-options@1.1.1
├── interpret@1.1.0
├── v8flags@3.1.1 (homedir-polyfill@1.0.1)
├── nopt@4.0.1 (abbrev@1.1.1, osenv@0.1.5)
└── liftoff@2.5.0 (flagged-respawn@1.0.0, extend@3.0.2, rechoir@0.6.2, is-plain-object@2.0.4, object.map@1.0.1, resolve@1.8.1, fined@1.1.0, findup-sync@2.0.0)
[root@elk elasticsearch-head-master]# grunt -version
grunt-cli v1.3.2
grunt v1.0.1

Modify the head plugin source
#Change the server listen address in Gruntfile.js

 

#Change the connection address
[root@elk elasticsearch-head-master]# vim _site/app.js

 

#Configure start on boot

[root@elk ~]# cat es_head_run.sh
PATH=$PATH:$HOME/bin:/usr/local/src/node-v4.4.7-linux-x64/bin/grunt
export PATH
cd /usr/local/src/elasticsearch-head-master/
nohup npm run start >/usr/local/src/elasticsearch-head-master/nohup.out 2>&1 &


 

 [root@elk ~]# vim /etc/rc.d/rc.local
/usr/bin/sh /root/es_head_run.sh 2>&1
 

 

1. Offline plugin installation: https://www.elastic.co/guide/en/marvel/current/installing-marvel.html#offline-installation

2. Finding plugins: kopf is another option (remember to check version compatibility there)

[root@elk elasticsearch]# ./bin/elasticsearch-plugin install lukas-vlcek/bigdesk

 

2. Check whether the cluster is healthy (status)

[elk@elk ~]$ curl -XGET 'http://11.11.11.30:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "my-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[elk@elk ~]$

A better health-check method: https://www.elastic.co/guide/en/elasticsearch/guide/current/_cat_api.html

[elk@elk ~]$ curl -XGET 'http://11.11.11.30:9200/_cat/health?pretty=true'
1541662928 15:42:08 my-es green 2 2 0 0 0 0 0 0 - 100.0%
[elk@elk ~]$ 
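The `_cat/health` line is positional: epoch, timestamp, cluster name, status, node counts, shard counts, and so on. A small awk sketch pulling the status and node count out of the sample line above:

```shell
#!/bin/sh
# Sample _cat/health output from above; fields are positional
line='1541662928 15:42:08 my-es green 2 2 0 0 0 0 0 0 - 100.0%'
status=$(echo "$line" | awk '{print $4}')   # cluster status
nodes=$(echo "$line" | awk '{print $5}')    # number of nodes
echo "cluster status=$status nodes=$nodes"
```

This makes the line easy to consume from a monitoring script, e.g. alerting whenever the status is not green.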

 

 

3. Follow-up logstash tests

Reference: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html

Input plugins: https://www.elastic.co/guide/en/logstash/current/input-plugins.html

Output plugins: https://www.elastic.co/guide/en/logstash/current/output-plugins.html

The elasticsearch output plugin is used here: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html

[root@elk ~]# /home/elk/Application/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["11.11.11.30:9200"] index => "logstash-%{+YYYY.MM.dd}" } }'
...a wall of startup output appears here; it can be ignored...
[2018-11-08T16:28:05,537][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
hehe
hehehehe

#The data has been written into elasticsearch; inspect it in the head UI

 

4. Recommended practice for production clusters

1. Run a Kibana instance on every ES node
2. Each Kibana connects to its own local ES
3. Nginx load balancing + authentication in front

 

5. Kafka message queue (to be continued)

https://www.unixhot.com/article/61

https://www.infoq.cn/

 

6. Another plugin that can replace head

Download: https://github.com/lmenezes/cerebro/tags

After downloading and extracting:
[root@elk cerebro-0.8.1]# ./bin/cerebro -Dhttp.port=1234 -Dhttp.address=11.11.11.30