VM Creation and ELK Installation
Author: Gao Bo    Archived: study notes    2018-05-31 13:57:02
[root@localhost ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL=https://bugs.centos.org/
Set the hostname
hostnamectl set-hostname elk
[root@elk ~]# hostnamectl
Static hostname: elk
Icon name: computer-vm
Chassis: vm
Machine ID: d1d80bc30b414ba7a6e5e49906699d7d
Boot ID: 49488ed1b1434c8aa06fca343bf67ccf
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-693.el7.x86_64
Architecture: x86-64
System IP
[root@elk ~]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:32:12:d0 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::a7f2:2fd2:eb18:3361/64 scope link
valid_lft forever preferred_lft forever
Configure the yum repositories and install base tools
rm -rf /var/cache/yum/*
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y wget tree lrzsz vim bash-completion
Disable the firewall and SELinux
systemctl disable firewalld
systemctl stop firewalld
vi /etc/selinux/config    # set SELINUX=disabled
setenforce 0
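The SELINUX= line in /etc/selinux/config is what makes the change survive a reboot (setenforce 0 only affects the running system). A minimal sketch of that edit, run here against a temporary copy of the file so it is safe to try anywhere:

```shell
# Work on a temp copy; on a real host the target file is /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Flip enforcing -> disabled; on a real host this takes effect after the next reboot.
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$cfg"

grep '^SELINUX=' "$cfg"   # prints SELINUX=disabled
rm -f "$cfg"
```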
Elasticsearch is a Lucene-based search server. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is developed in Java and released as open source under the Apache License, and is the second most popular enterprise search engine. It is designed for cloud environments and aims to be real-time, stable, reliable, fast, and easy to install and use.
NRT
Elasticsearch is a near-real-time search platform: there is a slight delay, typically about one second, between indexing a document and it becoming searchable.
Cluster
A cluster is one or more nodes that store data; one node is the master, which can be chosen by election, and the cluster provides federated indexing and search across nodes. A cluster has a unique identifying name, elasticsearch by default. The cluster name matters, because each node joins a cluster by name, so make sure to use different cluster names in different environments. A cluster can consist of a single node. It is strongly recommended to configure Elasticsearch in cluster mode.
Node
A node is a single server that is part of a cluster; it stores data and participates in the cluster's indexing and search. Like a cluster, a node is identified by name; by default a random character name is assigned at startup, though you can define your own. The name matters too, since it identifies which server corresponds to which node in the cluster.
A node joins a cluster by specifying the cluster name. By default, every node is set up to join a cluster named elasticsearch; if you start multiple nodes that can discover each other, they will automatically form a cluster named elasticsearch.
Index
An index is a collection of documents with somewhat similar characteristics, for example an nginx log index or a syslog index. An index is identified by a name, which must be all lowercase; the name is used when indexing, searching, updating, and deleting its documents.
An index corresponds to a database in a relational database.
Type
Within an index you can define one or more types. Whether a type is a logical category or a partition is entirely up to you. Typically, a type is defined for documents that share a common set of fields. For example, all data generated by ttlsa operations might be stored in a single index named logstash-ttlsa, with separate types defined for user data, post data, and comment data.
A type corresponds to a table in a relational database.
Document
A document is the basic unit of information that can be indexed; it is represented in JSON format.
A type can store as many documents as needed.
Although a document physically resides in one index, in practice a document must be indexed into an index and assigned a type.
A document corresponds to a row in a relational database.
Shards and replicas
In practice, the data stored in an index can exceed a single node's hardware limits. For example, a billion documents taking 1 TB may not fit on one node's disk, or a single node may serve search requests too slowly. To solve this, Elasticsearch can split an index into multiple shards; the number of shards is defined when the index is created. Each shard is a fully functional, independent index that can live on any node in the cluster.
The two main reasons for sharding: it lets the index's content volume scale horizontally beyond one node, and it lets operations be distributed and parallelized across shards to increase throughput.
How shards are distributed and how search results are aggregated back from the shards is handled entirely by Elasticsearch and is transparent to the user.
Network and other failures can strike at any time, so for robustness it is strongly recommended to have a failover mechanism in case a shard or node becomes unavailable.
For this, Elasticsearch lets you copy each index shard one or more times; the copies are called replica shards, or simply replicas.
Replicas exist for two main reasons:
High availability, in case a shard or node fails. For this reason, a replica shard is placed on a different node than its primary.
Better performance and higher throughput, since searches can run in parallel on all replicas.
In summary, each index can be split into multiple shards, and an index can have zero or more replicas. Once replicated, each index has primary shards and replica shards (copied from the primaries). The number of shards and replicas can be defined per index at creation time. After the index is created, the replica count can be changed dynamically at any time, but the number of primary shards cannot be changed.
By default, Elasticsearch gives each index 5 primary shards and 1 replica, which means the cluster needs at least two nodes: the index will have 5 primary shards and 5 replica shards (one full copy), 10 shards per index in total.
Each Elasticsearch shard is a Lucene index. A single Lucene index has a maximum document count (LUCENE-5843) of 2147483519 (MAX_VALUE - 128). Shard size can be monitored via _cat/shards.
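Why can the replica count change at any time while the primary-shard count cannot? Document routing bakes the shard count into every document's location: shard = hash(routing) % number_of_primary_shards. The sketch below illustrates the arithmetic only; Elasticsearch actually hashes with murmur3, the document id here is made up, and cksum is just a stand-in hash:

```shell
num_primary_shards=5
doc_id="AV9oEHvlC2Sz-kDAOfWf"           # hypothetical document id
hash=$(printf '%s' "$doc_id" | cksum | cut -d' ' -f1)
echo $(( hash % num_primary_shards ))   # always in 0..4 for 5 primaries
```

Changing num_primary_shards would change the result for existing ids, which is why the primary count is fixed at index creation.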
LogStash is written in JRuby, uses a simple message-based architecture, and runs on the Java virtual machine (JVM). Rather than having separate agent and server programs, a single LogStash agent can be configured to combine with other open-source software to take on different roles.
Shipper: sends events to LogStash; a remote agent usually only needs to run this component;
Broker and Indexer: receives and indexes events;
Search and Storage: allows events to be searched and stored;
Web Interface: a web-based display interface.
Because these components can be deployed independently in the LogStash architecture, it scales out well.
Agent host: acts as the event shipper, sending the various log data to the central host; it only needs to run the Logstash agent program.
Central host: runs any combination of the broker, indexer, search and storage, and web interface components, to receive, process, and store log data.
Logstash is a fully open-source tool that collects and parses your logs and stores them for later use (such as searching). Speaking of searching, logstash comes with a web interface for searching and displaying all logs.
Typical problems it addresses:
developers cannot log in to production servers to inspect detailed logs;
every system keeps its own logs, and the scattered data is hard to search;
log volume is large, queries are slow, and the data is not real-time enough.
1 virtual machine:
hostname: elk
IP address: 10.0.0.11
[root@linux-node1 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@linux-node1 ~]# uname -a
Linux linux-node1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@linux-node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 linux-node1
yum install -y nginx java redis
[root@localhost ~]# java -version
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-b10)
OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)
[root@localhost ~]# which java
/usr/bin/java
Note: make sure Java is version 1.8 or later and that java can be found on the PATH.
cd /root
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.rpm
rpm -ivh elasticsearch-6.2.4.rpm
Reload the systemd manager configuration
[root@elk ~]# systemctl daemon-reload
Enable ES at boot
[root@elk ~]# systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
Start the ES service
[root@elk ~]# systemctl start elasticsearch.service
Check the ES service status
[root@elk ~]# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2018-05-31 02:35:39 EDT; 7s ago
Docs: http://www.elastic.co
Main PID: 12766 (java)
CGroup: /system.slice/elasticsearch.service
└─12766 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccu...
May 31 02:35:39 elk systemd[1]: Started Elasticsearch.
May 31 02:35:39 elk systemd[1]: Starting Elasticsearch...
May 31 02:35:40 elk elasticsearch[12766]: OpenJDK 64-Bit Server VM warning: If the number of processors is expected to incr...reads=N
Hint: Some lines were ellipsized, use -l to show in full.
Use curl to check that port 9200 returns JSON data
[root@elk elasticsearch]# curl localhost:9200
{
"name" : "KOp9XyC",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "2LJAtbjwTZKaxJMVWzdXaA",
"version" : {
"number" : "6.2.4",
"build_hash" : "ccec39f",
"build_date" : "2018-04-12T20:37:28.497551Z",
"build_snapshot" : false,
"lucene_version" : "7.2.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
Installing elasticsearch from the .tar.gz archive
cd /opt
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
tar -xzf elasticsearch-6.2.4.tar.gz
cd elasticsearch-6.2.4/
mkdir /home/esdata -p
mkdir /home/esdata/data -p
mkdir /home/esdata/logs -p
[root@elk ~]# tail -2 /etc/profile
export ES_HOME=/opt/elasticsearch-6.2.4
[root@elk ~]# source /etc/profile
[root@elk ~]# echo $ES_HOME
/opt/elasticsearch-6.2.4
#Note: this sets the ES install directory as the environment variable $ES_HOME
Run ES
cd /opt/elasticsearch-6.2.4
./bin/elasticsearch
By default ES runs in the foreground and logs to the console; stop it with Ctrl+C.
Edit the elasticsearch config and set ownership
[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
17:cluster.name: chunck-cluster
#identifies whether nodes belong to the same cluster
23:node.name: linux-node1
#the node's hostname
33:path.data: /data/es-data
#data storage path
37:path.logs: /var/log/elasticsearch/
#log path
43:bootstrap.memory_lock: true
#lock the memory so it is never swapped out
54:network.host: 0.0.0.0
#IP addresses allowed to access (bind address)
58:http.port: 9200
#port
The same settings without comments, ready to copy:
[root@linux-node1 ~]# grep '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: chunck-cluster
node.name: linux-node1
path.data: /data/es-data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
[root@linux-node1 ~]# mkdir -p /data/es-data
[root@linux-node1 ~]# chown elasticsearch.elasticsearch /data/es-data/
Start elasticsearch
systemctl daemon-reload
systemctl start elasticsearch
systemctl enable elasticsearch
systemctl status elasticsearch
Check the elasticsearch environment
[root@elk1 ~]# curl -i -XGET 'http://10.0.0.11:9200'
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 433
{
"name" : "elk1",
"cluster_name" : "my-application",
"cluster_uuid" : "1XjdbJM8THeVxluBk8GkIw",
"version" : {
"number" : "6.2.4",
"build_hash" : "ccec39f",
"build_date" : "2018-04-12T20:37:28.497551Z",
"build_snapshot" : false,
"lucene_version" : "7.2.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
Client types:
node client
Transport client (deprecated in the ES 6.x series)
Results are returned as JSON-formatted data
and can be requested with curl or a browser.
curl -i -XGET 'http://10.0.0.11:9200/_count?pretty' -H 'Content-Type: application/json' -d '{"query":{"match_all":{}}}'
-i, --include (include the response headers)
(HTTP) Include the HTTP-header in the output. The
HTTP-header includes things like server-name,
date of the document, HTTP-version and more...
Official clients exist for JavaScript, .NET, PHP, Perl, Python, and Ruby.
Blog reference
https://www.cnblogs.com/Onlywjy/p/Elasticsearch.html
Append the following to the ES config file, to hook it up with the head plugin
[root@elk elasticsearch]# tail -2 /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
Restart the ES service
[root@elk elasticsearch]# systemctl restart elasticsearch.service
Check the ports
[root@elk elasticsearch]# ss -ltnup | grep "9200|9300" -E
tcp LISTEN 0 128 ::ffff:10.0.0.11:9200 :::* users:(("java",pid=13226,fd=119))
tcp LISTEN 0 128 ::ffff:10.0.0.11:9300 :::* users:(("java",pid=13226,fd=111))
cd /opt
Download the head plugin
wget https://github.com/mobz/elasticsearch-head/archive/master.zip
yum install -y unzip
unzip master.zip
wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz
tar -zxvf node-v4.4.7-linux-x64.tar.gz
vim /etc/profile
#set node environment
export NODE_HOME=/opt/node-v4.4.7-linux-x64
export PATH=$PATH:$NODE_HOME/bin
export NODE_PATH=$NODE_HOME/lib/node_modules
Apply it:
source /etc/profile
grunt is a Node.js-based build tool for packaging, minification, testing, and running tasks; the head plugin is started through grunt.
yum install -y npm
cd /opt/elasticsearch-head-master
npm install -g grunt-cli grunt
Note: if the previous step reported errors, don't worry; continue to the next step.
Check that the install succeeded:
[root@elk1 ~]# grunt -version
grunt-cli v1.2.0
Change the server listen settings in Gruntfile.js:
at line 93 of the file, set the port to 9100,
as shown below:
90         connect: {
91             server: {
92                 options: {
93                     port: 9100,
94                     base: '.',
95                     keepalive: true
96                 }
97             }
98         }
Edit _site/app.js:
at line 4354, change the host address to the current host's address:
4351 init: function(parent) {
4352 this._super();
4353 this.prefs = services.Preferences.instance();
4354 this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.0.0.11:9200";
4355 if( this.base_uri.charAt( this.base_uri.length - 1 ) !== "/" ) {
4356 // XHR request fails if the URL is not ending with a "/"
4357 this.base_uri += "/";
4358 }
In the /opt/elasticsearch-head-master directory,
run the following command. It only needs to run once, at initial install; once the service has been started it is not needed again:
npm install
Note: errors here can be ignored; continue to the next step.
This starts the actual es-head plugin service and puts it in the background:
grunt server >> es-9100.log &
Open it in a browser; the page shown in the figure below appears.
Common error messages after changing the config file:
https://blog.csdn.net/kellerxq/article/details/51392507
Elasticsearch server response errors and how to handle them (error/solution list):
http://www.javashuo.com/article/p-fveblkvw-ex.html
can not run elasticsearch as root
Both servers must be kept consistent.
Edit the memory-lock limits file
vim /etc/security/limits.conf
Append:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
Restart the elasticsearch service.
[root@elk2 ~]# ulimit -a | grep 'open files'
open files                      (-n) 1024
Note: production workloads open many files; raise the open-file limit, ideally to 65536.
Change the multicast address to unicast
Multicast is problematic on CentOS 7.x (cause still under investigation).
Unicast is recommended; use unicast in production.
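For reference, a unicast discovery block in /etc/elasticsearch/elasticsearch.yml might look like the sketch below (the second node's IP and the two-master quorum are assumptions for a hypothetical two-node cluster; in the 6.x series the setting is discovery.zen.ping.unicast.hosts):

```yaml
discovery.zen.ping.unicast.hosts: ["10.0.0.11", "10.0.0.12"]
discovery.zen.minimum_master_nodes: 2
```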
Installation notes:
see the links above.
Steps:
Install
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.rpm
rpm -ivh logstash-6.2.4.rpm
Start, check status, and stop:
systemctl start logstash.service
systemctl status logstash.service
systemctl stop logstash.service
The logstash install directory is /usr/share/logstash.
Start a logstash pipeline. -e: execute the config given on the command line; input / stdin: read events from standard input (stdin is a plugin); output / stdout: write events to standard output.
cd /usr/share/logstash
[root@elk1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
hello,bjgs (#type the input here; it takes a moment before output appears)
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-05-25 03:11:17.711 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-05-25 03:11:17.765 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-05-25 03:11:18.844 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-05-25 03:11:19.351 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
[INFO ] 2018-05-25 03:11:19.747 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-05-25 03:11:20.953 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
The stdin plugin is now waiting for input:
[INFO ] 2018-05-25 03:11:21.063 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xb461d16@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"}
[INFO ] 2018-05-25 03:11:21.129 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
#the result:
{
"message" => "hello,bjgs",
"@version" => "1",
"@timestamp" => 2018-05-25T07:11:21.106Z,
"host" => "elk1"
}
Reference:
https://www.cnblogs.com/nulige/p/6680336.html
[root@elk1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch{ hosts=>["10.0.0.11:9200"] } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
hello,bjgs
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-05-25 03:23:18.261 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-05-25 03:23:18.281 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-05-25 03:23:19.245 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-05-25 03:23:19.653 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
[INFO ] 2018-05-25 03:23:19.973 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-05-25 03:23:21.340 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-05-25 03:23:22.337 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.0.0.11:9200/]}}
[INFO ] 2018-05-25 03:23:22.340 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.0.0.11:9200/, :path=>"/"}
[WARN ] 2018-05-25 03:23:22.694 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://10.0.0.11:9200/"}
[INFO ] 2018-05-25 03:23:23.338 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2018-05-25 03:23:23.339 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-05-25 03:23:23.381 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-05-25 03:23:23.397 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-05-25 03:23:23.435 [[main]-pipeline-manager] elasticsearch - Installing elasticsearch template to _template/logstash
[INFO ] 2018-05-25 03:23:23.632 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.0.0.11:9200"]}
The stdin plugin is now waiting for input:
[INFO ] 2018-05-25 03:23:23.713 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4f6c9282@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"}
[INFO ] 2018-05-25 03:23:23.789 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
Look up the index information in es-head
/usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch{ hosts=>["10.0.0.11:9200"]} stdout{ codec => rubydebug} }'
See the official documentation:
https://www.elastic.co/guide/en/logstash/current/configuration.html
vim /etc/logstash/conf.d/logstash-simple.conf
input { stdin { } }
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
#start logstash
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-simple.conf
#check the logstash process
ps -ef | grep logstash
https://www.elastic.co/guide/en/logstash/current/input-plugins.html
discover_interval: how often, in seconds, to look for new files
sincedb_path: where to store the position marker (the read cursor)
sincedb_write_interval: how often to persist the cursor; every 15 seconds by default
start_position: where to begin reading; one of "beginning" or "end" (the default is to tail from the end)
index: which index to write to
Note: each line logstash collects becomes one event.
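Put together, a file input using the options above might look like this sketch (the path and sincedb location are assumptions):

```
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"      # read from the head on the first run
    sincedb_path => "/var/lib/logstash/sincedb-nginx"
    sincedb_write_interval => 15
    discover_interval => 15
  }
}
```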
Config file
To collect multiple lines into one event, use a codec; it has a multiline plugin.
The usual approach: don't output to ES right away, or you can't tell whether a problem is in collection or in output.
Read from standard input, apply the codec, and write to standard output first.
The anti-pattern looks like: write file.conf, ship to ES, check the result; rewrite file.conf, ship to ES, test again, and so on.
mutiline.conf example demo
Use the codec plugin in multiline mode with a regex pattern:
input{
stdin{
codec => multiline{
pattern => "\["
negate => true
what => "previous"
}
}
}
output {
stdout {
codec => "rubydebug"
}
}
Start logstash with the mutiline.conf config for a test run; results are printed to the screen:
[root@elk1 ~]# /usr/share/logstash/bin/logstash -f mutiline.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[1]
[2]
[3]
fds
hewiqfq
[sasa]
The output:
{
"@timestamp" => 2018-05-28T01:40:53.017Z,
"@version" => "1",
"message" => "[1]",
"host" => "elk1"
}
[3]
{
"@timestamp" => 2018-05-28T01:41:04.957Z,
"@version" => "1",
"message" => "[2]",
"host" => "elk1"
}
fds
hewiqfq
[sasa]
{
"@timestamp" => 2018-05-28T01:41:19.495Z,
"@version" => "1",
"message" => "[3]\nfds\nhewiqfq",
"tags" => [
[0] "multiline"
],
"host" => "elk1"
}
Note: the buffered lines are only flushed to the terminal once the next [ character appears.
Note: for PHP, Java, and similar application logs, use a regular expression to split the log so that developers can quickly locate and fix problems from the surrounding context.
The standard nginx log format is:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
修改成json格式日誌,由於codec有一個json的插件,及之後使用正則表達式匹配(不推薦),緣由佔cpu,寫起來難,修改起來難,匹配起來難。
日誌標準化方向:json。線上日誌,訪問日誌,使用json,錯誤日誌使用正常模式。
官方文檔地址
http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
將如下代碼添加到nginx.conf配置文件中
參數網址
https://yq.aliyun.com/ziliao/29200
log_format logstash_json '{ "@timestamp": "$time_local", '
'"@fields": { '
'"remote_addr": "$remote_addr", '
'"remote_user": "$remote_user", '
'"body_bytes_sent": "$body_bytes_sent", '
'"request_time": "$request_time", '
'"status": "$status", '
'"request": "$request", '
'"request_method": "$request_method", '
'"http_referrer": "$http_referer", '
'"body_bytes_sent":"$body_bytes_sent", '
'"http_x_forwarded_for": "$http_x_forwarded_for", '
'"http_user_agent": "$http_user_agent" } }';
access_log /var/log/nginx/access_json.log logstash_json;
After saving nginx.conf:
Check the config syntax
nginx -t
Restart the nginx service
systemctl restart nginx.service
Check the port
ss -ltnup | grep 80
Hit the server from a browser, or with curl, wget, etc.
Watch the JSON-format access log:
tail -f /var/log/nginx/access_json.log
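Before pointing logstash at the file, it is worth confirming that a line of the new log really carries the expected JSON fields. A dependency-free sanity check (the sample line below is made up; jq would be the usual tool, plain grep keeps it portable):

```shell
# A fabricated line in the logstash_json format defined above
line='{ "@timestamp": "31/May/2018:14:00:00 +0800", "@fields": { "remote_addr": "10.0.0.1", "status": "200", "request": "GET / HTTP/1.1" } }'

# Pull the status field out of the line
echo "$line" | grep -o '"status": "[0-9]*"'   # prints "status": "200"
```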
The logstash config file, nginx.conf:
input{
file{
path => "/var/log/nginx/access_json.log"
codec => "json"
start_position => "beginning"
}
}
output{
stdout{
codec => "rubydebug"
}
}
Start it with the following command:
/usr/share/logstash/bin/logstash -f nginx.conf
Note: this config is not wired into file.conf; in practice you can write it into file.conf so the events are appended to ES for storage.
Collecting syslog (system.conf)
input{
syslog{
type => "system-syslog"
host => "10.0.0.11"
port => "514"
}
}
output{
stdout{
codec => "rubydebug"
}
}
This listens on port 514.
Start command:
/usr/share/logstash/bin/logstash -f system.conf
Check port 514: logstash listens on both TCP and UDP 514.
netstat -ltnup | grep -e "514"
tcp6 0 0 10.0.0.11:514 :::* LISTEN 6581/java
udp 0 0 10.0.0.11:514 0.0.0.0:* 6581/java
The original log config:
vim /etc/rsyslog.conf
#*.* @@remote-host:514
After the change:
grep "^*" /etc/rsyslog.conf | grep 514
*.* @@10.0.0.11:514
Restart the log daemon:
systemctl restart rsyslog.service
If you now log in to or out of the system, the other terminal shows all the system log messages,
as follows:
{
"severity" => 6,
"pid" => "7304",
"priority" => 86,
"facility" => 10,
"program" => "sshd",
"type" => "system-syslog",
"logsource" => "elk1",
"message" => "pam_unix(sshd:session): session opened for user root by (uid=0)\n",
"@timestamp" => 2018-05-28T07:46:57.000Z,
"severity_label" => "Informational",
"@version" => "1",
"timestamp" => "May 28 03:46:57",
"host" => "10.0.0.11",
"facility_label" => "security/authorization"
}
Writing to ES (file.conf):
input{
syslog {
type => "system-syslog"
host => "10.0.0.11"
port => "514"
}
}
output{
if [type] == "system-syslog" {
elasticsearch{
hosts => ["10.0.0.11:9200"]
index => "system-syslog-%{+YYYY.MM.dd}"
}
}
}
Run the following command:
/usr/share/logstash/bin/logstash -f file.conf
Open a new terminal and run the following command repeatedly:
logger "test random number $RANDOM"
Open http://10.0.0.11:5601 in a browser and create a search index there.
The following JSON content then becomes visible.
The result as displayed in kibana:
{
"_index": "system-syslog-2018.05.28",
"_type": "doc",
"_id": "XjbDpWMBbiQ_aTCCRyqM",
"_score": 1,
"_source": {
"facility_label": "user-level",
"type": "system-syslog",
"logsource": "elk1",
"@timestamp": "2018-05-28T08:00:28.000Z",
"@version": "1",
"message": "test random number 19717\n",
"priority": 13,
"timestamp": "May 28 04:00:28",
"facility": 1,
"severity_label": "Notice",
"host": "10.0.0.11",
"program": "root",
"severity": 5
},
"fields": {
"@timestamp": [
"2018-05-28T08:00:28.000Z"
]
}
}
A typical use of the tcp input plugin: when an ES index is lost, locate the data by hand, dump it to a file, and write it back into logstash with nc or a similar command.
vim tcp.conf
input{
tcp {
host => "10.0.0.11"
port => "6666"
}
}
output {
stdout {
codec => "rubydebug"
}
}
Open a new terminal; anything sent will be displayed in the logstash terminal.
Send the /etc/hosts file to logstash:
nc 10.0.0.11 6666 < /etc/hosts
Send a string to logstash:
echo "test string" | nc 10.0.0.11 6666
Note: if the file is large and the transfer takes a long time, run it inside screen.
The following fatal error means another logstash instance is already running against the same data directory:
[FATAL] 2018-05-28 23:32:39.555 [LogStash::Runner] runner - Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[ERROR] 2018-05-28 23:32:39.580 [LogStash::Runner] Logstash - java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit
Official documentation
https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
The grok plugin
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
Edit the config file
vim grok.conf
input{
stdin{
}
}
filter {
grok {
match => {
"message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
}
}
}
output {
stdout {
codec => "rubydebug"
}
}
/usr/share/logstash/bin/logstash -f grok.conf
After it starts, type the following into the terminal:
55.3.244.1 GET /index.html 15824 0.043
The result:
{
"@timestamp" => 2018-05-30T01:20:04.201Z,
"host" => "elk1",
"method" => "GET",
"request" => "/index.html",
"message" => "55.3.244.1 GET /index.html 15824 0.043",
"bytes" => "15824",
"client" => "55.3.244.1",
"duration" => "0.043",
"@version" => "1"
}
Note the filter rule used:
"message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
The grok pattern library on GitHub can be referenced directly:
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
The same pattern library ships with the logstash install:
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/
For common formats, just reference the existing patterns instead of writing the large regular expressions by hand.
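For example, the combined Apache/nginx access-log format can be matched by referencing the stock COMBINEDAPACHELOG pattern instead of hand-writing the regex (a sketch; the field names it produces come from the pattern definition in the library):

```
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```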
Kibana installation reference:
https://www.elastic.co/guide/en/kibana/current/targz.html
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-linux-x86_64.tar.gz
mv kibana-6.2.4-linux-x86_64.tar.gz /usr/share/
cd /usr/share
tar -xzf kibana-6.2.4-linux-x86_64.tar.gz
ln -s kibana-6.2.4-linux-x86_64 kibana
cd kibana-6.2.4-linux-x86_64/
Set environment variables
vim /etc/profile
#set kibana path
export KIBANA=/usr/share/kibana/
[root@elk kibana]# source /etc/profile
[root@elk kibana]# echo $KIBANA
/usr/share/kibana/
ln -s /usr/share/kibana/bin/kibana /bin/kibana
vim /usr/share/kibana/config/kibana.yml
grep "^[^#]" /usr/share/kibana/config/kibana.yml
server.port: 5601
server.host: "elk1"
elasticsearch.url: "http://10.0.0.11:9200"
kibana.index: ".kibana"
Note: the settings above are kibana's port, host, elasticsearch URL, and kibana index.
Note: screen is typically used to keep kibana running in the background.
yum install -y screen
Tips for using screen:
type screen and press Enter to open a new virtual terminal
/usr/share/kibana/bin/kibana starts kibana
Ctrl+A then D detaches from the current screen without stopping the processes running inside it
Open it in a browser now; the page shown in the figure below appears.
Note: generally, do not apply the time filter in step two.
Control Panel -> Administrative Tools -> Event Viewer -> Windows Logs
Collect and store the key Windows system and application logs, to meet security-audit requirements.
Analyze security events based on Windows account logon logs.
winlogbeat is the official lightweight Windows event-log forwarding agent from Elastic.
Log data flow: winlogbeat -> Logstash -> Elasticsearch
This deployment was chosen for the following reasons:
1. The Windows servers need no further changes after the default deployment.
2. If the default Windows log mapping is unsatisfactory, it can be re-adjusted on the logstash side.
Official documentation
https://www.elastic.co/downloads/beats/winlogbeat
Installation steps:
1. Download and unzip
2. Edit the winlogbeat.yml config file
3. In PowerShell, run winlogbeat.exe -c winlogbeat.yml
4. Check that it is running
Set the log-collection and forwarding-server parameters.
Official config reference:
https://www.elastic.co/guide/en/beats/winlogbeat/index.html
Collect three log types (Application, Security, System) and ignore events older than 72 hours:
winlogbeat.event_logs:
- name: Application
ignore_older: 72h
- name: Security
- name: System
Comment out the options under Elasticsearch output:
#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
Set the hosts option under Logstash output:
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
hosts: ["10.0.0.11:5044"]
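On the Logstash side, a matching listener for winlogbeat would use the beats input plugin on TCP 5044. This sketch forwards the events to the ES instance used earlier in these notes (the index name is an assumption):

```
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.11:9200"]
    index => "winlogbeat-%{+YYYY.MM.dd}"
  }
}
```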
The modified configuration is shown above.
10.0.0.11 is the IP address of the Logstash server; substitute the correct IP for your deployment.
Make sure the firewall policy allows the Windows server to reach TCP port 5044 on the Logstash server.
Start PowerShell.
Change into the install directory and run the .\install-service-winlogbeat.ps1 script:
PS C:\Users\lenovo\Desktop\winlogbeat-6.2.4-windows-x86_64> .\install-service-winlogbeat.ps1
The error message is shown below.
See the official documentation:
https://technet.microsoft.com/zh-CN/library/hh847748.aspx
First fix: run PowerShell as administrator.
If it still errors, run the following in PowerShell and confirm with Y to allow script execution:
set-executionpolicy remotesigned
set-executionpolicy Bypass
Verify the config file
.\winlogbeat.exe test config -c .\winlogbeat.yml -e
Start the log-collection agent service
net start winlogbeat
Check the current winlogbeat state
tasklist | findstr winlogbeat
Confirm locally that the winlogbeat service is working:
its own log is clean,
the remote logstash is reachable and working,
and kibana can search the Windows logs.
With no logs being produced, the winlogbeat process uses about 9 MB of memory; under log load it stays under 100 MB.
Start the service
net start winlogbeat
Stop the service
net stop winlogbeat
Uninstall the service
powershell "C:\Program Files\winlogbeat-6.2.4-windows-x86_64\uninstall-service-winlogbeat.ps1"
Beats data collection: a winlogbeat usage guide
Beats is Elastic's lightweight data-collection product line. It contains several sub-products:
packetbeat (monitors network traffic)
filebeat (monitors log data; can replace logstash-input-file)
topbeat (gathers process information: load, memory, disk, and so on)
winlogbeat (gathers Windows event logs)
The community also provides a dockerbeat tool. Since they are all built on libbeat, their configuration is essentially the same; only the inputs differ.
On Windows, the Beats tools are driven mostly by PowerShell (PS) scripts, so some PS knowledge is required. PS can be thought of as an advanced shell around the Windows command line, with support for advanced usage. It is built in from Windows 7 on; XP and similar systems need a manual install.
(Skipping the GUI startup.)
Command-line startup: just type powershell.
By default the system refuses to run scripts and returns an error like this:
PS E:\packetbeat> .\install-service-packetbeat.ps1
File E:\packetbeat\install-service-packetbeat.ps1 cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details.
At line:1 char:33
+ .\install-service-packetbeat.ps1 <<<<
    + CategoryInfo          : NotSpecified: (:) [], PSSecurityException
    + FullyQualifiedErrorId : RuntimeException
To enable script execution, run:
set-ExecutionPolicy RemoteSigned
Packetbeat is the part of the Beats family dedicated to network packet analysis.
Download
Step 1: unzip. Compared with the Linux package there are two extra PowerShell scripts.
Step 2: run the install script as administrator, from inside the unpacked directory:
.\install-service-packetbeat.ps1
Step 3: start the service:
Start-Service packetbeat
Connecting to elasticsearch
One pitfall hit while wiring it up was the codec charset setting:
codec {
plain {
# This setting must be a ["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", 
"UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]
# Expected one of ["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", 
"CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"], got ["utf-8"]
charset => "utf-8"
...
}
}
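The error above occurs because the charset value is matched against Logstash's list of supported encodings exactly as written: "utf-8" is not in the list, but "UTF-8" is. A minimal corrected input sketch (the file path here is illustrative, not taken from the original config):

```
input {
  file {
    path => "/var/log/messages"
    codec => plain {
      charset => "UTF-8"
    }
  }
}
```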
Filebeat is a shipper for local log data. Installed as an agent on your servers, Filebeat monitors the log directories or specific log files you specify, tails the files, and forwards the data to Elasticsearch or Logstash for indexing.
How Filebeat works: when you start Filebeat, it starts one or more prospectors that look in the local paths you have specified for log files. For each log file that a prospector locates, Filebeat starts a harvester. Each harvester reads a single log file for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output that you have configured for Filebeat.
For more information about prospectors and harvesters, see https://www.elastic.co/guide/en/beats/filebeat/current/how-filebeat-works.html
Filebeat is an Elastic Beat based on the libbeat framework; general information about libbeat and about setting up Elasticsearch, Logstash, and Kibana is covered in the Beats Platform Reference.
Before running Filebeat, you need to install and configure the related products below.
See Getting Started with Beats and the Elastic Stack for more information.
After installing the Elastic Stack, read the following topics to learn how to install, configure, and run Filebeat.
Download and install Filebeat with the commands that work with your system; only CentOS (rpm) is covered here:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-x86_64.rpm
sudo rpm -vi filebeat-6.2.4-x86_64.rpm
win:
PS > cd 'C:\Program Files\Filebeat'
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
Note: if script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run.
For example: powershell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1
Tip: Filebeat modules provide the fastest getting-started experience. See Quick start for common log formats to learn how to get started with modules. If you use Filebeat modules, you can skip the rest of this content, including the setup steps, and go directly to the Quick start for common log formats page.
To configure Filebeat, edit the configuration file. For rpm and deb, the path is /etc/filebeat/filebeat.yml. For Docker, it is /usr/share/filebeat/filebeat.yml. For mac and win, look in the location where you extracted the archive. There is also a file named filebeat.reference.yml that shows all non-deprecated options.
See the Config File Format section of the Beats Platform Reference for more about the structure of the config file.
Here is a sample of the filebeat.prospectors section of the filebeat.yml file. Filebeat uses predefined default values for most configuration options.
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
Configure Filebeat:
For the most basic Filebeat configuration, you can define a single prospector with a single path. For example:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
The prospector in this example harvests all files in the path /var/log/*.log, which means that Filebeat will harvest all files in the directory /var/log/ that end with .log. All patterns supported by Golang Glob are supported here as well.
To fetch all files from a predefined level of subdirectories, use this pattern:
/var/log/*/*.log
This fetches all .log files from the subdirectories of /var/log. It does not fetch log files from the /var/log folder itself, and it does not recursively fetch files from all subdirectories.
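The one-level matching behaviour can be checked with ordinary shell globbing, which expands this pattern the same way (the directory layout below is made up for the demonstration):

```shell
rm -rf /tmp/globdemo
mkdir -p /tmp/globdemo/nginx/archive
touch /tmp/globdemo/top.log                # directly in the top directory: NOT matched
touch /tmp/globdemo/nginx/access.log       # one level down: matched
touch /tmp/globdemo/nginx/archive/old.log  # two levels down: NOT matched
ls /tmp/globdemo/*/*.log                   # expands to the one-level-down file only
```

Only /tmp/globdemo/nginx/access.log is listed: the pattern matches exactly one directory level below the starting point.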
If you are sending output directly to Elasticsearch (and not using Logstash), set the IP address and port of the Elasticsearch host:
output.elasticsearch:
  hosts: ["192.168.1.42:9200"]
If you are sending output to Logstash, make sure you configure the Logstash output as described in Step 3: Configure Filebeat to use Logstash.
setup.kibana:
  host: "localhost:5601"
Here host is the IP and port of the machine where Kibana is running, for example localhost:5601.
Note: if you specify a path after the port number, you need to include the scheme and port, e.g. http://localhost:5601/path.
output.elasticsearch:
  hosts: ["myEShost:9200"]
  username: "elastic"
  password: "elastic"
setup.kibana:
  host: "mykibanahost:5601"
  username: "elastic"
  password: "elastic"
The username and password settings for Kibana are optional. If you don't specify credentials for Kibana, Filebeat uses the username and password configured for the Elasticsearch output.
If you are planning to set up the Kibana dashboards, the user must have the kibana_user built-in role or equivalent privileges.
Also see the security-related options described in Set up the Kibana endpoint and Configure the Elasticsearch output.
Before starting Filebeat, you should look over the configuration options in the configuration file; for more information, see Configuring Filebeat.
Important: to use Logstash as the output, you must also install and configure the Beats input plugin for Logstash.
If you want to use Logstash to perform additional processing on the data collected by Filebeat, you need to configure Filebeat to use Logstash.
To do this, edit the Filebeat configuration file to disable the Elasticsearch output by commenting it out, and enable the Logstash output by uncommenting the Logstash section:
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5044"]
(Note: tail_files: true, which tells Filebeat to begin reading new files at their end rather than from the start, is a prospector option and belongs under filebeat.prospectors, not under the output section.)
The hosts option specifies the Logstash server and the port (5044) where Logstash is configured to listen for incoming Beats connections.
For this configuration, you must load the index template into Elasticsearch manually, because the option for auto-loading the template is only available for the Elasticsearch output.
In Elasticsearch, index templates are used to define settings and mappings that determine how fields should be analyzed.
The recommended index template file for Filebeat is installed by the Filebeat packages. If you accept the default configuration in the filebeat.yml config file, Filebeat loads the template automatically after successfully connecting to Elasticsearch. If the template already exists, it is not overwritten unless you configure Filebeat to do so.
You can disable automatic template loading, or load your own template, by configuring the template loading options in the Filebeat configuration file.
You can also set options to change the name of the index and of the index template.
Note: a connection to Elasticsearch is required to load the index template. If the output is Logstash, you must load the template manually.
For more information, see:
Load the template manually - required for the Logstash output
By default, Filebeat automatically loads the recommended template file, fields.yml, if the Elasticsearch output is enabled. You can change the defaults in filebeat.yml:
setup.template.name: "your_template_name"
setup.template.fields: "path/to/fields.yml"
If the template already exists, it is not overwritten unless you configure Filebeat to do so:
setup.template.overwrite: true
To disable automatic template loading:
setup.template.enabled: false
If you disable automatic template loading, you need to load the template manually.
By default, Filebeat writes events to indices named filebeat-6.2.4-yyyy.MM.dd, where yyyy.MM.dd is the date when the events were indexed. To use a different name, set the index option in the Elasticsearch output. The value that you specify should include the root name of the index plus version and date information. You also need to configure the setup.template.name and setup.template.pattern options to match the new name. For example:
output.elasticsearch.index: "customname-%{[beat.version]}-%{+yyyy.MM.dd}"
setup.template.name: "customname"
setup.template.pattern: "customname-*"
setup.dashboards.index: "customname-*"
If you are planning to set up the Kibana dashboards, also set this option to overwrite the index name defined in the dashboards and index pattern.
For the complete list of configuration options, see Load the Elasticsearch index template.
Load the template manually
To load the template manually, run the setup command. A connection to Elasticsearch is required. If the Logstash output is enabled, you need to temporarily disable the Logstash output and enable Elasticsearch by using the -E option. The examples here assume that the Logstash output is enabled; you can omit the -E flags if the Elasticsearch output is enabled.
If you are connecting to a secured Elasticsearch cluster, make sure you've configured credentials as described in Step 2: Configure Filebeat.
If the host running Filebeat does not have a direct connection to Elasticsearch, see Load the template manually (alternate method).
To load the template, use the command appropriate for your system; only win and CentOS are shown here.
deb and rpm:
filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
win:
Open a PowerShell prompt as an Administrator (on Windows XP, download and install PowerShell first).
From the PowerShell prompt, change to the directory where you installed Filebeat, and run:
PS > .\filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
Forcing Kibana to look at the newest documents
If you've already used Filebeat to index data into Elasticsearch, the index may contain old documents. After you load the index template, you can delete the old documents from filebeat-* to force Kibana to look at the newest documents, using the following commands:
deb,rpm,and mac:
curl -XDELETE 'http://localhost:9200/filebeat-*'
win:
PS > Invoke-RestMethod -Method Delete http://localhost:9200/filebeat-*
The commands above delete all indices that match the pattern filebeat-*. Before running them, make sure that you want to delete all indices that match the pattern.
Load the template manually (alternate method)
If the host running Filebeat does not have a direct connection to Elasticsearch, you can export the template to a file, move it to a machine that does have a connection, and install the template manually.
1. Export the template file:
deb and rpm:
filebeat export template > filebeat.template.json
win:
PS> .\filebeat.exe export template --es.version 6.2.4 | Out-File -Encoding UTF8 filebeat.template.json
2. Install the template. deb, rpm, and mac:
curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_template/filebeat-6.2.4 -d@filebeat.template.json
win:
PS > Invoke-RestMethod -Method Put -ContentType "application/json" -InFile filebeat.template.json -Uri http://localhost:9200/_template/filebeat-6.2.4
Step 5: Set up the Kibana dashboards
Filebeat comes packaged with sample Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern "filebeat-*" and load the dashboards into Kibana. To do this, you can either run the setup command or configure dashboard loading in the filebeat.yml config file.
This requires a Kibana endpoint configuration; if you haven't configured a Kibana endpoint, see configure Filebeat.
Make sure Kibana is running before you perform this step. If you are accessing a secured Kibana instance, make sure you've configured credentials as described in Step 2: Configure Filebeat.
To set up the Kibana dashboards for Filebeat, use the command appropriate for your system:
deb and rpm:
filebeat setup --dashboards
mac:
./filebeat setup --dashboards
docker:
docker run docker.elastic.co/beats/filebeat:6.2.4 setup --dashboards
win:
Open a PowerShell prompt as an Administrator (on Windows XP, download and install PowerShell first).
Change to the directory where you installed Filebeat, and run:
PS > .\filebeat setup --dashboards
Step 6: Start Filebeat
Start Filebeat with the command that works with your system. If the Elasticsearch cluster uses secured connections, make sure you've configured credentials as described in Step 2: Configure Filebeat.
Note: if you use an init.d script to start Filebeat on deb or rpm, you can't specify command line flags (see Command reference). To specify flags, start Filebeat in the foreground.
deb:
sudo service filebeat start
rpm:
sudo service filebeat start
docker:
docker run docker.elastic.co/beats/filebeat:6.2.4
mac:
sudo chown root filebeat.yml
sudo ./filebeat -e -c filebeat.yml -d "publish"
Filebeat is run as root here, so you need to change ownership of the configuration file, or run Filebeat with --strict.perms=false specified. See Config File Ownership and Permissions in the Beats Platform Reference.
win:
PS C:\Program Files\Filebeat> Start-Service filebeat
By default, Windows log files are stored in C:\ProgramData\filebeat\Logs.
Filebeat is now ready to send log files to your defined output.
Step 7: View the sample Kibana dashboards
To make it easier to explore Filebeat data, sample Filebeat dashboards have been created; run the setup command to load the dashboards easily.
To open the dashboards, launch the Kibana web interface on port 5601, for example http://localhost:5601.
On the Discover page, make sure the predefined "filebeat-*" index pattern is selected.
Go to the Dashboard page and select the dashboard that you want to open.
These dashboards are designed to work out of the box when you use Filebeat modules. However, you can also use them as examples and customize them to meet your needs, even if you don't use Filebeat modules.
To populate the dashboards with data, you need to define ingest node pipelines or use Logstash to parse the data into the fields expected by the dashboards. If you are using Logstash, see the configuration examples in the Logstash documentation for help parsing the log formats supported by the dashboards.
Here is an example of the Filebeat system dashboard:
Filebeat提供一個預約義模塊,大約在5分鐘內能夠迅速實現部署日誌監控解決方案,完整的包含簡單儀表盤和數據可視化,這些模塊支持常見的日誌格式,像Nginx,Apache2,MySQL,可使用簡單的命令運行
這個提示顯示你如何運行基本模塊,是不須要額外的配置,對於文檔細節和所有列表變量模塊,參見:Modules.
若是你正在使用日誌文件類型是那種不支持Filebeat 模塊的,你須要設置而且手工修改Filebeat配置文件參見Getting Started With Filebeat.
運行Filebeat模塊以前,你須要:
sudo bin/elasticsearch-plugin install ingest-geoip
sudo bin/elasticsearch-plugin install ingest-user-agent
Run the commands above, then restart the Elasticsearch server.
If you are using the Elastic Cloud interface, you can enable these two plugins from the configuration page.
To set up and run the Filebeat modules:
output.elasticsearch:
  hosts: ["myEShost:9200"]
  username: "elastic"
  password: "elastic"
setup.kibana:
  host: "myKibanahost:5601"
  username: "elastic"
  password: "elastic"
The 'username' and 'password' settings for Kibana are optional. If you don't specify credentials for Kibana, Filebeat uses the 'username' and 'password' specified for the Elasticsearch output.
If you are planning to set up the Kibana dashboards, the user must have the kibana_user built-in role or equivalent privileges.
./filebeat setup -e
./filebeat -e --modules system
These commands load the recommended index template and the sample dashboards (the setup command), then start Filebeat from the command line with the system module enabled, loading the ingest node pipelines and the other settings required to parse the system logs.
To run more than one module, specify a comma-separated list of modules, as follows:
./filebeat -e --modules system,nginx,mysql
When you start Filebeat, you should see messages indicating that Filebeat started a harvester for each log file that the enabled modules can find, such as:
2017/08/16 23:39:15.414375 harvester.go:206: INFO Harvester started for file: /var/log/displaypolicyd.stdout.log
If you don't see a message for each log file, check that the files are readable by Filebeat; see Set the path variable to find out how to set the file paths.
Note:
Depending on how you installed Filebeat, you might see errors related to file ownership or permissions when you try to run the Filebeat modules. See Config File Ownership and Permissions in the Beats Platform Reference if you run into ownership or permission problems.
When you run Filebeat from the command line, it's conventional to enable modules with the --modules flag. In production environments, you'll probably want to use the modules.d directory instead. See Specify which modules to run for more information.
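As an illustration of the modules.d approach, a module is enabled by its own file in that directory instead of a command-line flag. A minimal sketch of what modules.d/system.yml might contain (the var.paths value is an assumption, not from the original notes; by default the module detects the paths itself):

```yaml
- module: system
  syslog:
    enabled: true
    var.paths: ["/var/log/messages"]
  auth:
    enabled: true
```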
This example assumes the system module.
Source:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules-quickstart.html