Link: https://pan.baidu.com/s/1V2aYpB86ZzxL21Hf-AF1rA
Extraction code: 7izv
Elasticsearch | MySQL |
---|---|
Index | Database |
Type | Table |
Document | Row |
Field | Column |
- Node: a server running a single Elasticsearch instance
- Cluster: one or more nodes forming a cluster
- Index: a collection of documents (the name must be all lowercase)
- Document: each record in an Index is a Document; an Index is built from any number of documents
- Type: an Index can define one or more types to logically group its documents
- Field: the smallest unit of storage in Elasticsearch
- Shards: Elasticsearch splits an Index into several pieces; each piece is a shard
- Replicas: one or more copies of an Index
Hostname | IP address | Purpose |
---|---|---|
ES1 | 192.168.200.16 | elasticsearch-node1 |
ES2 | 192.168.200.17 | elasticsearch-node2 |
ES3 | 192.168.200.18 | elasticsearch-node3 |
Logstash-Kibana | 192.168.200.19 | Log visualization server |
```
# Installation environment
[root@ES1 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@ES1 ~]# uname -r
3.10.0-957.12.1.el7.x86_64
[root@ES1 ~]# systemctl stop firewalld
[root@ES1 ~]# setenforce 0
setenforce: SELinux is disabled
```
```
# Switch to the Asia/Shanghai time zone
[root@ES1 ~]# /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Install the time-sync tool
[root@ES1 ~]# yum -y install ntpdate
[root@ES1 ~]# which ntpdate
/usr/sbin/ntpdate
# Synchronize the time
[root@ES1 ~]# ntpdate ntp1.aliyun.com
27 Aug 22:29:56 ntpdate[7009]: adjust time server 120.25.115.20 offset 0.028693 sec
```
Perform the following steps on all three ES hosts.
```
# Install JDK 1.8 via yum
[root@ES1 ~]# yum -y install java-1.8.0-openjdk
# Import the public key for installing ES via yum
[root@ES1 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
```
[root@ES1 ~]# vim /etc/yum.repos.d/elastic.repo
[root@ES1 ~]# cat /etc/yum.repos.d/elastic.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```
```
# Install elasticsearch
[root@ES1 ~]# yum -y install elasticsearch
# Back up the elasticsearch config file before editing it
[root@ES1 ~]# cd /etc/elasticsearch/
[root@ES1 elasticsearch]# cp -a elasticsearch.yml elasticsearch.yml_bak
# Config file before modification
[root@ES1 elasticsearch]# cat -n /etc/elasticsearch/elasticsearch.yml_bak | sed -n '17p;23p;33p;37p;55p;59p;68p;72p'
    17  #cluster.name: my-application
    23  #node.name: node-1
    33  path.data: /var/lib/elasticsearch
    37  path.logs: /var/log/elasticsearch
    55  #network.host: 192.168.0.1
    59  #http.port: 9200
    68  #discovery.zen.ping.unicast.hosts: ["host1", "host2"]
    72  #discovery.zen.minimum_master_nodes:
# Config file after modification
[root@ES1 elasticsearch]# cat -n /etc/elasticsearch/elasticsearch.yml | sed -n '17p;23p;33p;37p;55p;59p;68p;72p'
    17  cluster.name: elk-cluster
    23  node.name: node-1
    33  path.data: /var/lib/elasticsearch
    37  path.logs: /var/log/elasticsearch
    55  network.host: 192.168.200.16
    59  http.port: 9200
    68  discovery.zen.ping.unicast.hosts: ["192.168.200.16", "192.168.200.17","192.168.200.18"]
    72  discovery.zen.minimum_master_nodes: 2
```
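The `discovery.zen.minimum_master_nodes: 2` setting follows the standard quorum formula for avoiding split-brain. A minimal sketch of the calculation (the helper name is mine, not part of Elasticsearch):

```python
def minimum_master_nodes(master_eligible_nodes: int) -> int:
    """Quorum formula recommended to avoid split-brain: (N / 2) + 1."""
    return master_eligible_nodes // 2 + 1

# For the 3-node elk-cluster configured above:
print(minimum_master_nodes(3))  # 2
```

With three master-eligible nodes this yields 2, which is exactly the value written into the config.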
```
# Copy the ES1 config file to ES2 and ES3
[root@ES1 elasticsearch]# scp /etc/elasticsearch/elasticsearch.yml 192.168.200.17:/etc/elasticsearch/
root@192.168.200.17's password:
elasticsearch.yml                           100% 2899     2.0MB/s   00:00
[root@ES1 elasticsearch]# scp /etc/elasticsearch/elasticsearch.yml 192.168.200.18:/etc/elasticsearch/
root@192.168.200.18's password:
elasticsearch.yml                           100% 2899     2.1MB/s   00:00
# Only the node name and listen address need changing on ES2 and ES3
[root@ES2 elasticsearch]# cat -n /etc/elasticsearch/elasticsearch.yml | sed -n '17p;23p;33p;37p;55p;59p;68p;72p'
    17  cluster.name: elk-cluster
    23  node.name: node-2
    33  path.data: /var/lib/elasticsearch
    37  path.logs: /var/log/elasticsearch
    55  network.host: 192.168.200.17
    59  http.port: 9200
    68  discovery.zen.ping.unicast.hosts: ["192.168.200.16", "192.168.200.17","192.168.200.18"]
    72  discovery.zen.minimum_master_nodes: 2
[root@ES3 elasticsearch]# cat -n /etc/elasticsearch/elasticsearch.yml | sed -n '17p;23p;33p;37p;55p;59p;68p;72p'
    17  cluster.name: elk-cluster
    23  node.name: node-3
    33  path.data: /var/lib/elasticsearch
    37  path.logs: /var/log/elasticsearch
    55  network.host: 192.168.200.18
    59  http.port: 9200
    68  discovery.zen.ping.unicast.hosts: ["192.168.200.16", "192.168.200.17","192.168.200.18"]
    72  discovery.zen.minimum_master_nodes: 2
```
```
[root@ES1 elasticsearch]# systemctl start elasticsearch
[root@ES2 elasticsearch]# systemctl start elasticsearch
[root@ES3 elasticsearch]# systemctl start elasticsearch
```
```
[root@ES1 elasticsearch]# curl -X GET "192.168.200.16:9200/_cat/health?v"
epoch      timestamp cluster     status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1567046042 02:34:02  elk-cluster green           3         3      0   0    0    0        0             0                  -                100.0%
```
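Because the `?v` flag makes `_cat` APIs print a header row, the output is easy to post-process by column. A small sketch (the helper name is mine, not part of any ES client) that pairs the header row with the value row:

```python
def parse_cat_output(text: str) -> dict:
    """Pair the header row of a _cat API response with its value row."""
    header, values = text.strip().splitlines()
    return dict(zip(header.split(), values.split()))

# The health output captured above:
health = parse_cat_output(
    "epoch timestamp cluster status node.total node.data shards pri relo init "
    "unassign pending_tasks max_task_wait_time active_shards_percent\n"
    "1567046042 02:34:02 elk-cluster green 3 3 0 0 0 0 0 0 - 100.0%"
)
print(health["status"], health["node.total"])  # green 3
```

A `green` status with `node.total` of 3 confirms all three nodes joined the cluster.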
RESTful API format: `curl -X<verb> '<protocol>://<host>:<port>/<path>?<query_string>' -d '<body>'`
Parameter | Description |
---|---|
verb | HTTP method, e.g. GET, POST, PUT, HEAD, DELETE |
host | hostname of any node in the ES cluster |
port | ES HTTP service port, 9200 by default |
path | index path |
query_string | optional query parameters; e.g. `?pretty` pretty-prints the returned JSON |
-d | carries a JSON-formatted request body for a GET |
body | the JSON-formatted request body you write |
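The pieces in this table compose mechanically into the `curl` commands used throughout this section. A small sketch (function name is mine, for illustration only) that assembles the request string:

```python
def build_request(verb, host, port, path, query_string=""):
    """Compose a curl command in the RESTful API format described above."""
    url = f"{host}:{port}/{path}"
    if query_string:
        url += f"?{query_string}"
    return f'curl -X {verb} "{url}"'

print(build_request("GET", "192.168.200.16", 9200, "_cat/indices", "v"))
# curl -X GET "192.168.200.16:9200/_cat/indices?v"
```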
```
# List all indices in the cluster
[root@ES1 ~]# curl -X GET "192.168.200.16:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
# Create an index
[root@ES1 ~]# curl -X PUT "192.168.200.16:9200/logs-test-2019.08.29"
{"acknowledged":true,"shards_acknowledged":true,"index":"logs-test-2019.08.29"}
# List all indices again
[root@ES1 ~]# curl -X GET "192.168.200.16:9200/_cat/indices?v"
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logs-test-2019.08.29 Yua-9GCmROOmCgqotJ_31w   5   1          0            0      2.2kb          1.1kb
```
```
[root@ES1 ~]# wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz
[root@ES1 ~]# ll -d node-v4.4.7-linux-x64.tar.gz
-rw-r--r-- 1 root root 12189839 Jun 29  2016 node-v4.4.7-linux-x64.tar.gz
[root@ES1 ~]# tar xf node-v4.4.7-linux-x64.tar.gz -C /usr/local/
[root@ES1 ~]# mv /usr/local/node-v4.4.7-linux-x64/ /usr/local/node-v4.4
[root@ES1 ~]# echo -e 'NODE_HOME=/usr/local/node-v4.4\nPATH=$NODE_HOME/bin:$PATH\nexport NODE_HOME PATH' >> /etc/profile
[root@ES1 ~]# tail -3 /etc/profile
NODE_HOME=/usr/local/node-v4.4
PATH=$NODE_HOME/bin:$PATH
export NODE_HOME PATH
[root@ES1 ~]# source /etc/profile
```
```
# Install git via yum
[root@ES1 ~]# yum -y install git
# Switch npm to a mirror registry in China
[root@ES1 ~]# npm config set registry http://registry.npm.taobao.org
```
```
# Clone the elasticsearch-head source with git
[root@ES1 ~]# git clone git://github.com/mobz/elasticsearch-head.git
Cloning into 'elasticsearch-head'...
remote: Enumerating objects: 73, done.
remote: Counting objects: 100% (73/73), done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 4333 (delta 36), reused 46 (delta 17), pack-reused 4260
Receiving objects: 100% (4333/4333), 2.51 MiB | 29.00 KiB/s, done.
Resolving deltas: 100% (2409/2409), done.
[root@ES1 ~]# cd elasticsearch-head/
[root@ES1 elasticsearch-head]# npm install
# Output omitted...
# Note: errors during this install are harmless and do not affect usage
```
A detailed explanation of the `npm install` command: https://blog.csdn.net/csdn_yudong/article/details/83721870
```
# Edit Gruntfile.js in the source tree (add one line below line 99, as shown)
[root@ES1 elasticsearch-head]# cat -n Gruntfile.js | sed -n '94,101p'
    94  connect: {
    95      server: {
    96          options: {
    97              port: 9100,
    98              base: '.',
    99              keepalive: true,    # add this trailing comma
   100              hostname: '*'       # add this line
   101          }
```
```
[root@ES1 elasticsearch-head]# npm run start

> elasticsearch-head@0.0.0 start /root/elasticsearch-head
> grunt server

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
```
Although the page opens in the browser, the plugin cannot connect to the Elasticsearch API. Since ES 5.0+, connecting to the API requires authorization (CORS) to be enabled first.
```
[root@ES1 elasticsearch-head]# echo -e 'http.cors.enabled: true\nhttp.cors.allow-origin: "*"' >> /etc/elasticsearch/elasticsearch.yml
[root@ES1 elasticsearch-head]# tail -2 /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
# Restart elasticsearch
[root@ES1 elasticsearch-head]# systemctl restart elasticsearch
```
```
# Install JDK 1.8 via yum
[root@Logstash-Kibana ~]# yum -y install java-1.8.0-openjdk
[root@Logstash-Kibana ~]# vim /etc/yum.repos.d/elastic.repo
[root@Logstash-Kibana ~]# cat /etc/yum.repos.d/elastic.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Logstash-Kibana ~]# yum -y install logstash
```
- 比較操做符:
(1)相等:==,!=,<,>,<=,>=
(2)正則:=~(正則匹配),!~(不匹配正則)
(3)包含:in(包含),not in(不包含)- 布爾操做符:
(1)and(與)
(2)or(或)
(3)nand(非與)
1)xor(非或)- 一元運算符:
(1)!:取反
(2)():複合表達式
(3)!():對複合表達式取反
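These operators behave like their counterparts in most languages; for intuition, the same checks expressed in Python (Logstash evaluates them in its own conditional syntax, not Python, and the sample values are mine):

```python
import re

status = "404"
tags = ["syslog", "test"]

# equality / comparison (==, !=, <, >=, ...)
assert status == "404" and int(status) >= 400
# regex match (=~) and regex non-match (!~)
assert re.search(r"^4\d\d$", status) is not None
assert re.search(r"^5\d\d$", status) is None
# inclusion (in / not in)
assert "syslog" in tags and "auth" not in tags
```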
```
# (1) stdin example
input {
    stdin {                     # standard input (interactive user input)
    }
}
filter {
                                # conditional filtering (field extraction)
}
output {
    stdout {
        codec => rubydebug      # debug output (for checking config syntax)
    }
}

# (2) file example
input {
    file {
        path => "/var/log/messages"   # path of the file to read
        tags => "123"                 # tag
        type => "syslog"              # type
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}

# (3) tcp example
input {
    tcp {
        port => 12345
        type => "nc"
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}

# (4) beats example
input {
    beats {                     # covered in detail later; not demonstrated here
        port => 5044
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
```
(1) input ==> stdin{} standard-input plugin test
```
# Create the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Test whether the logstash config file is valid
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf -t
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2019-09-04 14:58:15.396 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2019-09-04 14:58:15.435 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2019-09-04 14:58:16.016 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK    # the config file is valid
[INFO ] 2019-09-04 14:58:22.750 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
```
```
# Start Logstash for the test
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Startup output omitted...
yangwenbo    # this is the user's input
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
       "message" => "yangwenbo",
      "@version" => "1",
          "host" => "Logstash-Kibana",
    "@timestamp" => 2019-09-04T07:03:18.814Z
}
12345    # this is the user's input
{
       "message" => "12345",
      "@version" => "1",
          "host" => "Logstash-Kibana",
    "@timestamp" => 2019-09-04T07:03:28.797Z
}
```
Note:
stdin{} is the standard input, where the user types data directly;
stdout{codec => rubydebug} is the standard output, which stores the input in `message` and prints it straight to the screen for debugging.
(2) input ==> file{} reading data from a file
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    file {
        path => "/var/log/messages"
        tags => "123"
        type => "syslog"
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# In another window, append a line to the log file
[root@Logstash-Kibana ~]# echo "yunwei" >> /var/log/messages
```
```
# Back in the first window, check logstash's debug output
{
          "type" => "syslog",
      "@version" => "1",
          "path" => "/var/log/messages",
          "tags" => [
        [0] "123"
    ],
    "@timestamp" => 2019-09-04T07:26:29.726Z,
          "host" => "Logstash-Kibana",
       "message" => "yunwei"
}
```
(3) input ==> tcp{} receiving logs by listening on a TCP port
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    tcp {
        port => 12345
        type => "nc"
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# In another window, check that port 12345 is listening
[root@Logstash-Kibana ~]# netstat -antup | grep 12345
tcp6       0      0 :::12345       :::*       LISTEN      8538/java
```
```
# On ES1, install nc and send data to port 12345
[root@ES1 ~]# yum -y install nc
[root@ES1 ~]# echo "welcome to yangwenbo" | nc 192.168.200.19 12345
```
```
# Back in the first window, check logstash's debug output
{
          "host" => "192.168.200.16",
          "port" => 37944,
    "@timestamp" => 2019-09-04T09:41:11.396Z,
          "type" => "nc",
      "@version" => "1",
       "message" => "welcome to yangwenbo"
}
```
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
```
# json/json_lines example
input {
    stdin {
        codec => json {             # transcode json-formatted input to UTF-8 before ingesting it
            charset => ["UTF-8"]
        }
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
```
codec => json {} performs character-encoding conversion on JSON-formatted input
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
        codec => json {
            charset => ["UTF-8"]
        }
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# In another window, enter the python interactive shell to generate JSON data
[root@Logstash-Kibana ~]# python
Python 2.7.5 (default, Apr  9 2019, 14:30:50)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> data = [{'a':1,'b':2,'c':3,'d':4,'e':5}]
>>> json = json.dumps(data)
>>> print json
[{"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}]    # this is the JSON data
```
```
# After feeding the JSON data to logstash, its output looks like this
{
             "d" => 4,
             "e" => 5,
          "host" => "Logstash-Kibana",
             "a" => 1,
             "c" => 3,
             "b" => 2,
      "@version" => "1",
    "@timestamp" => 2019-09-04T11:29:12.044Z
}
```
```
# json example
input {
    stdin {
    }
}
filter {
    json {
        source => "message"     # parse the JSON data stored in message
        target => "content"     # store the parsed result in content
    }
}
output {
    stdout {
        codec => rubydebug
    }
}

# kv example
filter {
    kv {
        field_split => "&?"     # split the input on the & and ? characters
    }
}
```
(1) filter => json {} parses JSON-encoded data into structured fields
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# Interactively enter the JSON data: {"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}
{
       "message" => "{\"a\": 1, \"c\": 3, \"b\": 2, \"e\": 5, \"d\": 4}",    # everything landed in the message field
          "host" => "Logstash-Kibana",
      "@version" => "1",
    "@timestamp" => 2019-09-05T06:15:17.723Z
}
```
```
# Modify the logstash config file again
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
    json {
        source => "message"
        target => "content"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# Interactively enter the following for parsing: {"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}
{
    "@timestamp" => 2019-09-05T06:24:59.352Z,
       "content" => {    # the JSON was parsed into structured fields
        "d" => 4,
        "e" => 5,
        "a" => 1,
        "c" => 3,
        "b" => 2
    },
          "host" => "Logstash-Kibana",
      "@version" => "1",
       "message" => "{\"a\": 1, \"c\": 3, \"b\": 2, \"e\": 5, \"d\": 4}"
}
```
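What the json filter does to `message` is essentially a JSON parse whose result is stored under the `target` key, leaving the original string in place. A minimal Python equivalent of that behavior:

```python
import json

# A stand-in for the Logstash event, with the raw string in "message"
event = {"message": '{"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}'}

# Mirror `json { source => "message" target => "content" }`:
# parse the source field and store the structured result under the target key.
event["content"] = json.loads(event["message"])

print(event["content"]["e"])  # 5
```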
(2) filter => kv {} splits the input on the specified delimiters
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
    kv {
        field_split => "&?"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# Enter the following and inspect the parsed result: name=yangwenbo&yunjisuan=benet&yunwei=666
{
    "@timestamp" => 2019-09-05T06:32:01.093Z,
        "yunwei" => "666",
          "name" => "yangwenbo",
       "message" => "name=yangwenbo&yunjisuan=benet&yunwei=666",
     "yunjisuan" => "benet",
      "@version" => "1",
          "host" => "Logstash-Kibana"
}
```
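`field_split => "&?"` means each of the characters `&` and `?` acts as a field separator, and each field is then split on `=` into key and value. A rough Python equivalent of the parse above (the helper name is mine):

```python
import re

def kv_parse(text: str, field_split: str = "&?") -> dict:
    """Split on any of the field_split characters, then on '=' into key/value."""
    fields = re.split("[" + re.escape(field_split) + "]", text)
    return dict(f.split("=", 1) for f in fields if "=" in f)

print(kv_parse("name=yangwenbo&yunjisuan=benet&yunwei=666"))
# {'name': 'yangwenbo', 'yunjisuan': 'benet', 'yunwei': '666'}
```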
```
# Sample log input:
223.72.85.86 GET /index.html 15824 200

# grok custom-regex field-extraction example
input {
    stdin {
    }
}
filter {
    grok {
        match => {
            "message" => '(?<client>[0-9.]+)[ ]+(?<method>[A-Z]+)[ ]+(?<request>[a-zA-Z/.]+)[ ]+(?<bytes>[0-9]+)[ ]+(?<num>[0-9]+)'
        }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
```
Demonstration
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
    grok {
        match => {
            "message" => '(?<client>[0-9.]+)[ ]+(?<method>[A-Z]+)[ ]+(?<request>[a-zA-Z/.]+)[ ]+(?<bytes>[0-9]+)[ ]+(?<num>[0-9]+)'
        }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# Enter the sample log line to test extraction: 223.72.85.86 GET /index.html 15824 200
{
      "@version" => "1",
       "request" => "/index.html",
       "message" => "223.72.85.86 GET /index.html 15824 200",
        "method" => "GET",
         "bytes" => "15824",
          "host" => "Logstash-Kibana",
           "num" => "200",
    "@timestamp" => 2019-09-05T06:41:49.878Z,
        "client" => "223.72.85.86"
}
```
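grok's `(?<client>...)` syntax is an Oniguruma named capture group; Python's `(?P<client>...)` is equivalent, so the same extraction can be reproduced directly to check the regex:

```python
import re

# The same custom pattern as in the grok config, in Python named-group syntax
pattern = (r"(?P<client>[0-9.]+)[ ]+(?P<method>[A-Z]+)[ ]+"
           r"(?P<request>[a-zA-Z/.]+)[ ]+(?P<bytes>[0-9]+)[ ]+(?P<num>[0-9]+)")

m = re.match(pattern, "223.72.85.86 GET /index.html 15824 200")
print(m.groupdict())
# {'client': '223.72.85.86', 'method': 'GET', 'request': '/index.html',
#  'bytes': '15824', 'num': '200'}
```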
To make data extraction more convenient, grok ships with a set of predefined patterns backed by built-in regexes.
The official list of grok's default built-in patterns:
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
```
# The built-in pattern library that logstash loads by default
[root@Logstash-Kibana ~]# rpm -ql logstash | grep grok-patterns
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
[root@Logstash-Kibana ~]# cat /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
# Output omitted...
```
Demonstration
```
# Sample log input: 223.72.85.86 GET /index.html 15824 200
```
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
    grok {
        match => {
            "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num}"
        }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# Enter the sample log line to test extraction: 223.72.85.86 GET /index.html 15824 200
{
        "client" => "223.72.85.86",
    "@timestamp" => 2019-09-05T06:55:31.459Z,
           "num" => "200",
          "host" => "Logstash-Kibana",
       "request" => "/index.html",
       "message" => "223.72.85.86 GET /index.html 15824 200",
      "@version" => "1",
        "method" => "GET",
         "bytes" => "15824"
}
```
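Under the hood, `%{IP:client}` is just a macro that expands into a named-capture regex. A toy expansion sketch, using a tiny pattern table of my own (the real definitions live in the grok-patterns file shown above and are far more precise):

```python
import re

# A toy subset of the built-in pattern library; these regexes are
# simplified stand-ins, not the real grok definitions.
PATTERNS = {
    "IP": r"\d{1,3}(?:\.\d{1,3}){3}",
    "WORD": r"\w+",
    "URIPATHPARAM": r"\S+",
    "NUMBER": r"\d+",
}

def expand_grok(expr: str) -> str:
    """Rewrite each %{NAME:field} into a (?P<field>...) named capture."""
    return re.sub(
        r"%\{(\w+):(\w+)\}",
        lambda m: f"(?P<{m.group(2)}>{PATTERNS[m.group(1)]})",
        expr,
    )

regex = expand_grok("%{IP:client} %{WORD:method} %{URIPATHPARAM:request} "
                    "%{NUMBER:bytes} %{NUMBER:num}")
m = re.match(regex, "223.72.85.86 GET /index.html 15824 200")
print(m.group("client"), m.group("num"))  # 223.72.85.86 200
```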
```
# Sample log input (with a new field appended): 223.72.85.86 GET /index.html 15824 200 "welcome to yangwenbo"
```
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
    grok {
        patterns_dir => "/opt/patterns"     # path to the custom pattern file
        match => {
            "message" => '%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} "%{STRING:content}"'
        }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Create the custom pattern file
[root@Logstash-Kibana ~]# vim /opt/patterns
[root@Logstash-Kibana ~]# cat /opt/patterns
STRING .*
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# Enter the sample log line and inspect the result: 223.72.85.86 GET /index.html 15824 200 "welcome to yangwenbo"
{
       "request" => "/index.html",
       "message" => "223.72.85.86 GET /index.html 15824 200 \"welcome to yangwenbo\"",
    "@timestamp" => 2019-09-05T07:25:23.949Z,
        "method" => "GET",
         "bytes" => "15824",
       "content" => "welcome to yangwenbo",
          "host" => "Logstash-Kibana",
           "num" => "200",
        "client" => "223.72.85.86",
      "@version" => "1"
}
```
Sometimes we need to extract data from logs in more than one format, so we configure grok with multiple match patterns.
```
# Sample log input:
223.72.85.86 GET /index.html 15824 200 "welcome to yangwenbo"
223.72.85.86 GET /index.html 15824 200 《Mr.yang-2019-09-05》
```
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
    grok {
        patterns_dir => "/opt/patterns"
        match => [
            "message",'%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} "%{STRING:content}"',
            "message",'%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} 《%{NAME:name}》'
        ]
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
```
#增長一個自定義的內置正則抓取變量 [root@Logstash-Kibana ~]# vim /opt/patterns [root@Logstash-Kibana ~]# cat /opt/patterns STRING .* NAME .*
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# Enter the sample log lines and inspect the extraction results
{
         "bytes" => "15824",
    "@timestamp" => 2019-09-05T07:51:29.505Z,
      "@version" => "1",
       "content" => "welcome to yangwenbo",
           "num" => "200",
       "message" => "223.72.85.86 GET /index.html 15824 200 \"welcome to yangwenbo\"",
          "host" => "Logstash-Kibana",
        "client" => "223.72.85.86",
       "request" => "/index.html",
        "method" => "GET"
}
-----------------------------------------------------------------------------
{
         "bytes" => "15824",
    "@timestamp" => 2019-09-05T07:51:38.083Z,
      "@version" => "1",
           "num" => "200",
       "message" => "223.72.85.86 GET /index.html 15824 200 《Mr.yang-2019-09-05》",
          "name" => "Mr.yang-2019-09-05",
          "host" => "Logstash-Kibana",
        "client" => "223.72.85.86",
       "request" => "/index.html",
        "method" => "GET"
}
```
The geoip plugin analyzes the geographic origin of an IP address, and the result can be displayed visually with Kibana's map feature.
```
# Sample log input:
223.72.85.86 GET /index.html 15824 200 "welcome to yangwenbo"
119.147.146.189 GET /index.html 15824 200 《Mr.yang-2019-09-05》
```
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    stdin {
    }
}
filter {
    grok {
        patterns_dir => "/opt/patterns"
        match => [
            "message",'%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} "%{STRING:content}"',
            "message",'%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} 《%{NAME:name}》'
        ]
    }
    geoip {
        source => "client"
        database => "/opt/GeoLite2-City.mmdb"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
```
```
# Download the geoip database package
[root@Logstash-Kibana ~]# wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
[root@Logstash-Kibana ~]# ll -d GeoLite2-City.tar.gz
-rw-r--r-- 1 root root 30044666 Sep  4 19:40 GeoLite2-City.tar.gz
# Unpack it and install the geoip database
[root@Logstash-Kibana ~]# tar xf GeoLite2-City.tar.gz
[root@Logstash-Kibana ~]# cp GeoLite2-City_20190903/GeoLite2-City.mmdb /opt/
```
```
# Start Logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# Output omitted...
```
```
# Enter the sample log lines and inspect the extraction results
223.72.85.86 GET /index.html 15824 200 "welcome to yangwenbo"
{
    "@timestamp" => 2019-09-05T08:29:35.399Z,
       "content" => "welcome to yangwenbo",
         "geoip" => {
           "region_code" => "BJ",
         "country_code3" => "CN",         # country of the IP
              "timezone" => "Asia/Shanghai",
         "country_code2" => "CN",
                    "ip" => "223.72.85.86",
        "continent_code" => "AS",
              "location" => {
            "lon" => 116.3889,            # map longitude of the IP
            "lat" => 39.9288              # map latitude of the IP
        },
              "latitude" => 39.9288,
          "country_name" => "China",
           "region_name" => "Beijing",
             "city_name" => "Beijing",    # city of the IP
             "longitude" => 116.3889
    },
       "message" => "223.72.85.86 GET /index.html 15824 200 \"welcome to yangwenbo\"",
       "request" => "/index.html",
         "bytes" => "15824",
           "num" => "200",
      "@version" => "1",
          "host" => "Logstash-Kibana",
        "client" => "223.72.85.86",
        "method" => "GET"
}
-----------------------------------------------------------------------------
119.147.146.189 GET /index.html 15824 200 《Mr.yang-2019-09-05》
{
    "@timestamp" => 2019-09-05T08:33:42.454Z,
          "name" => "Mr.yang-2019-09-05",
         "geoip" => {
           "region_code" => "GD",
         "country_code3" => "CN",
              "timezone" => "Asia/Shanghai",
         "country_code2" => "CN",
                    "ip" => "119.147.146.189",
        "continent_code" => "AS",
              "location" => {
            "lon" => 113.25,
            "lat" => 23.1167
        },
              "latitude" => 23.1167,
          "country_name" => "China",
           "region_name" => "Guangdong",
             "longitude" => 113.25
    },
       "message" => "119.147.146.189 GET /index.html 15824 200 《Mr.yang-2019-09-05》",
       "request" => "/index.html",
         "bytes" => "15824",
           "num" => "200",
      "@version" => "1",
          "host" => "Logstash-Kibana",
        "client" => "119.147.146.189",
        "method" => "GET"
}
```
```
# elasticsearch output example
output {
    elasticsearch {
        hosts => "localhost:9200"                            # write the data to elasticsearch
        index => "logstash-mr_chen-admin-%{+YYYY.MM.dd}"     # under this index name
    }
}
```
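The `%{+YYYY.MM.dd}` sprintf reference makes Logstash create one index per day, named from each event's `@timestamp`. The effect for a given day can be sketched with `strftime` (Joda-style `YYYY.MM.dd` corresponds to `%Y.%m.%d`; the helper name is mine):

```python
from datetime import date

def daily_index(prefix: str, day: date) -> str:
    """Approximate logstash's index => "<prefix>-%{+YYYY.MM.dd}" naming."""
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

print(daily_index("logstash-mr_chen-admin", date(2019, 9, 5)))
# logstash-mr_chen-admin-2019.09.05
```

Daily indices are what make it easy to expire old log data by dropping whole indices.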
Hostname | IP address | Purpose |
---|---|---|
ES1 | 192.168.200.16 | elasticsearch-node1 |
ES2 | 192.168.200.17 | elasticsearch-node2 |
ES3 | 192.168.200.18 | elasticsearch-node3 |
Logstash-Kibana | 192.168.200.19 | Log visualization server |
```
# Install kibana from the yum repo
[root@Logstash-Kibana ~]# yum -y install kibana
```
```
# Modify the logstash config file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
input {
    file {
        path => ["/var/log/messages"]
        type => "system"                # add a type to the data
        tags => ["syslog","test"]       # add tags to the data
        start_position => "beginning"
    }
    file {
        path => ["/var/log/audit/audit.log"]
        type => "system"                # add a type to the data
        tags => ["auth","test"]         # add tags to the data
        start_position => "beginning"
    }
}
filter {
}
output {
    if [type] == "system" {
        if [tags][0] == "syslog" {      # these conditionals write different logs to different indices
            elasticsearch {
                hosts => ["http://192.168.200.16:9200","http://192.168.200.17:9200","http://192.168.200.18:9200"]
                index => "logstash-mr_yang-syslog-%{+YYYY.MM.dd}"
            }
            stdout {
                codec => rubydebug
            }
        }
        else if [tags][0] == "auth" {
            elasticsearch {
                hosts => ["http://192.168.200.16:9200","http://192.168.200.17:9200","http://192.168.200.18:9200"]
                index => "logstash-mr_yang-auth-%{+YYYY.MM.dd}"
            }
            stdout {
                codec => rubydebug
            }
        }
    }
}
```
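The output conditionals above route each event by its `type` and first tag. The same decision logic can be sketched as a plain function (the helper is mine, for illustration only):

```python
from datetime import date

def route_index(event: dict, day: date):
    """Mirror the if/else routing in the output section above."""
    if event.get("type") == "system":
        tags = event.get("tags", [])
        if tags and tags[0] == "syslog":
            return f"logstash-mr_yang-syslog-{day:%Y.%m.%d}"
        if tags and tags[0] == "auth":
            return f"logstash-mr_yang-auth-{day:%Y.%m.%d}"
    return None  # the config writes nothing for other events

d = date(2019, 9, 5)
print(route_index({"type": "system", "tags": ["syslog", "test"]}, d))
# logstash-mr_yang-syslog-2019.09.05
```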
```
# Modify the kibana config file
# Before
[root@Logstash-Kibana ~]# cat -n /etc/kibana/kibana.yml_bak | sed -n '7p;28p'
     7  #server.host: "localhost"
    28  #elasticsearch.hosts: ["http://localhost:9200"]
# After
[root@Logstash-Kibana ~]# cat -n /etc/kibana/kibana.yml | sed -n '7p;28p'
     7  server.host: "0.0.0.0"
    28  elasticsearch.hosts: ["http://192.168.200.16:9200"]    # listing one ES master node is enough
```
```
# Start the kibana service
[root@Logstash-Kibana ~]# systemctl start kibana
# Start logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
```
Note: if Elasticsearch contains no indices at all, Kibana has nothing to fetch, so start Logstash first and let it write some data into Elasticsearch.
Visit Kibana in a browser: http://192.168.200.19:5601
Create the two index patterns in turn.
After both index patterns are created, the result looks like the figure below.
A brief demonstration of Kibana's data-search features follows.