TextField: indexed, but can optionally be left unstored.
TermQuery: the query is not analyzed; the query condition is treated as one fixed term.
BooleanClause: `+` stands for must, `-` stands for must_not, and no prefix stands for should.
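The same three-way must / must_not / should semantics shows up in Elasticsearch's bool query. A minimal Python sketch of building such a query body (plain dict, no client library; the field names `status` and `role` are made up for illustration):

```python
# Sketch: must / must_not / should expressed as an Elasticsearch bool-query
# body. Field names here are hypothetical.
def bool_query(must=(), must_not=(), should=()):
    clauses = {}
    if must:
        clauses["must"] = [{"term": t} for t in must]
    if must_not:
        clauses["must_not"] = [{"term": t} for t in must_not]
    if should:
        clauses["should"] = [{"term": t} for t in should]
    return {"query": {"bool": clauses}}

# '+term' -> must, '-term' -> must_not, bare term -> should
q = bool_query(must=[{"status": "active"}], must_not=[{"role": "guest"}])
```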
Started reading up on Filebeat, and the official site pointed me to Getting Started with the Elastic Stack.
This little tutorial uses Metricbeat to collect server metrics and then Kibana to display them.
Elasticsearch — port 9200
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.1-linux-x86_64.tar.gz
tar -xzvf elasticsearch-7.0.1-linux-x86_64.tar.gz
cd elasticsearch-7.0.1
./bin/elasticsearch
curl http://127.0.0.1:9200
Kibana — port 5601
Kibana is purpose-built for ES: it searches and visualizes the data there. The tutorial suggests installing Kibana on the same machine as ES.
In its config file you need to set the address of the ES cluster.
curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-7.0.1-linux-x86_64.tar.gz
tar xzvf kibana-7.0.1-linux-x86_64.tar.gz
cd kibana-7.0.1-linux-x86_64/
./bin/kibana
beat是作採集用的,裝在服務器上的agent。通常輸出到ES
和logstash
,本身自己不能作解析redis
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.0.1-linux-x86_64.tar.gz
tar xzvf metricbeat-7.0.1-linux-x86_64.tar.gz
./metricbeat modules enable system
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
./metricbeat setup -e
The `-e` flag sends log output to stderr instead of syslog — that is, the logs show up on the console where you can see them. Then start collecting with `./metricbeat -e`.
If the data a Beat collects needs extra processing, it has to go through Logstash (which is essentially just parsing).
curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.0.1.tar.gz
tar -xzvf logstash-7.0.1.tar.gz
Create `demo-metrics-pipeline.conf`, listening on port 5044:
input {
beats {
port => 5044
}
}
# The filter part of this file is commented out to indicate that it
# is optional.
# filter {
#
# }
output {
elasticsearch {
hosts => "localhost:9200"
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
}
}
./bin/logstash -f demo-metrics-pipeline.conf
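The `index` option in that output uses Logstash's sprintf format to expand per-event metadata. Roughly what that expansion does, sketched in Python (the event values are hypothetical examples):

```python
from datetime import datetime, timezone

# Hypothetical sketch of how Logstash expands the index pattern
# "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" per event.
def expand_index(event, when):
    beat = event["@metadata"]["beat"]        # e.g. "metricbeat"
    version = event["@metadata"]["version"]  # e.g. "7.0.1"
    return f"{beat}-{version}-{when.strftime('%Y.%m.%d')}"

event = {"@metadata": {"beat": "metricbeat", "version": "7.0.1"}}
print(expand_index(event, datetime(2019, 5, 7, tzinfo=timezone.utc)))
# → metricbeat-7.0.1-2019.05.07
```

So each Beat writes into a daily index named after its own type and version.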
You still have to configure the Beat to send its output to Logstash.
Metricbeat collects `cmdline` with its full arguments, which is too long, so parse it with grok:
filter {
if [system][process] {
if [system][process][cmdline] {
grok {
match => {
"[system][process][cmdline]" => "^%{PATH:[system][process][cmdline_path]}"
}
remove_field => "[system][process][cmdline]"
}
}
}
}
That covers the parsing part; I'll write more about grok later.
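As a rough stand-in for what that grok rule does, here is a Python regex that anchors at the start of `cmdline` and captures only the leading executable path, dropping the argument tail (a simplification — grok's real `PATH` pattern is more permissive):

```python
import re

# Simplified analogue of grok's "^%{PATH:...}": match an absolute path at the
# start of the string, stop at the first character that can't be part of it.
PATH_RE = re.compile(r"^(?P<cmdline_path>(?:/[\w.+-]+)+)")

def extract_path(cmdline):
    m = PATH_RE.match(cmdline)
    return m.group("cmdline_path") if m else None

print(extract_path("/usr/bin/python3 -m http.server 8000"))
# → /usr/bin/python3
```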
`filebeat.reference.yml` documents every configuration option. To collect `/var/log/*.log`:
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/*.log
`/var/log/*/*.log` fetches all `.log` files from the subfolders of `/var/log`; fetching files from every level of nesting is not yet supported.
output.elasticsearch:
hosts: ["myEShost:9200"]
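The one-level-vs-recursive distinction can be demonstrated with Python's glob (Filebeat's own globbing differs in details, but the idea is the same):

```python
import glob
import os
import tempfile

# Build a small tree: top.log, a/one.log, a/b/deep.log
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
for rel in ("top.log", os.path.join("a", "one.log"), os.path.join("a", "b", "deep.log")):
    open(os.path.join(root, rel), "w").close()

# Like /var/log/*/*.log — exactly one directory deep
one_level = glob.glob(os.path.join(root, "*", "*.log"))
# '**' with recursive=True walks every level (not supported by Filebeat here)
all_levels = glob.glob(os.path.join(root, "**", "*.log"), recursive=True)

print(len(one_level), len(all_levels))
# → 1 3
```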
setup.kibana:
host: "mykibanahost:5601"
output.elasticsearch:
hosts: ["myEShost:9200"]
username: "filebeat_internal"
password: "YOUR_PASSWORD"
setup.kibana:
host: "mykibanahost:5601"
username: "my_kibana_user"
password: "YOUR_PASSWORD"
./filebeat modules list
File is inactive: /var/log/boot.log. Closing because close_inactive of 5m0s reached. — this message means the file has nothing new.
Filebeat keeps its state (which files it has read, and how far) in data/registry/filebeat/data.json.
#This is filebeat.yml — output to port 5044, the port Logstash listens on by default
#----------------------------- Logstash output --------------------------------
output.logstash:
hosts: ["127.0.0.1:5044"]
#==================== Elasticsearch template setting ==========================
#One shard by default — this is why Filebeat's index in ES has only one shard
setup.template.settings:
index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
The rpm and tar installs put the directories and logs in different places; check the official docs for specifics.
Some logs require the Beat to run as root to collect them.
Understanding these concepts — inputs, harvesters, and `close_inactive` — will help you make smart choices when configuring.
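For instance, `close_inactive` (the source of the "close_inactive of 5m0s" message earlier) can be tuned per input — a sketch, with values to verify against your version's Filebeat docs:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
    # close_inactive: how long a harvester keeps the file handle open after
    # the last read before closing it; 5m matches the default seen above
    close_inactive: 5m
```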
Since someone already collects the logs for us into Kafka, we start by wiring Logstash up to it.
# Takes data from stdin and pushes it to stdout; -e lets you pass the config inline for a quick test
cd logstash-7.0.1
bin/logstash -e 'input { stdin { } } output { stdout {} }'
# Output looks like this
什麼鬼
{
"message" => "什麼鬼",
"@version" => "1",
"@timestamp" => 2019-05-07T02:00:39.581Z,
"host" => "node1"
}
# The Logstash pipeline config — print to stdout first to inspect the events
input {
beats {
port => 5044
}
}
# rubydebug is a codec backed by a Ruby pretty-printing library; it makes the output easier to read
output {
stdout { codec => rubydebug }
}
bin/logstash -f first-pipeline.conf --config.test_and_exit
This command checks whether the config file actually works.
bin/logstash -f first-pipeline.conf --config.reload.automatic
The `config.reload.automatic` option picks up config changes automatically, so you don't have to restart Logstash.
The geoip plugin:
* Logstash supports multiple inputs and outputs — it can even hook into Twitter directly (no way for me to try that), and it can output straight to a file.
geoip {
source => "clientip"
}
{
"ecs" => {
"version" => "1.0.0"
},
"input" => {
"type" => "log"
},
"agent" => {
"ephemeral_id" => "860d92a1-9fdb-4b41-8898-75021e3edaaf",
"version" => "7.0.0",
"hostname" => "node1",
"id" => "c389aa98-534d-4f37-ba62-189148baa6a3",
"type" => "filebeat"
},
"request" => "/robots.txt",
"verb" => "GET",
"host" => {
"hostname" => "node1",
"containerized" => true,
"architecture" => "x86_64",
"os" => {
"kernel" => "3.10.0-693.el7.x86_64",
"codename" => "Maipo",
"family" => "redhat",
"platform" => "rhel",
"version" => "7.4 (Maipo)",
"name" => "Red Hat Enterprise Linux Server"
},
"id" => "b441ff6952f647e7a366c69db8ea6664",
"name" => "node1"
},
"ident" => "-",
"timestamp" => "04/Jan/2015:05:27:05 +0000",
"auth" => "-",
"tags" => [
[0] "beats_input_codec_plain_applied"
],
"referrer" => "\"-\"",
"@version" => "1",
"response" => "200",
"httpversion" => "1.1",
"message" => "218.30.103.62 - - [04/Jan/2015:05:27:05 +0000] \"GET /robots.txt HTTP/1.1\" 200 - \"-\" \"Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)\"",
"clientip" => "218.30.103.62",
"geoip" => {
"region_code" => "BJ",
"latitude" => 39.9288,
"ip" => "218.30.103.62",
"location" => {
"lat" => 39.9288,
"lon" => 116.3889
},
"region_name" => "Beijing",
"longitude" => 116.3889,
"city_name" => "Beijing",
"timezone" => "Asia/Shanghai",
"country_code3" => "CN",
"country_code2" => "CN",
"country_name" => "China",
"continent_code" => "AS"
},
"@timestamp" => 2019-05-07T03:43:18.368Z,
"log" => {
"file" => {
"path" => "/itoa/elastic-stack/test-cas/logstash-demo/logstash-tutorial.log"
},
"offset" => 19301
}
}
consumers_thread
Escape handling is disabled by default, so `\t` cannot be parsed; you have to turn that setting on in the config file.
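If I remember correctly this is the `config.support_escapes` flag in `logstash.yml` — a sketch, treat the option name as something to verify against your version's docs:

```yaml
# logstash.yml — with this off (the default), "\t" in a pipeline config stays
# two literal characters instead of becoming a tab
config.support_escapes: true
```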
Logstash performs some operations before shutting down; an unsafe shutdown can lose data.