Official site: https://www.elastic.co/cn/
Official definitive guide:
https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
Installation guide:
https://www.elastic.co/guide/en/elasticsearch/reference/5.x/rpm.html
ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core components, but they are not the whole stack.
Elasticsearch is a real-time full-text search and analytics engine that provides three major functions: collecting, analyzing, and storing data. It is a scalable distributed system that exposes REST and Java APIs for efficient search, built on top of the Apache Lucene search library.
Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (such as RabbitMQ), and JMX, and it can output data in many ways, including email, websockets, and Elasticsearch.
Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve data, and lets users not only build custom dashboard views of their own data but also query and filter it in ad-hoc ways.
Environment: two CentOS 6.5 hosts
192.168.1.202 installs: elasticsearch, logstash, Kibana, Nginx, httpd, Redis
192.168.1.201 installs: logstash
Import the GPG key for the elasticsearch yum repository (this must be done on every server):
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure the elasticsearch yum repository:
# vim /etc/yum.repos.d/elasticsearch.repo
Add the following to elasticsearch.repo:
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install elasticsearch:
# yum install -y elasticsearch
Install a Java environment (Java 1.8 or later is required):
# wget http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm
# rpm -ivh jdk-8u131-linux-x64.rpm
Verify that Java installed successfully:
# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Create the directory for elasticsearch data and change its owner and group:
# mkdir -p /data/es-data (a custom directory for storing the data)
# chown -R elasticsearch:elasticsearch /data/es-data
Change the owner and group of the elasticsearch log directory:
# chown -R elasticsearch:elasticsearch /var/log/elasticsearch/
Edit the elasticsearch configuration file:
# vim /etc/elasticsearch/elasticsearch.yml
Find cluster.name in the config file, uncomment it, and set the cluster name:
cluster.name: demon
Find node.name, uncomment it, and set the node name:
node.name: elk-1
Change the data path:
path.data: /data/es-data
Change the logs path:
path.logs: /var/log/elasticsearch/
Lock the process memory so it is not swapped out:
bootstrap.memory_lock: true
Network address to listen on:
network.host: 0.0.0.0
Port to listen on:
http.port: 9200
Add new parameters so the head plugin can access es (5.x; add them by hand if they are missing):
http.cors.enabled: true
http.cors.allow-origin: "*"
Start the elasticsearch service:
# /etc/init.d/elasticsearch start
Starting elasticsearch: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid2616.log
[FAILED]
This error occurs because the default heap size is 2 GB and the VM does not have that much memory.
Adjust the parameters:
# vim /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m
Start again:
# /etc/init.d/elasticsearch start
Check the service status; if there are errors, read the error log:
# less /var/log/elasticsearch/demon.log (the log file is named after the cluster)
Enable the service at boot:
# chkconfig elasticsearch on
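The 512 MB heap above is a value for a small VM; a common rule of thumb is to give the ES heap roughly half of the machine's RAM. A rough sketch (an assumption, not from the docs) of computing such values on a Linux host with /proc/meminfo:

```shell
# Suggest -Xms/-Xmx as half of total RAM, capped at the 512 MB used above
# for a small VM. Purely illustrative; adjust to your own hardware.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_mb=$((total_kb / 2 / 1024))
if [ "$half_mb" -gt 512 ]; then half_mb=512; fi
echo "-Xms${half_mb}m"
echo "-Xmx${half_mb}m"
```

Whatever values are chosen, -Xms and -Xmx should be set equal so the heap does not resize at runtime.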
A few parameters must be changed, otherwise startup will fail:
# vim /etc/security/limits.conf
Append the following at the end (elk is the startup user; you can also use *):
elk soft nofile 65536
elk hard nofile 65536
elk soft nproc 2048
elk hard nproc 2048
elk soft memlock unlimited
elk hard memlock unlimited
Then change one more parameter:
# vim /etc/security/limits.d/90-nproc.conf
Change the 1024 in it to 2048 (ES requires at least 2048):
* soft nproc 2048
One more problem to watch for (the following appeared in the log and also causes startup to fail; it took a long time to track down):
[2017-06-14T19:19:01,641][INFO ][o.e.b.BootstrapChecks ] [elk-1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-06-14T19:19:01,658][ERROR][o.e.b.Bootstrap ] [elk-1] node validation exception
[1] bootstrap checks failed
[1]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Fix: add one parameter to the configuration file. It disables the seccomp system-call filter, which the old CentOS 6 kernel does not support:
# vim /etc/elasticsearch/elasticsearch.yml
bootstrap.system_call_filter: false
Request port 9200 through a browser to see whether it works.
First check that port 9200 is up:
# netstat -antp | grep 9200
tcp 0 0 :::9200 :::* LISTEN 2934/java
Test access (the following is a normal response):
# curl http://127.0.0.1:9200/
{
  "name" : "linux-node1",
  "cluster_name" : "demon",
  "cluster_uuid" : "kM0GMFrsQ8K_cl5Fn7BF-g",
  "version" : {
    "number" : "5.4.0",
    "build_hash" : "780f8c4",
    "build_date" : "2017-04-28T17:43:27.229Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.0"
  },
  "tagline" : "You Know, for Search"
}
Ways to talk to Elasticsearch: Java API, RESTful API (with JavaScript, .NET, PHP, Perl, and Python clients).
Check the status via the API:
# curl -i -XGET 'localhost:9200/_count?pretty'
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 95
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
Install the elasticsearch-head plugin. Either pull the docker image or clone the elasticsearch-head project from github; pick one of the two methods:
1. Use the prebuilt elasticsearch-head docker image:
# docker run -p 9100:9100 mobz/elasticsearch-head:5
Once the container has been pulled and started, open http://localhost:9100/ in a browser.
2. Install elasticsearch-head with git:
# yum install -y npm
# git clone git://github.com/mobz/elasticsearch-head.git
# cd elasticsearch-head
# npm install
# npm run start
Check that the port is up:
# netstat -antp | grep 9100
Test in a browser:
http://IP:9100/
Install the Logstash environment:
Official installation manual:
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
Import the GPG key for the yum repository:
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Install logstash with yum:
# yum install -y logstash
Check where logstash was installed:
# rpm -ql logstash
Create a symlink so commands can be run without typing the install path (the default install is under /usr/share):
# ln -s /usr/share/logstash/bin/logstash /bin/
Run the logstash command:
# logstash -e 'input { stdin { } } output { stdout {} }'
After it starts, type: nihao
The result returned on stdout:
Note:
-e            run the given config string
input         standard input
{ input }     plugin
output        standard output
{ stdout }    plugin
Output more detailed information with rubydebug:
# logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
After it starts, type: nihao
The result on stdout:
If the events should be kept both on stdout and in elasticsearch, do the following:
# /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["192.168.1.202:9200"] } stdout { codec => rubydebug } }'
After it starts, type: I am elk
The result returned (on stdout):
Official guide:
https://www.elastic.co/guide/en/logstash/current/configuration.html
Create a configuration file:
# vim /etc/logstash/conf.d/elk.conf
Add the following to the file:
input { stdin { } }
output {
    elasticsearch { hosts => ["192.168.1.202:9200"] }
    stdout { codec => rubydebug }
}
Run logstash with the configuration file:
# logstash -f ./elk.conf
After it starts, type some input and check the stdout result.
1. Input plugins
Reference: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
Using the file plugin:
# vim /etc/logstash/conf.d/elk.conf
Add the following configuration:
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.202:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
Run logstash with elk.conf to collect and match:
# logstash -f /etc/logstash/conf.d/elk.conf
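The index option above uses Logstash's date pattern %{+YYYY.MM.dd}, so each day's events land in their own index. A quick sketch of what the resulting index name looks like, mirrored with date(1) (illustration only; Logstash itself formats the date from the event timestamp):

```shell
# Build today's index name the way index => "system-%{+YYYY.MM.dd}" would.
today_index="system-$(date +%Y.%m.%d)"
echo "$today_index"
```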
Next, collect the secure log as well and store each log type under its own index. Continue editing elk.conf:
# vim /etc/logstash/conf.d/elk.conf
Add the path of the secure log:
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
}
Run logstash with elk.conf to collect and match:
# logstash -f ./elk.conf
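The if [type] == ... routing above is just a lookup from log type to index name. A shell sketch of the same mapping, using the names from the config (for illustration only):

```shell
# Map a log "type" to the daily index it would be routed to, mirroring the
# conditional output blocks above.
index_for() {
    case "$1" in
        system) echo "nagios-system-$(date +%Y.%m.%d)" ;;
        secure) echo "nagios-secure-$(date +%Y.%m.%d)" ;;
        *)      echo "unrouted" ;;
    esac
}
index_for system
index_for secure
```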
Once these settings all work, the next step is to install kibana so the data can be presented in a front end.
Install the kibana environment.
Official installation manual: https://www.elastic.co/guide/en/kibana/current/install.html
Download the kibana tar.gz package:
# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.4.0-linux-x86_64.tar.gz
Unpack the tarball:
# tar -xzf kibana-5.4.0-linux-x86_64.tar.gz
Move the unpacked kibana into place:
# mv kibana-5.4.0-linux-x86_64 /usr/local
Create a symlink for kibana:
# ln -s /usr/local/kibana-5.4.0-linux-x86_64/ /usr/local/kibana
Edit the kibana configuration file:
# vim /usr/local/kibana/config/kibana.yml
Uncomment and set the following options:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.202:9200"
kibana.index: ".kibana"
Install screen so kibana can run in the background (optional; other background-start methods work too):
# yum -y install screen
# screen
# /usr/local/kibana/bin/kibana
# netstat -antp | grep 5601
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 17007/node
Open a browser and create the matching index pattern:
http://IP:5601
Good. Now that indices can be created, the nginx, apache, messages, and secure logs can be shipped and presented in the front end (if Nginx is already installed, just edit its config; otherwise install it first).
Edit the nginx configuration file and add the following (inside the http block):
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';
Change the access_log output format to the json format just defined:
access_log logs/elk.access.log json;
Then edit the apache configuration file:
LogFormat "{ \
    \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
    \"@version\": \"1\", \
    \"tags\":[\"apache\"], \
    \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
    \"clientip\": \"%a\", \
    \"duration\": %D, \
    \"status\": %>s, \
    \"request\": \"%U%q\", \
    \"urlpath\": \"%U\", \
    \"urlquery\": \"%q\", \
    \"bytes\": %B, \
    \"method\": \"%m\", \
    \"site\": \"%{Host}i\", \
    \"referer\": \"%{Referer}i\", \
    \"useragent\": \"%{User-agent}i\" \
}" ls_apache_json
Likewise change the output to the json format defined above:
CustomLog logs/access_log ls_apache_json
Edit the logstash configuration file for log collection:
# vim /etc/logstash/conf.d/full.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
}
Run it and see how it goes:
# logstash -f /etc/logstash/conf.d/full.conf
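One way to sanity-check the nginx log_format before shipping it: substitute sample values for the nginx variables and confirm the line parses as JSON. The sample values below are fabricated, and python3 is assumed to be available:

```shell
# A line in the shape the log_format above produces, with made-up values
# standing in for the nginx variables.
sample='{"@timestamp":"2017-06-14T19:19:01+08:00","@version":"1","client":"10.0.0.1","url":"/index.html","status":"200","domain":"example.com","host":"10.0.0.2","size":"612","responsetime":"0.004","referer":"-","ua":"curl/7.29.0"}'
# Fails loudly if the line is not valid JSON.
echo "$sample" | python3 -m json.tool > /dev/null && echo "valid JSON"
```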
All the log indices now exist, so go to Kibana, create the index patterns for them (the same way as above), and look at how the data is displayed.
Next up: displaying the MySQL slow query log.
Because the MySQL slow query log has an unusual format, it needs regex matching, and the multiline codec is used to join the multi-line entries (see the configuration below):
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
    file {
        path => "/var/log/mysql/mysql.slow.log"
        type => "mysql"
        start_position => "beginning"
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => "previous"
        }
    }
}
filter {
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => []
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    grok {
        match => { "message" => "(?m)^# User@Host: %{USER:User}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:Client_IP})?\]\s.*# Query_time: %{NUMBER:Query_Time:float}\s+Lock_time: %{NUMBER:Lock_Time:float}\s+Rows_sent: %{NUMBER:Rows_Sent:int}\s+Rows_examined: %{NUMBER:Rows_Examined:int}\s*(?:use %{DATA:Database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<Query>(?<Action>\w+)\s+.*)\n# Time:.*$" }
    }
    date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "mysql" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-mysql-slow-%{+YYYY.MM.dd}"
        }
    }
}
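For a feel of what the %{NUMBER:Query_Time:float} capture in the grok pattern pulls out, here is a minimal sed equivalent run against a fabricated slow-log header line:

```shell
# Extract Query_time from a sample slow-log line (the line is made up,
# but follows the MySQL slow-log header format the grok above targets).
line='# Query_time: 3.000123  Lock_time: 0.000031 Rows_sent: 1  Rows_examined: 0'
qt=$(echo "$line" | sed -n 's/.*Query_time: \([0-9.]*\).*/\1/p')
echo "$qt"
```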
Check the result (each slow query shows up as one event; without the regex matching, every line would become its own event).
Analyze each concrete log-output requirement on its own terms.
Install redis:
# yum install -y redis
Edit the redis configuration file:
# vim /etc/redis.conf
Change the following:
daemonize yes
bind 192.168.1.202
Start the redis service:
# /etc/init.d/redis restart
Test whether redis started successfully:
# redis-cli -h 192.168.1.202
Type info; if it responds without errors, redis is up:
redis 192.168.1.202:6379> info
redis_version:2.4.10
....
Create the redis-out.conf configuration file to store standard input in redis:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input { stdin {} }
output {
    redis {
        host => "192.168.1.202"
        port => "6379"
        password => 'test'
        db => '1'
        data_type => "list"
        key => 'elk-test'
    }
}
Run logstash with the redis-out.conf configuration file:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
After it starts, type something into logstash (then check the result).
Create the redis-in.conf configuration file to ship the data stored in redis to elasticsearch:
# vim /etc/logstash/conf.d/redis-in.conf
Add the following:
input {
    redis {
        host => "192.168.1.202"
        port => "6379"
        password => 'test'
        db => '1'
        data_type => "list"
        key => 'elk-test'
        batch_count => 1 # how many entries to read from the queue at once; the default is 125 (if redis holds fewer than 125, it errors out, so set this while testing)
    }
}
output {
    elasticsearch {
        hosts => ['192.168.1.202:9200']
        index => 'redis-test-%{+YYYY.MM.dd}'
    }
}
Run logstash with the redis-in.conf configuration file:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-in.conf
After it starts, check that the data stored in redis shows up in elasticsearch.
Now change the earlier configuration so that every monitored log source is written to redis first, and then shipped from redis to elasticsearch. Edit full.conf as follows:
input {
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    if [type] == "http" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_http'
        }
    }
    if [type] == "nginx" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_nginx'
        }
    }
    if [type] == "secure" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_secure'
        }
    }
    if [type] == "system" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_system'
        }
    }
}
Run logstash with the full.conf configuration file:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/full.conf
Check in redis whether the data has been written (sometimes a monitored file produces no new log lines, so nothing gets written to redis either).
Read the data out of redis and write it to elasticsearch (this requires another host for the experiment).
Edit the configuration file:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input {
    redis {
        type => "system"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_system'
        batch_count => 1
    }
    redis {
        type => "http"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_http'
        batch_count => 1
    }
    redis {
        type => "nginx"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_nginx'
        batch_count => 1
    }
    redis {
        type => "secure"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_secure'
        batch_count => 1
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
}
Note:
The input is collected from the client side.
The output is likewise stored in elasticsearch on 192.168.1.202. To store it on the current host instead, change hosts in the output to localhost; to display it in kibana as well, deploy kibana on this host too. The point of doing it this way is loose coupling.
In short: collect logs on the client and write them to redis on the server (or a local redis); on the output side, just connect to the ES server.
Run the command and check the result:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
The result is the same as writing directly to the ES server, except that logs are stored in redis first and then read back out of redis.
1. Log classification
System logs       rsyslog    logstash syslog plugin
Access logs       nginx      logstash codec json
Error logs        file       logstash multiline
Application logs  file       logstash codec json
Device logs       syslog     logstash syslog plugin
Debug logs        file       logstash json or multiline
2. Log standardization
Paths: fixed
Format: json wherever possible
3. Start with system logs --> error logs --> application logs --> access logs
Because ES keeps logs forever, old indices need to be deleted periodically. The command below deletes indices older than the given number of days:
curl -X DELETE http://xx.xx.com:9200/logstash-*-`date +%Y-%m-%d -d "-$n days"`
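How the backtick expression in that command resolves, step by step (n=7 shown for illustration; xx.xx.com stays a placeholder for the real ES host, and GNU date is assumed):

```shell
# Compute the date suffix of the index to delete, the way the embedded
# `date +%Y-%m-%d -d "-$n days"` does.
n=7
suffix=$(date +%Y-%m-%d -d "-$n days")
echo "logstash-*-$suffix"
```

Running this from cron with the appropriate n keeps the index retention window bounded.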
Reference: https://mp.weixin.qq.com/s/aOg-pozv6BLri9ZqdLpdAg