
ELK Log Analysis System

        ELK Log Analysis System
            1.0 ELK Introduction
            1.1 ELK Installation Prerequisites
            1.2 es Installation
            1.3 es Configuration
            1.4 es Testing
            1.5 Kibana Installation
            1.6 logstash Installation
            1.7 Configuring logstash to Parse rsyslog
            1.8 Viewing Logs in kibana
            1.9 nginx Log Collection
            2.0 Collecting Logs with beats

1.0 ELK Introduction

Official site: https://www.elastic.co/cn/

Chinese guide: https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details

Since version 5.0 the ELK Stack has been renamed: Elastic Stack == ELK Stack + Beats

The ELK Stack consists of ElasticSearch, Logstash and Kibana.

ElasticSearch is a search engine used to search, analyze and store logs. It is distributed, which means it can scale horizontally, supports automatic node discovery and automatic index sharding; in short, it is very powerful. Documentation: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html

Logstash collects logs and parses them into JSON before handing them over to ElasticSearch.

Kibana is a data visualization component that presents the processed results through a web interface.

Beats serves here as a lightweight log shipper; the Beats family actually has five members.
Early ELK architectures used Logstash to both collect and parse logs, but Logstash is fairly heavy on memory, CPU and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.

x-pack is a paid extension pack for the Elastic Stack that bundles security, alerting, monitoring, reporting and graph capabilities.

1.1 ELK Installation Prerequisites

Environment: 192.168.137.30, 192.168.137.40, 192.168.137.45
// Install Elasticsearch (abbreviated "es" below) and jdk8 on all three machines, and set up /etc/hosts
Master node:
192.168.137.30
Data nodes:
192.168.137.40, 192.168.137.45
Install the jdk on every node:
yum install -y java-1.8.0-openjdk
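The hosts entries can be added on each machine with a snippet like the following. This is a minimal sketch: the short hostnames linux-node3, linux-node4 and linux-05 are taken from the shell prompts later in this guide, and the target file is parameterized so the snippet can be dry-run against a temporary file.

```shell
# Append cluster name resolution to the hosts file on every node.
# HOSTS_FILE defaults to /etc/hosts; override it for a dry run.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
cat >> "$HOSTS_FILE" <<'EOF'
192.168.137.30 linux-node3
192.168.137.40 linux-node4
192.168.137.45 linux-05
EOF
```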

1.2 es Installation

Official documentation: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html
Run the following on all three machines:
[root@linux-node3 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@linux-node3 ~]# cat /etc/yum.repos.d/elastic.repo

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[root@linux-node3 ~]# yum install -y elasticsearch
Or install from the rpm package directly:
[root@linux-node3 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
[root@linux-node3 ~]# rpm -ivh elasticsearch-6.0.0.rpm

1.3 es Configuration

The elasticsearch configuration files are /etc/elasticsearch/elasticsearch.yml and /etc/sysconfig/elasticsearch.
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rpm.html
Edit the configuration file on the master node, 192.168.137.30:

[root@linux-node3 ~]# cat /etc/elasticsearch/elasticsearch.yml 
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: linux-node3.com
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: linux-node3.com
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.master: true
node.data: false

#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.137.30
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.137.30", "192.168.137.40", "192.168.137.45"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
[root@linux-node3 ~]#
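Note that discovery.zen.minimum_master_nodes is left commented out above. Following the N/2+1 formula from the file's own comment, this cluster has only one master-eligible node (192.168.137.30), so the safe value would be 1/2+1 = 1; with three master-eligible nodes it would be 2. A sketch of the line to add:

```yaml
# one master-eligible node in this setup: majority = 1/2 + 1 = 1
discovery.zen.minimum_master_nodes: 1
```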

Modify the data node configuration in the same way:

[root@linux-node4 ~]# cat /etc/elasticsearch/elasticsearch.yml
# identical to the master node's file except for the following settings:
node.name: linux-node4.com
node.master: false
node.data: true
network.host: 192.168.137.40
[root@linux-node4 ~]#

Start the service on all three machines:

[root@linux-node3 ~]# systemctl start elasticsearch
[root@linux-node3 ~]# ps -aux |grep elasticsearch
elastic+   3140 23.7 45.9 1482312 459248 ?      Ssl  14:29   0:00 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
root       3168  5.0  0.0 112680   716 pts/1    S+   14:29   0:00 grep --color=auto elasticsearch
[root@linux-node3 ~]# 
[root@linux-node3 ~]# netstat -lntnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      966/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1095/master         
tcp        0      0 192.168.137.30:27017    0.0.0.0:*               LISTEN      1006/mongod         
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1006/mongod         
tcp6       0      0 192.168.137.30:9200     :::*                    LISTEN      1422/java           
tcp6       0      0 :::8080                 :::*                    LISTEN      1185/java           
tcp6       0      0 :::80                   :::*                    LISTEN      961/httpd           
tcp6       0      0 192.168.137.30:9300     :::*                    LISTEN      1422/java           
tcp6       0      0 :::22                   :::*                    LISTEN      966/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1095/master         
[root@linux-node3 ~]# 
// Start the data nodes in turn. The listening ports are 9200 and 9300
[root@linux-05 ~]# ss -ltnp |grep -E '9200|9300'
LISTEN     0      128      ::ffff:192.168.137.45:9200                    :::*                   users:(("java",pid=2758,fd=118))
LISTEN     0      128      ::ffff:192.168.137.45:9300                    :::*                   users:(("java",pid=2758,fd=108))
[root@linux-05 ~]#
[root@linux-node4 ~]# ss -ltnp |grep -E '9200|9300'
LISTEN     0      128      ::ffff:192.168.137.40:9200                    :::*                   users:(("java",pid=3257,fd=119))
LISTEN     0      128      ::ffff:192.168.137.40:9300                    :::*                   users:(("java",pid=3257,fd=110))

1.4 es Testing

Health check:

[root@linux-node3 ~]# curl '192.168.137.30:9200/_cluster/health?pretty'
{
  "cluster_name" : "linux-node3.com",
  "status" : "green",   //健康狀態
  "timed_out" : false,
  "number_of_nodes" : 3,     //3個節點
  "number_of_data_nodes" : 2,  //2個數據節點
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[root@linux-node3 ~]#
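For scripting alerts off the health endpoint, the status field can be pulled out with a small helper like this (a sketch; it extracts the field from the pretty-printed JSON shown above with grep rather than a real JSON tool):

```shell
# Extract the "status" value (green/yellow/red) from _cluster/health JSON on stdin.
es_status() {
  grep -o '"status"[[:space:]]*:[[:space:]]*"[a-z]*"' | head -1 | cut -d'"' -f4
}

# Example: curl -s '192.168.137.30:9200/_cluster/health?pretty' | es_status
```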

Detailed cluster information:

[root@linux-node3 ~]# curl '192.168.137.30:9200/_cluster/state?pretty' 
{
  "cluster_name" : "linux-node3.com",
  "compressed_size_in_bytes" : 355,
  "version" : 5,
  "state_uuid" : "RBH5dvstTyqgHVVSdfNi_Q",
  "master_node" : "d1yLa9f9RfSPwUXPvm_lqQ",
  "blocks" : { },
  "nodes" : {
    "d1yLa9f9RfSPwUXPvm_lqQ" : {
      "name" : "linux-node3.com",
      "ephemeral_id" : "DGs6lBiaQvaJlmyasez-TA",
      "transport_address" : "192.168.137.30:9300",
      "attributes" : { }
    },
    "pyOddTkYRN6fRjWjb-ehBw" : {
      "name" : "linux-05.com",
      "ephemeral_id" : "X8oa-yozSxqVmb6Dp2fhAQ",
      "transport_address" : "192.168.137.45:9300",
      "attributes" : { }
    },
    "mf7rEM3oScqEOqNFniEJfA" : {
      "name" : "linux-node4.com",
      "ephemeral_id" : "eZ4jATDJRDyv3rmnup3zfg",
      "transport_address" : "192.168.137.40:9300",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "3_2FFY-XTPexeDEZ6MXR1Q",
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : {
      "mf7rEM3oScqEOqNFniEJfA" : [ ],
      "pyOddTkYRN6fRjWjb-ehBw" : [ ]
    }
  },
  "restore" : {
    "snapshots" : [ ]
  },
  "snapshots" : {
    "snapshots" : [ ]
  },
  "snapshot_deletions" : {
    "snapshot_deletions" : [ ]
  }
}
[root@linux-node3 ~]#

1.5 Kibana Installation

Install kibana on the master node:
[root@linux-node3 ~]# yum install -y kibana // can be very slow
Or download the rpm instead:
[root@linux-node3 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
[root@linux-node3 ~]# rpm -ivh kibana-6.0.0-x86_64.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
1:kibana-6.0.0-1                   ################################# [100%]
[root@linux-node3 ~]# 
[root@linux-node3 ~]# grep -v "^#" /etc/kibana/kibana.yml 
server.port: 5601  // listening port
server.host: "192.168.137.30"

elasticsearch.url: "http://192.168.137.30:9200" // es access address

logging.dest: /var/log/kibana.log

[root@linux-node3 ~]#
[root@linux-node3 log]# touch kibana.log
[root@linux-node3 log]# chmod 777 kibana.log 
[root@linux-node3 log]# systemctl restart kibana
[root@linux-node3 log]# ps aux | grep kibana
kibana     1626 39.5 11.6 1121852 116968 ?      Ssl  17:09   0:04 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root       1638  0.0  0.0 112680   980 pts/0    R+   17:10   0:00 grep --color=auto kibana
[root@linux-node3 log]# netstat -lntnp | grep nod
tcp        0      0 192.168.137.30:5601     0.0.0.0:*               LISTEN      1626/node           
[root@linux-node3 log]#
Access http://192.168.137.30:5601 in a browser.

1.6 logstash Installation

Install logstash on the data node:
[root@linux-node4 ~]# yum install -y logstash
Or alternatively:
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
rpm -ivh logstash-6.0.0.rpm 
[root@linux-node4 ~]# rpm -ivh logstash-6.0.0.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
1:logstash-1:6.0.0-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Successfully created system startup script for Logstash

1.7 Configuring logstash to Parse rsyslog

[root@linux-node4 ~]# cat /etc/logstash/conf.d/syslog.conf

input {
  syslog {
    type => "system-syslog"
    port => 10514  
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

[root@linux-node4 ~]# 
Check the configuration file for errors:
[root@linux-node4 ~]# cd /usr/share/logstash/bin
[root@linux-node4 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
Start logstash.
vim /etc/rsyslog.conf // add the following line under the #### RULES section
*.* @@192.168.137.40:10514
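In the line above, *.* selects every facility and priority, and @@ forwards over TCP (a single @ would use UDP). To forward only a subset, the selector can be narrowed, e.g. for authentication messages only (an illustrative example, not part of this setup):

```
authpriv.* @@192.168.137.40:10514
```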
[root@linux-node4 ~]# systemctl restart rsyslog
[root@linux-node4 ~]# netstat -lnp |grep 10514
tcp6       0      0 :::10514                :::*                    LISTEN      3708/java           
udp        0      0 0.0.0.0:10514           0.0.0.0:*                           3708/java
[root@linux-node4 bin]#  ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
          "severity" => 6,
           "program" => "rsyslogd",
           "message" => "[origin software=\"rsyslogd\" swVersion=\"7.4.7\" x-pid=\"3768\" x-info=\"http://www.rsyslog.com\"] start\n",
              "type" => "system-syslog",
          "priority" => 46,
         "logsource" => "linux-node4",
        "@timestamp" => 2017-12-14T10:07:03.000Z,
          "@version" => "1",
              "host" => "192.168.137.40",
          "facility" => 5,
    "severity_label" => "Informational",
         "timestamp" => "Dec 14 18:07:03",
    "facility_label" => "syslogd"
}
{
          "severity" => 6,
           "program" => "systemd",
           "message" => "Stopping System Logging Service...\n",
              "type" => "system-syslog",
          "priority" => 30,
         "logsource" => "linux-node4",
        "@timestamp" => 2017-12-14T10:07:03.000Z,
          "@version" => "1",
              "host" => "192.168.137.40",
          "facility" => 3,
    "severity_label" => "Informational",
         "timestamp" => "Dec 14 18:07:03",
    "facility_label" => "system"
}
{
          "severity" => 6,
           "program" => "systemd",
           "message" => "Starting System Logging Service...\n",
              "type" => "system-syslog",
          "priority" => 30,
         "logsource" => "linux-node4",
        "@timestamp" => 2017-12-14T10:07:03.000Z,
          "@version" => "1",
              "host" => "192.168.137.40",
          "facility" => 3,
    "severity_label" => "Informational",
         "timestamp" => "Dec 14 18:07:03",
    "facility_label" => "system"
}
{
          "severity" => 6,
           "program" => "systemd",
           "message" => "Started System Logging Service.\n",
              "type" => "system-syslog",
          "priority" => 30,
         "logsource" => "linux-node4",
        "@timestamp" => 2017-12-14T10:07:03.000Z,
          "@version" => "1",
              "host" => "192.168.137.40",
          "facility" => 3,
    "severity_label" => "Informational",
         "timestamp" => "Dec 14 18:07:03",
    "facility_label" => "system"
}
{
          "severity" => 5,
               "pid" => "654",
           "program" => "polkitd",
           "message" => "Unregistered Authentication Agent for unix-process:3761:2437779 (system bus name :1.47, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale zh_CN.UTF-8) (disconnected from bus)\n",
              "type" => "system-syslog",
          "priority" => 85,
         "logsource" => "linux-node4",
        "@timestamp" => 2017-12-14T10:07:03.000Z,
          "@version" => "1",
              "host" => "192.168.137.40",
          "facility" => 10,
    "severity_label" => "Notice",
         "timestamp" => "Dec 14 18:07:03",
    "facility_label" => "security/authorization"
}

// Note: after starting this way the terminal is tied up; the log output appears on the screen

1.8 Viewing Logs in kibana

Configure log collection on the data node and start logstash:
[root@linux-node4 ~]# cat /etc/logstash/conf.d/syslog.conf

input {
  syslog {
    type => "system-syslog"
    port => 10514  
  }
}
output {
  elasticsearch {
  hosts => ["192.168.137.30:9200"]
  index => "system-syslog-%{+YYYY.MM}" 
  }
}

[root@linux-node4 ~]# chown -R logstash /var/lib/logstash
[root@linux-node4 ~]# systemctl start logstash
[root@linux-node4 ~]#
[root@linux-node4 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      963/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1238/master         
tcp6       0      0 192.168.137.40:9200     :::*                    LISTEN      13890/java          
tcp6       0      0 :::10514                :::*                    LISTEN      14164/java          
tcp6       0      0 192.168.137.40:9300     :::*                    LISTEN      13890/java          
tcp6       0      0 :::22                   :::*                    LISTEN      963/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1238/master         
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      14164/java 
// logstash listens on 127.0.0.1:9600 by default; change it
[root@linux-node4 ~]# grep -v "^#" /etc/logstash/logstash.yml 
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d/*.conf


http.host: "192.168.137.40"
path.logs: /var/log/logstash

[root@linux-node4 ~]#
[root@linux-node4 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      965/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1224/master         
tcp6       0      0 192.168.137.40:9200     :::*                    LISTEN      2215/java           
tcp6       0      0 :::10514                :::*                    LISTEN      5450/java           
tcp6       0      0 192.168.137.40:9300     :::*                    LISTEN      2215/java           
tcp6       0      0 :::22                   :::*                    LISTEN      965/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1224/master         
tcp6       0      0 192.168.137.40:9600     :::*                    LISTEN      5450/java
On the master node, check the index information:
[root@linux-node3 ~]# curl '192.168.137.30:9200/_cat/indices?v' // lists the indices
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana               Z7JVUVlLSRSu5xqySplQ5w   1   1          1            0      6.9kb          3.4kb
green  open   system-syslog-2017.12 c9ZmYijTTYSMLMIESb3N4Q   5   1          1            0     24.5kb         12.2kb
[root@linux-node3 ~]#

[root@linux-node3 ~]# curl -XGET '192.168.137.30:9200/indexname?pretty' 
{  // gets details for the specified index; "indexname" is a placeholder that does not exist, hence the 404 below
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index",
        "resource.type" : "index_or_alias",
        "resource.id" : "indexname",
        "index_uuid" : "_na_",
        "index" : "indexname"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "resource.type" : "index_or_alias",
    "resource.id" : "indexname",
    "index_uuid" : "_na_",
    "index" : "indexname"
  },
  "status" : 404
}

[root@linux-node3 ~]#
curl -XDELETE 'localhost:9200/logstash-xxx-*' deletes the specified indices
Access 192.168.137.30:5601 in a browser and configure the index in kibana:
Click "Management" -> "Index Patterns" -> "Create Index Pattern" on the left
The Index pattern must match an index name found with curl above, otherwise the button below stays disabled
Enter: system-syslog-2017.12 or system-syslog-*
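Because the output section earlier builds the index name from the event date (system-syslog-%{+YYYY.MM}), old months can be cleaned up by computing the name with date and issuing the XDELETE shown above. A sketch under the assumption of GNU date; the helper function name is hypothetical:

```shell
# Build the monthly syslog index name for a given date (defaults to last month).
syslog_index() {
  date -d "${1:-last month}" +"system-syslog-%Y.%m"
}

# Example housekeeping call against the master node:
# curl -XDELETE "192.168.137.30:9200/$(syslog_index)"
```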

1.9 nginx Log Collection

[root@linux-node4 ~]# cat /etc/logstash/conf.d/nginx.conf

input {
  file {
    path => "/tmp/elk_access.log"
    start_position => "beginning"
    type => "nginx"
  }
}
filter {
    grok {
        match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
    }
    geoip {
        source => "clientip"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["192.168.137.40:9200"]
	index => "nginx-test-%{+YYYY.MM.dd}"
  }
}

[root@linux-node4 ~]# cd /usr/share/logstash/bin
[root@linux-node4 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK 
// Install nginx if it is not already present
[root@linux-node4 ~]# yum -y install nginx
[root@linux-node4 ~]# cat /etc/nginx/conf.d/elk.conf

server {
            listen 80;
            server_name elk.linux.com;

            location / {
                proxy_pass      http://192.168.137.30:5601;
                proxy_set_header Host   $host;
                proxy_set_header X-Real-IP      $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
            access_log  /tmp/elk_access.log main2;
        }

[root@linux-node4 ~]#
Configure the log format:
vim /etc/nginx/nginx.conf // add the following inside the http block

log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$upstream_addr" $request_time';

[root@linux-node4 ~]# systemctl start nginx
[root@linux-node4 ~]# ps -ef | grep nginx
root       2916      1  0 14:48 ?        00:00:00 nginx: master process /usr/sbin/nginx
nginx      2917   2916  0 14:48 ?        00:00:00 nginx: worker process
root       2919   2732  0 14:48 pts/4    00:00:00 grep --color=auto nginx
[root@linux-node4 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2916/nginx: master  
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      972/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1247/master         
tcp6       0      0 :::80                   :::*                    LISTEN      2916/nginx: master  
tcp6       0      0 192.168.137.40:9200     :::*                    LISTEN      2214/java           
tcp6       0      0 :::10514                :::*                    LISTEN      2304/java           
tcp6       0      0 192.168.137.40:9300     :::*                    LISTEN      2214/java           
tcp6       0      0 :::22                   :::*                    LISTEN      972/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1247/master         
tcp6       0      0 192.168.137.40:9600     :::*                    LISTEN      2304/java           
[root@linux-node4 ~]#
Add a hosts entry: 192.168.137.40 elk.linux.com
Access it in a browser and check that log entries are being produced
[root@linux-node4 ~]# systemctl restart logstash  

On the master node, check the newly created index:
[root@linux-node3 ~]# curl '192.168.137.30:9200/_cat/indices?v' 
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana               Z7JVUVlLSRSu5xqySplQ5w   1   1          2            0     20.4kb         10.2kb
green  open   system-syslog-2017.12 c9ZmYijTTYSMLMIESb3N4Q   5   1        127            0    723.8kb        317.6kb
green  open   nginx-test-2017.12.18 w3j3J-wXT6eXzaVf6ycmBg   5   1         20            0     42.7kb           466b
[root@linux-node3 ~]#
// Check that an index beginning with nginx-test has been created
Only then can the index be configured in kibana:
Click "Management" -> "Index Patterns" -> "Create Index Pattern" on the left
Enter nginx-test-* as the Index pattern
Then click Discover on the left

2.0 Collecting Logs with beats

Official site: https://www.elastic.co/cn/products/beats
Advantages: extensible, with support for custom builds
Data node: linux-05.com
[root@linux-05 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-x86_64.rpm
[root@linux-05 ~]# rpm -ivh filebeat-6.0.0-x86_64.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
1:filebeat-6.0.0-1                 ################################# [100%]
[root@linux-05 ~]#
Edit the configuration file:
[root@linux-05 ~]# grep  -v "^#" /etc/filebeat/filebeat.yml  | grep -v "#" |grep -v "^$"

filebeat.prospectors:
- type: log
  paths:
    - /var/log/messages
output.console:
  enabled: true

/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml // the matching log entries appear on the screen

[root@linux-05 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml 
^C[root@linux-05 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml 
{"@timestamp":"2017-12-18T07:32:01.785Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.0.0"},"prospector":{"type":"log"},"beat":{"name":"linux-05.com","hostname":"linux-05.com","version":"6.0.0"},"message":"Dec 18 12:30:01 linux-05 systemd: Started Session 6 of user root.","source":"/var/log/messages","offset":66}
{"@timestamp":"2017-12-18T07:32:01.785Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.0.0"},"offset":133,"message":"Dec 18 12:30:01 linux-05 systemd: Starting Session 6 of user root.","prospector":{"type":"log"},"beat":{"version":"6.0.0","name":"linux-05.com","hostname":"linux-05.com"},"source":"/var/log/messages"}...........

Now edit the configuration file again:
vim /etc/filebeat/filebeat.yml // add or change the following

filebeat.prospectors:
- input_type: log 
  paths:
    - /var/log/messages
output.elasticsearch:
  hosts: ["192.168.137.30:9200"]

[root@linux-05 ~]# systemctl start  filebeat
[root@linux-05 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      933/sshd            
tcp        0      0 192.168.137.45:27017    0.0.0.0:*               LISTEN      1061/mongod         
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1061/mongod         
tcp6       0      0 :::3306                 :::*                    LISTEN      1454/mysqld         
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
tcp6       0      0 192.168.137.45:9200     :::*                    LISTEN      888/java            
tcp6       0      0 192.168.137.45:9300     :::*                    LISTEN      888/java            
tcp6       0      0 :::22                   :::*                    LISTEN      933/sshd            
[root@linux-05 ~]# ps aux |grep filebeat
root       5123  0.0  1.2 277436 12324 ?        Ssl  15:45   0:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebea
root       5140  0.0  0.0 112652   964 pts/3    R+   15:46   0:00 grep --color=auto filebeat
[root@linux-05 ~]#
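Once filebeat is running, its documents land in the cluster under date-stamped filebeat-* indices by default; they can be spotted in the _cat/indices output from section 1.8 with a small filter (a sketch; the sample index name in the comment is illustrative):

```shell
# Print only filebeat indices from `curl '192.168.137.30:9200/_cat/indices?v'` output.
# The index name is the third column of _cat/indices.
filebeat_indices() {
  awk '$3 ~ /^filebeat-/ {print $3}'
}

# Example: curl -s '192.168.137.30:9200/_cat/indices?v' | filebeat_indices
```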


Further reading
x-pack paid vs. free features: http://www.jianshu.com/p/a49d93212eca
https://www.elastic.co/subscriptions
Evolution of the Elastic stack: http://70data.net/1505.html
How LinkedIn built a real-time log analysis system on kafka and elasticsearch: http://t.cn/RYffDoE
Using redis: http://blog.lishiming.net/?p=463
Building a large-scale log analysis platform with ELK+Filebeat+Kafka+ZooKeeper: https://www.cnblogs.com/delgyd/p/elk.html
http://www.jianshu.com/p/d65aed756587