Simple ELK Installation and Configuration


Elasticsearch is a distributed, scalable, real-time search and analytics engine that helps you search, analyze, and explore your data. It is a search engine built on top of the full-text search library Apache Lucene(TM), arguably the most advanced, efficient, and full-featured open-source search engine framework available today.

1. Environment Overview

1.1 Architecture

ELK uses a client/server architecture, so both the server and the client are listed here:

server : centos 6.6 x86_64 IP: 10.0.90.24

client : centos 6.6 x86_64 IP: 10.0.90.25

Installation method: tarball packages are used here; RPM packages would also work.

1.2 Software Overview

Server-side software:

Elasticsearch: handles log search and analysis. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash: collects and filters logs and stores them for later use (for example, searching them).

Kibana: provides a friendly web interface for log analysis, helping you aggregate, analyze, and search important log data.

Client-side software:

Deploy Logstash on every service whose logs need collecting. Acting as a logstash agent (logstash shipper), it monitors and filters the logs and sends the filtered content to a logstash indexer. The indexer gathers the logs together and hands them to the full-text search service Elasticsearch, where you can run custom searches; Kibana then turns those searches into web-page visualizations.

Architecture diagram (taken from the web; I have unfortunately lost the original link, sorry!):

[architecture diagram]

2. Installing and Configuring ELK on the Server

2.1 Install the JDK (download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)

Logstash depends on a Java runtime, and Logstash 1.5 and later requires Java 1.7 or newer, so the latest Java version is recommended.

We only need the Java runtime, so installing just the JRE would suffice; here I use the full JDK anyway.

#rpm -ivh jdk-8u77-linux-x64.rpm 

Preparing...                ########################################### [100%]

   1:jdk1.8.0_77            ########################################### [100%]

Unpacking JAR files...

        tools.jar...

        plugin.jar...

        javaws.jar...

        deploy.jar...

        rt.jar...

        jsse.jar...

        charsets.jar...

        localedata.jar...

        jfxrt.jar...

#java -version

java version "1.8.0_77"

Java(TM) SE Runtime Environment (build 1.8.0_77-b03)

Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

2.2 Install Elasticsearch, Logstash, and Kibana (official downloads: https://www.elastic.co/downloads/)

The versions used here are:

elasticsearch (download: https://www.elastic.co/downloads/past-releases/elasticsearch-2-2-0)

logstash (download: https://www.elastic.co/downloads/past-releases/logstash-2-2-0)

kibana (download: https://www.elastic.co/downloads/past-releases/kibana-4-4-0)

Install Elasticsearch:

tar xf elasticsearch-2.2.0.tar.gz -C /usr/local/

Install the elasticsearch-head plugin:

#cd /usr/local/elasticsearch-2.2.0

#./bin/plugin install mobz/elasticsearch-head

-> Installing mobz/elasticsearch-head...

Plugins directory [/usr/local/elasticsearch-2.2.0/plugins] does not exist. Creating...

Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...

Downloading .................................................................................................................................................................................................................................................................................................................................................................DONE

Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...

NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)

Installed head into /usr/local/elasticsearch-2.2.0/plugins/head

Verify:

#ll plugins/

total 4

drwxr-xr-x 5 root root 4096 Mar 29 18:09 head

Install the elasticsearch-kopf plugin:

Note: the elasticsearch-kopf plugin can be used to query the data in Elasticsearch.

#./bin/plugin install lmenezes/elasticsearch-kopf

-> Installing lmenezes/elasticsearch-kopf...

Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...

Downloading ...........................................................................................................................................................................................................................................................................................................................................................DONE

Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...

NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)

Installed kopf into /usr/local/elasticsearch-2.2.0/plugins/kopf

#ll  plugins/

total 8

drwxr-xr-x 5 search search 4096 Mar 29 18:09 head

drwxrwxr-x 8 search search 4096 Mar 30 18:10 kopf

Create the data and logs directories for Elasticsearch:

#mkdir /elasticsearch/data -pv

#mkdir /elasticsearch/logs -pv

Edit the Elasticsearch configuration file:

#cd config

Back it up first:

#cp elasticsearch.yml elasticsearch.yml_back

#vim elasticsearch.yml    --append the following lines at the end:

cluster.name: es_cluster

node.name: node-1

path.data: /elasticsearch/data

path.logs: /elasticsearch/logs

network.host: 10.0.90.24    

http.port: 9200

Start Elasticsearch:

#./bin/elasticsearch

Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:93)

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:144)

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)

Refer to the log for complete error details.

Elasticsearch refuses to start as root, so create a regular user and start it as that user:

#groupadd search

#useradd -g search  search

Change the owner and group of the data and logs directories to search:

#chown search.search /elasticsearch/ -R

Start it again:

#./bin/elasticsearch

[2016-03-29 19:58:20,026][WARN ][bootstrap                ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in

Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.scripts' (/usr/local/elasticsearch-2.2.0/config/scripts)

Likely root cause: java.nio.file.AccessDeniedException: /usr/local/elasticsearch-2.2.0/config/scripts

        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)

        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)

        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)

        at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)

        at java.nio.file.Files.createDirectory(Files.java:674)

        at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)

        at java.nio.file.Files.createDirectories(Files.java:767)

        at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:337)

        at org.elasticsearch.bootstrap.Security.addPath(Security.java:314)

        at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:248)

        at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:212)

        at org.elasticsearch.bootstrap.Security.configure(Security.java:118)

        at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:196)

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:167)

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)

Refer to the log for complete error details.

The error above is a permissions problem; fix the ownership:

#chown search.search /usr/local/elasticsearch-2.2.0 -R

Then switch to the search user and start Elasticsearch:

#su - search

$cd /usr/local/elasticsearch-2.2.0

$./bin/elasticsearch

[2016-03-29 20:11:20,243][WARN ][bootstrap                ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in

[2016-03-29 20:11:20,409][INFO ][node                     ] [node-1] version[2.2.0], pid[2359], build[8ff36d1/2016-01-27T13:32:39Z]

[2016-03-29 20:11:20,409][INFO ][node                     ] [node-1] initializing ...

[2016-03-29 20:11:21,102][INFO ][plugins                  ] [node-1] modules [lang-expression, lang-groovy], plugins [head], sites [head]

[2016-03-29 20:11:21,118][INFO ][env                      ] [node-1] using [1] data paths, mounts [[/ (/dev/sda3)]], net usable_space [24.5gb], net total_space [27.2gb], spins? [possibly], types [ext4]

[2016-03-29 20:11:21,118][INFO ][env                      ] [node-1] heap size [1007.3mb], compressed ordinary object pointers [true]

[2016-03-29 20:11:22,541][INFO ][node                     ] [node-1] initialized

[2016-03-29 20:11:22,542][INFO ][node                     ] [node-1] starting ...

[2016-03-29 20:11:22,616][INFO ][transport                ] [node-1] publish_address {10.0.90.24:9300}, bound_addresses {10.0.90.24:9300}

[2016-03-29 20:11:22,636][INFO ][discovery                ] [node-1] es_cluster/yNJhglX4RF-ydC4CWpFyTA

[2016-03-29 20:11:25,732][INFO ][cluster.service          ] [node-1] new_master {node-1}{yNJhglX4RF-ydC4CWpFyTA}{10.0.90.24}{10.0.90.24:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)

[2016-03-29 20:11:25,769][INFO ][http                     ] [node-1] publish_address {10.0.90.24:9200}, bound_addresses {10.0.90.24:9200}

[2016-03-29 20:11:25,770][INFO ][node                     ] [node-1] started

[2016-03-29 20:11:25,788][INFO ][gateway                  ] [node-1] recovered [0] indices into cluster_state

You can also run Elasticsearch directly in the background:

$./bin/elasticsearch &

Or start it detached with nohup (the method used here):

$nohup /usr/local/elasticsearch-2.2.0/bin/elasticsearch &

Check that it started successfully:

# netstat -tunlp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      950/sshd            

tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1027/master         

tcp        0      0 ::ffff:10.0.90.24:9300      :::*                        LISTEN      2428/java           

tcp        0      0 :::22                       :::*                        LISTEN      950/sshd            

tcp        0      0 ::1:25                      :::*                        LISTEN      1027/master         

tcp        0      0 ::ffff:10.0.90.24:9200      :::*                        LISTEN      2428/java     

Check in a browser:

http://10.0.90.24:9200/  --displays something like:

{

  "name" : "node-1",

  "cluster_name" : "es_cluster",

  "version" : {

    "number" : "2.2.0",

    "build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",

    "build_timestamp" : "2016-01-27T13:32:39Z",

    "build_snapshot" : false,

    "lucene_version" : "5.4.1"

  },

  "tagline" : "You Know, for Search"

}


The returned information shows the configured cluster_name and node name, the installed ES version, and so on.
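Beyond this banner, the cluster health API gives a quick scripted readiness check (a hedged example; substitute your own server address):

```shell
# Query cluster health; a "green" or "yellow" status means the node is serving requests.
curl -s 'http://10.0.90.24:9200/_cluster/health?pretty'
```

A single-node cluster typically reports yellow rather than green, because replica shards have nowhere to be allocated.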

Note: the head plugin installed earlier is a browser-based tool for interacting with the ES cluster. It can show cluster state and document contents, run searches, and issue plain REST requests.

It can now also be used to check the ES cluster state by opening the http://ip:9200/_plugin/head page:

http://10.0.90.24:9200/_plugin/head/

[screenshot: elasticsearch-head overview page]

As you can see, the ES cluster currently contains no index and no type, so both of those lists are empty.
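To give head something to display, you can create and then remove a throwaway index (a sketch; the name test-index is arbitrary):

```shell
# Create an empty index, look at it in the head UI, then clean up.
curl -XPUT 'http://10.0.90.24:9200/test-index'
curl -XDELETE 'http://10.0.90.24:9200/test-index'
```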

2.3 Install Kibana

#tar xf kibana-4.4.0-linux-x64.tar.gz -C /usr/local/

#cd /usr/local/

#mv  kibana-4.4.0-linux-x64/ kibana

Provide a SysV-style init script for Kibana:

#vi /etc/init.d/kibana

#!/bin/bash

### BEGIN INIT INFO

# Provides:          kibana

# Default-Start:     2 3 4 5

# Default-Stop:      0 1 6

# Short-Description: Runs kibana daemon

# Description: Runs the kibana daemon as a non-root user

### END INIT INFO

# Process name

NAME=kibana

DESC="Kibana4"

PROG="/etc/init.d/kibana"

# Configure location of Kibana bin

KIBANA_BIN=/usr/local/kibana/bin

# PID Info

PID_FOLDER=/var/run/kibana/

PID_FILE=/var/run/kibana/$NAME.pid

LOCK_FILE=/var/lock/subsys/$NAME

PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN

DAEMON=$KIBANA_BIN/$NAME

# Configure User to run daemon process

DAEMON_USER=root

# Configure logging location

KIBANA_LOG=/var/log/kibana.log

# Begin Script

RETVAL=0


if [ `id -u` -ne 0 ]; then

        echo "You need root privileges to run this script"

        exit 1

fi


# Function library

. /etc/init.d/functions

 

start() {

        echo -n "Starting $DESC : "


pid=`pidofproc -p $PID_FILE kibana`

        if [ -n "$pid" ] ; then

                echo "Already running."

                exit 0

        else

        # Start Daemon

if [ ! -d "$PID_FOLDER" ] ; then

                        mkdir $PID_FOLDER

                fi

daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &

                sleep 2

                pidofproc node > $PID_FILE

                RETVAL=$?

                [ $RETVAL -eq 0 ] && success || failure

echo

                [ $RETVAL = 0 ] && touch $LOCK_FILE

                return $RETVAL

        fi

}


reload()

{

    echo "Reload command is not implemented for this service."

    return $RETVAL

}


stop() {

        echo -n "Stopping $DESC : "

        killproc -p $PID_FILE $DAEMON

        RETVAL=$?

echo

        [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE

}

 

case "$1" in

  start)

        start

;;

  stop)

        stop

        ;;

  status)

        status -p $PID_FILE $DAEMON

        RETVAL=$?

        ;;

  restart)

        stop

        start

        ;;

  reload)

reload

;;

  *)

# Invalid Arguments, print the following message.

        echo "Usage: $0 {start|stop|status|restart|reload}" >&2

exit 2

        ;;

esac

Make the script executable:

#chmod +x /etc/init.d/kibana

Start Kibana:

#service kibana start

Starting Kibana4 : [  OK  ]

Check whether it started; output like the following indicates success:

#netstat -tunlp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      950/sshd            

tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1027/master         

tcp        0      0 0.0.0.0:5601                0.0.0.0:*                   LISTEN      2909/node   --Kibana port

tcp        0      0 ::ffff:10.0.90.24:9300      :::*                        LISTEN      2428/java           

tcp        0      0 :::22                       :::*                        LISTEN      950/sshd            

tcp        0      0 ::1:25                      :::*                        LISTEN      1027/master         

tcp        0      0 ::ffff:10.0.90.24:9200      :::*                        LISTEN      2428/java           

Enable it at boot:

#chkconfig --add kibana

#chkconfig kibana on 

2.4 Install Logstash

Note: Logstash is really just a collector; we have to point it at an Input and an Output (and it can have several of each).

#tar xf logstash-2.2.0.tar.gz -C /usr/local/

#cd /usr/local/

#mv logstash-2.2.0 logstash

Test Logstash:

#/usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout {} }'  --whatever you type in, logstash prints back out in a structured format

Settings: Default pipeline workers: 2

Logstash startup completed

hello world   ---the line you typed

2016-04-01T09:05:35.818Z elk hello world

Note: the -e flag lets Logstash accept its settings directly from the command line, which is especially useful for quickly and repeatedly testing whether a configuration is correct without writing a config file. Press CTRL-C to exit the running Logstash.

Specifying the configuration on the command line with -e is very common, but once you need more settings it becomes unwieldy. In that case, create a simple configuration file and point Logstash at it.

Note: example Logstash configuration files: https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html

Logstash configuration files use a JSON-like syntax (with an RPM install they live under /etc/logstash/conf.d). A configuration consists of three parts: inputs, filters, and outputs.

The overall structure:

# This is a comment. You should use comments to describe

# parts of your configuration.

input {

  ...

}


filter {

  ...

}


output {

  ...

}

Plugin configuration format:

input {

  file {

    path => "/var/log/messages"

    type => "syslog"

  }


  file {

    path => "/var/log/apache/access.log"

    type => "apache"

  }

}
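The filter section is where processing plugins go. As an illustration (not used elsewhere in this article), a grok filter can split standard syslog lines into named fields; the pattern below is the stock syslog example from the Logstash documentation:

```
filter {
  grok {
    # Extract timestamp, host, program, optional pid, and the message body.
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
}
```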

First create a simple example:

#cd /usr/local/logstash/config

#cat logstash-simple.conf 

input { stdin { } }

output {

   stdout { codec => rubydebug }

}

Run it. First generate some text, for example the current time:

#echo "`date` hello world"

Fri Apr  1 17:07:17 CST 2016 hello world

#/usr/local/logstash/bin/logstash agent -f logstash-simple.conf    --run it

Settings: Default pipeline workers: 2

Logstash startup completed

Fri Apr  1 17:07:17 CST 2016 hello world     --paste the line generated above and press Enter; you will see:

{

       "message" => "Fri Apr  1 17:07:17 CST 2016 hello world",

      "@version" => "1",

    "@timestamp" => "2016-04-01T09:08:19.809Z",

          "host" => "elk"

}

Next, in the Logstash install directory, create a file for testing Elasticsearch as Logstash's backend output.

The file logstash-es-test.conf defines both stdout and elasticsearch as outputs. This "multiple output" setup prints results to the screen while also sending them to Elasticsearch. As follows:

#cat logstash-es-test.conf 

input { stdin { } }

output {

   elasticsearch {hosts => "10.0.90.24" }

   stdout { codec=> rubydebug }

}

Check that the configuration file is valid:

/usr/local/logstash/bin/logstash --configtest -f logstash-es-test.conf 

Configuration OK

With several files you can also do:

/usr/local/logstash/bin/logstash --configtest -f config/*.conf

Run it:

/usr/local/logstash/bin/logstash agent -f logstash-es-test.conf 

Settings: Default pipeline workers: 2

Logstash startup completed

hello logstash       --type something and press Enter

{

       "message" => "hello logstash",

      "@version" => "1",

    "@timestamp" => "2016-04-01T09:18:26.967Z",

          "host" => "elk"

}

Press Ctrl+C to stop.

We can use curl to send a request and check whether ES received the data:

#curl 'http://10.0.90.24:9200/_search?pretty'

{

  "took" : 4,

  "timed_out" : false,

  "_shards" : {

    "total" : 6,

    "successful" : 6,

    "failed" : 0

  },

  "hits" : {

    "total" : 5,

    "max_score" : 1.0,

    "hits" : [ {

      "_index" : ".kibana",

      "_type" : "config",

      "_id" : "4.4.0",

      "_score" : 1.0,

      "_source" : {

        "buildNum" : 9689

      }

    }, {

      "_index" : "logstash-2016.04.01",

      "_type" : "logs",

      "_id" : "AVPRHddUspScKx_yDLKx",

      "_score" : 1.0,

      "_source" : {

        "message" : "hello logstash",

        "@version" : "1",

        "@timestamp" : "2016-04-01T09:18:26.967Z",

        "host" : "elk"

      }

      }]

    }

}

The output above shows that ES has received the data; Elasticsearch and Logstash can now be used together to collect log data.
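Note that the unscoped _search above also returns Kibana's own .kibana index. To look at just the Logstash data, restrict the index pattern and add a query string (a hedged example):

```shell
# Search only the logstash-* indices for documents whose message field contains "logstash".
curl -s 'http://10.0.90.24:9200/logstash-*/_search?q=message:logstash&pretty'
```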

2.5 Change the Kibana Port

#cd /usr/local/kibana/config

Back up the configuration:

#cp  kibana.yml kibana.yml_back

Change the following settings; everything else keeps its default:

server.port: 80           --change the port to 80 (the default is 5601; binding a port below 1024 requires root)

server.host: "10.0.90.24"

elasticsearch.url: "http://10.0.90.24:9200"    --the server's IP address

kibana.defaultAppId: "discover"

elasticsearch.requestTimeout: 300000

elasticsearch.shardTimeout: 0

Restart Kibana:

#service kibana restart


Open Kibana in a browser:

http://10.0.90.24    --the Kibana page appears

Once it loads, first configure an index pattern. By default Kibana points at Elasticsearch with the default logstash-* index name, time-based on @timestamp, as follows:

[screenshot: Kibana index pattern settings]



Click "Create"; a screen like the following means the index pattern has been created.

[screenshot: index pattern created]

Click "Discover" to search and browse the data in Elasticsearch. By default it searches the last 15 minutes; the time range can be customized.

At this point, your ELK platform is fully installed and deployed.

2.6 Configure Logstash as an Indexer

Configure Logstash as an indexer and store its log data in Elasticsearch; this example mainly indexes the local system logs.

#cd /usr/local/logstash/config

#cat logstash-indexer.conf

input {

  file {

     type => "syslog"

     path => ["/var/log/messages", "/var/log/secure" ]

  }

  syslog {

     type => "syslog"

     port => "5544"

  }

}

output {

  elasticsearch { hosts => "10.0.90.24" }

  stdout { codec => rubydebug }

}
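The syslog input above listens on port 5544, so it can also be tested without touching local files. Assuming nc (netcat) is available, one RFC3164-style line can be pushed straight at the port; the <13> priority means facility user, severity notice:

```shell
# Send a single hand-built syslog message to the Logstash syslog input over TCP.
echo "<13>$(date '+%b %d %H:%M:%S') testhost nc-test: hello logstash indexer" \
    | nc 10.0.90.24 5544
```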

Check for syntax errors:

#/usr/local/logstash/bin/logstash --configtest -f logstash-indexer.conf 

Configuration OK

Start it:

nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-indexer.conf &

Check the ports:

# netstat -tunlp

Then refresh the Kibana page and the log entries appear.

As a test, echo a log line into /var/log/messages and then check it in the Kibana interface:

#echo "`date` This is a test for logstash for indexer" >> /var/log/messages

The test line then shows up in Kibana.

Test logging in to 10.0.90.24 from another server (IP 10.0.18.12):

#ssh root@10.0.90.24

The authenticity of host '10.0.90.24 (10.0.90.24)' can't be established.

RSA key fingerprint is 4b:97:0a:97:e8:cf:a5:39:49:6c:65:8e:32:79:64:c8.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '10.0.90.24' (RSA) to the list of known hosts.

root@10.0.90.24's password:      --enter the root password for 10.0.90.24

Last login: Fri Apr  1 17:31:32 2016 from 10.0.90.8

Then check Kibana to see whether the login was captured:

The login events show up, so log collection is working.

3. Client Installation and Configuration (the servers whose logs are collected)

Note: the client IP is 10.0.90.25, with the httpd service installed and configured.

3.1 Install the JDK

# rpm -ivh jdk-8u77-linux-x64.rpm 

Preparing...                ########################################### [100%]

   1:jdk1.8.0_77            ########################################### [100%]

Unpacking JAR files...

        tools.jar...

        plugin.jar...

        javaws.jar...

        deploy.jar...

        rt.jar...

        jsse.jar...

        charsets.jar...

        localedata.jar...

        jfxrt.jar...

Verify the JDK installation:

#java -version

java version "1.8.0_77"

Java(TM) SE Runtime Environment (build 1.8.0_77-b03)

Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

This indicates the installation is OK.

3.2 Install Logstash

#tar xf logstash-2.2.0.tar.gz -C /usr/local/

#cd /usr/local/

#mv logstash-2.2.0 logstash

Create the configuration file:

#cd logstash

#mkdir config

#vim config/logstash-http.conf    --with the following content:

input {

  file {

        path => "/var/log/httpd/access_log"

        codec => "json"

}

}

output {

 elasticsearch {

        hosts => ["10.0.90.24:9200"]               --ELK server IP and port

        index => "http-access-log-%{+YYYY.MM.dd.HH}"    --custom index name

        workers => 5

        template_overwrite => true

}

}

This sends the httpd access log to ES, with the entries displayed through Kibana.
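One caveat: codec => "json" assumes every access-log line is valid JSON, which Apache's default combined format is not; non-JSON lines come through tagged _jsonparsefailure. A hedged option is to define a JSON LogFormat in the client's httpd configuration (the field names below are arbitrary choices):

```
# httpd.conf: write each access-log entry as a single JSON object per line.
LogFormat "{ \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \"clientip\": \"%a\", \"verb\": \"%m\", \"request\": \"%U%q\", \"response\": %>s, \"bytes\": %B, \"agent\": \"%{User-agent}i\" }" json_access
CustomLog logs/access_log json_access
```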

Check the configuration file for syntax errors; OK means there are none:

#/usr/local/logstash/bin/logstash --configtest -f logstash-http.conf

Configuration OK
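The %{+YYYY.MM.dd.HH} suffix in the index name is a Joda-time pattern evaluated against each event's @timestamp in UTC, so this configuration writes one index per hour. The name for the current hour can be previewed with date:

```shell
# Print the index name logstash would use for an event stamped right now (UTC).
date -u '+http-access-log-%Y.%m.%d.%H'
```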

3.3 Start Logstash

#nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-http.conf &

Check that the process is running, then go to the Kibana interface to create a new index.

Click "Settings" and fill in the new index name:

[screenshot: Kibana Settings page]

Then click "Create"; a confirmation screen indicates the index was created successfully.

[screenshot: index created]

Test shipping logs from the client to ES by echoing a test line into access_log, as follows:

#echo "This is a test line for http log" >> /var/log/httpd/access_log

Then visit the httpd service on 10.0.90.25 in a browser: http://10.0.90.25

Check the Kibana interface; the access-log entries appear there.

Reference:

http://baidu.blog.51cto.com/71938/1676798

Note: I have only just started studying ELK; please point out anything that is lacking. Thanks!

ELKstack guide (in Chinese): http://kibana.logstash.es/content/logstash/get_start/index.html
