Deploying and Using ELK

1. Set up 8 virtual machines: es[1:5], kibana, logstash, web, with IPs 192.168.1.61-68.

2. Configure the IP addresses and hostnames.

3. Deploy elasticsearch with ansible so that its web interface is reachable. The ansible playbook (yml) is as follows:

---
- hosts: es
  remote_user: root
  tasks:
    - copy:
        src: local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: root
        group: root
        mode: 0644
    - name: install elasticsearch
      yum:
        name: java-1.8.0-openjdk,elasticsearch
        state: installed
    - template:
        src: elasticsearch.yml
        dest: /etc/elasticsearch/elasticsearch.yml
        owner: root
        group: root
        mode: 0644
      notify: reload elasticsearch
      tags: esconf
    - service:
        name: elasticsearch
        enabled: yes
  handlers:
    - name: reload elasticsearch
      service:
        name: elasticsearch
        state: restarted

4. Below is the /etc/hosts file to edit:

192.168.1.61 es1
192.168.1.62 es2
192.168.1.63 es3
192.168.1.64 es4
192.168.1.65 es5
192.168.1.66 kibana
192.168.1.67 logstash
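Since the mapping is regular (192.168.1.6N pairs with esN), the es-node entries can also be generated rather than typed by hand; a small sketch:

```shell
# Generate the es1..es5 lines of the hosts file (IP layout from step 1)
for i in 1 2 3 4 5; do
  printf '192.168.1.6%s es%s\n' "$i" "$i"
done
# kibana and logstash are appended as fixed entries
printf '192.168.1.66 kibana\n192.168.1.67 logstash\n'
```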

5. The yum repository just needs to contain the required packages and their dependencies.

6. Below is the elasticsearch.yml to edit:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: nsd1810
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: {{ansible_hostname}}
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

7. Elasticsearch is now set up; verify it by visiting http://192.168.1.61:9200.
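The same check works from the command line on any host that can reach a node (curl assumed installed); the node should answer with a JSON document, and the cluster-health endpoint should report all five nodes:

```
[root@room9pc01 ~]# curl http://192.168.1.61:9200/                          # node name/version as JSON
[root@room9pc01 ~]# curl 'http://192.168.1.61:9200/_cluster/health?pretty'  # look for "number_of_nodes" : 5
```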

8. Deploying plugins

A plugin can only be used on the machine it is installed on (here they are installed on es5).

1) Plugins can be installed directly from a remote URI:

[root@es5 ~]# cd /usr/share/elasticsearch/bin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-head-master.zip       # install the head plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-kopf-master.zip       # install the kopf plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/bigdesk-master.zip                  # install the bigdesk plugin
[root@es5 bin]# ./plugin list                               # list installed plugins
Installed plugins in /usr/share/elasticsearch/plugins:
    - head
    - kopf
    - bigdesk

2) Access the head plugin:

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/head

3) Access the kopf plugin:

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/kopf

4) Access the bigdesk plugin:

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/bigdesk

 

9. Install kibana

1) On another host, set the IP to 192.168.1.66, configure the yum repository, and change the hostname.

2) Install kibana:

[root@kibana ~]# yum -y install kibana
[root@kibana ~]# rpm -qc kibana
/opt/kibana/config/kibana.yml
[root@kibana ~]# vim /opt/kibana/config/kibana.yml
2  server.port: 5601
   # changing the port to 80 lets kibana start, but ss then shows no listener on 80;
   # the port is fixed inside the service, so only 5601 can be used
5  server.host: "0.0.0.0"                        # address the server listens on
15 elasticsearch.url: http://192.168.1.61:9200   # where to query from; any node in the cluster works
23 kibana.index: ".kibana"                       # the index kibana creates for itself
26 kibana.defaultAppId: "discover"               # default page when kibana opens: discover
53 elasticsearch.pingTimeout: 1500               # ping-check timeout
57 elasticsearch.requestTimeout: 30000           # request timeout
64 elasticsearch.startupTimeout: 5000            # startup timeout
[root@kibana ~]# systemctl restart kibana
[root@kibana ~]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /usr/lib/systemd/system/kibana.service.
[root@kibana ~]# ss -antup | grep 5601           # check the listening port
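The vim edits above can also be scripted. A minimal sketch with sed, run here against a throw-away copy so nothing real is touched; the commented stock lines are an assumption about what the shipped kibana.yml looks like:

```shell
# Work on a temp copy standing in for /opt/kibana/config/kibana.yml
f=$(mktemp)
cat > "$f" <<'EOF'
# server.port: 5601
# server.host: "localhost"
# elasticsearch.url: "http://localhost:9200"
EOF
# Uncomment and set the three keys edited in the listing above
sed -i \
  -e 's|^# *server\.port:.*|server.port: 5601|' \
  -e 's|^# *server\.host:.*|server.host: "0.0.0.0"|' \
  -e 's|^# *elasticsearch\.url:.*|elasticsearch.url: "http://192.168.1.61:9200"|' \
  "$f"
cat "$f"
```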

3) Access kibana in a browser:

[root@kibana ~]# firefox 192.168.1.66:5601

4) Click Status to check the installation; if everything shows green check marks, it succeeded.

5) Accessing the cluster through the head plugin now shows a .kibana index, as in the figure:

[root@es5 ~]# firefox http://192.168.1.65:9200/_plugin/head/    # the plugins are installed on es5

10. Install the JDK and logstash

[root@logstash ~]# yum -y install java-1.8.0-openjdk
[root@logstash ~]# yum -y install logstash
[root@logstash ~]# java -version
11. Write your own logstash.conf; reference code is below. The input, output, and filter plugins are documented on the elastic site: https://www.elastic.co/guide/en/logstash/current/index.html
    Options marked "yes" there are required; each option's id and an example are listed together on the plugin's page.
    Note that this configuration contains a number of test inputs/outputs and also invokes a predefined pattern (macro). To find the pattern file:

[root@logstash ~]# cd /opt/logstash/vendor/bundle/ \
jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/
[root@logstash ~]# vim grok-patterns        # search for COMBINEDAPACHELOG
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

To test it, write data to /tmp/a.log, or send data with echo > /dev/tcp/192.168.1.67/8888. The syslog input can also be used: add to /etc/rsyslog.conf

local0.info @192.168.1.67:514
# either one @ or two @@ works: one @ means udp, two @@ means tcp

Then run logger -p local0.info -t nds "001 elk". All of the operations above are collected and processed by logstash, and the parsed results can be seen. The pattern is invoked through grok (the regex is complicated and painful to write by hand, so the predefined pattern is used).
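As a rough illustration of what COMBINEDAPACHELOG matches, the quoted request field of an Apache combined-format log line can be picked out with a plain regex (the log line here is invented for the demo):

```shell
# A made-up Apache "combined" access-log line
line='192.168.1.254 - - [10/Jan/2019:09:30:00 +0800] "GET /index.html HTTP/1.1" 200 4897 "-" "curl/7.29.0"'
# Extract the quoted request, roughly what grok captures as the request part
echo "$line" | grep -oE '"[A-Z]+ [^"]+ HTTP/[0-9.]+"'
```

grok does the same kind of matching but names every captured field (clientip, response, referrer, agent, ...), which is why the predefined pattern is preferred over hand-written regexes.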

input{
    stdin{ codec => "json" }
    beats{
        port => 5044
    }
    file {
        path => ["/tmp/a.log","/tmp/b.log"]
        sincedb_path => "/var/lib/logstash/sincedb"
        start_position => "beginning"
        type => "testlog"
    }
    tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
    }
    udp {
        host => "0.0.0.0"
        port => "8888"
        type => "udplog"
    }
    syslog {
        type => "syslog"
    }
}
filter{
    grok {
        match => ["message", "%{COMBINEDAPACHELOG}"]
    }
}
output{
    stdout{
        codec => "rubydebug"
    }
    elasticsearch {
        hosts => ["es1", "es2", "es3"]
        index => "weblog"
        flush_size => 2000
        idle_flush_time => 10
    }
}

12. Step two: install the Apache service and use filebeat to collect the Apache server's logs into elasticsearch.

1) Install filebeat on the host where Apache was installed earlier:

[root@web ~]# yum -y install filebeat
[root@web ~]# vim /etc/filebeat/filebeat.yml
paths:
    - /var/log/httpd/access_log      # path of the log; "- " (dash plus space) is yml list syntax
document_type: apachelog             # document type
#elasticsearch:                      # comment this out
#  hosts: ["localhost:9200"]         # comment this out
logstash:                            # uncomment
  hosts: ["192.168.1.67:5044"]       # uncomment; ip of the logstash host
[root@web ~]# systemctl start filebeat
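Put together, the touched part of filebeat.yml ends up looking roughly like this (paths and values from the listing above; the exact surrounding keys are an assumption and differ between filebeat versions):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/httpd/access_log   # Apache access log to ship
      document_type: apachelog        # becomes the "type" field seen by logstash
output:
  #elasticsearch:                     # direct ES output stays commented out
  #  hosts: ["localhost:9200"]
  logstash:
    hosts: ["192.168.1.67:5044"]      # the logstash host's beats port
```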

13. Then add the beats port configuration to /etc/logstash/logstash.conf (filebeat is a small shipper for logstash: with it, each server can send its data to the logstash server automatically without installing logstash itself).

14. With everything configured, run on the logstash host:

[root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf

to start parsing the data. netstat -antup | grep 5044 now finds two entries for port 5044, while netstat -lntup | grep 5044 finds only one: -lntup shows only the listening socket, whereas -antup also shows the established filebeat connection.

15. After the web server is accessed, logstash collects the data; the elastic page then shows the weblog index, and entering weblog on the kibana page shows a bar chart of the visits.

The weblog chart looks similar to the one below, though the names differ.

The .kibana view is shown below.

With too little data no pie chart can be drawn; refresh the web page a number of times, otherwise there may be too little data for anything to show up in the charts.

 

The logstash.conf file above parses httpd logs. For nginx or another service you would invoke that service's pattern instead; an if conditional can select the branch, as in the following code, which is meant to give some ideas:

input{
    stdin{ codec => "json" }
    beats{
        port => 5044
    }
    file {
        path => ["/tmp/a.log","/tmp/b.log"]
        sincedb_path => "/var/lib/logstash/sincedb"
        start_position => "beginning"
        type => "testlog"
    }
    tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
    }
    udp {
        host => "0.0.0.0"
        port => "8888"
        type => "udplog"
    }
    syslog {
        type => "syslog"
    }
}
filter{
    if [type] == "httplog" {
        grok {
            match => ["message", "%{COMBINEDAPACHELOG}"]
        }
    }
}
output{
    stdout{
        codec => "rubydebug"
    }
    if [type] == "httplog" {
        elasticsearch {
            hosts => ["es1", "es2", "es3"]
            index => "weblog"
            flush_size => 2000
            idle_flush_time => 10
        }
    }
}

The overall ELK flow: a client visits the web server; filebeat on the web server sends the data to logstash; logstash parses it and forwards it to elasticsearch for indexing and storage; kibana then pulls the data from elasticsearch and presents it as a web page with charts.
