ELK 7.3 Hands-On Installation and Configuration Guide

 

Overall architecture

 
1. Environment Preparation

1.1 Environment planning

192.168.43.16  jdk, elasticsearch-master, logstash, kibana
192.168.43.17  jdk, elasticsearch-node1
192.168.43.18  jdk, elasticsearch-node2
192.168.43.19  linux, filebeat
1.2 Install the JDK

Both elasticsearch and logstash require a JDK; install jdk-12.0.2 on each of the three elasticsearch machines.

# Extract
tar -zxvf jdk-12.0.2_linux-x64_bin.tar.gz -C /usr/

# Set environment variables
vim /etc/profile
export JAVA_HOME=/usr/jdk-12.0.2
# JDK 9+ no longer ships a jre/ subdirectory, so setting JRE_HOME or a
# jar-based CLASSPATH is unnecessary
export PATH=$JAVA_HOME/bin:$PATH

# Apply the changes
source /etc/profile
1.3 OS tuning

# Edit the system limits file
vim /etc/security/limits.conf

# Append:
* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096

# Edit:
vim /etc/security/limits.d/20-nproc.conf

# Adjust to:
*          soft    nproc     4096
root       soft    nproc     unlimited

vim /etc/sysctl.conf
# Append at the end:
vm.max_map_count=262144
fs.file-max=655360

# Apply and check the result with sysctl -p
sysctl -p
1.4 Configure hosts

vim /etc/hosts
192.168.43.16 elk-master-node
192.168.43.17 elk-data-node1
192.168.43.18 elk-data-node2
1.5 Disable the firewall and SELinux

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
1.6 Create the elk user

groupadd elk
useradd -g elk elk
1.7 Plan the installation directory

mkdir -p /home/app/elk
chown -R elk:elk /home/app/elk
1.8 Download the packages (the data nodes only install elasticsearch)

Extract all of the archives into /home/app/elk:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.2-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.3.2.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.3.2-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz -C /home/app/elk && \
tar -zxvf logstash-7.3.2.tar.gz -C /home/app/elk && \
tar -zxvf kibana-7.3.2-linux-x86_64.tar.gz -C /home/app/elk

2. Install Elasticsearch

2.1 Configure elasticsearch (switch to the elk user)

Create the Elasticsearch data directory: mkdir -p /home/app/elk/elasticsearch-7.3.2/data

Create the Elasticsearch log directory: mkdir -p /home/app/elk/elasticsearch-7.3.2/logs

Master node configuration: vim /home/app/elk/elasticsearch-7.3.2/config/elasticsearch.yml

# Cluster name
cluster.name: es
# Node name
node.name: es-master
# Data directory (create it beforehand)
path.data: /home/app/elk/elasticsearch-7.3.2/data
# Log directory (create it beforehand)
path.logs: /home/app/elk/elasticsearch-7.3.2/logs
# Node IP
network.host: 192.168.43.16
# TCP transport port
transport.tcp.port: 9300
# HTTP port
http.port: 9200
# Seed node list; the master's IP address must appear in seed_hosts
discovery.seed_hosts: ["192.168.43.16:9300","192.168.43.17:9300","192.168.43.18:9300"]
# Initial master-eligible nodes; with several masters, list each of them here
cluster.initial_master_nodes: ["192.168.43.16:9300"]

# Node roles
# Eligible to be elected master
node.master: true
# Stores data
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false

# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"

Data node configuration on 192.168.43.17: vim /home/app/elk/elasticsearch-7.3.2/config/elasticsearch.yml

# Cluster name
cluster.name: es
# Node name
node.name: es-data1
# Data directory (create it beforehand)
path.data: /home/app/elk/elasticsearch-7.3.2/data
# Log directory (create it beforehand)
path.logs: /home/app/elk/elasticsearch-7.3.2/logs
# Node IP
network.host: 192.168.43.17
# TCP transport port
transport.tcp.port: 9300
# HTTP port
http.port: 9200
# Seed node list; the master's IP address must appear in seed_hosts
discovery.seed_hosts: ["192.168.43.16:9300","192.168.43.17:9300","192.168.43.18:9300"]
# Initial master-eligible nodes; with several masters, list each of them here
cluster.initial_master_nodes: ["192.168.43.16:9300"]

# Node roles
# Not eligible to be elected master
node.master: false
# Stores data
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false

# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
Data node configuration on 192.168.43.18: vim /home/app/elk/elasticsearch-7.3.2/config/elasticsearch.yml

# Cluster name
cluster.name: es
# Node name
node.name: es-data2
# Data directory (create it beforehand)
path.data: /home/app/elk/elasticsearch-7.3.2/data
# Log directory (create it beforehand)
path.logs: /home/app/elk/elasticsearch-7.3.2/logs
# Node IP
network.host: 192.168.43.18
# TCP transport port
transport.tcp.port: 9300
# HTTP port
http.port: 9200
# Seed node list; the master's IP address must appear in seed_hosts
discovery.seed_hosts: ["192.168.43.16:9300","192.168.43.17:9300","192.168.43.18:9300"]
# Initial master-eligible nodes; with several masters, list each of them here
cluster.initial_master_nodes: ["192.168.43.16:9300"]

# Node roles
# Not eligible to be elected master
node.master: false
# Stores data
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false

# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"

2.2 Start elasticsearch

sh /home/app/elk/elasticsearch-7.3.2/bin/elasticsearch -d

2.3 Health check

curl -X GET 'http://192.168.43.16:9200/_cluster/health?pretty'

[root@localhost elk]# curl -X GET 'http://192.168.43.16:9200/_cluster/health?pretty'
{
  "cluster_name" : "es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
# status=green means the cluster is healthy
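The status field is the part worth checking in scripts: green means all primary and replica shards are allocated, yellow means all primaries are allocated but some replicas are not, and red means at least one primary is unassigned. A minimal sketch of interpreting the response (the variable names and the health criterion are mine, not part of the stack):

```python
import json

# Interpret the _cluster/health response. The status field is the key signal:
#   green  - all primary and replica shards are allocated
#   yellow - all primaries allocated, some replicas unassigned
#   red    - at least one primary shard is unassigned
response = json.loads("""{
  "cluster_name": "es",
  "status": "green",
  "number_of_nodes": 3,
  "active_shards_percent_as_number": 100.0
}""")

healthy = response["status"] == "green" and response["number_of_nodes"] == 3
print("cluster healthy:", healthy)  # cluster healthy: True
```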

3. Install Kibana

3.1 Edit the configuration

cd /home/app/elk/kibana-7.3.2-linux-x86_64/config
vim kibana.yml
# Kibana port
server.port: 5601
# Listening IP
server.host: "192.168.43.16"
# Elasticsearch address; for a cluster, point at the master node's IP
elasticsearch.hosts: "http://192.168.43.16:9200/"
# Kibana log file path; without it, Kibana logs to messages by default
logging.dest: /home/app/elk/kibana-7.3.2-linux-x86_64/logs/kibana.log

3.2 Start Kibana

nohup /home/app/elk/kibana-7.3.2-linux-x86_64/bin/kibana &

4. Install Filebeat (192.168.43.19 already runs a jumpserver service)

In this exercise we install filebeat on 192.168.43.19 to collect the nginx access log and error log separately. Configurations for shipping logs in JSON format can be found online; here, to practice grok, we ship the raw log format instead.

4.1 Download filebeat

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.2-linux-x86_64.tar.gz
mkdir -p /opt/software
tar -zxvf filebeat-7.3.2-linux-x86_64.tar.gz -C /opt/software

4.2 Configure filebeat.yml

vim /opt/software/filebeat-7.3.2/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
  fields:
    log_source: nginx-access
- type: log
  paths:
    - /var/log/nginx/error.log
  fields:
    log_source: nginx-error
#============================== Dashboards =====================================
setup.dashboards.enabled: false
#============================== Kibana =====================================
# Kibana dashboard endpoint
setup.kibana:
  host: "192.168.43.16:5601"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.43.16:5044"]
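The custom `fields` section attaches a `log_source` key to every event filebeat ships, and that key is what the Logstash pipeline in the next section branches on. A rough sketch of that routing decision (the dicts are illustrative, not the actual Beats wire format):

```python
# Illustrative sketch: each shipped event carries fields.log_source,
# and the downstream conditionals pick an index family from it.
def route(event):
    source = event.get("fields", {}).get("log_source")
    if source == "nginx-access":
        return "nginx-access"
    if source == "nginx-error":
        return "nginx-error"
    return None  # unmatched events fall through to stdout only

access_event = {"message": "raw nginx access line",
                "fields": {"log_source": "nginx-access"}}
error_event = {"message": "raw nginx error line",
               "fields": {"log_source": "nginx-error"}}
print(route(access_event))  # nginx-access
print(route(error_event))   # nginx-error
```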

4.3 Start filebeat

cd /opt/software/filebeat-7.3.2
nohup ./filebeat -c filebeat.yml &

5. Install Logstash

5.1 Create logstash.conf

vim /home/app/elk/logstash-7.3.2/config/logstash.conf
input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][log_source] == "nginx-access" {
    grok {
      match => {
        "message" => '%{IP:clientip}\s*%{DATA}\s*%{DATA}\s*\[%{HTTPDATE:requesttime}\]\s*"%{WORD:requesttype}.*?"\s*%{NUMBER:status:int}\s*%{NUMBER:bytes_read:int}\s*"%{DATA:requesturl}"\s*%{QS:ua}'
      }
      overwrite => ["message"]
    }
  }
  if [fields][log_source] == "nginx-error" {
    grok {
      match => {
        "message" => '(?<time>.*?)\s*\[%{LOGLEVEL:loglevel}\]\s*%{DATA}:\s*%{DATA:errorinfo},\s*%{WORD}:\s*%{IP:clientip},\s*%{WORD}:%{DATA:server},\s*%{WORD}:\s*%{QS:request},\s*%{WORD}:\s*%{QS:upstream},\s*%{WORD}:\s*"%{IP:hostip}",\s*%{WORD}:\s*%{QS:referrer}'
      }
      overwrite => ["message"]
    }
  }
}
output {
  if [fields][log_source] == "nginx-access" {
    elasticsearch {
      hosts => ["http://192.168.43.16:9200"]
      action => "index"
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
  if [fields][log_source] == "nginx-error" {
    elasticsearch {
      hosts => ["http://192.168.43.16:9200"]
      action => "index"
      index => "nginx-error-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}
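Grok patterns ultimately compile down to regular expressions. As a rough sketch of what the access-log pattern extracts, the Python regex below uses simplified stand-ins for %{IP}, %{HTTPDATE}, %{QS}, etc., and the sample log line is made up for illustration:

```python
import re

# Simplified Python analogue of the access-log grok pattern:
# clientip, requesttime, requesttype, status, bytes_read, requesturl, ua
ACCESS_RE = re.compile(
    r'(?P<clientip>\d+\.\d+\.\d+\.\d+)\s+\S+\s+\S+\s+'   # %{IP} - -
    r'\[(?P<requesttime>[^\]]+)\]\s+'                    # [%{HTTPDATE}]
    r'"(?P<requesttype>\w+).*?"\s+'                      # "%{WORD}...?"
    r'(?P<status>\d+)\s+(?P<bytes_read>\d+)\s+'          # %{NUMBER} %{NUMBER}
    r'"(?P<requesturl>[^"]*)"\s+'                        # "%{DATA}"
    r'(?P<ua>"[^"]*")'                                   # %{QS}
)

line = ('192.168.43.1 - - [26/Sep/2019:14:30:00 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"')
m = ACCESS_RE.match(line)
fields = m.groupdict()
print(fields["clientip"], fields["requesttype"], fields["status"])
# 192.168.43.1 GET 200
```

Note that grok additionally casts status and bytes_read to integers via the `:int` suffix, which plain regex groups do not do.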

5.2 Start logstash

/home/app/elk/logstash-7.3.2/bin/logstash -f /home/app/elk/logstash-7.3.2/config/logstash.conf
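The %{+YYYY.MM.dd} suffix in the index names above rolls events into one index per day. A quick illustration of the resulting names, using Python date formatting as a stand-in for Logstash's date sprintf:

```python
from datetime import date

# "nginx-access-%{+YYYY.MM.dd}" produces one index per day, e.g.:
def daily_index(prefix, day):
    return prefix + "-" + day.strftime("%Y.%m.%d")

print(daily_index("nginx-access", date(2019, 9, 26)))  # nginx-access-2019.09.26
```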

6. Log in to the Kibana platform

Go to Management -> Index Management; the nginx access-log and error-log data should now be visible.

Next, create index patterns for the access log and the error log; once they are created, click Discover to see the log data.

  nginx-access

   nginx-error

 

 

References:

http://blog.leanote.com/post/tanyulonglong@126.com/%E5%AE%98%E6%96%B9%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3-2

https://elkguide.elasticsearch.cn/logstash/plugins/filter/mutate.html

 

Original source: https://www.cnblogs.com/jiaosf/p/11604274.html
