A simple Elasticsearch + Kibana + Filebeat architecture

Platform: CentOS 7.4 x86_64


ELK consists of three components: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source distributed search engine. Its features include distribution, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects and parses logs and forwards the filtered data to Elasticsearch.
Kibana is an open-source, free tool that provides a friendly web UI on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Beats are open-source, lightweight data shippers for simple, well-defined collection scenarios; they are installed on the collection side and can send data to Logstash or Elasticsearch.
X-Pack is an extension for Elasticsearch that adds user-based security, cluster monitoring and alerting, report export, and graph exploration. It must be installed on both the Elasticsearch and Kibana nodes, and it is a paid product.

ELK is commonly used to collect, organize, and visualize the logs produced by applications.

If you want to get ELK up and running quickly, this article will help.
The simplest ELK architecture:  filebeat --> elasticsearch --> kibana
Install elasticsearch + kibana on the ELK server, and install filebeat on every server that produces logs.

1. Install Elasticsearch

yum install -y java-1.8.0-openjdk

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.1.rpm
rpm -ivh elasticsearch-6.0.1.rpm

vi /etc/elasticsearch/elasticsearch.yml
cluster.name: my-app
node.name: elk-1  
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200

systemctl daemon-reload
systemctl enable elasticsearch
systemctl restart elasticsearch
systemctl status elasticsearch

Open http://10.31.44.167:9200; if the JSON response ends with the tagline 'You Know, for Search', Elasticsearch has started successfully.
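The same check can be done from the shell; a healthy node returns a short JSON document whose cluster_name matches elasticsearch.yml and which ends with that tagline:

curl http://10.31.44.167:9200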

2. Install Kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.1-x86_64.rpm
rpm -ivh kibana-6.0.1-x86_64.rpm

vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
kibana.index: ".kibana"

systemctl enable kibana
systemctl start kibana
systemctl status kibana
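
As a quick sanity check that Kibana came up and is listening on the port configured above (both commands are standard CentOS 7 tools):

ss -lntp | grep 5601
curl -I http://10.31.44.167:5601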

3. Install Filebeat on the client (log-producing) servers

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.1-x86_64.rpm
rpm -ivh filebeat-6.0.1-x86_64.rpm

vi /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
tags: [webA]
setup.kibana:
  host: "10.31.44.167:5601"
output.elasticsearch:
  hosts: ["10.31.44.167:9200"]


systemctl enable filebeat
systemctl start filebeat
systemctl status filebeat
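
Filebeat 6.x can also validate its own configuration and the connection to the configured output; a quick check using the RPM's default config path:

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml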

Kibana management UI: http://10.31.44.167:5601

On the index pattern page, change the default logstash-* to filebeat-* and click Create.

Select 'Discover' in the left sidebar to see the collected logs.
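
If nothing appears in Discover, confirm that a filebeat-* index was actually created in Elasticsearch:

curl 'http://10.31.44.167:9200/_cat/indices?v'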

The full ELK pipeline is:  filebeat --> logstash --> elasticsearch --> kibana

Filebeat configuration
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
tags: [webA]
output.logstash:  
  hosts: ["10.31.44.167:5044"]


Logstash configuration
input {  
    beats {  
        port => 5044  
    }  
}  
output{  
    elasticsearch {  
        hosts => ["10.31.44.167:9200"]  
        index => "test"  
    }  
}  
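
With the RPM install this pipeline definition would normally live in a file under /etc/logstash/conf.d/ (the file name beats.conf below is just an example) and can be syntax-checked before restarting the service:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats.conf --config.test_and_exit
systemctl restart logstash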

Then, in Kibana, add an index pattern matching the index name configured in Logstash (test in the example above); once the corresponding log records appear, everything is working.

To secure the transport between Filebeat and Logstash, the connection can be encrypted as follows:

Use openssl to generate a certificate in the current directory on both the Filebeat host and the Logstash host:
openssl req -subj '/CN=10.31.44.167/' -days $((100*365)) -batch -nodes -newkey rsa:2048 -keyout ./filebeat.key -out ./filebeat.crt
Move the generated .crt files to /etc/pki/tls/certs/ and the .key files to /etc/pki/tls/private/.
Finally, copy each .crt to the other side, so that Filebeat has logstash.crt and Logstash has filebeat.crt.
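
On the Logstash host the command follows the same pattern; the output file names below simply mirror the configuration further down:

openssl req -subj '/CN=10.31.44.167/' -days $((100*365)) -batch -nodes -newkey rsa:2048 -keyout ./logstash.key -out ./logstash.crt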

Filebeat configuration
output.logstash:
  hosts: ["10.31.44.167:5044"]
  ## the Logstash server's self-signed certificate
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
  ssl.certificate: "/etc/pki/tls/certs/filebeat.crt"
  ssl.key: "/etc/pki/tls/private/filebeat.key"

Logstash configuration

input {
    beats {
        port => 5044
        ssl => true
        ssl_certificate_authorities => ["/etc/pki/tls/certs/filebeat.crt"]
        ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
        ssl_key => "/etc/pki/tls/private/logstash.key"
        ssl_verify_mode => "force_peer"
    }
}
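
To verify the TLS handshake end to end, one option is to run openssl s_client against the Beats port from the Filebeat host, using the certificate paths configured above:

openssl s_client -connect 10.31.44.167:5044 -CAfile /etc/pki/tls/certs/logstash.crt -cert /etc/pki/tls/certs/filebeat.crt -key /etc/pki/tls/private/filebeat.key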
