1.Prerequisites
1.Virtual machine environment overview
Linux version: Linux localhost.localdomain 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
IP address: 192.168.1.4 (for the VM NAT setup, see my CSDN blog: https://blog.csdn.net/yanshaoshuai/article/details/97689891)
Java environment: java 12.0.2 (for Java installation, see my CSDN blog: https://blog.csdn.net/yanshaoshuai/article/details/87868286)
2.User and permission setup
The ELK products cannot be run as root, so first create a regular user, and give that user at least execute permission on the program directories, permission to modify the configuration files, and read/write permission on the files the programs generate.
# Create the user and group
[root@localhost gz]# groupadd es_group
[root@localhost gz]# useradd es_user
[root@localhost gz]# passwd es_user
Changing password for user es_user.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
# Add the user to the group
[root@localhost gz]# usermod -g es_group es_user
# Change the directory owner to the new user
[root@localhost es]# chown -R es_user:es_group /opt/es
2.Elasticsearch 7.2 installation and configuration
Download link: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
Extract: switch to the es_user user created earlier and run the following command
[es_user@localhost es]$ tar -xzvf ./gz/elasticsearch-7.2.0-linux-x86_64.tar.gz -C .
Switch to root and edit the Elasticsearch configuration file:
[root@localhost ~]# vim /opt/es/elasticsearch-7.2.0/config/elasticsearch.yml

# elasticsearch.yml contents (only the settings changed from the defaults):

# Path to directory where to store the data (separate multiple locations by comma):
path.data: /opt/es/elasticsearch-7.2.0/data

# Path to log files:
path.logs: /opt/es/elasticsearch-7.2.0/logs

# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 192.168.1.4

# Set a custom port for HTTP:
http.port: 9200

# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["192.168.1.4"]
Switch back to es_user and start Elasticsearch:
./elasticsearch-7.2.0/bin/elasticsearch
Startup errors and how to handle them. ES reports three bootstrap check failures:
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max number of threads [3829] for user [elk] is too low, increase to at least [4096]
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Edit the following files as root:
Raise the max open files limit in /etc/security/limits.conf:
* - nofile 65536
Raise the max user processes limit in /etc/security/limits.d/20-nproc.conf:
* - nproc 10240
Adjust the kernel parameter in /etc/sysctl.conf:
vm.max_map_count = 262144
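Once the changes are in place, the three values can be read back from a quick script. Below is a minimal sketch using Python's stdlib `resource` module (Linux only; the target values are the ones the bootstrap checks above demand, and the limits.conf settings only apply to a fresh login session):

```python
# Read back the three limits that the ES bootstrap checks complain about.
# Linux only; run as the user that will start Elasticsearch, in a fresh session.
import resource

nofile_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print("max open files:", nofile_soft)      # should be >= 65536 after the change

nproc_soft, _ = resource.getrlimit(resource.RLIMIT_NPROC)
print("max user processes:", nproc_soft)   # should be >= 4096 after the change

with open("/proc/sys/vm/max_map_count") as f:
    max_map_count = int(f.read())
print("vm.max_map_count:", max_map_count)  # should be >= 262144 after the change
```

The same numbers are visible from the shell via `ulimit -n`, `ulimit -u`, and `cat /proc/sys/vm/max_map_count`.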
After making these changes, run sysctl -p as root to load the kernel parameter, log in again as es_user so the new limits take effect, and start Elasticsearch again.
Test that startup succeeded:
[root@localhost ~]# curl 192.168.1.4:9200
{
  "name" : "localhost.localdomain",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "0cwX-EgVR8W-61tlZV7cXg",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
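The banner above is plain JSON, so any field can be pulled out of it programmatically. A minimal sketch with Python's stdlib `json` module, applied offline to a trimmed copy of the response shown above:

```python
import json

# Trimmed copy of the curl response shown above (same structure, fewer fields).
banner = """
{
  "name" : "localhost.localdomain",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.2.0",
    "lucene_version" : "8.0.0"
  },
  "tagline" : "You Know, for Search"
}
"""

info = json.loads(banner)
print(info["version"]["number"])  # -> 7.2.0
print(info["tagline"])            # -> You Know, for Search
```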
To start it in the background, add the -d flag.
3.Kibana 7.2 installation and configuration
Download link: https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
Extract: switch to the es_user user created earlier and run the following command
tar -xzvf ./gz/kibana-7.2.0-linux-x86_64.tar.gz -C ./
Edit the Kibana configuration file:
vim ./kibana-7.2.0-linux-x86_64/config/kibana.yml

# kibana.yml contents (only the settings changed from the defaults):

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.1.4"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.1.4:9200"]
Open port 5601 in the firewall:
[root@localhost ~]# firewall-cmd --zone=public --add-port=5601/tcp --permanent
success
[root@localhost ~]# firewall-cmd --reload
success
Start Kibana:
./kibana-7.2.0-linux-x86_64/bin/kibana
Access Kibana remotely:
Enter 192.168.1.4:5601 in a browser to reach Kibana.
Choose Explore on my own, click the arrow at the bottom to expand the Kibana sidebar, then choose Dev Tools --> Console; from there you can operate on ES.
Basic ES operations:
# Search across all indices
GET _search
{
  "query": {
    "match_all": {}
  }
}

# Query all documents in an index
GET /shijiange/_doc/_search?q=*

# Delete an index
DELETE /shijiange

# Index a document (creates the index if it does not exist)
PUT /shijiange/_doc/1
{
  "name": "yanshaoshuai",
  "age": 19
}

# Overwrite (full replace)
PUT /shijiange/_doc/1
{
  "age": 19
}

# Modify (partial update)
POST /shijiange/_doc/1/_update
{
  "doc": {
    "name": "yan1"
  }
}
After typing a statement into the Console, click the green arrow next to it to execute it.
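The difference between the "overwrite" PUT and the "_update" POST above is easy to miss: a PUT to /shijiange/_doc/1 replaces the entire stored document, while _update merges fields into it. A minimal sketch of those semantics, with a plain dict standing in for the stored _source:

```python
# The stored _source of /shijiange/_doc/1 after the first PUT:
source = {"name": "yanshaoshuai", "age": 19}

# PUT /shijiange/_doc/1 {"age": 19} replaces the whole document,
# so the "name" field is gone afterwards:
source = {"age": 19}
print(source)  # -> {'age': 19}

# POST /shijiange/_doc/1/_update {"doc": {"name": "yan1"}} merges the
# "doc" fields into the existing document instead of replacing it:
source.update({"name": "yan1"})
print(source)  # -> {'age': 19, 'name': 'yan1'}
```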
4.Logstash 7.2 installation and configuration
Download link: https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz
Extract: switch to the es_user user and run:
tar -xzvf /opt/es/gz/logstash-7.2.0.tar.gz -C /opt/es
Simple startup
Write a minimal configuration file:
# Configuration for a stdin --> logstash --> stdout pipeline
vim /opt/es/logstash-7.2.0/config/logstash_std.conf

# logstash_std.conf contents
input {
  stdin {}
}
output {
  stdout {
    codec => rubydebug
  }
}
Start Logstash with the logstash_std.conf configuration:
/opt/es/logstash-7.2.0/bin/logstash -f ./logstash_std.conf
After startup the cursor sits at the end of the startup log; anything typed there is picked up by Logstash, and the processed event is printed to stdout:
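With the rubydebug codec, each typed line is printed back as a structured event rather than raw text. A sketch of the event shape, built with a hypothetical `make_event` helper (the `@timestamp`, `@version`, `host`, and `message` fields are Logstash's standard event fields):

```python
from datetime import datetime, timezone

def make_event(line, hostname):
    """Build a dict shaped like the event logstash prints for one stdin line."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "@version": "1",
        "host": hostname,
        "message": line,
    }

event = make_event("hello elk", "localhost.localdomain")
print(event["message"])  # -> hello elk
```

Filters configured between input and output would add or rewrite fields on this same structure before it is printed.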
Reading a log file and shipping it to Elasticsearch
Write the configuration file:
# /opt/es/example.log --> logstash --> elasticsearch --> kibana
vim ./logstash_elk.conf

# logstash_elk.conf contents
input {
  file {
    path => "/opt/es/example.log"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.4:9200"]
  }
}
Start with logstash_elk.conf (with Elasticsearch already running):
/opt/es/logstash-7.2.0/bin/logstash -f ./logstash_elk.conf
Once it is up, test it as follows:
echo 'test the elk config' >> /opt/es/example.log
Wait a few seconds, then query all data in Kibana:
This confirms the ELK configuration works.
However, running a full Logstash instance on every machine just to collect logs is heavyweight. The usual pattern is to run the much lighter Filebeat on each server to do the simple log collection, have all servers ship their logs to Redis, and use Redis's message-queue capability to serialize them. A single Logstash then consumes the logs from the Redis queue, uses its rich input-processing and formatting capabilities to turn them into an easy-to-read form, and sends them on to Elasticsearch for storage. The next sections cover the Filebeat and Redis pieces.
5.Filebeat 7.2 installation and configuration
Filebeat 7.2 download link: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz
Extract: switch to the es_user user and run:
tar -xzvf filebeat-7.2.0-linux-x86_64.tar.gz -C ../
Configure Filebeat to read the log file and send it to Logstash:
Logstash configuration file:
vim ./logstash_beat.conf

# logstash_beat.conf contents
input {
  beats {
    host => '192.168.1.4'
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.4:9200"]
  }
}
Start Logstash with logstash_beat.conf:
/opt/es/logstash-7.2.0/bin/logstash -f ./logstash_beat.conf
Edit the Filebeat configuration:
vim filebeat-7.2.0-linux-x86_64/filebeat.yml

# filebeat.yml contents
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /opt/es/example.log

output:
  logstash:
    hosts: ["192.168.1.4:5044"]
Start Filebeat:
./filebeat-7.2.0-linux-x86_64/filebeat -e -c ./filebeat-7.2.0-linux-x86_64/filebeat.yml
Test whether the setup works:
First delete the earlier index; otherwise the new documents conflict with the index created when Logstash wrote to Elasticsearch directly, producing the error response Can't get text on a START_OBJECT at 1:446.
Run in Kibana:
DELETE /logstash-2019.08.18-000001
Then run on the command line:
echo 'file-->logstash-->elasticsearch-->kibana' >> example.log
A few seconds later, check the Elasticsearch index contents in Kibana:
This shows that the newly added Filebeat stage works together with the ELK stack.
The final step is to bring Redis into the ELK stack and use its message-queue capability to let multiple Filebeat instances ship logs to ELK at the same time.
6.Adding Redis
For detailed Redis installation and configuration steps, see my CSDN blog: https://blog.csdn.net/yanshaoshuai/article/details/97618991
With Redis running on port 6379, configure Filebeat as follows:
vim ./filebeat-7.2.0-linux-x86_64/filebeat.yml

# filebeat.yml contents
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /opt/es/example.log
  fields:
    type: example.log
  fields_under_root: true

output:
  redis:
    hosts: ["192.168.1.4:6379"]
    key: 'queue'
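The fields / fields_under_root pair deserves a note: custom fields are normally nested under a "fields" key on each event, and fields_under_root: true promotes them to the top level so downstream consumers can filter on them directly. A sketch of the difference (plain dicts stand in for Filebeat events; the "message" value is illustrative):

```python
# One raw line read from /opt/es/example.log:
event = {"message": "test line"}
custom = {"type": "example.log"}   # the fields: block in the config above

nested = {**event, "fields": custom}  # fields_under_root: false (the default)
flat = {**event, **custom}            # fields_under_root: true (as configured)

print(nested)  # -> {'message': 'test line', 'fields': {'type': 'example.log'}}
print(flat)    # -> {'message': 'test line', 'type': 'example.log'}
```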
Create a new logstash_redis.conf in Logstash's config directory with the following contents:
# logstash_redis.conf contents
input {
  redis {
    host => '192.168.1.4'
    port => 6379
    key => "queue"
    data_type => "list"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.4:9200"]
  }
}
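The queueing here is just a Redis list: Filebeat appends events to the list named "queue", and the redis input with data_type => "list" pops them off the other end, so delivery is first-in, first-out. A minimal sketch of that pattern, with a `deque` standing in for the Redis list (the two event strings are illustrative):

```python
from collections import deque

queue = deque()  # stands in for the redis list named "queue"

# Filebeat pushes events onto the right end of the list (RPUSH):
queue.append('{"message": "line from host A"}')
queue.append('{"message": "line from host B"}')

# Logstash's redis input with data_type => "list" pops from the left end,
# so events come out in arrival order (FIFO):
consumed = []
while queue:
    consumed.append(queue.popleft())

print(consumed[0])  # -> {"message": "line from host A"}
print(consumed[1])  # -> {"message": "line from host B"}
```

Because any number of producers can append to the same list, this is what lets several Filebeat instances feed one Logstash.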
Make sure Redis is running on the configured port (6379 here), then start Logstash and Filebeat:
# Start Logstash with logstash_redis.conf
/opt/es/logstash-7.2.0/bin/logstash -f ./logstash_redis.conf

# Start Filebeat
./filebeat-7.2.0-linux-x86_64/filebeat -e -c ./filebeat-7.2.0-linux-x86_64/filebeat.yml
With Logstash and Filebeat running, append a line to example.log:
echo 'file-->redis-->logstash-->elasticsearch-->kibana' >> example.log
A few seconds later, a query in Kibana shows the line just appended to example.log, confirming that the filebeat-->redis-->logstash-->elasticsearch-->kibana pipeline works.
Other message queues can of course replace Redis; in production Kafka is the common choice, and since the principle is similar it is not covered here.