Microservice log collection.
Reprinted from http://www.eryajf.net/2369.html
The previous post shipped nginx logs into the stack for display and all kinds of chart-based analysis. The requirement now is to aggregate the logs of our microservices into ELK so that developers can search them and locate problems.
As everyone knows, the first trait of microservices is sheer quantity: not only are there many projects, but a single host often runs several applications. With that many log files, what is the fastest and most convenient way to handle them? Here filebeat is used as the log-forwarding component.
The architecture is shown in the diagram:
1. Configure filebeat.
The host layout is summarized in the table below (the second application host is 192.168.100.25, matching the `logsource` used in its config later; the original repeated 192.168.100.21):
| Host | Components |
|---|---|
| 192.168.100.21 | spring-cloud, filebeat-6.5.3 |
| 192.168.100.25 | spring-cloud, filebeat-6.5.3 |
| 192.168.10.10 | logstash-6.5.3, elk |
Set up the yum repository as before, then install directly.
yum -y install filebeat
Then configure filebeat.
cat > /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- input_type: log
  paths:
    - /home/ishangjie/ishangjie-config-server/normal/*.log
  type: "wf1-config"
  fields:
    logsource: 192.168.100.21
    logtype: wf1-config
- input_type: log
  paths:
    - /home/ishangjie/ishangjie-eureka-server/normal/*.log
  type: "wf1-eureka"
  fields:
    logsource: 192.168.100.21
    logtype: wf1-eureka
- input_type: log
  paths:
    - /home/ishangjie/ishangjie-gateway-server/normal/*.log
  type: "wf1-gateway"
  fields:
    logsource: 192.168.100.21
    logtype: wf1-gateway
output.logstash:
  hosts: ["192.168.10.10:5044"]
EOF
- Multiple inputs define multiple application log paths; `*.log` globbing matches them, and filebeat tails every matching file in the directory, picking up newly created ones automatically.
- Each input defines a `logtype` under `fields`, so downstream stages can tell the streams apart.
- Finally, the output sends everything to logstash on the ELK host, port 5044.
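To make the routing concrete, here is a minimal Python sketch (illustrative only, not filebeat internals) of what each shipped event carries: the raw log line under `message`, plus the custom `fields` block that logstash will later branch on. The input list mirrors the config above; by default filebeat keeps custom fields nested under `fields` rather than at the top level.

```python
# Hypothetical inputs mirroring the filebeat config above.
INPUTS = [
    {"glob": "/home/ishangjie/ishangjie-config-server/normal/*.log",
     "fields": {"logsource": "192.168.100.21", "logtype": "wf1-config"}},
    {"glob": "/home/ishangjie/ishangjie-eureka-server/normal/*.log",
     "fields": {"logsource": "192.168.100.21", "logtype": "wf1-eureka"}},
]

def build_event(line, inp):
    # Custom fields stay nested under "fields" (fields_under_root
    # is false unless explicitly enabled).
    return {"message": line, "fields": dict(inp["fields"])}

event = build_event('{"level":"INFO"}', INPUTS[0])
print(event["fields"]["logtype"])  # wf1-config
```

This nesting is why the logstash conditionals later test `[fields][logtype]` rather than a top-level `[logtype]`.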
Now configure the other host the same way.
cat > /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- input_type: log
  paths:
    - /home/ishangjie/ishangjie-activity-service/normal/*.log
  type: "wf5-activity"
  fields:
    logsource: 192.168.100.25
    logtype: wf5-activity
- input_type: log
  paths:
    - /home/ishangjie/ishangjie-order-service/normal/*.log
  type: "wf5-order"
  fields:
    logsource: 192.168.100.25
    logtype: wf5-order
- input_type: log
  paths:
    - /home/ishangjie/ishangjie-user-service/normal/*.log
  type: "wf5-user"
  fields:
    logsource: 192.168.100.25
    logtype: wf5-user
- input_type: log
  paths:
    - /home/ishangjie/ishangjie-thirdparty-service/normal/*.log
  type: "wf5-thirdparty"
  fields:
    logsource: 192.168.100.25
    logtype: wf5-thirdparty
output.logstash:
  hosts: ["192.168.10.10:5045"]
EOF
- The configuration is basically the same as above. The one thing to watch is the logstash port in the output: it must differ from the first host's, because we are going to run one logstash instance per port.
Start filebeat.
Configuration examples for newer filebeat versions:
filebeat.prospectors:
- type: log
  paths:
    - /data/tomcat/tomcat1/logs/*.log
  fields:
    logsource: 172.18.45.88
    logtype: tomcatlog
- type: log
  paths:
    - /var/log/secure
  fields:
    logsource: 172.18.45.80
    logtype: systemlog
- type: log
  paths:
    - /data/log/nginx/t.log*
  fields:
    logsource: 172.18.45.99
    logtype: nginx_acclog
- type: log
  paths:
    - /data/log/nginx/error_t.log*
  fields:
    logsource: 172.18.45.85
    logtype: nginx_errlog
output.kafka:
  enabled: true
  hosts: ["172.18.45.76:9092","172.18.45.75:9092","172.18.45.88:9092"]
  topic: kafka_run_log
-----------------------
filebeat.prospectors:
- input_type: log
  enabled: true
  paths:
    - /data/log/nginx/t.log*
  fields:
    log_topics: nginxlog
  json.keys_under_root: true
  json.overwrite_keys: true
output.kafka:
  enabled: true
  hosts: ["172.18.45.79:9092","172.18.45.78:9092","172.18.45.80:9092"]
  topic: '%{[fields][log_topics]}'
  partition.round_robin:
    reachable_only: false
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
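The `topic: '%{[fields][log_topics]}'` line is a format string that filebeat resolves per event, so each prospector's `log_topics` field picks its own kafka topic. A small Python sketch of how such a reference resolves (an assumption: simplified to `%{[a][b]...}` references only, which is all this config uses):

```python
import re

def resolve_topic(fmt, event):
    # Replace each %{[a][b]...} reference with the value looked up
    # key by key in the event dict.
    def repl(match):
        value = event
        for key in re.findall(r"\[([^\]]+)\]", match.group(1)):
            value = value[key]
        return str(value)
    return re.sub(r"%\{(\[[^}]+\])\}", repl, fmt)

event = {"fields": {"log_topics": "nginxlog"}}
print(resolve_topic("%{[fields][log_topics]}", event))  # nginxlog
```

With this scheme, adding a new log source to its own topic only requires a new `log_topics` value; the output section stays untouched.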
systemctl enable filebeat
systemctl start filebeat
systemctl status filebeat
2. Configure logstash.
On the ELK host, add matching configuration to receive the logs forwarded from the two hosts above.
A:
cat > /etc/logstash/conf.d/wf1.conf << EOF
input {
  beats {
    port => 5044
    # Bind address on the ELK host itself. The original put the
    # filebeat host's IP (192.168.100.21) here, which logstash
    # cannot bind to; the beats input's host option is local.
    host => "0.0.0.0"
  }
}
filter {
  if [fields][logtype] == "wf1-config" {
    json {
      source => "message"
      target => "data"
    }
  }
  if [fields][logtype] == "wf1-eureka" {
    json {
      source => "message"
      target => "data"
    }
  }
  if [fields][logtype] == "wf1-gateway" {
    json {
      source => "message"
      target => "data"
    }
  }
}
output {
  if [fields][logtype] == "wf1-config" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "wf1-config-%{+YYYY.MM.dd}"
    }
  }
  if [fields][logtype] == "wf1-eureka" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "wf1-eureka-%{+YYYY.MM.dd}"
    }
  }
  if [fields][logtype] == "wf1-gateway" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "wf1-gateway-%{+YYYY.MM.dd}"
    }
  }
}
EOF
B:
cat > /etc/logstash/conf.d/wf5.conf << EOF
input {
  beats {
    # Must match the port in this host's filebeat output (5045);
    # the original said 5052 here, which would never receive data.
    port => 5045
    # Local bind address, as in instance A; the original listed the
    # remote filebeat host's IP.
    host => "0.0.0.0"
  }
}
filter {
  if [fields][logtype] == "wf5-activity" {
    json {
      source => "message"
      target => "data"
    }
  }
  if [fields][logtype] == "wf5-order" {
    json {
      source => "message"
      target => "data"
    }
  }
  if [fields][logtype] == "wf5-user" {
    json {
      source => "message"
      target => "data"
    }
  }
  if [fields][logtype] == "wf5-thirdparty" {
    json {
      source => "message"
      target => "data"
    }
  }
}
output {
  if [fields][logtype] == "wf5-activity" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "wf5-activity-%{+YYYY.MM.dd}"
    }
  }
  if [fields][logtype] == "wf5-order" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "wf5-order-%{+YYYY.MM.dd}"
    }
  }
  if [fields][logtype] == "wf5-user" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "wf5-user-%{+YYYY.MM.dd}"
    }
  }
  if [fields][logtype] == "wf5-thirdparty" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "wf5-thirdparty-%{+YYYY.MM.dd}"
    }
  }
}
EOF
- The port is the seam that stitches the two sides together: each filebeat's output port and the corresponding logstash beats input port must match one to one.
- Logstash accepts the logs wholesale, then hands them off to elasticsearch on the same machine.
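The output branching above reduces to a one-liner: each event's `fields.logtype` selects a daily index. A Python sketch of the rule (illustrative, not logstash internals):

```python
from datetime import date

def index_for(event, day=None):
    # Mirror of the logstash output section: "<logtype>-%{+YYYY.MM.dd}".
    day = day or date.today()
    return "%s-%s" % (event["fields"]["logtype"], day.strftime("%Y.%m.%d"))

event = {"fields": {"logtype": "wf5-order"}, "message": "..."}
print(index_for(event, date(2018, 12, 20)))  # wf5-order-2018.12.20
```

Seeing the rule this way also previews the problem discussed in section 3: every distinct logtype spawns a fresh daily index.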
Start the two instances.
nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wf1.conf --path.data=/usr/share/logstash/data5 &> /logs/logstash_nohup/wf1.out &
nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wf5.conf --path.data=/usr/share/logstash/data9 &> /logs/logstash_nohup/wf5.out &
Once they are up, follow the steps demonstrated earlier to add the indices in kibana and browse the logs.
3. Sensible planning.
- About indices.
- The approach above assigns one index per service. Microservices being as numerous as they are, after just two environments this scheme piles up a great many indices in es. That is problem one.
- About ports.
- Following the approach above, each external host ends up with its own dedicated port, which wastes ports quickly, so the layout deserves some planning.
- Solving both problems.
- For indices, my plan is one index per host rather than one per service. That alone cuts the count from the original twenty or thirty down to fewer than ten, and you can of course partition along other dimensions instead. The actual change is trivial: when configuring the logstash instance, merge the indices in the output section.
- For ports, my plan is one shared port per class of environment. Pre-release plus production used to burn ten ports across ten hosts; now pre-release uses one and production uses one. Concretely, give every filebeat client the same output port, and consolidate the logstash instances accordingly.
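Under those two rules, a consolidated production logstash instance might look like the sketch below. This is a hypothetical fragment, not the author's final config: it assumes all production filebeats point at one beats port (5044) and that the per-host index reuses the `fields.logsource` value that every filebeat input already sets.

```
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    # One daily index per source host instead of per service.
    index => "prod-%{[fields][logsource]}-%{+YYYY.MM.dd}"
  }
}
```

The per-service `logtype` field still travels with each event, so kibana can filter by service inside the shared index.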