Production ELK Stack Deployment Guide

Original article: https://blog.51cto.com/zhang789java

1. Introduction

Our company's earlier ELK logging setup worked well, and developers are now asking for all production logs to be shipped into it. With logs coming from 300+ machines, a single server clearly cannot keep up, so we need to build an ELK cluster architecture.

2. Topology

[Figure: ELK cluster topology]

Topology notes

1. Filebeat: a lightweight log shipper
2. Alibaba Cloud Redis: a self-hosted Redis is not as easy to scale as the managed service
3. Logstash: filters the log events; because Logstash is resource-hungry, it runs separately from the ES servers
4. ES: stores the data, as a two-node cluster
5. Kibana/Nginx: Kibana's resource needs are low, but it has no built-in access control, so for safety it listens on 127.0.0.1 behind an Nginx reverse proxy

3. Resource Request Record

Totals:

Redis: 1 instance (4 GB, single node)
ECS: 4 instances (2 cores / 8 GB)
Domain: log.ops.****.com

4. Initial Configuration

4.1. Purchase and Initialize Redis

(Some readers suggested running Redis in Docker, but given Docker's performance overhead, and since Alibaba Cloud's Redis is not expensive, I went with the managed service.)

1. Configure a 4 GB single node


2. Configure Redis and its access whitelist


The screenshot above shows the Redis connection address. It is reachable only from the internal network; so that every machine can write to Redis, I whitelisted 0.0.0.0/0, allowing all internal addresses to connect.


4.2. Purchase ECS Instances and Initialize the System

1. Choosing the servers needs no explanation here.
2. There is little OS initialization to do on an Alibaba Cloud server; what matters is the security settings, then adding the machines to monitoring and the jump server.

5. Deploying the ELK Stack

5.1. Deploy the Elasticsearch Cluster

(1) Set up a Java environment, version 8; Java 7 may produce warnings

[root@Ops-Elk-ES-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-ES-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

(2) Install Elasticsearch

[root@Ops-Elk-ES-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-ES-01 ~]# cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-ES-01 ~]# yum install elasticsearch

(3) Configure the Elasticsearch cluster

[root@Ops-Elk-ES-01 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster        # cluster name; must match on every node
node.name: "node-1"             # node name; must be unique within the cluster
path.data: /data/elk/data       # data directory
path.logs: /data/elk/logs       # log directory
bootstrap.memory_lock: true     # lock the heap in RAM so it never swaps
network.host: 192.168.8.32      # bind address (this machine's IP)
http.port: 9200                 # enable HTTP on port 9200
discovery.zen.ping.unicast.hosts: ["192.168.8.32", "192.168.8.33"] # cluster nodes for discovery
[root@Ops-Elk-ES-01 ~]# mkdir -p /data/elk/{data,logs}
[root@Ops-Elk-ES-01 ~]# chown -R elasticsearch:elasticsearch /data
[root@Ops-Elk-ES-01 ~]# systemctl start elasticsearch
On ES-02, only node.name and the IP address need to change.
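With memory locking enabled (bootstrap.memory_lock in ES 5.x), the RPM's systemd unit also needs its memlock limit raised, or Elasticsearch will complain at startup that memory locking was requested but failed. A systemd drop-in override, at the conventional path, looks like:

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

Apply it with `systemctl daemon-reload && systemctl restart elasticsearch`, then confirm with `curl 'http://192.168.8.32:9200/_nodes?filter_path=**.mlockall'` that every node reports `"mlockall": true`.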

(4) Check the Elasticsearch cluster status

[root@Ops-Elk-ES-01 ~]# curl -XGET 'http://192.168.8.32:9200/_cat/nodes?v'
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.8.32            3          95   0    0.00    0.02     0.05 mdi       *      node-1
192.168.8.33            3          96   0    0.12    0.09     0.07 mdi       -      node-2

Operations APIs:

1. Cluster health: http://192.168.8.32:9200/_cluster/health?pretty
2. Node stats: http://192.168.8.32:9200/_nodes/process?pretty
3. Shard status: http://192.168.8.32:9200/_cat/shards
4. Index shard store info: http://192.168.8.32:9200/index/_shard_stores?pretty
5. Index stats: http://192.168.8.32:9200/index/_stats?pretty
6. Index metadata: http://192.168.8.32:9200/index?pretty
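The cluster health endpoint is the natural hook for monitoring. A minimal sketch of a check script, assuming the `_cluster/health` JSON shape (the sample payload below is illustrative; in production it would come from curl):

```shell
# Extract the "status" field (green/yellow/red) from _cluster/health JSON.
# The field is flat, so sed is enough; no jq required.
es_status() {
    printf '%s' "$1" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

# Illustrative payload; against the live cluster, fetch it with:
#   health=$(curl -s 'http://192.168.8.32:9200/_cluster/health')
health='{"cluster_name":"es-cluster","status":"green","number_of_nodes":2}'

st=$(es_status "$health")
echo "cluster status: $st"
[ "$st" = "green" ] || echo "ALERT: cluster is $st"
```

Wired into cron or a monitoring agent, the same one-liner flags a yellow or red cluster before users notice.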

5.2. Install Logstash

(1) Set up a Java environment, version 8; Java 7 may produce warnings

[root@Ops-Elk-ES-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-ES-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

(2) Install Logstash

[root@Ops-Elk-Logstash-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-Logstash-01 ~]# cat /etc/yum.repos.d/logstash.repo
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-Logstash-01 ~]# yum install logstash -y

Installation is now complete. Logstash's job is to pull the log events out of Redis, filter them, and write them to Elasticsearch; Logstash examples follow later in this document.

5.3. Install Kibana and Nginx

(1) Set up a Java environment, version 8; Java 7 may produce warnings

[root@Ops-Elk-Kibana-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-Kibana-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

(2) Install Kibana

[root@Ops-Elk-Kibana-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-Kibana-01 ~]# cat /etc/yum.repos.d/kibana.repo
[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-Kibana-01 ~]# yum install kibana

(3) Configure Kibana

[root@Ops-Elk-Kibana-01 ~]# grep "^[a-z]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: "http://192.168.8.32:9200"
kibana.index: ".kibana"
[root@Ops-Elk-Kibana-01 ~]# systemctl start kibana

(4) Add an Nginx reverse proxy

[root@Ops-Elk-Kibana-01 ~]# yum -y install nginx httpd-tools
[root@Ops-Elk-Kibana-01 ~]# cd /etc/nginx/conf.d/
[root@Ops-Elk-Kibana-01 conf.d]# touch elk.ops.qq.com.conf
[root@Ops-Elk-Kibana-01 conf.d]# htpasswd -cm /etc/nginx/kibana-user zhanghe
New password:
Re-type new password:
Adding password for user zhanghe
[root@Ops-Elk-Kibana-01 conf.d]# cat elk.ops.qq.com.conf
server {
        listen 80;
        server_name elk.ops.qq.com;
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/kibana-user;

        location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        }
}
[root@Ops-Elk-Kibana-01 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@Ops-Elk-Kibana-01 conf.d]# systemctl start nginx
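It is worth verifying the auth wall from the shell. HTTP basic auth is just a base64-encoded `user:password` pair; a sketch, using a made-up password "secret" in place of whatever was set with htpasswd:

```shell
# Build the Authorization header that `curl -u` would send
# (user "zhanghe" from the htpasswd step; password is an example)
auth=$(printf 'zhanghe:secret' | base64)
echo "Authorization: Basic $auth"

# Against the live proxy (commented out here), either form works:
#   curl -u zhanghe:secret http://elk.ops.qq.com/
#   curl -H "Authorization: Basic $auth" http://elk.ops.qq.com/
# A 401 response means auth_basic is active; 200 means the login worked.
```

Because the credentials travel base64-encoded rather than encrypted, putting the vhost behind HTTPS is advisable once it is exposed beyond the office network.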

(5) Access Kibana

5.4. Install Filebeat

(1) Install Filebeat

[root@node-01:~]# cat /etc/yum.repos.d/filebeat.repo
[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@node-01:~]# yum -y install filebeat

A simple example follows.

5.5. Example: Collect Tomcat catalina.out Logs

The log flow:

Tomcat (Filebeat) -> Redis -> Logstash -> ES -> Kibana

(1) Configure Filebeat on the two Tomcat machines to write to Redis

[root@Tomcat-01:~]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/tomcat/apache-tomcat-7.0.78/logs/catalina.out
  document_type: tomcat-01
  multiline.pattern: '^2017-0'
  multiline.negate: true
  multiline.match: after

output.redis:
  hosts: ["r-****.redis.rds.aliyuncs.com:6379"]
  db: 0
  timeout: 5
  key: "tomcat-01"
[root@Tomcat-01:~]# systemctl start filebeat
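The multiline settings above stitch Java stack traces back into single events: with `negate: true` and `match: after`, any line that does not match the pattern is appended to the preceding line that does. A quick local check of which lines the pattern `'^2017-0'` treats as event starts:

```shell
pat='^2017-0'   # same pattern as multiline.pattern above

# Classify a log line the way Filebeat's multiline logic would:
# matching lines begin a new event, non-matching lines are appended.
starts_event() {
    if printf '%s\n' "$1" | grep -q "$pat"; then echo start; else echo append; fi
}

starts_event '2017-09-01 10:00:00 INFO Server startup'    # timestamped -> start
starts_event 'java.lang.NullPointerException'             # stack trace -> append
starts_event '    at com.example.Foo.bar(Foo.java:42)'    # stack trace -> append
```

Note the pattern is pinned to the year: in 2018 it stops matching, so a broader pattern such as `'^20[0-9]{2}-'` would be more durable.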

(2) On the Logstash machine, create a config that pulls the data out of Redis and writes it to ES

[root@Ops-Elk-Logstash-01 conf.d]# cat tomcat.conf
input {
  redis {
    type => "tomcat-01"
    host => "r-****.redis.rds.aliyuncs.com"
    port => "6379"
    db => "0"
    data_type => "list"
    key => "tomcat-01"
  }
  redis {
    type => "tomcat-02"
    host => "r-****.redis.rds.aliyuncs.com"
    port => "6379"
    db => "0"
    data_type => "list"
    key => "tomcat-02"
  }
}

output {
  if [type] == "tomcat-01" {
    elasticsearch {
      hosts => ["es01:9200","es02:9200"]
      index => "tomcat-01-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "tomcat-02" {
    elasticsearch {
      hosts => ["es01:9200","es02:9200"]
      index => "tomcat-02-%{+YYYY.MM.dd}"
    }
  }
}
[root@Ops-Elk-Logstash-01 conf.d]# systemctl restart logstash
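The `%{+YYYY.MM.dd}` in the index option creates one index per day. To confirm events are arriving, query today's index by name; a sketch, with `es01` taken from the config above:

```shell
# Today's index name, matching the Logstash pattern "tomcat-01-%{+YYYY.MM.dd}"
idx="tomcat-01-$(date +%Y.%m.%d)"
echo "$idx"

# Against the live cluster (needs network access to es01):
#   curl -s "http://es01:9200/${idx}/_count?pretty"
#   curl -s "http://es01:9200/_cat/indices/tomcat-*?v"
```

Daily indices also make retention simple: old days can be deleted by name, or handled by a tool such as Elasticsearch Curator.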

(3) Add the ES index pattern in Kibana

5.6. Example: Collect Nginx Access Logs

The log flow:

Nginx (Filebeat) -> Redis -> Logstash -> ES -> Kibana

(1) Change the nginx log format to JSON
Format 1:

log_format access2 '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$request",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        #'"user_agent":"$http_user_agent",'
        '"status":"$status"}';

Format 2:

log_format access_log_json '{"user_ip":"$http_x_real_ip",'
        '"lan_ip":"$remote_addr",'
        '"log_time":"$time_iso8601",'
        '"user_req":"$request",'
        '"http_code":"$status",'
        '"body_bytes_sent":"$body_bytes_sent",'
        '"req_time":"$request_time",'
        '"user_ua":"$http_user_agent"}';

Apply the log format:

access_log  /var/www/logs/access.log  access2;

Reload nginx.
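The JSON format pays off downstream: Logstash, or any other consumer, can pull fields out by key instead of maintaining grok patterns. A quick illustration using a sample line in the `access_log_json` shape above (the values are made up):

```shell
# A sample access line as nginx would emit it with the format above
line='{"user_ip":"1.2.3.4","lan_ip":"10.0.0.5","log_time":"2017-09-01T12:00:00+08:00","user_req":"GET / HTTP/1.1","http_code":"200","body_bytes_sent":"612","req_time":"0.003","user_ua":"curl/7.29.0"}'

# Any flat string field is one sed away; no grok needed
json_field() {
    printf '%s' "$2" | sed -n 's/.*"'"$1"'":"\([^"]*\)".*/\1/p'
}

json_field http_code "$line"    # prints 200
json_field user_ip "$line"      # prints 1.2.3.4
```

The same property is what lets Logstash's json parsing map each key straight to an Elasticsearch field.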
(2) Configure Filebeat on the nginx machine to write to Redis

[root@Nginx-01:~]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/www/logs/access.log
  document_type: nginx-01

output.redis:
  hosts: ["r-****.redis.rds.aliyuncs.com:6379"]
  db: 0
  timeout: 5
  key: "nginx-01"
[root@Nginx-01:~]# systemctl start filebeat

(3) On the Logstash machine, create a config that pulls the data out of Redis and writes it to ES

[root@Ops-Elk-Logstash-01 conf.d]# cat nginx.conf
input {
  redis {
    type => "nginx-01"
    host => "r-****.redis.rds.aliyuncs.com"
    port => "6379"
    db => "0"
    data_type => "list"
    key => "nginx-01"
  }
}

output {
    elasticsearch {
      hosts => ["es01:9200","es02:9200"]
      index => "logstash-nginx-s4-access-01-%{+YYYY.MM.dd}"
    }
}
[root@Ops-Elk-Logstash-01 conf.d]# systemctl restart logstash

(4) Add the ES index pattern in Kibana

 
5.7. Results

Ideally, you would go on to build analysis charts and dashboards from the logs ELK collects; that is beyond the scope of this document.
