Deployment Notes for an ELK + Redis Log Analysis Platform Cluster on CentOS 7

 

Earlier posts covered the basics of the ELK architecture and the implementation options for a centralized log analysis system:
- ELK+Redis
- ELK+Filebeat
- ELK+Filebeat+Redis
- ELK+Filebeat+Kafka+ZooKeeper

A further refinement of ELK is EFK, where the F stands for Filebeat. Filebeat is a lightweight data collection engine built from the original Logstash-forwarder source code. In other words, Filebeat is the new Logstash-forwarder and has become the first choice for the shipper side of the ELK Stack.

This post uses the ELK+Redis approach. Below are brief notes from deploying an ELK+Redis log analysis platform as a cluster; the overall architecture is as follows:

+ Elasticsearch is a distributed search and analytics engine; stability, horizontal scalability, and ease of management were its main design goals
+ Logstash is a flexible pipeline for collecting, transforming, and shipping data
+ Kibana is a data visualization platform that lets you interact with your data by turning it into powerful, attractive visuals. Combining the three (collection and processing, storage and analysis, and visualization) is what makes up ELK.

Basic flow:
1) Logstash shippers collect log entries and send them to Redis.
2) Redis acts as a message queue here, preventing log loss when the Elasticsearch service misbehaves. [Note: in testing, if only a small volume of logs is written to Redis, it is shipped to Elasticsearch almost immediately; once shipped, the key disappears from Redis and can no longer be seen there. See the quick check after this list.]
3) A Logstash indexer reads the log entries from Redis and sends them to Elasticsearch.
4) Elasticsearch stores the logs and makes them searchable.
5) Kibana is the visualization front end (plugin) for Elasticsearch.
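As a quick sanity check of step 2), the queue depth in Redis can be inspected with redis-cli. This is only a sketch, assuming the list key nc-log in db 1 and the VIP 192.168.10.217 that are configured later in this document:

[root@elk-node01 ~]# redis-cli -h 192.168.10.217 -n 1 LLEN nc-log     #number of log events still waiting in the queue

A value that keeps shrinking means the indexer is draining the queue; a value that keeps growing means Elasticsearch is not keeping up.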

1) Machine environment

Hostname        IP address          Services deployed
elk-node01      192.168.10.213      es01,redis01
elk-node02      192.168.10.214      es02,redis02(vip:192.168.10.217)
elk-node03      192.168.10.215      es03,kibana,nginx
 
All three nodes run CentOS 7.4
[root@elk-node01 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
 
Set the hostname on each of the three nodes
[root@localhost ~]# hostname elk-node01
[root@localhost ~]# hostnamectl set-hostname elk-node01
 
Disable the firewall (firewalld) and SELinux on all three nodes
[root@elk-node01 ~]# systemctl stop firewalld.service
[root@elk-node01 ~]# systemctl disable firewalld.service
[root@elk-node01 ~]# firewall-cmd --state
not running
 
[root@elk-node01 ~]# setenforce 0
[root@elk-node01 ~]# getenforce
Disabled
[root@elk-node01 ~]# vim /etc/sysconfig/selinux
......
SELINUX=disabled
 
Add hosts entries on all three nodes
[root@elk-node01 ~]# cat /etc/hosts
......
192.168.10.213 elk-node01
192.168.10.214 elk-node02
192.168.10.215 elk-node03
 
Synchronize the system time on the three nodes
[root@elk-node01 ~]# yum install -y ntpdate
[root@elk-node01 ~]# ntpdate ntp1.aliyun.com
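To keep the clocks from drifting afterwards, a cron entry can rerun the sync periodically. This is just a sketch (the ntpdate binary is assumed to be at /usr/sbin/ntpdate):

[root@elk-node01 ~]# crontab -e
*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1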
 
Deploy Java 8 on all three nodes
Download: https://pan.baidu.com/s/1pLaAjPp
Extraction code: x27s
  
[root@elk-node01 ~]# rpm -ivh jdk-8u131-linux-x64.rpm --force
[root@elk-node01 ~]# vim /etc/profile
......
JAVA_HOME=/usr/java/jdk1.8.0_131
JAVA_BIN=/usr/java/jdk1.8.0_131/bin
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/bin:/sbin/
CLASSPATH=.:/lib/dt.jar:/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
  
[root@elk-node01 ~]# source /etc/profile
[root@elk-node01 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

2) Deploying the Elasticsearch cluster

a) Install Elasticsearch (on all three nodes; during deployment the machines need normal outbound internet access)
[root@elk-node01 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

[root@elk-node01 ~]# yum install -y elasticsearch

b) Configure the Elasticsearch cluster
Configuration on elk-node01
[root@elk-node01 ~]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
[root@elk-node01 ~]# cat /etc/elasticsearch/elasticsearch.yml|grep -v "#"
cluster.name: kevin-elk        #cluster name; must be identical on all three nodes
node.name: elk-node01          #node name, usually this node's hostname. It must be resolvable, i.e. bound in /etc/hosts on every node.
path.data: /data/es-data       #data directory; make sure it is owned by the elasticsearch user
path.logs: /var/log/elasticsearch       #log path (this is the default)
network.host: 192.168.10.213       #address the service binds to, usually this node's IP; 0.0.0.0 also works
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]      #the cluster's hosts; the nodes discover each other and elect the master automatically

[root@elk-node01 ~]# mkdir -p /data/es-data
[root@elk-node01 ~]# mkdir -p /var/log/elasticsearch/ 
[root@elk-node01 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node01 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/

[root@elk-node01 ~]# systemctl daemon-reload
[root@elk-node01 ~]# systemctl enable elasticsearch
[root@elk-node01 ~]# systemctl start elasticsearch
[root@elk-node01 ~]# systemctl status elasticsearch
[root@elk-node01 ~]# lsof -i:9200

-------------------------------------------------------------------------------------
Configuration on elk-node02
[root@elk-node02 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v "#"
cluster.name: kevin-elk
node.name: elk-node02
path.data: /data/es-data
path.logs: /var/log/elasticsearch
network.host: 192.168.10.214
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]

[root@elk-node02 ~]# mkdir -p /data/es-data
[root@elk-node02 ~]# mkdir -p /var/log/elasticsearch/
[root@elk-node02 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node02 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/

[root@elk-node02 ~]# systemctl daemon-reload
[root@elk-node02 ~]# systemctl enable elasticsearch
[root@elk-node02 ~]# systemctl start elasticsearch
[root@elk-node02 ~]# systemctl status elasticsearch
[root@elk-node02 ~]# lsof -i:9200

-------------------------------------------------------------------------------------
Configuration on elk-node03
[root@elk-node03 ~]# cat /etc/elasticsearch/elasticsearch.yml|grep -v "#"
cluster.name: kevin-elk
node.name: elk-node03
path.data: /data/es-data
path.logs: /var/log/elasticsearch
network.host: 192.168.10.215
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]

[root@elk-node03 ~]# mkdir -p /data/es-data
[root@elk-node03 ~]# mkdir -p /var/log/elasticsearch/
[root@elk-node03 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node03 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/

[root@elk-node03 ~]# systemctl daemon-reload
[root@elk-node03 ~]# systemctl enable elasticsearch
[root@elk-node03 ~]# systemctl start elasticsearch
[root@elk-node03 ~]# systemctl status elasticsearch
[root@elk-node03 ~]# lsof -i:9200

c) Check the Elasticsearch cluster (the commands below can be run from any of the nodes)
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/nodes'
192.168.10.213 192.168.10.213 8 49 0.01 d * elk-node01        #the * marks the master node
192.168.10.214 192.168.10.214 8 49 0.00 d m elk-node02 
192.168.10.215 192.168.10.215 8 59 0.00 d m elk-node03

Append ?v for verbose output
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/nodes?v'
host           ip             heap.percent ram.percent load node.role master name       
192.168.10.213 192.168.10.213            8          49 0.00 d         *      elk-node01 
192.168.10.214 192.168.10.214            8          49 0.06 d         m      elk-node02 
192.168.10.215 192.168.10.215            8          59 0.00 d         m      elk-node03 

Query the cluster state
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/state/nodes?pretty'
{
  "cluster_name" : "kevin-elk",
  "nodes" : {
    "1GGuoA9FT62vDw978HSBOA" : {
      "name" : "elk-node01",
      "transport_address" : "192.168.10.213:9300",
      "attributes" : { }
    },
    "EN8L2mP_RmipPLF9KM5j7Q" : {
      "name" : "elk-node02",
      "transport_address" : "192.168.10.214:9300",
      "attributes" : { }
    },
    "n75HL99KQ5GPqJDk6F2W2A" : {
      "name" : "elk-node03",
      "transport_address" : "192.168.10.215:9300",
      "attributes" : { }
    }
  }
}

Query the cluster's master
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/state/master_node?pretty'
{
  "cluster_name" : "kevin-elk",
  "master_node" : "1GGuoA9FT62vDw978HSBOA"
}

Or:
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/master?v'
id                     host           ip             node       
1GGuoA9FT62vDw978HSBOA 192.168.10.213 192.168.10.213 elk-node01 

Query the cluster health (three possible states: green, yellow, red; green means healthy)
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/health?v'
epoch      timestamp cluster   status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent 
1527576950 14:55:50  kevin-elk green           3         3      0   0    0    0        0             0                  -                100.0% 

Or:
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/health?pretty'
{
  "cluster_name" : "kevin-elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}


d) Install Elasticsearch plugins online (on all three nodes; the machines need outbound internet access)
Install the head plugin
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ..........................................DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /usr/share/elasticsearch/plugins/head

Install the kopf plugin
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading ..........................................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed kopf into /usr/share/elasticsearch/plugins/kopf

Install the bigdesk plugin
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install hlstudio/bigdesk
-> Installing hlstudio/bigdesk...
Trying https://github.com/hlstudio/bigdesk/archive/master.zip ...
Downloading ..........................................DONE
Verifying https://github.com/hlstudio/bigdesk/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed bigdesk into /usr/share/elasticsearch/plugins/bigdesk

After installing the three plugins, remember to fix ownership on the plugins directory and restart the elasticsearch service
[root@elk-node01 ~]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node01 ~]# ll /usr/share/elasticsearch/plugins
total 4
drwxr-xr-x. 3 elasticsearch elasticsearch  124 May 29 14:58 bigdesk
drwxr-xr-x. 6 elasticsearch elasticsearch 4096 May 29 14:56 head
drwxr-xr-x. 8 elasticsearch elasticsearch  230 May 29 14:57 kopf
[root@elk-node01 ~]# systemctl restart elasticsearch
[root@elk-node01 ~]# lsof -i:9200                         #after a restart it takes a little while for port 9200 to come up
COMMAND   PID          USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    31855 elasticsearch  107u  IPv6  87943      0t0  TCP elk-node01:wap-wsp (LISTEN)
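Since the port takes a moment to come back, a small wait loop saves polling by hand after each restart (a sketch using this node's own address):

[root@elk-node01 ~]# until curl -s http://192.168.10.213:9200 >/dev/null; do sleep 2; done; echo "elasticsearch is up"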

Finally, the plugins can be viewed at http://ip:9200/_plugin/<plugin name>.
In the head cluster-management view, a five-pointed star marks the master node.
Since the plugins were installed on all three nodes, the plugin pages are reachable through any of them.

For example, via elk-node01's IP the three plugins are at http://192.168.10.213:9200/_plugin/head, http://192.168.10.213:9200/_plugin/kopf and http://192.168.10.213:9200/_plugin/bigdesk, as shown below:

3) Redis + Keepalived high-availability setup

See the separate write-up: https://www.cnblogs.com/kevingrace/p/9001975.html
The deployment steps are omitted here

[root@elk-node01 ~]# redis-cli -h 192.168.10.213 INFO|grep role
role:master
[root@elk-node01 ~]# redis-cli -h 192.168.10.214 INFO|grep role
role:slave
[root@elk-node01 ~]# redis-cli -h 192.168.10.217 INFO|grep role
role:master

[root@elk-node01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ae:01:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.213/24 brd 192.168.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.10.217/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::7562:4278:d71d:f862/64 scope link 
       valid_lft forever preferred_lft forever

That is, the Redis master initially runs on elk-node01, which also holds the VIP 192.168.10.217.
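To confirm that writes through the VIP are replicated to the slave, a quick throwaway test can be run (the key name elk_test is arbitrary):

[root@elk-node01 ~]# redis-cli -h 192.168.10.217 set elk_test ok        #write through the VIP (master)
[root@elk-node01 ~]# redis-cli -h 192.168.10.214 get elk_test           #read it back from the slave
[root@elk-node01 ~]# redis-cli -h 192.168.10.217 del elk_test           #clean up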

4) Kibana and nginx reverse-proxy setup (access control). Performed on elk-node03

a) Install and configure Kibana (official downloads: https://www.elastic.co/downloads)
[root@elk-node03 ~]# cd /usr/local/src/
[root@elk-node03 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.6.6-linux-x86_64.tar.gz
[root@elk-node03 src]# tar -zvxf kibana-4.6.6-linux-x86_64.tar.gz

Because quite a few business systems are maintained, each system's logs should only be viewable in Kibana by the people responsible for that system and not by anyone else, so Kibana access control is needed.
This is implemented here with nginx access authentication.

Multiple Kibana instances can be run on different ports, one per system, for example the finance (nc) system on port 5601 and the leasing (zl) system on port 5602, with nginx proxying access to them.
Each system's logs are then shown only in the Kibana instance on its own port.

[root@elk-node03 src]# cp -r kibana-4.6.6-linux-x86_64 /usr/local/nc-5601-kibana
[root@elk-node03 src]# cp -r kibana-4.6.6-linux-x86_64 /usr/local/zl-5602-kibana
[root@elk-node03 src]# ll -d /usr/local/*-kibana
drwxr-xr-x. 11 root root 203 May 29 16:49 /usr/local/nc-5601-kibana
drwxr-xr-x. 11 root root 203 May 29 16:49 /usr/local/zl-5602-kibana

Edit the configuration files:
[root@elk-node03 src]# vim /usr/local/nc-5601-kibana/config/kibana.yml
......
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.10.213:9200"                #IP of the Elasticsearch cluster's master node
kibana.index: ".nc-kibana"

[root@elk-node03 src]# vim /usr/local/zl-5602-kibana/config/kibana.yml 
......
server.port: 5602
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.10.213:9200"
kibana.index: ".zl-kibana"

Install screen and start the Kibana instances
[root@elk-node03 src]# yum -y install screen

[root@elk-node03 src]# screen 
[root@elk-node03 src]# /usr/local/nc-5601-kibana/bin/kibana          #press Ctrl+a then d to detach and leave it running in the background

[root@elk-node03 src]# screen 
[root@elk-node03 src]# /usr/local/zl-5602-kibana/bin/kibana          #press Ctrl+a then d to detach and leave it running in the background

[root@elk-node03 src]# lsof -i:5601
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
node    32627 root   13u  IPv4 1028042      0t0  TCP *:esmagent (LISTEN)

[root@elk-node03 src]# lsof -i:5602
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
node    32659 root   13u  IPv4 1029133      0t0  TCP *:a1-msc (LISTEN)

--------------------------------------------------------------------------------------
Next, configure the nginx reverse proxy and access authentication
[root@elk-node03 ~]# yum -y install gcc pcre-devel zlib-devel openssl-devel
[root@elk-node03 ~]# cd /usr/local/src/
[root@elk-node03 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
[root@elk-node03 src]# tar -zvxf nginx-1.9.7.tar.gz 
[root@elk-node03 src]# cd nginx-1.9.7
[root@elk-node03 nginx-1.9.7]# useradd www -M -s /sbin/nologin 
[root@elk-node03 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@elk-node03 nginx-1.9.7]# make && make install

nginx configuration
[root@elk-node03 nginx-1.9.7]# cd /usr/local/nginx/conf/
[root@elk-node03 conf]# cp nginx.conf nginx.conf.bak
[root@elk-node03 conf]# cat nginx.conf
user  www;
worker_processes  8;
 
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
 
#pid        logs/nginx.pid;
 
 
events {
    worker_connections  65535;
}
 
 
http {
    include       mime.types;
    default_type  application/octet-stream;
    charset utf-8;
       
    ######
    ## set access log format
    ######
    log_format  main  '$http_x_forwarded_for $remote_addr $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_cookie" $host $request_time';
 
    #######
    ## http setting
    #######
    sendfile       on;
    tcp_nopush     on;
    tcp_nodelay    on;
    keepalive_timeout  65;
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=mycache:20m max_size=2048m inactive=60m;
    proxy_temp_path /var/www/cache/tmp;
 
    fastcgi_connect_timeout 3000;
    fastcgi_send_timeout 3000;
    fastcgi_read_timeout 3000;
    fastcgi_buffer_size 256k;
    fastcgi_buffers 8 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;
 
    #
    client_header_timeout 600s;
    client_body_timeout 600s;
   # client_max_body_size 50m;
    client_max_body_size 100m;     
    client_body_buffer_size 256k;      
 
    gzip  on;
    gzip_min_length  1k;
    gzip_buffers     4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 9;
    gzip_types       text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    gzip_vary on;
 
    ## includes vhosts
    include vhosts/*.conf;
}
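Note that proxy_cache_path and proxy_temp_path above point at /var/www/cache; it is safest to create that directory and hand it to the worker user before nginx starts (a sketch):

[root@elk-node03 conf]# mkdir -p /var/www/cache/tmp
[root@elk-node03 conf]# chown -R www:www /var/www/cache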


[root@elk-node03 conf]# mkdir vhosts
[root@elk-node03 conf]# cd vhosts/
[root@elk-node03 vhosts]# vim nc_kibana.conf
 server {
   listen 15601;
   server_name localhost;

   location / {
     proxy_pass http://192.168.10.215:5601/;
     auth_basic "Access Authorized";
     auth_basic_user_file /usr/local/nginx/conf/nc_auth_password;
   }
}

[root@elk-node03 vhosts]# vim zl_kibana.conf
 server {
   listen 15602;
   server_name localhost;

   location / {
     proxy_pass http://192.168.10.215:5602/;
     auth_basic "Access Authorized";
     auth_basic_user_file /usr/local/nginx/conf/zl_auth_password;
   }
}
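Before starting nginx (and after any later config change), the configuration syntax can be checked first:

[root@elk-node03 vhosts]# /usr/local/nginx/sbin/nginx -t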


[root@elk-node03 vhosts]# /usr/local/nginx/sbin/nginx 
[root@elk-node03 vhosts]# /usr/local/nginx/sbin/nginx -s reload
[root@elk-node03 vhosts]# lsof -i:15601
[root@elk-node03 vhosts]# lsof -i:15602
---------------------------------------------------------------------------------------------
Set up authenticated access
Create htpasswd-style files (if the htpasswd command is missing, install it with "yum install -y *htpasswd*" or "yum install -y httpd")
[root@elk-node03 vhosts]# yum install -y *htpasswd*

Create the credentials for accessing the finance system's Kibana
[root@elk-node03 vhosts]# htpasswd -c /usr/local/nginx/conf/nc_auth_password nclog
New password: 
Re-type new password: 
Adding password for user nclog
[root@elk-node03 vhosts]# cat /usr/local/nginx/conf/nc_auth_password
nclog:$apr1$WLHsdsCP$PLLNJB/wxeQKy/OHp/7o2.

Create the credentials for accessing the leasing system's Kibana
[root@elk-node03 vhosts]# htpasswd -c /usr/local/nginx/conf/zl_auth_password zllog
New password: 
Re-type new password: 
Adding password for user zllog
[root@elk-node03 vhosts]# cat /usr/local/nginx/conf/zl_auth_password
zllog:$apr1$dRHpzdwt$yeJxnL5AAQh6A6MJFPCEM1
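A quick way to verify both the proxy and the basic auth from the command line (substitute the passwords entered above; this is only a sketch):

[root@elk-node03 vhosts]# curl -I http://192.168.10.215:15601/                          #no credentials, should return 401
[root@elk-node03 vhosts]# curl -u nclog:<password> -I http://192.168.10.215:15601/      #should get through to the nc kibana
[root@elk-node03 vhosts]# curl -u zllog:<password> -I http://192.168.10.215:15602/      #should get through to the zl kibana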

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tips for using htpasswd
1) When generating the file for the first time, use -c followed by a username; the password cannot be given on the command line, you will be prompted to type it twice.
# htpasswd -c /usr/local/nginx/conf/nc_auth_password nclog

2) Once the file exists, add further users with -b, which does take the username and password on the command line.
   Do not pass -c again at this point, or the previously created users will be overwritten.
# htpasswd -b /usr/local/nginx/conf/nc_auth_password kevin kevin@123

3) Delete a user with -D.
# htpasswd -D /usr/local/nginx/conf/nc_auth_password kevin

4) To change a user's password, delete the user and recreate it:
# htpasswd -D /usr/local/nginx/conf/nc_auth_password kevin
# htpasswd -b /usr/local/nginx/conf/nc_auth_password kevin keivn@#2312
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

5) Log collection on the client machines (Logstash)

1) Install Logstash
[root@elk-client ~]# cat /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

[root@elk-client ~]# yum install -y logstash
[root@elk-client ~]# ll -d /opt/logstash/
drwxr-xr-x. 5 logstash logstash 160 May 29 17:45 /opt/logstash/

2) Adjust the Java environment.
Some servers are stuck on Java 6 or 7 because of their own application code, while newer Logstash releases require Java 8.
In that case a dedicated Java 8 just for Logstash has to be set up.
[root@elk-client ~]# java -version
java version "1.6.0_151"
OpenJDK Runtime Environment (rhel-2.6.11.0.el6_9-x86_64 u151-b00)
OpenJDK 64-Bit Server VM (build 24.151-b00, mixed mode)

Download jdk-8u172-linux-x64.tar.gz and place it in /usr/local/src
Download: https://pan.baidu.com/s/1z3L4Q24AuHA2r6KT6oT9vw
Extraction code: dprz

[root@elk-client ~]# cd /usr/local/src/
[root@elk-client src]# tar -zvxf jdk-8u172-linux-x64.tar.gz
[root@elk-client src]# mv jdk1.8.0_172 /usr/local/

Append the following two lines to /etc/sysconfig/logstash:
[root@elk-client src]# vim /etc/sysconfig/logstash
.......
JAVA_CMD=/usr/local/jdk1.8.0_172/bin
JAVA_HOME=/usr/local/jdk1.8.0_172
 
Add the following line to /opt/logstash/bin/logstash.lib.sh:
[root@elk-client src]# vim /opt/logstash/bin/logstash.lib.sh
.......
export JAVA_HOME=/usr/local/jdk1.8.0_172
 
With this in place, Logstash no longer complains about the Java version when collecting logs.
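A quick confirmation that the dedicated JDK is the one being picked up (just a sketch):

[root@elk-client src]# /usr/local/jdk1.8.0_172/bin/java -version          #the JDK logstash is pointed at
[root@elk-client src]# grep JAVA_HOME /opt/logstash/bin/logstash.lib.sh   #should show the export added above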

3) Collect logs with Logstash
------------------------------------------------------------
For example, collecting the finance system's logs
[root@elk-client ~]# mkdir /opt/nc
[root@elk-client ~]# cd /opt/nc
[root@elk-client nc]# vim redis-input.conf 
input {
    file {
       path => "/data/nc-tomcat/logs/catalina.out"
       type => "nc-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^[a-zA-Z0-9]|[^ ]+"           #收集以字母(大小寫)或數字或空格開頭的日誌信息   
           negate => true
           what => "previous"
       }
    }
}
   
output {
    if [type] == "nc-log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "1"
          data_type => "list"
          key => "nc-log"
       }
     }
}

[root@elk-client nc]# vim file.conf
input {
     redis {
        type => "nc-log"
        host => "192.168.10.217"                  #redis高可用的vip地址
        port => "6379"
        db => "1"
        data_type => "list"
        key => "nc-log"
     }
}
    
    
output {
    if [type] == "nc-log"{
        elasticsearch {
           hosts => ["192.168.10.213:9200"]             #elasticsearch集羣的master主節點地址
           index => "nc-app01-nc-log-%{+YYYY.MM.dd}"
        }
    }
}

Check that the Logstash config files are valid
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/redis-input.conf --configtest
Configuration OK
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/file.conf --configtest
Configuration OK

Start the Logstash collection processes
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/redis-input.conf &
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/file.conf &
[root@elk-client nc]# ps -ef|grep logstash
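Once events start flowing, the corresponding index should appear in Elasticsearch. Besides the head plugin, this can also be checked with curl (a sketch):

[root@elk-client nc]# curl -s 'http://192.168.10.213:9200/_cat/indices?v' | grep nc-log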

-------------------------------------------------------------------
Another example: collecting the leasing system's logs
[root@elk-client ~]# mkdir /opt/zl
[root@elk-client ~]# cd /opt/zl
[root@elk-client zl]# vim redis-input.conf 
input {
    file {
       path => "/data/zl-tomcat/logs/catalina.out"
       type => "zl-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^[a-zA-Z0-9]|[^ ]+"          
           negate => true
           what => "previous"
       }
    }
}
   
output {
    if [type] == "zl-log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "2"
          data_type => "list"
          key => "zl-log"
       }
     }
}
[root@elk-client zl]# vim file.conf 
input {
     redis {
        type => "zl-log"
        host => "192.168.10.217"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "zl-log"
     }
}
    
    
output {
    if [type] == "zl-log"{
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "zl-app01-zl-log-%{+YYYY.MM.dd}"
        }
    }
}

[root@elk-client zl]# /opt/logstash/bin/logstash -f /opt/zl/redis-input.conf --configtest
Configuration OK
[root@elk-client zl]# /opt/logstash/bin/logstash -f /opt/zl/file.conf --configtest
Configuration OK
[root@elk-client zl]# ps -ef|grep logstash

Whenever new entries are written to the finance and leasing logs above, Logstash picks them up and they end up displayed on the web through each system's own Kibana.

The collected logs can be seen through the head plugin (after the Logstash processes start, the indices only show up in the head UI once new log data has actually been written)

Add the finance system's index in its Kibana to display the logs

 

Add the leasing system's index in its Kibana to display the logs

======== Notes on the Logstash multiline plugin (matching multi-line logs) ========
Besides access logs, runtime logs also have to be handled. Runtime logs are mostly written by application code, for example via log4j. Their biggest difference from access logs is that one entry spans multiple lines: several consecutive lines together express a single event. If they can be handled as multi-line units, splitting them into fields afterwards becomes easy. That is what Logstash's multiline plugin, which matches multi-line logs, is for. First look at the following Java log:

[2016-05-20 11:54:24,106][INFO][cluster.metadata ] [node-1][.kibana] creating index,cause [api],template [],shards [1]/[1],mappings [config]
     
     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
     at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
     at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
     at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
     at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
     at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
     at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:863)
     at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153)
     at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275)
     at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
     at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3620)

Now look at how these entries appear in the Kibana UI:

Each "at ..." line is really part of one and the same event, yet Logstash displays them as separate lines, which makes them hard to read. To solve this, use the file plugin of the Logstash input stage, which has a codec sub-option, multiline. The official description of the multiline plugin is "Merges multiline messages into a single event", i.e. it merges multiple lines into one event.

Looking at the Java log on the client machine, every distinct event begins with square brackets "[ ]", so the bracket can be taken as the marker and combined with the multiline plugin to merge the lines. The plugin usage is shown below; it essentially means "merge any line that does not start with [ into the preceding line":

[root@elk-client zl]# vim redis-input.conf
input {
    file {
       path => "/data/zl-tomcat/logs/catalina.out"
       type => "zl-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["        
           negate => true        
           what => previous
       }
    }
}
    
output {
    if [type] == "zl-log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "2"
          data_type => "list"
          key => "zl-log"
       }
     }
}

Explanation:
pattern => "^\["         A regular expression used for matching. How to match the multi-line log depends on the actual log format; here the marker is "[", but any regex that fits the log can be used.
negate => true           negate controls whether lines matching pattern count as matches. The default false means they do; true inverts this, so here it is the lines that do NOT start with "[" that are selected and later merged.
what => previous         Either previous or next. previous merges the selected content into the preceding event; next merges it into the following one.

After the plugin merges the lines, the information is much easier to read in the Kibana UI, as shown below:

multiline settings
Three settings matter most for the multiline plugin: negate, pattern and what.
negate
- type: boolean
- default: false
Negates the regular expression match (select the lines that do not match).

pattern
- required
- type: string
- no default
The regular expression to match.

what
- required
- previous or next
- no default
If the regular expression matches, does the line belong to the next event or the previous one?
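A convenient way to experiment with a multiline pattern before touching the real shipper config is a throwaway pipeline that reads from stdin and prints the merged events to stdout. This is only a sketch, saved here as a hypothetical /tmp/multiline-test.conf:

input {
    stdin {
        codec => multiline {
            pattern => "^\["           #same idea as above: lines NOT starting with [ get merged
            negate => true
            what => "previous"
        }
    }
}

output {
    stdout { codec => rubydebug }      #print each merged event with all its fields
}

Run it with /opt/logstash/bin/logstash -f /tmp/multiline-test.conf, paste a few sample log lines, and each block of continuation lines should come out as a single event.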

==============================================
Another example:

Look at the following Java log:
[root@elk-client ~]# tail -f /data/nc-tomcat/logs/catalina.out
........
........
$$callid=1527643542536-4261 $$thread=[WebContainer : 23] $$host=10.0.52.21 $$userid=1001A6100000000006KR $$ts=2018-05-30 09:25:42 $$remotecall=[nc.bs.dbcache.intf.ICacheVersionBS] $$debuglevel=ERROR  $$msg=<Select CacheTabName, CacheTabVersion From BD_cachetabversion where CacheTabVersion >= null order by CacheTabVersion desc>throws ORA-00942: 表或視圖不存在
 
$$callid=1527643542536-4261 $$thread=[WebContainer : 23] $$host=10.0.52.21 $$userid=1001A6100000000006KR $$ts=2018-05-30 09:25:42 $$remotecall=[nc.bs.dbcache.intf.ICacheVersionBS] $$debuglevel=ERROR  $$msg=sql original exception
java.sql.SQLException: ORA-00942: 表或視圖不存在

      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
      at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
      at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
      at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
      at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
      at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
      at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:863)
      at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153)
      at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275)
      at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
      at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3620)
      at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1203)
      at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecuteQuery(WSJdbcPreparedStatement.java:1110)
      at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeQuery(WSJdbcPreparedStatement.java:712)
      at nc.jdbc.framework.crossdb.CrossDBPreparedStatement.executeQuery(CrossDBPreparedStatement.java:103)
      at nc.jdbc.framework.JdbcSession.executeQuery(JdbcSession.java:297)

As the log above shows, each distinct event begins with "$", so that character can be taken as the marker and combined with the multiline plugin to merge the lines.
[root@elk-client nc]# vim redis-input.conf
input {
    file {
       path => "/data/nc-tomcat/logs/catalina.out"
       type => "nc-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\$"        #匹配以$開頭的日誌信息。(若是日誌每行是以日期開頭顯示,好比"2018-05-30 11:42.....",則此行就配置爲pattern => "^[0-9]",即表示匹配以數字開頭的行)
           negate => true          #不匹配
           what => "previous"      #即上面不匹配的行的內容與以前的行的內容合併
       }
    }
}
    
output {
    if [type] == "nc-log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "1"
          data_type => "list"
          key => "nc-log"
       }
     }
}

After this change, logging into the Kibana UI shows the matched lines merged into single events (if a merged event is very long, click the small arrow on the entry and open the message field directly to see the whole merged content):

=======================================================
The examples above also reveal that Chinese text in the logs collected by Logstash shows up garbled in the Kibana UI.

The fix is to specify the character encoding in the Logstash collection config. Use the "file" command to check the encoding of the log file:
1) If the command reports the log is UTF-8, set charset to UTF-8 in the Logstash config. (In fact, if the file is already UTF-8, charset can usually be omitted and the Chinese text in the logs is displayed correctly anyway.)
2) If the command reports the log is not UTF-8, set charset to GB2312 in the Logstash config.

具體操做記錄:

[root@elk-client ~]# file /data/nchome/nclogs/master/nc-log.log
/data/nchome/nclogs/master/nc-log.log: ISO-8859 English text, with very long lines, with CRLF, LF line terminators

The file command above shows that this log file's encoding is not UTF-8, so charset is set to GB2312 in the Logstash config.
Following the earlier example, only redis-input.conf needs the charset setting added; file.conf does not need to change. As follows:
[root@elk-client nc]# vim redis-input.conf
input {
    file {
       path => "/data/nc-tomcat/logs/catalina.out"
       type => "nc-log"
       start_position => "beginning"
       codec => multiline {
           charset => "GB2312"                    #添加這一行
           pattern => "^\$"              
           negate => true              
           what => "previous"            
       }
    }
}
     
output {
    if [type] == "nc-log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "1"
          data_type => "list"
          key => "nc-log"
       }
     }
}

Restart the Logstash processes, log back into Kibana, and the Chinese text now displays correctly!

=============================================================
Using Logstash to collect logs that live under directories named after the current date, as below:

[root@elk-client ~]# cd /data/yxhome/yx_data/applog
[root@elk-client applog]# ls
20180528 20180529 20180530 20180531 20180601 20180602 20180603 20180604 
[root@elk-client applog]# ls 20180604
cm.log  timsserver.log

The date-named directories under /data/yxhome/yx_data/applog are created at midnight
[root@elk-client ~]# ll -d /data/yxhome/yx_data/applog/20180603
drwxr-xr-x 2 root root 4096 Jun  3 00:00 /data/yxhome/yx_data/applog/20180603

The path setting under input -> file in a Logstash config cannot embed `date +%Y%m%d` or $(date +%Y%m%d).
My approach: a small script symlinks each day's logs to a fixed path, and the Logstash path setting points at the symlinked location (an alternative using glob patterns is sketched after the crontab entry below).
[root@elk-client ~]# vim /mnt/yx_log_line.sh 
#!/bin/bash
/bin/rm -f /mnt/yx_log/*
/bin/ln -s /data/yxhome/yx_data/applog/$(date +%Y%m%d)/cm.log /mnt/yx_log/cm.log
/bin/ln -s /data/yxhome/yx_data/applog/$(date +%Y%m%d)/timsserver.log /mnt/yx_log/timsserver.log

[root@elk-client ~]# chmod +755 /mnt/yx_log_line.sh 
[root@elk-client ~]# /bin/bash -x /mnt/yx_log_line.sh
[root@elk-client ~]# ll /mnt/yx_log
total 0
lrwxrwxrwx 1 root root 43 Jun  4 14:29 cm.log -> /data/yxhome/yx_data/applog/20180604/cm.log
lrwxrwxrwx 1 root root 51 Jun  4 14:29 timsserver.log -> /data/yxhome/yx_data/applog/20180604/timsserver.log

[root@elk-client ~]# crontab -l
0 3 * * * /bin/bash -x /mnt/yx_log_line.sh > /dev/null 2>&1
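As an alternative sketch: the Logstash file input accepts glob patterns in path (it is shell command substitution that is not supported), so the dated directories could also be matched directly without the symlink script, for example:

       path => "/data/yxhome/yx_data/applog/*/timsserver.log"

Whether that is acceptable depends on whether older days' files should keep being watched; the symlink approach above keeps exactly one day's file in scope.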

The Logstash config is as follows (the collection settings for several logs are combined in one file):
[root@elk-client ~]# cat /opt/redis-input.conf 
input {
    file {
       path => "/data/nchome/nclogs/master/nc-log.log"
       type => "nc-log"
       start_position => "beginning"
       codec => multiline {
           charset => "GB2312"
           pattern => "^\$"          
           negate => true
           what => "previous"
       }
    }

    file {
       path => "/mnt/yx_log/timsserver.log"
       type => "yx-timsserver.log"
       start_position => "beginning"
       codec => multiline {
           charset => "GB2312"
           pattern => "^[0-9]"           #以數字開頭。實際該日誌是以2018日期字樣開頭,好比2018-06-04 09:19:53,364:......  
           negate => true
           what => "previous"
       }
    }

    file {
       path => "/mnt/yx_log/cm.log"
       type => "yx-cm.log"
       start_position => "beginning"
       codec => multiline {
           charset => "GB2312"
           pattern => "^[0-9]"          
           negate => true
           what => "previous"
       }
    }
}
   
output {
    if [type] == "nc-log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "2"
          data_type => "list"
          key => "nc-log"
       }
     }

    if [type] == "yx-timsserver.log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "4"
          data_type => "list"
          key => "yx-timsserver.log"
       }
     }

    if [type] == "yx-cm.log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "5"
          data_type => "list"
          key => "yx-cm.log"
       }
     }

}


[root@elk-client ~]# cat /opt/file.conf 
input {
     redis {
        type => "nc-log"
        host => "192.168.10.217"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "nc-log"
     }

     redis {
        type => "yx-timsserver.log"
        host => "192.168.10.217"
        port => "6379"
        db => "4"
        data_type => "list"
        key => "yx-timsserver.log"
     }

      redis {
        type => "yx-cm.log"
        host => "192.168.10.217"
        port => "6379"
        db => "5"
        data_type => "list"
        key => "yx-cm.log"
     }
}
    
    
output {
    if [type] == "nc-log"{
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "elk-client(10.0.52.21)-nc-log-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "yx-timsserver.log"{
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "elk-client(10.0.52.21)-yx-timsserver.log-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "yx-cm.log"{
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "elk-client(10.0.52.21)-yx-cm.log-%{+YYYY.MM.dd}"
        }
    }
}


First check that the configs are valid
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf --configtest
Configuration OK
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/file.conf --configtest
Configuration OK
[root@elk-client ~]#

Then start them
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf &
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/file.conf &
[root@elk-client ~]# ps -ef|grep logstash

Once new entries are written to the log files, the corresponding indices appear in the Elasticsearch head plugin and can then be added to the Kibana UI.

==================== Collecting IDC firewall logs with ELK ======================

1) Use rsyslog to ship the data-center firewall's logs (firewall address 10.1.32.105) to a Linux server (call it server A)
   For collecting firewall logs with rsyslog, see: http://www.cnblogs.com/kevingrace/p/5570411.html

The logs collected on server A end up under the following path:
[root@Server-A ~]# cd /data/fw_logs/10.1.32.105/
[root@Server-A 10.1.32.105]# ll
total 127796
-rw------- 1 root root 130855971 Jun 13 16:24 10.1.32.105_2018-06-13.log

rsyslog produces one log file per day, named after the current date.
A script can symlink each day's file to a file with a fixed name.
[root@Server-A ~]# cat /data/fw_logs/log.sh 
#!/bin/bash
/bin/unlink /data/fw_logs/firewall.log
/bin/ln -s /data/fw_logs/10.1.32.105/10.1.32.105_$(date +%Y-%m-%d).log /data/fw_logs/firewall.log

[root@Server-A ~]# sh /data/fw_logs/log.sh
[root@Server-A ~]# ll /data/fw_logs/firewall.log 
lrwxrwxrwx 1 root root 52 Jun 13 15:17 /data/fw_logs/firewall.log -> /data/fw_logs/10.1.32.105/10.1.32.105_2018-06-13.log

Run it on a schedule via crontab
[root@Server-A ~]# crontab -l
0 1 * * *  /bin/bash -x /data/fw_logs/log.sh >/dev/null 2>&1
0 6 * * *  /bin/bash -x /data/fw_logs/log.sh >/dev/null 2>&1

2) Configure Logstash on server A
Installing Logstash is omitted here (same as above)
[root@Server-A ~]# cat /opt/redis-input.conf 
input {
    file {
       path => "/data/fw_logs/firewall.log"
       type => "firewall-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^[a-zA-Z0-9]|[^ ]+"       
           negate => true       
           what => previous
       }
    }
}
     
output {
    if [type] == "firewall-log"{
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "5"
          data_type => "list"
          key => "firewall-log"
       }
     }
}


[root@Server-A ~]# cat /opt/file.conf 
input {
     redis {
        type => "firewall-log"
        host => "192.168.10.217"
        port => "6379"
        db => "5"
        data_type => "list"
        key => "firewall-log"
     }
}
     
     
output {
    if [type] == "firewall-log"{
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "firewall-log-%{+YYYY.MM.dd}"
        }
    }
}

[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf --configtest
Configuration OK
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/file.conf --configtest
Configuration OK
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf &
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/file.conf &

Note:
The index name in the Logstash config can easily end up invalid if you are not careful.
For example, changing "firewall-log-%{+YYYY.MM.dd}" above to "IDC-firewall-log-%{+YYYY.MM.dd}" makes Logstash fail at startup with "index name is invalid!", because Elasticsearch index names must be lowercase and the uppercase "IDC" prefix is rejected.

Then log into the Kibana UI and add the firewall-log index for display.