ELK Log Collection

Overall architecture of the ELK logging system:

 

Architecture walkthrough:

From left to right, the architecture is divided into five layers:

  1. The leftmost layer is the business server cluster. Filebeat is installed on these servers to collect logs and ship them to the Kafka brokers.
  2. Second layer: the data buffering layer, which stages the data in the local Kafka broker + ZooKeeper cluster.
  3. Third layer: the data forwarding layer. A standalone Logstash node pulls data from the Kafka broker cluster in real time and forwards it to the ES DataNodes.
  4. Fourth layer: data persistence. The ES DataNodes write the received data to disk and build the indices.
  5. Fifth layer: data retrieval and presentation. ES Master + Kibana coordinate the ES cluster, handle search requests, and present the data.
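Summarized as a single pipeline, the end-to-end data path is:

filebeat (web servers) -> kafka + zookeeper (buffering) -> logstash (forwarding) -> ES DataNodes (storage/indexing) -> ES Master + Kibana (search/visualization)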

Roles:

IP               Role                            Description
192.168.11.51    elasticsearch + kibana          ES cluster + Kibana
192.168.11.52    elasticsearch                   ES cluster
192.168.11.104   logstash                        Data forwarding layer
192.168.11.101   zookeeper + kafka               Kafka + ZooKeeper cluster
192.168.11.102   zookeeper + kafka               Kafka + ZooKeeper cluster
192.168.11.103   zookeeper + kafka               Kafka + ZooKeeper cluster
192.168.11.104   filebeat + web server cluster   Business-layer log collection


Software used:

jdk1.8.0_101

elasticsearch-5.1.2

kibana-5.1.2

kafka_2.11-1.1.0

node-v4.4.7-linux-x64

zookeeper-3.4.9

logstash-5.1.2

Filebeat-5.6.9

Deployment steps:

1. Install and configure the ES cluster;

2. Configure the Logstash client (write data directly into the ES cluster, e.g. the system messages log);

3. Configure the Kafka (ZooKeeper) cluster;

4. Deploy Kibana;

5. Deploy Filebeat for log collection.

 


ES Cluster Installation

Download the ES package

# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.2.tar.gz

Extract

# tar -xf elasticsearch-5.1.2.tar.gz -C /usr/local

# mv /usr/local/elasticsearch-5.1.2 /usr/local/elasticsearch

 

Edit the configuration file (note: there must be a space after each ":" when setting a parameter!)

# vim /usr/local/elasticsearch/config/elasticsearch.yml   (192.168.11.51)

# ---------------------------------- Cluster -----------------------------------
cluster.name: es-cluster   # cluster name; must be identical on every node
# ------------------------------------ Node ------------------------------------
node.name: node-1      # node name; must differ from the other nodes
node.master: true      # whether this node may be elected master

node.data: true
# ----------------------------------- Paths ------------------------------------
path.data: /data/es/data   # data directory
path.logs: /data/es/logs   # log directory
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.11.51
http.port: 9200
transport.tcp.port: 9301
# --------------------------------- Discovery ----------------------------------
discovery.zen.ping.unicast.hosts: ["192.168.11.52:9300","192.168.11.51:9301"]
# ---------------------------------- Gateway -----------------------------------
# ---------------------------------- Various -----------------------------------
# ------------------------------------ HTTP ------------------------------------
http.cors.enabled: true
http.cors.allow-origin: "*"

Copy the configuration file to the second node

# scp elasticsearch.yml root@192.168.11.52:/usr/local/elasticsearch/config

Modify elasticsearch.yml on 192.168.11.52 (besides node.name, change network.host to this node's own IP; transport.tcp.port can be dropped so it falls back to the default 9300, matching the discovery list above)

node.name: node-2              # 192.168.11.52
network.host: 192.168.11.52

Create the required directories (on both nodes)

# mkdir /data/es/{data,logs} -p

Adjust the heap size

Elasticsearch 5.x allocates a 2 GB JVM heap by default; modify the allocation if needed

# vi /usr/local/elasticsearch/config/jvm.options 

-Xms1g
-Xmx1g 

Adjust the system parameters

# vi /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096 

# vi /etc/sysctl.conf

vm.max_map_count=655360
vm.swappiness = 1

# sysctl -p   # apply the changes

Start Elasticsearch (it cannot be run as root)

Add a user and set permissions

# groupadd elasticsearch

# useradd elasticsearch -g elasticsearch -p 123456

# chown -R elasticsearch:elasticsearch /usr/local/elasticsearch

Switch to the elasticsearch user

# su elasticsearch

# cd /usr/local/elasticsearch/bin

Start

# ./elasticsearch
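To run it as a background daemon instead of in the foreground, Elasticsearch accepts the -d flag:

# ./elasticsearch -d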

Check the listening ports to verify it started

# netstat -nlpt | grep -E "9200|9300"

Once startup succeeds, visit http://192.168.11.51:9200
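The cluster can also be checked from the command line; a quick illustrative check:

# curl 'http://192.168.11.51:9200/_cluster/health?pretty'

With both nodes joined, number_of_nodes in the response should be 2.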

 

 


Install the Head Plugin

Elasticsearch Head Plugin: a web front end for performing operations on ES, such as querying, deleting, and browsing indices.

Clone the code locally

# cd /usr/local/elasticsearch

# git clone https://github.com/mobz/elasticsearch-head

Change into the elasticsearch-head directory and run npm install; if Node.js is not installed on the system, install it first (see "Install Node.js" below)

# cd elasticsearch-head

# npm install

Edit the configuration file

# vi /usr/local/elasticsearch/elasticsearch-head/Gruntfile.js 

connect: {
            server: {
                options: {
                    port: 9100,
                    hostname:'192.168.11.51',
                    base: '.',
                    keepalive: true
                }
            }
        }

Start the head server

# ./node_modules/grunt/bin/grunt server

Once started, open http://192.168.11.51:9100/ in a browser

The cluster connection information should now be visible

 


Install Node.js

Since the head plugin is essentially a Node.js project, Node.js is required, and npm is used to install its dependency packages. (npm is to Node.js roughly what Maven is to Java.)

Download the package

# wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz

Extract

# tar -zxvf node-v4.4.7-linux-x64.tar.gz

# mv node-v4.4.7-linux-x64 nodejs

Create symlinks (for npm as well, since it is needed by the head plugin)

# ln -s /usr/local/src/nodejs/bin/node /usr/local/bin/node
# ln -s /usr/local/src/nodejs/bin/npm /usr/local/bin/npm
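Verify that the binaries resolve:

# node -v     # should print v4.4.7
# npm -v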

 


 

Logstash Installation

Download the Logstash package

# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.2.tar.gz

Extract

# tar -xf logstash-5.1.2.tar.gz -C /usr/local

# mv /usr/local/logstash-5.1.2 /usr/local/logstash

Create a test configuration (stdin in, ES out)

# cd /usr/local/logstash

# mkdir /usr/local/logstash/etc

# vi logstash.conf

input {
  stdin {}   
}

output {
  elasticsearch {
    hosts => ["192.168.11.51:9200","192.168.11.52:9200"]
  }
}

 

Start Logstash with logstash.conf

# /usr/local/logstash/bin/logstash -f logstash.conf

Type some input

 

Open http://192.168.11.51:9100 and you will see that a new index has appeared

 

Click "Data Browse" to see the content you typed

 

At this point, Logstash can write data to ES correctly
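The new index can also be confirmed from the command line (an illustrative check; with the stdin configuration above, Logstash writes to the default logstash-YYYY.MM.dd index):

# curl 'http://192.168.11.51:9200/_cat/indices?v'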

 

Configure reading data from Kafka into ES (do not start this yet; verify it after the Kafka cluster has been deployed)

# vi kafka_to_es.conf

input {
    kafka {
        bootstrap_servers => "192.168.11.101:9092,192.168.11.102:9092,192.168.11.103:9092"
        client_id => "test1"
        group_id => "test1"
        auto_offset_reset => "latest"
        topics => ["msyslog"]
        type => "msyslog"
    }
    kafka {
        bootstrap_servers => "192.168.11.101:9092,192.168.11.102:9092,192.168.11.103:9092"
        client_id => "test2"
        group_id => "test2"
        auto_offset_reset => "latest"
        topics => ["msyserrlog"]
        type => "msyserrlog"
    }
}

filter {
    date {
        match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
        timezone => "+08:00"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.11.51:9200","192.168.11.52:9200"]
        index => "mime-%{type}-%{+YYYY.MM.dd}"
        timeout => 300
    }
}

Start command (run once the Kafka cluster is up)

# /usr/local/logstash/bin/logstash -f kafka_to_es.conf
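After Filebeat begins publishing to these topics (see the Filebeat section below), the resulting indices can be spot-checked; a sketch, assuming the msyslog topic configured above:

# curl 'http://192.168.11.51:9200/mime-msyslog-*/_search?pretty&size=1'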

 


 

ZooKeeper Installation

Download zookeeper-3.4.9 from

http://archive.apache.org/dist/zookeeper/

Extract

# tar -xf zookeeper-3.4.9.tar.gz -C /usr/local

# mv /usr/local/zookeeper-3.4.9 /usr/local/zookeeper

Modify the environment variables

# vi /etc/profile

export ZOOKEEPER_HOME=/usr/local/zookeeper

export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf 

Apply

# source /etc/profile

Create the myid file on each node (the value must match the corresponding server.N entry in zoo.cfg below)

# mkdir -p /usr/local/zookeeper/data    # on all three nodes

192.168.11.101# echo 11 >/usr/local/zookeeper/data/myid

192.168.11.102# echo 12 >/usr/local/zookeeper/data/myid

192.168.11.103# echo 13 >/usr/local/zookeeper/data/myid

 

Write the configuration file (identical on all three servers)

# vi /usr/local/zookeeper/conf/zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
clientPort=2181
server.11=192.168.11.101:2888:3888
server.12=192.168.11.102:2888:3888
server.13=192.168.11.103:2888:3888

Start ZooKeeper (on each node)

bin/zkServer.sh start

Check the node status

bin/zkServer.sh status
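On a healthy three-node ensemble, one server reports leader and the other two follower; typical output (illustrative):

ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower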


 

Kafka Installation

Download the package

# wget http://mirrors.shu.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz

Install and configure Kafka

# tar -xf kafka_2.11-1.1.0.tgz -C /usr/local

# mv /usr/local/kafka_2.11-1.1.0 /usr/local/kafka

Write the configuration file for the 192.168.11.101 node

# vi /usr/local/kafka/config/server.properties

############################# Server Basics #############################
broker.id=1
############################# Socket Server Settings #############################
listeners=PLAINTEXT://192.168.11.101:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
############################# Log Basics #############################
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Retention Policy #############################
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
######################## Zookeeper #############################
zookeeper.connect=192.168.11.101:2181,192.168.11.102:2181,192.168.11.103:2181
zookeeper.connection.timeout.ms=6000
########################### Group Coordinator Settings #############################
group.initial.rebalance.delay.ms=0
host.name=192.168.11.101
advertised.host.name=192.168.11.101

Copy the configuration file to the other nodes and adjust it

scp server.properties 192.168.11.102:/usr/local/kafka/config/

scp server.properties 192.168.11.103:/usr/local/kafka/config/

# Modify server.properties (192.168.11.102)

broker.id=2
listeners=PLAINTEXT://192.168.11.102:9092
host.name=192.168.11.102
advertised.host.name=192.168.11.102

# Modify server.properties (192.168.11.103)

broker.id=3
listeners=PLAINTEXT://192.168.11.103:9092
host.name=192.168.11.103
advertised.host.name=192.168.11.103

Configure hostname-to-IP resolution

# vim /etc/hosts

192.168.11.101 server1

192.168.11.102 server2

192.168.11.103 server3

# remember to sync this file to the other two nodes

 

Start the service

# bin/kafka-server-start.sh config/server.properties   # start the other two nodes the same way
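With the settings above, Kafka auto-creates topics on first write (auto.create.topics.enable defaults to true), but with num.partitions=1 and a replication factor of 1. To create the logstash topic explicitly with replication across the three brokers, a sketch:

bin/kafka-topics.sh --create --zookeeper 192.168.11.101:2181,192.168.11.102:2181,192.168.11.103:2181 --replication-factor 3 --partitions 3 --topic logstash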

 

Start a producer (192.168.11.101)

bin/kafka-console-producer.sh --broker-list 192.168.11.101:9092 --topic logstash

Type some messages

 

Start a consumer (192.168.11.102)

bin/kafka-console-consumer.sh --bootstrap-server 192.168.11.102:9092 --topic logstash --from-beginning

The messages typed into the producer appear in the consumer, so the Kafka cluster is up and working

 


 

Kibana Installation

Download the package

# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.2-linux-x86_64.tar.gz

Extract

# tar -xf kibana-5.1.2-linux-x86_64.tar.gz -C /usr/local

# mv /usr/local/kibana-5.1.2-linux-x86_64 /usr/local/kibana

Edit the configuration file

# vi /usr/local/kibana/config/kibana.yml

server.port: 5601
server.host: "192.168.11.51"
elasticsearch.url: "http://192.168.11.51:9200"

Start Kibana

# /usr/local/kibana/bin/kibana
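To keep Kibana running after the shell exits, it can be started in the background; a minimal sketch:

# nohup /usr/local/kibana/bin/kibana > /var/log/kibana.log 2>&1 &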

 

Creating an index pattern in Kibana

The pattern must match the index names in ES (for the configurations above, e.g. logstash-* or mime-*) before any logs are displayed

Left menu: Management -> Index Patterns -> Add New

Open http://192.168.11.51:5601 to view the logs

 


 

Install Filebeat

Filebeat can be installed directly with yum.

Configure the yum repository file:

# vim /etc/yum.repos.d/elastic5.repo

[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install

# yum install filebeat

Edit the configuration file

# vi /etc/filebeat/filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/src/logs/crawler-web-cell01-node01/*.log

#------------------------------- Kafka output ----------------------------------
output.kafka:
  hosts: ["192.168.11.101:9092","192.168.11.102:9092","192.168.11.103:9092"]
  topic: logstash
  worker: 1
  max_retries: 3

Start the service

# /etc/init.d/filebeat start
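To confirm that Filebeat is shipping into Kafka, the topic can be tailed from any broker; an illustrative check (run from /usr/local/kafka):

bin/kafka-console-consumer.sh --bootstrap-server 192.168.11.101:9092 --topic logstash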

Multiple-topic configuration, used to classify logs (note that the topic names must match those consumed by kafka_to_es.conf above, i.e. msyslog and msyserrlog)

# vi /etc/filebeat/filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/src/logs/sys.log
  fields:
    level: debug
    log_topics: msyslog
- input_type: log
  paths:
    - /usr/local/src/logs/sys-err.log
  fields:
    level: debug
    log_topics: msyserrlog
#------------------------------- Kafka output ----------------------------------
output.kafka:
  enabled: true
  hosts: ["192.168.11.101:9092","192.168.11.102:9092","192.168.11.103:9092"]
  topic: '%{[fields][log_topics]}'
  worker: 1
  max_retries: 3

Create a test file

# cd /usr/local/src/logs/

Append data to one of the log files configured above (the file must match a prospector path, e.g. sys.log)

# echo "192.168.80.123 - - [19/Oct/2017:13:45:29 +0800] \"GET /mytest/rest/api01/v1.4.0?workIds=10086 HTTP/1.1\" 200 78 \"http://www.baidu.com/s?wd=www\" \"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)\"">> sys.log

Open http://192.168.11.51:9100/ and the new indices should be visible

At this point, data collected by Filebeat is flowing through the Kafka cluster into the ES cluster

Open http://192.168.11.51:5601 to view the logs
