ELK Installation and Deployment

1. Introduction

Official site: https://www.elastic.co/cn/

Official definitive guide: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html

Installation guide: https://www.elastic.co/guide/en/elasticsearch/reference/5.x/rpm.html

The Word version of this article and the installer packages can be downloaded here:

Baidu Netdisk >>
Link: https://pan.baidu.com/s/1KUVq-8o1gjrK-5RCxLSMGw
Extraction code: a14r

 

ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core components, but they are not the whole stack.

Elasticsearch is a real-time full-text search and analytics engine providing three major capabilities: collecting, analyzing, and storing data. It is a scalable, distributed system that exposes its functionality through open interfaces such as a REST API and a Java API, and it is built on top of the Apache Lucene search library.

Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (such as RabbitMQ), and JMX, and it can output data in many ways, including e-mail, WebSockets, and Elasticsearch.

Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve data, and it lets users not only build custom dashboard views of their own data but also query and filter that data in ad-hoc ways.

Redis (REmote DIctionary Server) is a key-value store written by Salvatore Sanfilippo.

Redis is an open-source, network-capable key-value database written in ANSI C and released under the BSD license. It keeps data in memory but can also persist it to disk, and it provides APIs for many languages.

It is often called a data-structure server, because values can be strings, hashes, lists, sets, sorted sets, and other types. In the ELK stack it acts as temporary storage for data in flight.
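In the pipeline described here, that temporary storage is a Redis list acting as a FIFO queue: Filebeat appends events to the tail (the output options later in this guide note that the list type uses RPUSH) and the consumer pops them from the head. A minimal Python sketch of those list semantics, using a plain deque rather than a real Redis client:

```python
from collections import deque

# Stand-in for the Redis list stored under the "filebeat" key.
queue = deque()

def rpush(q, event):
    """Producer side: RPUSH appends the event to the tail of the list."""
    q.append(event)

def lpop(q):
    """Consumer side: popping from the head yields events in arrival order."""
    return q.popleft() if q else None

rpush(queue, "line 1")
rpush(queue, "line 2")
assert lpop(queue) == "line 1"   # first in, first out
assert lpop(queue) == "line 2"
assert lpop(queue) is None       # queue drained
```

Because producers push on one end and consumers pop from the other, the list absorbs bursts of log traffic without losing ordering.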

Filebeat is a log-data shipper for local files. Installed as an agent on a server, Filebeat monitors log directories or specific log files, tails them, and forwards the events to Elasticsearch, Logstash, Kafka, or other outputs for indexing.

2. Architecture Used in This Guide

(architecture diagram)

3. Installation

3.1 Installing Filebeat

Installer package:

filebeat-5.6.0-linux-x86_64.tar.gz

 

tar -xzvf filebeat-5.6.0-linux-x86_64.tar.gz
mv filebeat-5.6.0-linux-x86_64 filebeat
cd filebeat
Edit the configuration file filebeat.yml:
add the log files to collect, and output them to Redis.
##input
#------------------------------ Log prospector --------------------------------
- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  # To fetch all ".log" files from a specific level of subdirectories
  # /var/log/*/*.log can be used.
  # For each file found under this path, a harvester is started.
  # Make sure no file is defined twice as this can lead to unexpected behaviour.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
##output
#------------------------------- Redis output ----------------------------------
output.redis:
  # Boolean flag to enable or disable the output module.
  enabled: true
  # The list of Redis servers to connect to. If load balancing is enabled, the
  # events are distributed to the servers in the list. If one server becomes
  # unreachable, the events are distributed to the reachable servers only.
  hosts: ["192.168.1.110"]
  # The Redis port to use if hosts does not contain a port number. The default
  # is 6379.
  port: 6379
  # The name of the Redis list or channel the events are published to. The
  # default is filebeat.
  key: filebeat
  # The password to authenticate with. The default is no authentication.
  password: xian123
  # The Redis database number where the events are published. The default is 0.
  db: 0
  # The Redis data type to use for publishing events. If the data type is list,
  # the Redis RPUSH command is used. If the data type is channel, the Redis
  # PUBLISH command is used. The default value is list.
  datatype: list
  # The number of workers to use for each host configured to publish events to
  # Redis. Use this setting along with the loadbalance option. For example, if
  # you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
  # host).
  worker: 1
  # If set to true and multiple hosts or workers are configured, the output
  # plugin load balances published events onto all Redis hosts. If set to false,
  # the output plugin sends all events to only one host (determined at random)
  # and will switch to another host if the currently selected one becomes
  # unreachable. The default value is true.
  loadbalance: true
  # The Redis connection timeout in seconds. The default is 5 seconds.
  timeout: 5s
  # The number of times to retry publishing an event after a publishing failure.
  # After the specified number of retries, the events are typically dropped.
  # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
  # all events are published. Set max_retries to a value less than 0 to retry
  # until all events are published. The default is 3.
  #max_retries: 3
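Filebeat publishes each harvested line to the configured `filebeat` key as a JSON document. The sketch below builds a simplified stand-in for such an event; the field names are an illustrative assumption, not the exact Filebeat 5.x schema:

```python
import json

def make_event(message, source, event_type="log"):
    """Build a simplified Filebeat-style event as a JSON string.
    The real schema also carries beat metadata, byte offsets, etc."""
    return json.dumps({
        "@timestamp": "2019-01-01T00:00:00.000Z",  # placeholder timestamp
        "type": event_type,
        "source": source,      # the file the line was read from
        "message": message,    # the raw log line
    })

event = make_event("disk almost full", "/var/log/messages")
decoded = json.loads(event)
assert decoded["message"] == "disk almost full"
assert decoded["source"] == "/var/log/messages"
```

Logstash later decodes these JSON documents when it pulls them out of the Redis list.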

 

Installing Redis

Installer package:

redis-5.0.4.tar.gz


tar -xzvf redis-5.0.4.tar.gz
cd redis-5.0.4
make && make install

vi redis.conf
Set the bind address (bind <ip>) and the password (requirepass "<password>").
Start Redis:
cd src
./redis-server ../redis.conf &
Start Filebeat:
cd /home/elk/filebeat
./filebeat -e -c filebeat.yml > /dev/null &
Check whether Redis is receiving logs:


If data appears, logs are being collected and pushed into Redis; if there is no data, something is wrong with the installation or configuration.
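You can also inspect the queue by hand with redis-cli: LLEN reports the queue depth and LRANGE shows queued entries. This small sketch assembles those command lines from the values configured in filebeat.yml above:

```python
def redis_cli(host, password, *command):
    """Build an argv list for redis-cli against the given server."""
    return ["redis-cli", "-h", host, "-a", password, *command]

# Queue depth of the "filebeat" list:
llen = redis_cli("192.168.1.110", "xian123", "LLEN", "filebeat")
# First few queued events:
lrange = redis_cli("192.168.1.110", "xian123", "LRANGE", "filebeat", "0", "2")

assert llen == ["redis-cli", "-h", "192.168.1.110",
                "-a", "xian123", "LLEN", "filebeat"]
```

Running the `llen` command (for example via `subprocess.run(llen)`) on a machine that can reach the Redis host should print a growing number while Filebeat is shipping events.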

 

Next, install Elasticsearch. Use a machine with plenty of memory, because this component is extremely memory-hungry.

 

Install the JDK:
tar -xvzf jdk-8u91-linux-x64.tar.gz
ln -s /home/elk/jdk1.8.0_91 /usr/local/jdk
Edit the environment variables:
vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=/usr/local/jdk/jre/
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Verify:
# java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)

Installing Elasticsearch
unzip elasticsearch-5.6.3.zip
mv elasticsearch-5.6.3 elasticsearch
cd elasticsearch/config/
vi elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: test-server
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /opt/data/
#
# Path to log files:
#
path.logs: /opt/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.1.100"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
http.cors.enabled: true          
http.cors.allow-origin: "*"        
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
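The split-brain comment in the Discovery section above defines the quorum as the total number of master-eligible nodes / 2 + 1 (integer division). A quick sketch of that calculation:

```python
def minimum_master_nodes(master_eligible):
    """Majority quorum: master-eligible node count // 2 + 1."""
    return master_eligible // 2 + 1

assert minimum_master_nodes(1) == 1
assert minimum_master_nodes(3) == 2
assert minimum_master_nodes(5) == 3
```

With a single-node test cluster like the one in this guide, the quorum is 1, which is why the setting can stay commented out here.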



Tune the system parameters; without these changes you may hit system limits and crash the software:
vi /etc/sysctl.conf
vm.max_map_count=655360
sysctl -p
vi /etc/security/limits.conf
*   soft    nofile  65536
*   hard    nofile  131072
*   soft    nproc   65536
*   hard    nproc   131072
vi /etc/security/limits.d/20-nproc.conf
elk     soft    nproc       65536
Start the component:
cd /home/elk/elasticsearch/bin/
./elasticsearch >> /dev/null &
Verify it is working:
curl http://192.168.1.100:9200/_search?pretty

If the endpoint returns data, Elasticsearch started correctly.
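In the Elasticsearch 5.x _search response, hits.total is the number of matching documents, so a non-zero value confirms logs are being indexed. A sketch that pulls that field out of a response body (the sample JSON is a hand-written stand-in, not captured output):

```python
import json

def total_hits(body):
    """Return hits.total from an Elasticsearch 5.x _search response body."""
    return json.loads(body)["hits"]["total"]

sample = '''{
  "took": 2,
  "timed_out": false,
  "hits": {"total": 42, "hits": []}
}'''
assert total_hits(sample) == 42
```

To use it against the live endpoint, fetch the body with urllib.request.urlopen("http://192.168.1.100:9200/_search?pretty") and pass it to total_hits.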

 

Installing Logstash:

tar -xzvf logstash-5.3.1.tar.gz
Create a symlink:
ln -s /home/elk/logstash-5.3.1 /usr/local/logstash
Verify:
/usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Create a configuration file:
vi /usr/local/logstash/config/logstash-simple.conf
input { stdin { } }
output {
    stdout { codec => rubydebug }
}
Test by loading the configuration file with Logstash's -f option:
/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-simple.conf


Create a configuration file that reads the log data from Redis:
vi /usr/local/logstash/config/redis-spring.conf
input {
  redis {
    port => "6379"
    host => "192.168.1.110"    # Redis server
    data_type => "list"
    password => "xian123"
    type => "log"
    key => "filebeat"
    db => "0"
  }
}
output {
  elasticsearch {
    hosts => "192.168.1.100:9200"   # Elasticsearch server
    index => "logstash1-%{+YYYY.MM.dd}"
  }
}
Start the service with this configuration file and check the result:
/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/redis-spring.conf
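The %{+YYYY.MM.dd} sprintf pattern in the index option makes Logstash create one index per day. This sketch mirrors the naming (the Joda-style YYYY.MM.dd pattern corresponds to strftime %Y.%m.%d):

```python
import datetime

def daily_index(prefix, day):
    """Mirror Logstash's logstash1-%{+YYYY.MM.dd} daily index naming."""
    return "%s-%s" % (prefix, day.strftime("%Y.%m.%d"))

assert daily_index("logstash1", datetime.date(2019, 4, 1)) == "logstash1-2019.04.01"
```

Daily indices make it easy to expire old logs by deleting whole indices instead of individual documents.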


The data in Redis has been fully consumed.

Verification URL: http://192.168.1.100:9200/_search?pretty

 

Installing the ES plugin (elasticsearch-head)

This step requires internet access.

xz -d node-v8.0.0-linux-x64.tar.xz
tar -xvf node-v8.0.0-linux-x64.tar
cd node-v8.0.0-linux-x64
ln -s /home/elk/node-v8.0.0-linux-x64/bin/node /usr/local/bin/node
ln -s /home/elk/node-v8.0.0-linux-x64/bin/npm /usr/local/bin/npm

tar -xvjf phantomjs-2.1.1-linux-x86_64.tar.bz2
cd /home/elk/phantomjs-2.1.1-linux-x86_64/bin/
ln -s /home/elk/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs

unzip master.zip 
cd elasticsearch-head-master
npm install -g cnpm --registry=https://registry.npm.taobao.org
npm install grunt --save
npm install grunt-contrib-clean --registry=https://registry.npm.taobao.org
npm install grunt-contrib-concat --registry=https://registry.npm.taobao.org
npm install grunt-contrib-watch --registry=https://registry.npm.taobao.org
npm install grunt-contrib-connect --registry=https://registry.npm.taobao.org
npm install grunt-contrib-copy --registry=https://registry.npm.taobao.org
npm install grunt-contrib-jasmine --registry=https://registry.npm.taobao.org

npm install
npm run start &
Check the port status (the default port is 9100):
netstat -anpt | grep 9100
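As an alternative to netstat you can probe the port directly. A small sketch; the demonstration connects to a throwaway listener opened by the script itself rather than to a running head plugin:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a throwaway local listener:
server = socket.socket()
server.bind(("127.0.0.1", 0))   # pick a free ephemeral port
server.listen(1)
port = server.getsockname()[1]
assert port_open("127.0.0.1", port)
server.close()
```

Once elasticsearch-head is up, port_open("127.0.0.1", 9100) on that server should return True.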


Check whether the logs are visible:

 

Installing Kibana

 

tar -xzvf kibana-5.6.8-linux-x86_64.tar.gz

cd kibana-5.6.8-linux-x86_64

Modify the configuration file as needed:

vi config/kibana.yml 



Open in a browser: http://IP:5601

Configure it to fetch the Elasticsearch logs:


After creating the index pattern, go back and check whether logs appear.

 

 

If logs appear here, they are being retrieved and displayed correctly.
