ELK Log Analysis System: Installing and Configuring the Latest Kibana 7.x

3. Introduction to Kibana

Kibana lets you choose freely how to present your data. The core Kibana product ships with a set of classic visualizations: bar charts, line charts, pie charts, sunburst charts, and more.

3.1 Package download address: https://www.elastic.co/cn/downloads/kibana

3.1.1 Extract the tar archive into the /application directory and create a symlink:

ln -s /application/kibana-7.3.2-linux-x86_64 /application/kibana
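The extract-then-symlink convention above can be sketched as follows. To keep the example runnable anywhere, it simulates the layout in a temporary directory; on the real host you would operate on /application, creating the versioned directory by extracting the downloaded tarball (tar -xzf kibana-7.3.2-linux-x86_64.tar.gz -C /application):

```shell
KIBANA_VER="kibana-7.3.2-linux-x86_64"
APP_DIR="$(mktemp -d)"                 # stand-in for /application on the real host
mkdir -p "$APP_DIR/$KIBANA_VER"        # stand-in for the extracted tarball directory
# Versioned directory behind a stable path, so upgrades only need to repoint the link:
ln -s "$APP_DIR/$KIBANA_VER" "$APP_DIR/kibana"
readlink "$APP_DIR/kibana"
```

The stable /application/kibana path lets config references and service scripts survive version upgrades.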

3.2 Edit the Kibana configuration file /application/kibana/config/kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601   # Kibana port

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "10.0.0.169"    # host IP address to bind to

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://10.0.0.169:9200"] # Elasticsearch host IP

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"     # enable this setting (uncommented from the default)

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN . 
i18n.locale: "zh-CN"     # Kibana defaults to English; switch the UI to Chinese
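Stripped of the commented defaults, the kibana.yml changes made in this walkthrough reduce to the following fragment (values as configured above):

```yaml
server.port: 5601
server.host: "10.0.0.169"
elasticsearch.hosts: ["http://10.0.0.169:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"
```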

3.3 Start Kibana

If you start the service as the root user, you must append the --allow-root option.

[root@harlan_ansible ~]# /application/kibana/bin/kibana --allow-root

3.4 Access http://10.0.0.169:5601 in a browser.
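Besides opening the browser, you can check reachability from the command line. This is a sketch assuming the host/port configured above; Kibana exposes a status endpoint at /api/status:

```shell
KIBANA_URL="http://10.0.0.169:5601"    # server.host and server.port from kibana.yml
STATUS_URL="$KIBANA_URL/api/status"
echo "$STATUS_URL"
# On a host that can reach 10.0.0.169, verify Kibana is answering:
#   curl -s "$STATUS_URL" | grep -o '"state":"green"'
```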

 

As shown above, the Kibana service has been installed and configured successfully.

3.5 With Kibana in place, we can install Beats directly on the hosts whose logs we want to collect; there is no need to install the Logstash package on the ELK host.

By editing the Beats configuration file, we can collect logs from different services.

3.5.1 Example: collecting system log files from a client host

Client IP: 10.0.0.175

ELK server IP: 10.0.0.169

Step 1: Download and install Filebeat

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.2-x86_64.rpm
sudo rpm -vi filebeat-7.3.2-x86_64.rpm

Step 2: Edit the configuration

Modify /etc/filebeat/filebeat.yml to set the connection information:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

 
       

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true    # enable this input (changed from the default false)

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log   # default system log path
    - /application/apache-tomcat-8.5.42/logs/*.txt  # manually added log files of the local Tomcat service
    #- c:\programdata\elasticsearch\logs\*

 
       

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

 
       

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

 
       



 
       

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

 
       

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.0.0.169:5601"     # add the Kibana host IP

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

 
       

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

 
       

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.0.0.169:9200"]      # IP address of the Elasticsearch host

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "es"                  # user that runs Elasticsearch
  password: "harlan-herine"       # that user's password

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

 
       

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

 
       

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

 

In the official example, <password> is the password of the elastic user, <es_url> is the Elasticsearch URL, and <kibana_url> is the Kibana URL; in the configuration above these have been filled in with the actual values.
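Stripped of comments and unchanged defaults, the settings modified above in /etc/filebeat/filebeat.yml reduce to the following fragment (values as used in this walkthrough):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /application/apache-tomcat-8.5.42/logs/*.txt

setup.kibana:
  host: "10.0.0.169:5601"

output.elasticsearch:
  hosts: ["10.0.0.169:9200"]
  username: "es"
  password: "harlan-herine"
```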

Step 3: Enable and configure the system module

sudo filebeat modules enable system

Then adjust the settings in /etc/filebeat/modules.d/system.yml as needed.
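For reference, after enabling the module, /etc/filebeat/modules.d/system.yml looks roughly like the sketch below (based on the Filebeat 7.x module template; with var.paths left commented out, the module falls back to the OS default log locations):

```yaml
- module: system
  # Syslog
  syslog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Authorization logs
  auth:
    enabled: true
    #var.paths:
```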

 
Step 4: Start Filebeat

The setup command loads the Kibana dashboards. If the dashboards are already set up, omit this command.

sudo filebeat setup
sudo systemctl start filebeat.service
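After starting Filebeat, you can confirm on the ELK host that an index has been created in Elasticsearch. This is a sketch; the filebeat-7.3.2-* index name pattern is the assumed Filebeat 7.x default:

```shell
ES_HOST="10.0.0.169:9200"            # Elasticsearch address from this setup
INDEX_PATTERN="filebeat-7.3.2-*"     # default Filebeat index name pattern (assumed)
echo "http://${ES_HOST}/_cat/indices/${INDEX_PATTERN}?v"
# Run on a host that can reach Elasticsearch (use the es user's credentials
# since basic auth is configured above):
#   curl -u es:harlan-herine "http://${ES_HOST}/_cat/indices/filebeat-7.3.2-*?v"
```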
 
Step 5: Check the module status

Confirm that data has been successfully received from the Filebeat system module.

Once all the steps are complete, you can explore your data at any time.

3.6 Depending on which services' logs you want to collect, install the corresponding Beats rpm package on the relevant hosts, following the same pattern as above.