1. ELK Architecture
2. System Environment
[Host Information]
IP             Hostname    OS Version
10.10.10.102   console     CentOS 7.5
10.10.10.103   log1        CentOS 7.5
10.10.10.104   log2        CentOS 7.5
[Software Package Versions]
elasticsearch-6.4.0.tar.gz
logstash-6.4.0.tar.gz
kibana-6.4.0-linux-x86_64.tar.gz
node-v8.11.4-linux-x64.tar.gz
elasticsearch-head-master.zip
1. Set up hostname-to-IP mapping
Append the following to the /etc/hosts file on each of the three machines:
10.10.10.102 console
10.10.10.103 log1
10.10.10.104 log2
2. Stop the firewall on all three machines and disable it at boot
# Stop the firewall
systemctl stop firewalld
# Disable the firewall at boot
systemctl disable firewalld
3. Increase the file descriptor limit on all three machines
vim /etc/security/limits.conf
# add the following line to raise the open-file limit for the es user:
es - nofile 65536
4. Increase the virtual memory mmap count on all three machines
vim /etc/sysctl.conf
vm.max_map_count = 262144
# Apply the change
sysctl -p
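To confirm the kernel picked up the new value, you can query it directly (a quick check, not part of the original steps):

sysctl vm.max_map_count
# expected output: vm.max_map_count = 262144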
5. On all three machines, create the es user and the data directory
useradd es
mkdir /esdata
chown -R es:es /esdata
6. Install JDK 1.8 on all three machines
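The original does not show the JDK commands; one common way on CentOS 7 is the OpenJDK packages from yum (an assumption here; an Oracle JDK tarball works just as well):

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
# verify the installation
java -version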
3. Elasticsearch Installation and Configuration
1. On 10.10.10.102, 10.10.10.103, and 10.10.10.104, create the Elasticsearch installation directory and change its owner and group
mkdir -p /usr/local/elasticsearch-6.4.0
chown -R es:es /usr/local/elasticsearch-6.4.0
2. Log in to 10.10.10.102, switch to the es user, and extract elasticsearch-6.4.0.tar.gz into /usr/local/elasticsearch-6.4.0
tar -xf /home/es/elasticsearch-6.4.0.tar.gz
cd elasticsearch-6.4.0
cp -r * /usr/local/elasticsearch-6.4.0
3. Edit the configuration files
The console configuration is as follows (only the active settings are shown; the stock template's commented defaults are omitted):
[es@console config]$ grep -v '^#' /usr/local/elasticsearch-6.4.0/config/elasticsearch.yml
cluster.name: console                    # cluster name
node.name: console                       # node name
node.master: true                        # master-eligible node; set this to false on the other two machines
path.data: /esdata                       # data directory
network.host: 10.10.10.102               # the console machine's IP; the other two machines use their own IPs
network.bind_host: 10.10.10.102          # same as above
network.publish_host: 10.10.10.102       # same as above
http.port: 9200                          # HTTP port
discovery.zen.ping.unicast.hosts: ["10.10.10.102:9300"]   # unicast discovery list; use the same value on the other two machines
discovery.zen.minimum_master_nodes: 1    # minimum number of master-eligible nodes
The log1 configuration is identical except for the following settings:

node.name: log1
node.master: false
network.host: 10.10.10.103
network.bind_host: 10.10.10.103
network.publish_host: 10.10.10.103
The log2 configuration likewise differs only in these settings:

node.name: log2
node.master: false
network.host: 10.10.10.104
network.bind_host: 10.10.10.104
network.publish_host: 10.10.10.104
4. Start Elasticsearch in the background
/usr/local/elasticsearch-6.4.0/bin/elasticsearch -d
After startup, the node runs in the background as a daemon.
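A quick way to verify that it is up (a check added here, not in the original steps) is to query the HTTP endpoint and the cluster health:

curl http://10.10.10.102:9200/
curl http://10.10.10.102:9200/_cluster/health?pretty
# "status" should be "green" once all three nodes have joined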
5. Install the elasticsearch-head plugin
Elasticsearch's native interface returns raw JSON, which is not very user-friendly. Installing the head plugin gives us a proper web UI.
elasticsearch-head download: https://github.com/troub1emaker0911/elasticsearch-head
elasticsearch-head requires Node.js, so we install Node.js first.
[Install Node.js]
First switch to the root user and upload the Node.js package to the console machine.
# Extract Node.js to /usr/local/node-v8.11.4
mkdir -p /usr/local/node-v8.11.4
tar -xf node-v8.11.4-linux-x64.tar.xz -C /usr/local/node-v8.11.4 --strip-components=1
# Create symlinks
ln -s /usr/local/node-v8.11.4/bin/node /usr/local/bin/
ln -s /usr/local/node-v8.11.4/bin/npm /usr/local/bin/
# Verify the installation
node -v
npm -v
[Install the elasticsearch-head plugin]
Switch to the es user and upload the package to the console machine.
# Unzip the package
unzip elasticsearch-head-master.zip
# Move the extracted directory to /usr/local
mv elasticsearch-head-master /usr/local
cd /usr/local/elasticsearch-head-master
npm install
# Start elasticsearch-head in the background
npm run start > /dev/null 2>&1 &
After these steps, open http://10.10.10.102:9100 in a browser to reach the head UI.
At this point, however, the cluster health shows as unavailable, because head cannot talk to Elasticsearch across origins. Append the following to elasticsearch.yml on the console machine:
vim /usr/local/elasticsearch-6.4.0/config/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"

Then restart Elasticsearch so the change takes effect.
Then change the address next to the "Connect" button from the default http://localhost:9200/ to the console address, http://10.10.10.102:9200, and click "Connect". The cluster health indicator should turn green.
6. Create an index
Switch to the "Indices" tab, click "New Index", and enter "book" as the index name.
Then click "Overview" to see the newly created index.
Note the green shard blocks in the overview: blocks with a thick border are primary shards, and those with a thin border are replicas.
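The same index can also be created without the head UI, through the REST API (an equivalent alternative, not part of the original steps):

# create the index
curl -XPUT 'http://10.10.10.102:9200/book?pretty'
# list indices with shard and size information
curl 'http://10.10.10.102:9200/_cat/indices?v'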
7. Install the ik Chinese analysis plugin
elasticsearch-analysis-ik is a Chinese word-segmentation plugin that supports custom dictionaries. Project page: https://github.com/medcl/elasticsearch-analysis-ik
(1) Install Maven
The project is managed with Maven and its source is hosted on GitHub, so install Maven on the server first; that way the project jar can be built directly on the server, which makes deployment easier.
yum install -y maven
(2) Build the ik plugin
The version installed here is 6.3.0.
git clone https://github.com/medcl/elasticsearch-analysis-ik.git
[es@console ~]$ cd elasticsearch-analysis-ik/
[es@console elasticsearch-analysis-ik]$ mvn package
(3) Copy and extract
[es@console elasticsearch-analysis-ik]$ mkdir -p /usr/local/elasticsearch/plugins/ik
[es@console elasticsearch-analysis-ik]$ cp target/releases/elasticsearch-analysis-ik-6.3.0.zip /usr/local/elasticsearch/plugins/ik
[es@console ~]$ cd /usr/local/elasticsearch/plugins/ik/
[es@console ik]$ unzip -oq elasticsearch-analysis-ik-6.3.0.zip
(4) Restart Elasticsearch
[es@console ik]$ cd /usr/local/elasticsearch/bin/
[es@console bin]$ jps
20221 Jps
14910 Elasticsearch
[es@console bin]$ kill -9 14910
[es@console elasticsearch]$ bin/elasticsearch -d
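To check that the ik plugin is loaded after the restart, you can run a test analysis through the _analyze API; ik_max_word is one of the analyzers the plugin registers (the sample text is arbitrary):

curl -XPOST 'http://10.10.10.102:9200/_analyze?pretty' -H 'Content-Type: application/json' -d '
{
  "analyzer": "ik_max_word",
  "text": "中華人民共和國"
}'

The response lists the Chinese tokens produced by the segmenter.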
Note: the following URL lists the cluster's nodes. The result is raw JSON and hard to read, but it can be pretty-printed.
http://10.10.10.102:9200/_nodes
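Appending the pretty parameter makes Elasticsearch format the response for readability:

curl 'http://10.10.10.102:9200/_nodes?pretty'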
4. Logstash Installation and Configuration
Logstash is a powerful data-processing tool. It handles data transport, format processing, and formatted output, and it has a rich plugin ecosystem; it is most commonly used for log processing.
Logstash works in three stages: input -> filter -> output.
1. Install Logstash
# As the es user, extract the package to the target directory
tar -xf logstash-6.4.0.tar.gz -C /usr/local
That completes the Logstash installation.
2. Logstash overview
Logstash is an open-source log-management program that accepts data from many sources (input), filters out the data you want (filter), and stores it elsewhere (output). It has three basic plugin types: input, filter, and output; a minimal Logstash pipeline must have an input and an output.
How Logstash works:
Logstash processes data in three stages: input -> filter -> output. Inputs produce events, filters modify them according to the rules you define, and outputs write them to the storage destination you choose.
Inputs:
Event producers. Several common inputs:
file: reads from the filesystem, similar to tail -0F
syslog: listens on port 514 for syslog messages in RFC3164 format
redis: reads from a redis server, using redis channels and lists
beats: lightweight agents that collect data themselves and forward it to Logstash; filebeat is the most common. A minimal beats input is sketched below.
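For example (port 5044 is the conventional beats listening port, assumed here):

input {
  beats {
    port => 5044    # conventional beats listening port
  }
}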
Filters:
Filters act as a processing pipeline, filtering events one by one according to the rules you define. Common filters:
grok: parses unstructured text into a structured format.
mutate: rich handling of basic types, including type conversion, string manipulation, and field manipulation.
drop: drops a subset of events without processing them, e.g. debug events.
clone: copies an event; fields can be added or removed in the process.
geoip: adds geographic information (used by Kibana for map visualizations).
Outputs:
elasticsearch: Elasticsearch receives and stores the data and makes it available to Kibana for display.
stdout: standard output; prints events directly to the screen.
3. Logstash examples
bin/logstash -e 'input { stdin { } } output { stdout {} }'
Type some text on the command line and Logstash will echo it back in structured form:
[es@console logstash-6.4.0]$ bin/logstash -e 'input { stdin { } } output { stdout {} }'
hello world
Sending Logstash logs to /usr/local/logstash-6.4.0/logs which is now configured via log4j2.properties
[2018-09-14T22:33:52,155][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-09-14T22:33:54,402][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-09-14T22:34:00,577][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-14T22:34:00,931][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3f32496 run>"}
The stdin plugin is now waiting for input:
[2018-09-14T22:34:01,199][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
{
      "@version" => "1",
       "message" => "hello world",
    "@timestamp" => 2018-09-14T14:34:01.245Z,
          "host" => "console"
}
[2018-09-14T22:34:02,693][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Now run another command:
bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
Then type helloworld and look at the output:
[es@console logstash-6.4.0]$ bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
helloworld
Sending Logstash logs to /usr/local/logstash-6.4.0/logs which is now configured via log4j2.properties
[2018-09-12T03:07:33,884][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-09-12T03:07:36,017][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-09-12T03:07:43,294][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-12T03:07:43,646][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7cefe25 run>"}
The stdin plugin is now waiting for input:
[2018-09-12T03:07:43,872][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
{
          "host" => "console",
      "@version" => "1",
    "@timestamp" => 2018-09-11T19:07:43.813Z,
       "message" => "helloworld"
}
[2018-09-12T03:07:45,292][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
By reconfiguring the "stdout" output (adding a "codec" parameter), we changed how Logstash presents its output. Likewise, by adding or modifying inputs, outputs, and filters in your configuration file, you can shape log data into whatever format suits you, producing a storage format that is convenient to query.
As noted earlier, Logstash must have at least one input and one output; the example above reads from the terminal and writes back to it.
Data flows between threads as events. Don't call them lines, because Logstash can handle multi-line events.
input {
  # Input section; any of the input methods mentioned above can be used here.
  # stdin {} reads from standard input, file {} reads from a file.
  # Input plugins: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
}
output {
  # Output section; Logstash's job is to process the data, and the examples above
  # show its simplest form, formatted output to the terminal.
  # Output plugins: https://www.elastic.co/guide/en/logstash/current/output-plugins.html
}
Logstash configuration files and command-line options:
Logstash's default configuration is sufficient for our purposes. Since version 5.0 it uses a logstash.yml file, and some command-line parameters can be written straight into that YAML file.
--configtest or -t: tests whether the Logstash configuration syntax is valid; a very handy option.
--log or -l: Logstash logs to standard output by default; this option specifies where to store the log file.
--pipeline-workers or -w: the number of pipeline threads running filters and outputs; the default is usually fine.
-f: specifies the pipeline config file; keep your files wherever you like and run them with -f.
A simple example of Logstash reading its pipeline from a file:
vim file.conf    # file.conf can be placed anywhere
input {
  stdin { }
}
output {
  stdout {
    codec => rubydebug
  }
}

bin/logstash -f /root/conf/file.conf    # start Logstash with it
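Building on the same structure, here is a sketch of a pipeline that tails a log file and ships events to the Elasticsearch cluster built above (the log path and index name are illustrative assumptions, not from the original):

input {
  file {
    path => "/var/log/messages"        # hypothetical log file to tail
    start_position => "beginning"      # read from the start on the first run
  }
}
output {
  elasticsearch {
    hosts => ["10.10.10.102:9200"]     # the console node from this setup
    index => "syslog-%{+YYYY.MM.dd}"   # illustrative daily index name
  }
}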
4. Plugins
(1) The grok plugin
Grok is Logstash's most important plugin. You can define custom named patterns in grok and then reference them from the grok match parameter or from other regular expressions.
Roughly 120 default patterns ship with the project: https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
The first line defines a grok pattern with an ordinary regular expression; the second defines a new pattern by referencing the one defined before it.
Grok patterns are referenced in the following format:
%{SYNTAX:SEMANTIC}
SYNTAX: the name of the pattern your text is matched against; for example, 3.14 matches the NUMBER pattern and 55.1.1.2 matches the IP pattern.
SEMANTIC: the identifier assigned to the matched text; for example, once 3.14 is matched, SEMANTIC refers to that value, 3.14.
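For instance, combining the two built-in patterns just mentioned (the field names client and size are illustrative):

# pattern:
%{IP:client} %{NUMBER:size}
# input line:  55.1.1.2 3.14
# result:      client => "55.1.1.2", size => "3.14"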
Matched data is of type string by default, but you can also convert it, like this:
%{NUMBER:num:int}
Currently only conversion to int and float is supported.
For example:
[es@console config]$ more file.conf
input {
  stdin { }
}
filter {
  grok {
    match => { "message" => "%{WORD} %{NUMBER:request_time:float} %{WORD}" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Then run Logstash:
[es@console logstash-6.4.0]$ bin/logstash -f /usr/local/logstash-6.4.0/config/file.conf
The result:
monkey 12.12 beta
{
         "message" => "monkey 12.12 beta",
        "@version" => "1",
      "@timestamp" => 2018-09-17T08:18:42.416Z,
            "host" => "console",
    "request_time" => 12.12
}
We have matched the value we wanted and named it request_time.
In production, writing patterns line by line in the config file is impractical. It is better to collect all grok patterns in one place and reference them with the patterns_dir option.
grok {
  patterns_dir => "/root/conf/nginx"    # the file containing your custom grok patterns
  match => { "message" => "%{CDN_FORMAT}" }
  add_tag => ["CDN"]
}
In practice, collected logs contain plenty we don't need; we can delete some fields and keep only the parts we want.
grok {
  match => { "message" => "%{WORD} %{NUMBER:request_time:float} %{WORD}" }
  remove_field => [ "request_time" ]
  overwrite => [ "message" ]
}

as 12 as
{
    "@timestamp" => 2017-02-08T06:39:07.921Z,
      "@version" => "1",
          "host" => "0.0.0.0",
       "message" => "as 12 as"
}
The request_time field is gone.
For more on grok, see the official documentation: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
Most important of all: I strongly recommend that everyone use the Grok Debugger to test their grok patterns.
(2) The kv plugin
The kv filter parses key=value pairs (query strings, URL parameters, and similar formats) into individual event fields.
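A minimal sketch, assuming query-string-style input such as a=1&b=2 (the separators are configurable):

filter {
  kv {
    source => "message"     # the field to parse; message is the default
    field_split => "&"      # pairs are separated by &
    value_split => "="      # keys and values are separated by =
  }
}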
(3) The geoip plugin
geoip looks up the geographic location of an IP address, typically to determine where site visitors come from.
[es@console config]$ more file.conf
input {
  stdin { }
}
filter {
  grok {
    match => { "message" => "%{WORD} %{NUMBER:request_time:float} %{WORD}" }
  }
  geoip {
    source => "clientip"
    fields => [ "ip","city_name","country_name","location" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Reference: https://www.cnblogs.com/blogjun/articles/8064646.html
5. Kibana Installation and Configuration
Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. You can use Kibana to search, view, and interact with the data stored in Elasticsearch indices, and present it with a variety of charts, tables, and maps, making advanced data analysis and visualization easy.
Kibana makes large volumes of data easy to understand. Its simple, browser-based interface lets you quickly build and share dynamic dashboards that reflect Elasticsearch queries in real time.
In short, the workflow is: the logstash agent monitors and filters logs, the logstash indexer gathers them and hands them to the full-text search service Elasticsearch, Elasticsearch serves custom searches, and Kibana combines those searches into a web presentation layer.
1. Install Kibana
# Create the installation directory
mkdir -p /usr/local/kibana-6.4.0
# Extract the package
tar -xf kibana-6.4.0-linux-x86_64.tar.gz
# Copy the extracted files into the installation directory and fix its owner and group
cp -r kibana-6.4.0-linux-x86_64/* /usr/local/kibana-6.4.0
chown -R es:es /usr/local/kibana-6.4.0
2. Configure and start Kibana
Edit the Kibana configuration file, kibana.yml. The configured result (only the active settings are shown; the stock template's commented defaults are omitted):
[root@console config]# more /usr/local/kibana-6.4.0/config/kibana.yml
server.port: 5601                               # Kibana port
server.host: "10.10.10.102"                     # IP of the host Kibana runs on
server.name: "console"                          # Kibana server name, used for display purposes
elasticsearch.url: "http://10.10.10.102:9200"   # address and port of the Elasticsearch instance to query
Start Kibana:
cd /usr/local/kibana-6.4.0
./bin/kibana
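The command above runs Kibana in the foreground; unlike elasticsearch, the kibana script has no -d daemon flag, so a common way to keep it running after logout is nohup (an operational convenience, not part of the original steps):

nohup ./bin/kibana > /dev/null 2>&1 &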
Once it starts successfully, open http://10.10.10.102:5601 in a browser to reach the Kibana UI.
The following address shows Kibana's status and resource usage:
10.10.10.102:5601/status