Multiple independent agents (Shippers) collect data from different sources, while a central agent (Indexer) aggregates and analyzes it. A Broker (implemented with Redis) sits in front of the central agent as a buffer; Elasticsearch behind the central agent stores and searches the data; and Kibana on the front end provides rich charting.
Shipper: log collection. Logstash gathers log data from all kinds of sources: syslog, files, Redis, message queues, and so on.
Broker: the buffer between the remote agents and the central agent, implemented with Redis. It improves both performance and reliability: if the central agent fails to pull data, the data stays in Redis instead of being lost.
Indexer: the central agent, also Logstash. It pulls data from the Broker and can run analysis and processing on it (Filter).
Elasticsearch stores the final data and provides search.
Kibana provides a simple, rich web UI on top of the Elasticsearch data, supporting all kinds of queries, statistics, and visualizations.
System | IP | Role
CentOS 7 | 192.168.18.171 | Logstash
CentOS 6.5 | 192.168.18.186 | ES + Kibana
(Logstash is deployed on the machine at 192.168.18.171.)
input|decode|filter|encode|output
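These stages map one-to-one onto the sections of a Logstash config file; decode and encode are handled by codecs attached to the input and output plugins. A minimal sketch (the added field name is made up for illustration):

```
input  { stdin { codec => plain } }                        # collect + decode
filter { mutate { add_field => { "stage" => "demo" } } }   # process/filter
output { stdout { codec => rubydebug } }                   # encode + ship
```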
If you install on different machines, set up the Java environment the same way as in step 1 of the Logstash install.
(This article deploys on separate machines; the configuration below is done on the machine at 192.168.123.3.)
1. Install the Java environment
[root@hxy ~]# yum install java-1.8.0-openjdk
2. Download and install the GPG key
[root@hxy ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
3. Configure the yum repository
[root@hxy ~]# cat >/etc/yum.repos.d/elasticsearch.repo<<EOF
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
4. Install Elasticsearch
[root@hxy ~]# yum install elasticsearch -y
5. Raise the system limits in limits.conf
A few parameters have to be changed, or startup will fail.
vim /etc/security/limits.conf
Append the following at the end (* means the startup user; you can also name a specific user):
* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 2048
* soft memlock unlimited
* hard memlock unlimited
One more file needs changing:
vim /etc/security/limits.d/90-nproc.conf
Change the 1024 in it to 2048 (ES requires at least 2048):
* soft nproc 2048
Note: these only take effect after re-login or reboot; if startup still fails, try rebooting the VM.
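After logging back in, you can confirm the limits a new shell actually sees (the values printed depend on your host, so no expected output is shown):

```shell
# Soft limits that apply to processes started from this shell
echo "open files (nofile): $(ulimit -Sn)"
echo "max user processes (nproc): $(ulimit -Su)"
```

If nofile still prints 4096 or nproc prints 1024, the limits.conf change has not taken effect yet.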
6. Create the data directory and grant ownership
[root@hxy ~]# mkdir -p /data/es-data
[root@hxy ~]# chown -R elasticsearch:elasticsearch /data/es-data/
7. Configure elasticsearch.yml
[root@hxy ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: demon # cluster name
node.name: elk-1 # node name
path.data: /data/es-data # data directory (separate multiple directories with commas)
path.logs: /var/log/elasticsearch # log path
bootstrap.memory_lock: false # lock memory so it is never swapped out; kept off here, because with true ES would not start on this box (either no log at all, or "memory locking requested for elasticsearch process but memory is not locked"), which took a long time to track down
bootstrap.system_call_filter: false # CentOS 6 does not support SecComp, while ES 5.2.0 defaults this check to true, so the check fails and ES cannot start
network.host: 192.168.18.186 # this machine's IP address
http.port: 9200 # default port 9200
http.cors.allow-origin: "*"
# Review the non-comment settings
[root@hxy ~]# grep -Ev "^#|^$" /etc/elasticsearch/elasticsearch.yml
path.data: /data/es-data
path.logs: /var/log/elasticsearch/
bootstrap.system_call_filter: false
http.port: 9200
http.cors.allow-origin: "*"
8. Configure the JVM heap
Change 2g to 512m (the default is 2g; this lab VM has less than 2 GB of RAM, and ES would fail to start otherwise).
vim /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g
# change to
-Xms512m
-Xmx512m
9. Start Elasticsearch
[root@hxy ~]# /etc/init.d/elasticsearch restart
Stopping elasticsearch: [FAILED]
Starting elasticsearch: [ OK ]
10. Verify the startup
Check the process:
[root@hxy ~]# ps -ef|grep ela
496 2458 1 7 14:49 ? 00:00:46 /usr/bin/java -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid -d -Edefault.path.logs=/var/log/elasticsearch -Edefault.path.data=/var/lib/elasticsearch -Edefault.path.conf=/etc/elasticsearch
root 2835 1774 0 14:59 pts/0 00:00:00 grep ela
Check the port:
[root@hxy ~]# netstat -natp |grep 9200
tcp 0 0 :::9200 :::* LISTEN 2458/java
11. Access test (request port 9200 to confirm the service responds)
# From Linux:
[root@hxy ~]# curl http://127.0.0.1:9200/
{
"name" : "elk-1",
"cluster_name" : "demon",
"cluster_uuid" : "0oT4R0FgSNuymd7KrAF8tw",
"version" : {
"number" : "5.6.8",
"build_hash" : "688ecce",
"build_date" : "2018-02-16T16:46:30.010Z",
"build_snapshot" : false,
"lucene_version" : "6.6.1"
},
"tagline" : "You Know, for Search"
}
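Individual fields can be pulled out of that response with nothing but grep and cut. A small sketch, run against a trimmed copy of the JSON above rather than a live cluster:

```shell
# Trimmed copy of the cluster-info response shown above
resp='{"name":"elk-1","cluster_name":"demon","version":{"number":"5.6.8"}}'
# Extract the version number (field 4 after splitting on double quotes)
version=$(echo "$resp" | grep -o '"number":"[^"]*"' | cut -d'"' -f4)
echo "$version"   # prints: 5.6.8
```

Against the live node, replace the resp= line with resp=$(curl -s http://127.0.0.1:9200/).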
12. From Windows, access it by opening http://192.168.18.186:9200 in a browser.
13. Interacting with Elasticsearch
Java API
RESTful API
Clients for JavaScript, .NET, PHP, Perl, Python
Use the API to check status:
[root@hxy ~]# curl -i -XGET 'localhost:9200/_count?pretty'
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 95
{
"count" : 0,
"_shards" : {
"total" : 0,
"successful" : 0,
"failed" : 0
}
}
Installing the elasticsearch-head plugin
elasticsearch-head lets you run all kinds of operations against ES: queries, deletes, browsing indices, and so on.
Either the Docker image or a GitHub checkout of the elasticsearch-head project works; pick one of the two methods below.
1. Use the prebuilt elasticsearch-head Docker image
# docker run -p 9100:9100 mobz/elasticsearch-head:5
Once the container is downloaded and running, open http://localhost:9100/ in a browser.
2. Install elasticsearch-head from Git
# yum install -y npm
# git clone https://github.com/mobz/elasticsearch-head.git
# cd elasticsearch-head
# npm install
# npm run start
Check that the port is listening:
netstat -antp |grep 9100
Then browse to it to confirm everything works:
http://IP:9100/
1. Install Logstash
Official install guide:
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
Import the GPG key for the yum repository:
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Install Logstash with yum:
# yum install -y logstash
Check where Logstash was installed:
# rpm -ql logstash
Create a symlink so you don't have to type the full path every time (it installs under /usr/share by default):
ln -s /usr/share/logstash/bin/logstash /bin/
Run a Logstash command to test it:
# logstash -e 'input { stdin { } } output { stdout {} }'
Once it is running, type:
nihao
The result returned on stdout:
(The configuration for storing logs into ES comes later.)
Note:
-e run the config string given on the command line
input { stdin } read events from standard input
output { stdout } write events to standard output
經過rubydebug來輸出下更詳細的信息
# logstash -e 'input { stdin { } } output { stdout {codec => rubydebug} }'
執行成功輸入:
nihao
stdout輸出的結果:
6. 運行測試
若是標準輸出還有elasticsearch中都須要保留應該怎麼玩,看下面
[root@hxy conf.d]# /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["192.168.18.186:9200"] } stdout { codec => rubydebug }}'
Once it is running, type:
hello
(Startup takes quite a while.)
The result returned on stdout:
7. Running Logstash from a configuration file:
https://www.elastic.co/guide/en/logstash/current/configuration.html
Create a configuration file (the docs use 01-logstash.conf; here it is test.conf). Starting from a file gives the same result as -e:
# vim /etc/logstash/conf.d/test.conf
Add the following to the file:
input { stdin { } }
output {
elasticsearch { hosts => ["192.168.18.186:9200"] }
stdout { codec => rubydebug }
}
Run Logstash with the configuration file:
# logstash -f ./test.conf
Once it is running, type some input and watch the stdout result.
Logstash plugin types
1. Input plugins
Reference: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
Using the file plugin
# vim /etc/logstash/conf.d/elk.conf
[root@hxy ~]# cat /etc/logstash/conf.d/elk.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["192.168.18.186:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
Run Logstash with the elk.conf file to collect and filter:
# Note: if the config file has an error, type it in by hand instead of pasting, because you won't know where the paste went wrong. Mine failed precisely because of a copy-paste error that took ages to track down, so don't be lazy.
#logstash -f /etc/logstash/conf.d/elk.conf
[root@hxy conf.d]# logstash -f /etc/logstash/conf.d/elk.conf
Now collect the security log too, and store each log type under its own index; keep editing elk.conf:
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.18.186:9200"]
index => "zabbix-system-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.18.186:9200"]
index => "zabbix-secure-%{+YYYY.MM.dd}"
}
}
}
Logstash setup is complete.
With all of that in place, install Kibana next so the data can be shown on the front end.
Installing and using Kibana
Set up the Kibana environment
Official install guide: https://www.elastic.co/guide/en/kibana/current/install.html
Download the Kibana tar.gz package:
# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.4.0-linux-x86_64.tar.gz
Unpack the tarball:
# tar -xzf kibana-5.4.0-linux-x86_64.tar.gz
Move the unpacked directory into place:
# mv kibana-5.4.0-linux-x86_64 /usr/local
Create a symlink for Kibana:
# ln -s /usr/local/kibana-5.4.0-linux-x86_64/ /usr/local/kibana
Edit the Kibana configuration file:
# vim /usr/local/kibana/config/kibana.yml
Enable and adjust the following settings:
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.18.186:9200"
kibana.index: ".kibana"
Install screen so Kibana can run in the background (optional; any other way of backgrounding it works too):
# yum -y install screen
# screen
[root@hxy ~]# grep -Ev '^$|^#' /usr/local/kibana/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.18.186:9200"
kibana.index: ".kibana"
# /usr/local/kibana/bin/kibana
netstat -antp |grep 5601
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 37134/node
Open a browser and create the matching index pattern:
http://192.168.18.186:5601
Type the index name in and you're done.
Now that index patterns can be created, let's ship the nginx, apache, messages, and secure logs for display. 1. Nginx: edit the config if nginx is already installed, otherwise install it first.
Edit the nginx configuration and add the following (inside the http block):
log_format json '{"@timestamp":"$time_iso8601",'
'"@version":"1",'
'"client":"$remote_addr",'
'"url":"$uri",'
'"status":"$status",'
'"domain":"$host",'
'"host":"$server_addr",'
'"size":"$body_bytes_sent",'
'"responsetime":"$request_time",'
'"referer":"$http_referer",'
'"ua":"$http_user_agent"'
'}';
Change access_log to use the json format just defined:
access_log logs/elk.access.log json;
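The payoff of this format is that every access-log line is one self-contained JSON object, so it can be inspected without nginx running. A sketch using an invented sample line (all field values are made up):

```shell
# Invented sample line in the log_format json defined above
line='{"@timestamp":"2018-04-16T16:50:02+08:00","@version":"1","client":"1.2.3.4","url":"/index.html","status":"200"}'
# Pull out a single field; the key name matches the log_format definition
status=$(echo "$line" | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
echo "status=$status"   # prints: status=200
```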
Edit the Logstash configuration to collect these logs:
vim /etc/logstash/conf.d/full.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
file {
path => "/var/log/nginx/elk.access.log"
type => "nginx"
start_position => "beginning"
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.18.186:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.18.186:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.18.186:9200"]
index => "nagios-nginx-%{+YYYY.MM.dd}"
}
}
}
Check the results in Kibana.
2. Install Logstash on CentOS 7 (same procedure as on 6.5). Edit Apache's config if it is installed, otherwise install it first.
Configure Apache
Edit the Apache configuration file:
vim /etc/httpd/conf/httpd.conf
LogFormat "{ \
\"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
\"@version\": \"1\", \
\"tags\":[\"apache\"], \
\"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
\"clientip\": \"%a\", \
\"duration\": %D, \
\"status\": %>s, \
\"request\": \"%U%q\", \
\"urlpath\": \"%U\", \
\"urlquery\": \"%q\", \
\"bytes\": %B, \
\"method\": \"%m\", \
\"site\": \"%{Host}i\", \
\"referer\": \"%{Referer}i\", \
\"useragent\": \"%{User-agent}i\" \
}" ls_apache_json
Likewise, switch the log output to the JSON format defined above:
CustomLog logs/access_log ls_apache_json
Restart Apache:
systemctl restart httpd
Start Logstash:
logstash -f /etc/logstash/conf.d/apa.conf
Note: my CentOS 7 box was a fresh install with the firewall still running, so it had to be stopped:
systemctl stop firewalld.service
Check the Kibana page and the results are there.
All the log indices now exist; go to Kibana, create the index patterns as before, and look at the dashboards.
A quick Redis primer
https://www.cnblogs.com/idiotgroup/p/5575236.html
Everything from here on I have not done myself (or not done successfully); it is copied from the original post. Read on if you're interested.
Next, shipping the MySQL slow-query log
The slow-log format is unusual, so it needs regex matching, plus the multiline codec to group multi-line entries (see the config):
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
file {
path => "/var/log/httpd/access_log"
type => "http"
start_position => "beginning"
}
file {
path => "/usr/local/nginx/logs/elk.access.log"
type => "nginx"
start_position => "beginning"
}
file {
path => "/var/log/mysql/mysql.slow.log"
type => "mysql"
start_position => "beginning"
codec => multiline {
pattern => "^# User@Host:"
negate => true
what => "previous"
}
}
}
filter {
grok {
match => { "message" => "SELECT SLEEP" }
add_tag => [ "sleep_drop" ]
tag_on_failure => []
}
if "sleep_drop" in [tags] {
drop {}
}
grok {
match => { "message" => "(?m)^# User@Host: %{USER:User}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:Client_IP})?\]\s.*# Query_time: %{NUMBER:Query_Time:float}\s+Lock_time: %{NUMBER:Lock_Time:float}\s+Rows_sent: %{NUMBER:Rows_Sent:int}\s+Rows_examined: %{NUMBER:Rows_Examined:int}\s*(?:use %{DATA:Database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<Query>(?<Action>\w+)\s+.*)\n# Time:.*$" }
}
date {
match => [ "timestamp", "UNIX" ]
remove_field => [ "timestamp" ]
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
if [type] == "http" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-http-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-nginx-%{+YYYY.MM.dd}"
}
}
if [type] == "mysql" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-mysql-slow-%{+YYYY.MM.dd}"
}
}
}
Check the result: each slow query now shows up as a single event (without the regex matching, every line would become its own event).
Analyze your own log output requirements the same way, case by case.
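The grouping that the multiline codec performs can be imitated with awk, which makes it easy to see why one slow query becomes one event: every line that does not start with `# User@Host:` gets folded into the previous record. The log lines below are invented:

```shell
# Two invented slow-log entries, several lines each
printf '%s\n' \
  '# User@Host: root[root] @ localhost []' \
  '# Query_time: 3.2  Lock_time: 0.0' \
  'SELECT SLEEP(3);' \
  '# User@Host: app[app] @ web1 []' \
  'SELECT 1;' |
awk '/^# User@Host:/ { if (rec != "") print "EVENT: " rec; rec = $0; next }
     { rec = rec " | " $0 }
     END { if (rec != "") print "EVENT: " rec }'
```

This prints exactly two EVENT lines, one per query, which mirrors what pattern, negate, and what => "previous" produce inside Logstash.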
Install Redis
# yum install -y redis
Edit the Redis configuration file:
# vim /etc/redis.conf
Change the following:
daemonize yes
bind 192.168.1.202
Start the Redis service:
# /etc/init.d/redis restart
Check that Redis came up:
# redis-cli -h 192.168.1.202
Run info; if it answers without errors, Redis is working:
redis 192.168.1.202:6379> info
redis_version:2.4.10
....
Create the redis-out.conf file to push standard-input events into Redis:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input {
stdin {}
}
output {
redis {
host => "192.168.1.202"
port => "6379"
password => 'test'
db => '1'
data_type => "list"
key => 'elk-test'
}
}
Run Logstash with the redis-out.conf file:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
Once it is running, type some input, then check Redis to see the effect.
Create the redis-in.conf file to read the data stored in Redis and push it into Elasticsearch:
# vim /etc/logstash/conf.d/redis-in.conf
Add the following:
input{
redis {
host => "192.168.1.202"
port => "6379"
password => 'test'
db => '1'
data_type => "list"
key => 'elk-test'
batch_count => 1 # how many events to pull from the list per read; the default is 125, and if Redis holds fewer than 125 the input errors out, so set it to 1 while testing
}
}
output {
elasticsearch {
hosts => ['192.168.1.202:9200']
index => 'redis-test-%{+YYYY.MM.dd}'
}
}
Run Logstash with the redis-in.conf file:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-in.conf
Now change the earlier configuration so that every monitored log source is written to Redis first, and then flows from Redis into Elasticsearch.
Edit full.conf as follows:
input {
file {
path => "/var/log/httpd/access_log"
type => "http"
start_position => "beginning"
}
file {
path => "/usr/local/nginx/logs/elk.access.log"
type => "nginx"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
if [type] == "http" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_http'
}
}
if [type] == "nginx" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_nginx'
}
}
if [type] == "secure" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_secure'
}
}
if [type] == "system" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_system'
}
}
}
Run Logstash with the shipper config (full.conf):
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/full.conf
Check in Redis that the data was written (if the source files produce no new log lines, nothing shows up in Redis either).
Read the data back out of Redis and write it into Elasticsearch (this needs another host):
Edit the configuration file:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input {
redis {
type => "system"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_system'
batch_count => 1
}
redis {
type => "http"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_http'
batch_count => 1
}
redis {
type => "nginx"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_nginx'
batch_count => 1
}
redis {
type => "secure"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_secure'
batch_count => 1
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "http" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-http-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-nginx-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
}
Note:
input reads what was collected on the clients;
output writes to the Elasticsearch on 192.168.1.202 as well. To store on the current host instead, change hosts in the output to localhost; to view it in Kibana you would then deploy Kibana locally too. The point of this split is loose coupling.
Put simply: collect logs on the clients, write them into a Redis on the server (or a local one), and have the output side feed the ES server.
Run it and watch the effect:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
The result is the same as writing straight to ES, except the logs are stored in Redis first and then read back out of Redis.
1. Log classification
system logs: rsyslog → logstash syslog input
access logs: nginx → logstash json codec
error logs: file → logstash multiline
application logs: file → logstash json codec
device logs: syslog → logstash syslog input
debug logs: file → logstash json or multiline
2. Log standardization
paths: fixed
format: JSON wherever possible
3. Rollout order: system logs → error logs → application logs → access logs
Because ES keeps logs forever, old indices need to be deleted periodically; the command below deletes indices older than a given number of days:
curl -X DELETE http://xx.xx.com:9200/logstash-*-`date +%Y-%m-%d -d "-$n days"`
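The backticks in that command only compute a date string, so it is worth printing the target index name before firing the DELETE. Note that the indices created in this post use dots in the date (%{+YYYY.MM.dd}), so the date format has to match; n and the index prefix are whatever your setup uses:

```shell
# Keep n days of logs; print which index the DELETE would target
n=7
old=$(date +%Y.%m.%d -d "-$n days")
echo "would delete: logstash-*-$old"
```

Only once the printed name looks right would you substitute it into the curl -X DELETE call.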
(Adapted from the original post.)
Finally, problems hit during installation and use, with their fixes:
1.memory locking requested for elasticsearch process but memory is not locked
[1]: memory locking requested for elasticsearch process but memory is not locked
[2]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2018-04-16T16:50:25,427][INFO ][o.e.n.Node ] [elk-1] stopping ...
[2018-04-16T16:50:25,457][INFO ][o.e.n.Node ] [elk-1] stopped
[2018-04-16T16:50:25,457][INFO ][o.e.n.Node ] [elk-1] closing ...
[2018-04-16T16:50:25,481][INFO ][o.e.n.Node ] [elk-1] closed
If you hit the errors above, you still need to configure /etc/security/limits.conf:
append the lines below at the end of the file (* means all users):
* soft memlock unlimited
* hard memlock unlimited
2.system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Fix:
CentOS 6 does not support SecComp, while ES 5.2.0 defaults bootstrap.system_call_filter to true.
Disable it: set bootstrap.system_call_filter to false in elasticsearch.yml, below the Memory settings:
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
2.1 Cannot allocate memory
[2018-04-16T16:50:02,348][WARN ][o.e.b.JNANatives ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2018-04-16T16:50:02,348][WARN ][o.e.b.JNANatives ] This can result in part of the JVM being swapped out.
[2018-04-16T16:50:02,348][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-04-16T16:50:02,349][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
Fix:
vim /etc/security/limits.conf // add:
* soft memlock unlimited
* hard memlock unlimited
3.max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
Fix:
vim /etc/sysctl.conf // add:
fs.file-max = 1645037
vm.max_map_count=655360
4.max number of threads [1024] for user [es] likely too low, increase to at least [2048]
Cause: native threads cannot be created because the per-user thread limit is too low.
Fix: as root, edit the 90-nproc.conf file under the limits.d directory:
vi /etc/security/limits.d/90-nproc.conf
Find:
* soft nproc 1024
# and change it to
* soft nproc 2048
5.max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
Cause: the maximum virtual memory is too low.
Fix: as root, edit sysctl.conf:
vi /etc/sysctl.conf
Add:
vm.max_map_count=655360
Then run:
sysctl -p
and restart elasticsearch; it should now start successfully.
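To confirm the sysctl change took effect, read the value back from procfs (a standard path on any Linux host):

```shell
# The max_map_count value the kernel is actually using
cat /proc/sys/vm/max_map_count
```

It should print 655360 (or whatever value was set) rather than the old 65530.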
6. Elasticsearch cannot find the host or route at startup
Cause: the Elasticsearch unicast discovery setting is wrong.
Fix:
Check the Elasticsearch configuration file:
vi config/elasticsearch.yml
Find this setting:
discovery.zen.ping.unicast.hosts: ["192.168.**.**:9300","192.168.**.**:9300"]
It is usually a formatting problem here; mind the exact syntax.
7.org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Cause: the JDK versions differ between Elasticsearch nodes.
Fix: use the same JDK environment across the whole cluster.
8.Unsupported major.minor version 52.0
Cause: the JDK version is too old.
Fix: upgrade the JDK; Elasticsearch 5.0.0 works with JDK 1.8.0.
9.bin/elasticsearch-plugin install license
ERROR: Unknown plugin license
Cause: the plugin command changed as of Elasticsearch 5.0.0.
Fix: install plugins with the new command:
bin/elasticsearch-plugin install x-pack
Almost everyone setting up ELK for the first time hits problems like these. The list here was compiled from an excellent online summary at http://www.dajiangtai.com/community/18136.do?origin=csdn-geek&dt=1214, noted here for credit.
10. Startup fails with: can not run elasticsearch as root
Fix: create a dedicated ES user, and change the owner and group of the installation files and directories to it.
11.啓動異常:ERROR: bootstrap checks failed
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Cause: CentOS 6 does not support SecComp, while ES 5.2.1 defaults bootstrap.system_call_filter to true for this check, so the check fails and ES refuses to start. Details: https://github.com/elastic/elasticsearch/issues/22899
Fix: set bootstrap.system_call_filter to false in elasticsearch.yml, below the Memory settings:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
12. If only the local machine can reach ES after startup, edit network.host in elasticsearch.yml
(mind the YAML format: the line must not start with #, and there must be a space after the colon):
network.host: 0.0.0.0
The default port is 9200.
Note: stop the firewall, or open port 9200.
13.ERROR: bootstrap checks failed
max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
max number of threads [1024] for user [lishang] likely too low, increase to at least [2048]
Fix: as root, edit limits.conf and add something like the following:
vi /etc/security/limits.conf
Add:
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
14.max number of threads [1024] for user [lish] likely too low, increase to at least [2048]
Fix: as root, edit the configuration file under the limits.d directory:
vi /etc/security/limits.d/90-nproc.conf
Change:
* soft nproc 1024
# to
* soft nproc 2048
15.max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
Fix: as root, edit sysctl.conf:
vi /etc/sysctl.conf
Add:
vm.max_map_count=655360
Then run:
sysctl -p
and restart elasticsearch; it starts successfully.
16. npm install errors out
npm ERR! Error: CERT_UNTRUSTED
It is an SSL certificate problem; turning strict SSL off is enough:
npm config set strict-ssl false
or
npm config set registry="http://registry.npmjs.org/"
The first method worked for me; I have not tried the second.
npm http 304 https://registry.npmjs.org/core-util-is/1.0.2
18:
> phantomjs-prebuilt@2.1.16 install /data/package/elasticsearch-head/node_modules/grunt-contrib-jasmine/node_modules/grunt-lib-phantomjs/node_modules/phantomjs-prebuilt
> node install.js
/data/package/elasticsearch-head/node_modules/grunt-contrib-jasmine/node_modules/grunt-lib-phantomjs/node_modules/phantomjs-prebuilt/node_modules/request/node_modules/hawk/node_modules/boom/lib/index.js:5
const Hoek = require('hoek');
^^^^^
SyntaxError: Use of const in strict mode.
at Module._compile (module.js:439:25)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/data/package/elasticsearch-head/node_modules/grunt-contrib-jasmine/node_modules/grunt-lib-phantomjs/node_modules/phantomjs-prebuilt/node_modules/request/node_modules/hawk/lib/index.js:5:33)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
npm ERR! weird error 8
npm ERR! not ok code 0
SyntaxError: Use of const in strict mode.
A post found online suggested the following, and it worked:
1) Clear NPM's cache:
sudo npm cache clean -f
2) Install a little helper called 'n'
sudo npm install -g n
3) Install latest stable NodeJS version
sudo n stable
(The Node.js update steps come from a Stack Overflow answer to "SyntaxError: Use of const in strict mode".)
After the VM was rebooted, npm start would no longer run, and none of the usual fixes brought it back.
Logstash errors
The error log contained this line:
Cannot create pipeline {:reason=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after "
This means your conf file is malformed; check it carefully. In my case the IP was misconfigured.