ELK: Elasticsearch and Logstash Deployment and Configuration

Elasticsearch is a search engine used to search, analyze, and store logs.

Logstash collects logs, parses them into JSON, and hands them to Elasticsearch.

Kibana is a data visualization component that presents the processed results through a web interface.

Beats is a lightweight log shipper. Early ELK architectures used Logstash to both collect and parse logs, but Logstash is relatively heavy on memory, CPU, and I/O; compared with Logstash, the CPU and memory footprint of Beats is almost negligible.
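For illustration only, a minimal Filebeat 5.x configuration of this kind might look like the following (Filebeat is not installed in this post; the log path and the Logstash port 5044 are assumptions, not taken from the original setup):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
output.logstash:
  hosts: ["10.0.0.22:5044"]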

1. Prepare the environment and packages

hostname:linux-elk1  ip:10.0.0.22
hostname:linux-elk2  ip:10.0.0.33
# Keep /etc/hosts consistent on both machines, and disable the firewall and SELinux (SELinux commands are shown after this block)
cat /etc/hosts
10.0.0.22   linux-elk1
10.0.0.33   linux-elk2
systemctl disable firewalld.service
systemctl disable  NetworkManager
echo "*/5 * * * * /usr/sbin/ntpdate time1.aliyun.com &>/dev/null" >>/var/spool/cron/root
systemctl restart crond.service
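The SELinux part mentioned in the comment above is usually handled like this (standard commands, not shown in the original post):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config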

cd /usr/local/src
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.0.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.4.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.4.0.rpm

ll
total 342448
-rw-r--r-- 1 root root  33211227 May 15  2018 elasticsearch-5.4.0.rpm
-rw-r--r-- 1 root root 167733100 Feb  3 12:13 jdk-8u121-linux-x64.rpm
-rw-r--r-- 1 root root  56266315 May 15  2018 kibana-5.4.0-x86_64.rpm
-rw-r--r-- 1 root root  93448667 May 15  2018 logstash-5.4.0.rpm
# Install and configure Elasticsearch
yum -y install jdk-8u121-linux-x64.rpm elasticsearch-5.4.0.rpm
grep "^[a-Z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: elk-cluster
node.name: elk-node1
path.data: /data/elkdata
path.logs: /data/logs
bootstrap.memory_lock: true
network.host: 10.0.0.22
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.0.0.22", "10.0.0.33"]

mkdir /data/{elkdata,logs}
chown -R elasticsearch.elasticsearch /data
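The same packages, configuration, and data directories are needed on linux-elk2. A sketch of its elasticsearch.yml, assuming the second node is named elk-node2 to match the naming above:

grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml    # on linux-elk2
cluster.name: elk-cluster
node.name: elk-node2
path.data: /data/elkdata
path.logs: /data/logs
bootstrap.memory_lock: true
network.host: 10.0.0.33
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.0.0.22", "10.0.0.33"]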

# Uncomment the following line in the systemd unit /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity
# Set the JVM minimum and maximum heap size to 3g
grep -vE "^$|#" /etc/elasticsearch/jvm.options
-Xms3g
-Xmx3g
...
systemctl restart elasticsearch.service
tail /data/logs/elk-cluster.log  # the service is up once "started" appears in the last few lines
Visit http://10.0.0.22:9200/; if Elasticsearch information is returned, the service is running normally.
To verify from the Linux command line: curl http://10.0.0.22:9200/_cluster/health?pretty=true
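Two more checks that can be run from the shell (a sketch; output omitted):

curl "http://10.0.0.22:9200/_nodes?filter_path=**.mlockall&pretty"   # "mlockall" : true confirms bootstrap.memory_lock took effect
curl "http://10.0.0.22:9200/_cat/nodes?v"                            # both nodes should be listed once linux-elk2 has joined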

2. Install the Elasticsearch head plugin

wget https://nodejs.org/dist/v8.10.0/node-v8.10.0-linux-x64.tar.xz
tar xf node-v8.10.0-linux-x64.tar.xz
mv node-v8.10.0-linux-x64 /usr/local/node
vim /etc/profile
export NODE_HOME=/usr/local/node
export PATH=$PATH:$NODE_HOME/bin
source /etc/profile
which node
/usr/local/node/bin/node
which npm
/usr/local/node/bin/npm

npm install -g cnpm --registry=https://registry.npm.taobao.org  # creates the cnpm command
/usr/local/node/bin/cnpm -> /usr/local/node/lib/node_modules/cnpm/bin/cnpm

npm install -g grunt-cli --registry=https://registry.npm.taobao.org  # creates the grunt command
/usr/local/node/bin/grunt -> /usr/local/node/lib/node_modules/grunt-cli/bin/grunt

wget https://github.com/mobz/elasticsearch-head/archive/master.zip
unzip master.zip
cd elasticsearch-head-master/
vim Gruntfile.js   # around line 90, configure the connect server block as follows
connect: {
        server: {
                options: {
                        hostname: '10.0.0.22',
                        port: 9100,
                        base: '.',
                        keepalive: true
                }
        }
}

vim _site/app.js
At around line 4360, change "http://localhost:9200" to "http://10.0.0.22:9200".
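After the edit, that line should read roughly as follows (the exact line number differs between head versions; this is the default base_uri used by the head UI):

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.0.0.22:9200";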
cnpm install
grunt --version
vim /etc/elasticsearch/elasticsearch.yml  # add the following two lines
http.cors.enabled: true
http.cors.allow-origin: "*"

systemctl restart elasticsearch
systemctl enable elasticsearch
grunt server &

In Elasticsearch 2.x and earlier, the head plugin could be installed directly with:

/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

In Elasticsearch 5.x and later it has to be built and run via npm, as shown above.

The plugin can also be started with Docker (assuming Elasticsearch is already installed):

yum -y install docker
systemctl start docker && systemctl enable docker
docker run -p 9100:9100 mobz/elasticsearch-head:5

Error: Fatal error: Unable to find local grunt.

Most answers online suggest npm install grunt --save-dev, but that only applies when grunt is not installed. In the steps above grunt was already installed; the error simply means grunt server was not started from the project directory. Change into elasticsearch-head-master/ and run grunt server & again.

3. Logstash deployment and basic syntax

cd /usr/local/src
yum -y install logstash-5.4.0.rpm
# Test with rubydebug output in the foreground
/usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug} }'
The stdin plugin is now waiting for input:
00:37:04.711 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
hello
{
    "@timestamp" => 2019-02-03T16:43:48.149Z,
      "@version" => "1",
          "host" => "linux-elk1",
       "message" => "hello"
}

# Test output to a file
/usr/share/logstash/bin/logstash -e 'input { stdin {} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log"} }'
# Enable gzip-compressed output
/usr/share/logstash/bin/logstash -e 'input { stdin {} } output{ file { path => "/tmp/test-%{+YYYY.MM.dd}.log.tar.gz" gzip => true } }'
# Test output to Elasticsearch
/usr/share/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["10.0.0.22:9200"] index => "logstash-test-%{+YYYY.MM.dd}" } }'

systemctl enable logstash
systemctl restart logstash
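When run under systemd, Logstash loads pipeline files from /etc/logstash/conf.d/. A minimal sketch of such a file, collecting /var/log/messages into Elasticsearch (the filename, log path, and index name are examples, not from the original post):

cat /etc/logstash/conf.d/system-log.conf
input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.22:9200"]
    index => "system-log-%{+YYYY.MM.dd}"
  }
}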

When deleting data, delete it through this interface (the head UI); never delete the data directories on the server directly, because every cluster node holds copies of the data and removing one node's files can leave Elasticsearch unable to start.
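An index can also be removed through the REST API; either way the deletion goes through Elasticsearch rather than the filesystem. A sketch, using the test index created above with an example date:

curl -XDELETE http://10.0.0.22:9200/logstash-test-2019.02.03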

Elasticsearch environment preparation - reference blog: http://blog.51cto.com/jinlong/2054787

Logstash deployment and basic syntax - reference blog: http://blog.51cto.com/jinlong/2055024
