1. Introduction
The Elastic Stack is not a single piece of software but a collection of the open-source projects Elasticsearch, Logstash, and Kibana, presented together as an open-source log-management solution. It can search and analyze log data from any source, in any format, and display the results in real time. Add-ons such as Shield (security), Watcher (alerting), and Marvel (monitoring) extend the product further.
Elasticsearch: search; a distributed full-text search engine that also stores the indexed log data
Logstash: log collection, filtering, and forwarding
Kibana: web front end for filtering and visualizing the logs
Filebeat: monitors log files and forwards new entries
2. Test environment plan
Environment: IPs and hostnames follow the plan above; the systems have been updated and all hosts keep the same time. Firewalls in the test environment start out disabled. The ELK deployment and installation for this exercise follows below.
Goal: use the ELK host to collect and monitor the system logs of the main servers, as well as the logs of production application services.
3. Installing Elasticsearch + Logstash + Kibana (performed on elk.test.com)
3.1. Basic environment check
[root@elk ~]# hostname
elk.test.com
[root@elk ~]# cat /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.67 elk.test.com
192.168.30.99 rsyslog.test.com
192.168.30.64 nginx.test.com
3.2. Packages
[root@elk ~]# cd elk/
[root@elk elk]# wget -c https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.3.3/elasticsearch-2.3.3.rpm
[root@elk elk]# wget -c https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.2-1.noarch.rpm
[root@elk elk]# wget https://download.elastic.co/kibana/kibana/kibana-4.5.1-1.x86_64.rpm
[root@elk elk]# wget -c https://download.elastic.co/beats/filebeat/filebeat-1.2.3-x86_64.rpm
3.3. Verify the downloads
[root@elk elk]# ls
elasticsearch-2.3.3.rpm  filebeat-1.2.3-x86_64.rpm  kibana-4.5.1-1.x86_64.rpm  logstash-2.3.2-1.noarch.rpm
The server only needs Elasticsearch, Logstash, and Kibana; the clients only need Filebeat.
3.4. Install Elasticsearch. First install a JDK: the ELK server needs Java to run. The clients use Filebeat, which does not depend on Java, so no JDK is needed there.
[root@elk elk]# yum install java-1.8.0-openjdk -y
Install Elasticsearch:

[root@elk elk]# yum localinstall elasticsearch-2.3.3.rpm -y
Reload systemd so it scans for new or changed units, then start the service and enable it at boot:
[root@elk elk]# systemctl daemon-reload
[root@elk elk]# systemctl enable elasticsearch
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@elk elk]# systemctl start elasticsearch
[root@elk elk]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-05-20 15:38:35 CST; 12s ago
     Docs: http://www.elastic.co
  Process: 10428 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 10430 (java)
   CGroup: /system.slice/elasticsearch.service
           └─10430 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancy...
May 20 15:38:38 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:38,279][INFO ][env ] [James Howlett] heap...[true]
May 20 15:38:38 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:38,279][WARN ][env ] [James Howlett] max ...65536]
May 20 15:38:41 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:41,726][INFO ][node ] [James Howlett] initialized
May 20 15:38:41 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:41,726][INFO ][node ] [James Howlett] starting ...
May 20 15:38:41 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:41,915][INFO ][transport ] [James Howlett] publ...:9300}
May 20 15:38:41 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:41,920][INFO ][discovery ] [James Howlett] elas...xx35hw
May 20 15:38:45 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:45,099][INFO ][cluster.service ] [James Howlett] new_...eived)
May 20 15:38:45 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:45,164][INFO ][gateway ] [James Howlett] reco..._state
May 20 15:38:45 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:45,185][INFO ][http ] [James Howlett] publ...:9200}
May 20 15:38:45 elk.test.com elasticsearch[10430]: [2016-05-20 15:38:45,185][INFO ][node ] [James Howlett] started
Hint: Some lines were ellipsized, use -l to show in full.
Verify the service:
[root@elk elk]# rpm -qc elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf
[root@elk elk]# netstat -nltp | grep java
tcp6       0      0 127.0.0.1:9200    :::*    LISTEN    10430/java
tcp6       0      0 ::1:9200          :::*    LISTEN    10430/java
tcp6       0      0 127.0.0.1:9300    :::*    LISTEN    10430/java
tcp6       0      0 ::1:9300          :::*    LISTEN    10430/java
Adjust the firewall to open ports 9200 and 9300:
[root@elk elk]# firewall-cmd --permanent --add-port={9200/tcp,9300/tcp}
success
[root@elk elk]# firewall-cmd --reload
success
[root@elk elk]# firewall-cmd --list-all
public (default, active)
  interfaces: eno16777984 eno33557248
  sources:
  services: dhcpv6-client ssh
  ports: 9200/tcp 9300/tcp
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
3.5 Install Kibana
[root@elk elk]# yum localinstall kibana-4.5.1-1.x86_64.rpm -y
[root@elk elk]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /usr/lib/systemd/system/kibana.service.
[root@elk elk]# systemctl start kibana
[root@elk elk]# systemctl status kibana
● kibana.service - no description given
   Loaded: loaded (/usr/lib/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-05-20 15:49:02 CST; 20s ago
 Main PID: 11260 (node)
   CGroup: /system.slice/kibana.service
           └─11260 /opt/kibana/bin/../node/bin/node /opt/kibana/bin/../src/cli
May 20 15:49:05 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:elasticsearch...
May 20 15:49:05 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:kbn_vi...lized"}
May 20 15:49:05 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:markdo...lized"}
May 20 15:49:05 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:metric...lized"}
May 20 15:49:05 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:spyMod...lized"}
May 20 15:49:05 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:status...lized"}
May 20 15:49:05 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:table_...lized"}
May 20 15:49:05 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["listening","info"],"pi...:5601"}
May 20 15:49:10 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:10+00:00","tags":["status","plugin:elasticsearch...
May 20 15:49:14 elk.test.com kibana[11260]: {"type":"log","@timestamp":"2016-05-20T07:49:14+00:00","tags":["status","plugin:elasti...found"}
Hint: Some lines were ellipsized, use -l to show in full.
Check that Kibana is running (the Kibana process is named node and listens on port 5601 by default):
[root@elk elk]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State     PID/Program name
tcp        0      0 0.0.0.0:22       0.0.0.0:*          LISTEN    909/sshd
tcp        0      0 127.0.0.1:25     0.0.0.0:*          LISTEN    1595/master
tcp        0      0 0.0.0.0:5601     0.0.0.0:*          LISTEN    11260/node
Adjust the firewall to open tcp/5601:
[root@elk elk]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@elk elk]# firewall-cmd --reload
success
[root@elk elk]# firewall-cmd --list-all
public (default, active)
  interfaces: eno16777984 eno33557248
  sources:
  services: dhcpv6-client ssh
  ports: 9200/tcp 9300/tcp 5601/tcp
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
We can now open a browser and test access to the Kibana server at http://192.168.30.67:5601/ to confirm everything works, as shown below:
Here we can also modify the firewall to forward incoming connections on port 80 to 5601, so users can type the URL without specifying a port:
[root@elk elk]# firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=5601
[root@elk elk]# firewall-cmd --reload
[root@elk elk]# firewall-cmd --list-all
public (default, active)
  interfaces: eno16777984 eno33557248
  sources:
  services: dhcpv6-client ssh
  ports: 9200/tcp 9300/tcp 5601/tcp
  masquerade: no
  forward-ports: port=80:proto=tcp:toport=5601:toaddr=
  icmp-blocks:
  rich rules:
3.6 Install Logstash and add its configuration file
[root@elk elk]# yum localinstall logstash-2.3.2-1.noarch.rpm -y
Generate a certificate:
[root@elk elk]# cd /etc/pki/tls/
[root@elk tls]# ls
cert.pem  certs  misc  openssl.cnf  private
[root@elk tls]# openssl req -subj '/CN=elk.test.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Generating a 2048 bit RSA private key
...................................................................+++
......................................................+++
writing new private key to 'private/logstash-forwarder.key'
-----
Then create the Logstash configuration file, as follows:
[root@elk ~]# cat /etc/logstash/conf.d/01-logstash-initial.conf
input {
  beats {
    port => 5000
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog-beat" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    geoip {
      source => "clientip"
    }
    syslog_pri {}
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { }
  stdout { codec => rubydebug }
}
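The grok pattern in the filter splits each syslog line into a timestamp, hostname, program name, optional PID, and message. As a rough sketch of what it captures, here is an approximate Python regular expression applied to a sample line (the sample line and the regex are illustrative; Logstash's own grok patterns are more permissive than this):

```python
import re

# Approximate Python equivalent of the grok pattern used in the filter above.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "  # SYSLOGTIMESTAMP
    r"(?P<syslog_hostname>\S+) "                                  # SYSLOGHOST
    r"(?P<syslog_program>[^\[\s:]+)"                              # DATA (program name)
    r"(?:\[(?P<syslog_pid>\d+)\])?: "                             # optional [PID]
    r"(?P<syslog_message>.*)"                                     # GREEDYDATA
)

line = "May 20 15:38:41 rsyslog.test.com sshd[7164]: Accepted password for root"
fields = SYSLOG_RE.match(line).groupdict()
print(fields["syslog_hostname"])  # rsyslog.test.com
print(fields["syslog_program"])   # sshd
print(fields["syslog_pid"])       # 7164
```

Each named group becomes a field on the event, which is why the later Kibana views can filter on syslog_program or syslog_hostname directly.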
Start Logstash and check the listening ports; the configuration file above uses port 5000:
[root@elk conf.d]# systemctl start logstash
[root@elk elk]# /sbin/chkconfig logstash on
[root@elk conf.d]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State     PID/Program name
tcp        0      0 0.0.0.0:22       0.0.0.0:*          LISTEN    909/sshd
tcp        0      0 127.0.0.1:25     0.0.0.0:*          LISTEN    1595/master
tcp        0      0 0.0.0.0:5601     0.0.0.0:*          LISTEN    11260/node
tcp        0      0 0.0.0.0:514      0.0.0.0:*          LISTEN    618/rsyslogd
tcp6       0      0 :::5000          :::*               LISTEN    12819/java
tcp6       0      0 :::3306          :::*               LISTEN    1270/mysqld
tcp6       0      0 127.0.0.1:9200   :::*               LISTEN    10430/java
tcp6       0      0 ::1:9200         :::*               LISTEN    10430/java
tcp6       0      0 127.0.0.1:9300   :::*               LISTEN    10430/java
tcp6       0      0 ::1:9300         :::*               LISTEN    10430/java
tcp6       0      0 :::22            :::*               LISTEN    909/sshd
tcp6       0      0 ::1:25           :::*               LISTEN    1595/master
tcp6       0      0 :::514           :::*               LISTEN    618/rsyslogd
Adjust the firewall to open port 5000:
[root@elk ~]# firewall-cmd --permanent --add-port=5000/tcp
success
[root@elk ~]# firewall-cmd --reload
success
[root@elk ~]# firewall-cmd --list-all
public (default, active)
  interfaces: eno16777984 eno33557248
  sources:
  services: dhcpv6-client ssh
  ports: 9200/tcp 9300/tcp 5000/tcp 5601/tcp
  masquerade: no
  forward-ports: port=80:proto=tcp:toport=5601:toaddr=
  icmp-blocks:
  rich rules:
3.7 Modify the elasticsearch configuration
Look at the directory and create a folder es-01 (the name is arbitrary). logging.yml ships with the package; elasticsearch.yml is a file we create, with the contents shown below:
[root@elk ~]# cd /etc/elasticsearch/
[root@elk elasticsearch]# tree
.
├── es-01
│   ├── elasticsearch.yml
│   └── logging.yml
└── scripts
[root@elk elasticsearch]# cat es-01/elasticsearch.yml
---
http:
  port: 9200
network:
  host: elk.test.com
node:
  name: elk.test.com
path:
  data: /etc/elasticsearch/data/es-01
3.8 Restart the elasticsearch and logstash services.
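With network.host set to elk.test.com, Elasticsearch listens on the LAN interface after the restart, instead of only on loopback as the earlier netstat output showed. A quick way to confirm a port is reachable, sketched in Python (the helper and the hostnames in the comment are illustrative; the self-test at the bottom uses a local socket so it runs anywhere):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On a client in this lab you would run, e.g.:
#   port_open("elk.test.com", 9200)

# Self-contained demonstration against a local listener:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # bind any free port
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))  # True while the listener is up
srv.close()
```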
3.9 Copy the Filebeat package to the rsyslog and nginx clients
[root@elk elk]# scp filebeat-1.2.3-x86_64.rpm root@rsyslog.test.com:/root/elk
[root@elk elk]# scp filebeat-1.2.3-x86_64.rpm root@nginx.test.com:/root/elk
[root@elk elk]# scp /etc/pki/tls/certs/logstash-forwarder.crt rsyslog.test.com:/root/elk
[root@elk elk]# scp /etc/pki/tls/certs/logstash-forwarder.crt nginx.test.com:/root/elk
4. Deploying Filebeat on the clients (performed on the rsyslog and nginx hosts)
The Filebeat client is a lightweight tool that collects log data from files on a server and forwards it to a Logstash server for processing. Filebeat communicates with the Logstash instance using the secure Beats protocol; the underlying lumberjack protocol is designed for reliability and low latency. Filebeat uses the compute resources of the machine hosting the source data, and the Beats input plugin minimizes the resource demands on Logstash.
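Conceptually, Filebeat's harvester tails each file and remembers its read offset (in the registry file named by registry_file below) so it can resume after a restart without resending old lines. A minimal Python sketch of that idea, purely illustrative (Filebeat itself is written in Go and handles rotation, backoff, and batching on top of this):

```python
import os
import tempfile

def harvest(path, offset=0):
    """Read any lines appended after `offset`; return (lines, new_offset)."""
    with open(path, "r") as f:
        f.seek(offset)  # resume where the "registry" says we left off
        lines = [l.rstrip("\n") for l in f.readlines()]
        return lines, f.tell()

# Simulate a log file that grows between harvester runs.
tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".log")
tmp.write("first event\n")
tmp.flush()

lines, offset = harvest(tmp.name)              # first pass reads the backlog
tmp.write("second event\n")
tmp.flush()
new_lines, offset = harvest(tmp.name, offset)  # second pass sees only new data

tmp.close()
os.unlink(tmp.name)
print(lines, new_lines)  # ['first event'] ['second event']
```

Persisting the returned offset is what lets a restarted shipper pick up exactly where it stopped, which is the role of Filebeat's registry file.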
4.1. (node1) Install Filebeat, copy the certificate, and create the log-collection configuration files
[root@rsyslog elk]# yum localinstall filebeat-1.2.3-x86_64.rpm -y
# Copy the certificate to the designated directory on this host
[root@rsyslog elk]# cp logstash-forwarder.crt /etc/pki/tls/certs/.
[root@rsyslog elk]# cd /etc/filebeat/
[root@rsyslog filebeat]# tree
.
├── conf.d
│   ├── authlogs.yml
│   └── syslogs.yml
├── filebeat.template.json
└── filebeat.yml

1 directory, 4 files
Three files are modified. filebeat.yml defines the connection to the Logstash server; the two files under conf.d define which logs to monitor. Their contents follow:
filebeat.yml
[root@rsyslog filebeat]# cat filebeat.yml
filebeat:
  spool_size: 1024
  idle_timeout: 5s
  registry_file: .filebeat
  config_dir: /etc/filebeat/conf.d
output:
  logstash:
    hosts:
      - elk.test.com:5000
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    enabled: true
shipper: {}
logging: {}
runoptions: {}
authlogs.yml & syslogs.yml
[root@rsyslog filebeat]# cat conf.d/authlogs.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/secure
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
[root@rsyslog filebeat]# cat conf.d/syslogs.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/messages
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
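The backoff, backoff_factor, and max_backoff settings control how long the harvester waits between checks once it reaches the end of a file: the wait starts at backoff and is multiplied by backoff_factor after each idle check, capped at max_backoff. With the values above (1s / 2 / max 1s) the wait is effectively constant at one second. A small sketch of the schedule (illustrative only, not Filebeat's actual code):

```python
def backoff_schedule(backoff, factor, max_backoff, checks):
    """Return the successive waits (seconds) after hitting EOF, following
    Filebeat's rule: multiply by `factor`, cap at `max_backoff`."""
    waits, wait = [], backoff
    for _ in range(checks):
        waits.append(wait)
        wait = min(wait * factor, max_backoff)
    return waits

print(backoff_schedule(1, 2, 1, 4))   # [1, 1, 1, 1]  (the config above: capped immediately)
print(backoff_schedule(1, 2, 10, 5))  # [1, 2, 4, 8, 10]  (a config that actually backs off)
```

Raising max_backoff trades a little latency on quiet files for fewer wasted checks.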
After making the changes, start the filebeat service:
[root@rsyslog filebeat]# service filebeat start
Starting filebeat:                                         [  OK  ]
[root@rsyslog filebeat]# chkconfig filebeat on
[root@rsyslog filebeat]# netstat -altp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address             State        PID/Program name
tcp        0      0 localhost:25151         *:*                         LISTEN       6230/python2
tcp        0      0 *:ssh                   *:*                         LISTEN       5509/sshd
tcp        0      0 localhost:ipp           *:*                         LISTEN       1053/cupsd
tcp        0      0 localhost:smtp          *:*                         LISTEN       1188/master
tcp        0      0 rsyslog.test.com:51155  elk.test.com:commplex-main  ESTABLISHED  7443/filebeat
tcp        0     52 rsyslog.test.com:ssh    192.168.30.65:10580         ESTABLISHED  7164/sshd
tcp        0      0 *:ssh                   *:*                         LISTEN       5509/sshd
tcp        0      0 localhost:ipp           *:*                         LISTEN       1053/cupsd
tcp        0      0 localhost:smtp          *:*                         LISTEN       1188/master
If it cannot connect or the state looks wrong, check the client's firewall.
4.2. (node2) Install Filebeat, copy the certificate, and create the log-collection configuration files
[root@nginx elk]# yum localinstall filebeat-1.2.3-x86_64.rpm -y
[root@nginx elk]# cp logstash-forwarder.crt /etc/pki/tls/certs/.
[root@nginx elk]# cd /etc/filebeat/
[root@nginx filebeat]# tree
.
├── conf.d
│   ├── nginx.yml
│   └── syslogs.yml
├── filebeat.template.json
└── filebeat.yml

1 directory, 4 files
Modify filebeat.yml as follows:
[root@rsyslog filebeat]# cat filebeat.yml
filebeat:
  spool_size: 1024
  idle_timeout: 5s
  registry_file: .filebeat
  config_dir: /etc/filebeat/conf.d
output:
  logstash:
    hosts:
      - elk.test.com:5000
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    enabled: true
shipper: {}
logging: {}
runoptions: {}
syslogs.yml & nginx.yml
[root@nginx filebeat]# cat conf.d/syslogs.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/messages
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
[root@nginx filebeat]# cat conf.d/nginx.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/nginx/access.log
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
After making the changes, start the filebeat service and check the filebeat process:
[root@nginx filebeat]# service filebeat start
Starting filebeat:                                         [  OK  ]
[root@nginx filebeat]# chkconfig filebeat on
[root@nginx filebeat]# netstat -aulpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address         Foreign Address             State        PID/Program name
tcp        0      0 *:ssh                 *:*                         LISTEN       1076/sshd
tcp        0      0 localhost:smtp        *:*                         LISTEN       1155/master
tcp        0      0 *:http                *:*                         LISTEN       1446/nginx
tcp        0     52 nginx.test.com:ssh    192.168.30.65:11690         ESTABLISHED  1313/sshd
tcp        0      0 nginx.test.com:49500  elk.test.com:commplex-main  ESTABLISHED  1515/filebeat
tcp        0      0 nginx.test.com:ssh    192.168.30.65:6215          ESTABLISHED  1196/sshd
tcp        0      0 nginx.test.com:ssh    192.168.30.65:6216          ESTABLISHED  1200/sshd
tcp        0      0 *:ssh                 *:*                         LISTEN       1076/sshd
The output above shows that the client's filebeat process has connected to the ELK server. Now let's verify.
5. Verification: browse to Kibana at http://192.168.30.67
5.1 Initial setup
View the system logs of the two machines. node1's system log:
node2's nginx access log:
6. Impressions
I had previously been studying rsyslog + LogAnalyzer; after learning ELK as well, I find ELK better both as an overall system and in day-to-day experience, and it is updated quickly. I will keep studying it and follow up with posts on log monitoring and filtering methods, log analysis, and an architecture that uses Kafka for storage.
This article is original work. If you find it valuable, please credit the source when reposting. Thanks.
參考網站:https://www.elastic.co/products/elasticsearch
https://www.elastic.co/downloads