ELK7 Log Analysis System Fundamentals (Part 2)
Version differences
- ELK 6: access is open by default; enabling authentication requires a plugin such as X-Pack
- ELK 7: the security (X-Pack) features are bundled by default, so authentication can be enabled without extra plugins
Basic environment requirements
- CentOS 7
- Firewall disabled
- SELinux disabled
- Time synchronization
- Disk partitioning
- yum repositories
```
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel ntpdate
ntpdate time.windows.com
```
- Java environment installation
  - Download: https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
  - Install via yum: yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel
  - Verify the Java environment: java -version
- ELK download preparation
https://www.elastic.co/guide/en/elasticsearch/reference/7.6/index.html
Basic concepts of ELK 7
- Elasticsearch: the search database server, exposing a RESTful web interface; ES for short
- Logstash: data collection, filtering/analysis, and field extraction
- Kibana: mainly visualization and simplified ES operations
Elasticsearch Cluster Deployment and Usage in Practice
Single-node ES deployment
Official installation docs: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/install-elasticsearch.html
- Prepare the basic environment
- Download and install the Elasticsearch RPM
```
[root@centos7-node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-x86_64.rpm
[root@centos7-node1 ~]# yum -y localinstall elasticsearch-7.6.2-x86_64.rpm
```
- Adjust the JVM configuration (normally sized to roughly 60% of the host's memory; a small heap is enough for this lab)
```
[root@centos7-node1 ~]# vim /etc/elasticsearch/jvm.options
-Xms200M
-Xmx200M
```
- Single-instance ES configuration and startup (pay close attention)
```
[root@centos7-node1 ~]# cp /etc/elasticsearch/elasticsearch.yml /usr/local/src/
[root@centos7-node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.56.11
http.port: 9200
xpack.security.enabled: true
discovery.type: single-node
[root@centos7-node1 ~]# systemctl restart elasticsearch    # start the service
```
- Access (a password is required)
- Set the passwords (lab environment: elastic)
```
[root@centos7-node1 ~]# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Enter password for [elastic]: elastic
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
```
- Access
- Verify that the service started successfully
```
[root@centos7-node1 ~]# curl -u elastic:elastic http://192.168.56.11:9200
{
  "name" : "centos7-node1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "WUehKqv3TyudTo_IKMNNlA",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[root@centos7-node1 ~]# curl -u elastic:elastic http://192.168.56.11:9200/_cat/nodes?v      # node information
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.56.11           56          89   0    0.00    0.01     0.05 dilm      *      centos7-node1
[root@centos7-node1 ~]# curl -u elastic:elastic http://192.168.56.11:9200/_cat/indices?v    # list indices
health status index       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .security-7 ByCOlBeLSfStcZVe8Ne0-Q   1   0          6            0     19.6kb         19.6kb
```
- Write data
```
[root@centos7-node1 ~]# curl -u elastic:elastic -X POST http://192.168.56.11:9200/test-index/_doc -H 'Content-Type:application/json' -d '{"name": "test-data1","age": 20}'    # insert a document
{"_index":"test-index","_type":"_doc","_id":"1Ip3xnUB0a1lyJIOfMTO","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}
```
- Query data
```
[root@centos7-node1 ~]# curl -u elastic:elastic http://192.168.56.11:9200/test-index/_search?q=* | python -m json.tool    # query the data
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   284  100   284    0     0   2273      0 --:--:-- --:--:-- --:--:--  2290
{
    "_shards": {
        "failed": 0,
        "skipped": 0,
        "successful": 1,
        "total": 1
    },
    "hits": {
        "hits": [
            {
                "_id": "1Ip3xnUB0a1lyJIOfMTO",
                "_index": "test-index",
                "_score": 1.0,
                "_source": {
                    "age": 20,
                    "name": "test-data1"
                },
                "_type": "_doc"
            }
        ],
        "max_score": 1.0,
        "total": {
            "relation": "eq",
            "value": 1
        }
    },
    "timed_out": false,
    "took": 104
}
```
Deploying a Secured ES Cluster
The ES distributed cluster
An index's shards spread its data across different nodes.
Each shard can have zero or more replicas.
Replicas provide backups and improve query throughput, and a query sent to any node of the cluster returns the same result (a quick way to inspect this layout is sketched below).
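To see how primaries and replicas are distributed in practice, the cat APIs can be queried. A minimal sketch using the Python elasticsearch client that is installed later in this guide; the node addresses, credentials, and the test-index name follow the lab setup and are otherwise assumptions:

```python
#!/usr/bin/python3
# Sketch: list which node holds each primary (p) and replica (r) shard of an index.
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://elastic:elastic@192.168.56.11:9200',
                    'http://elastic:elastic@192.168.56.12:9200',
                    'http://elastic:elastic@192.168.56.13:9200'])

# Columns include index, shard, prirep (p/r), state, docs, store, ip and node.
print(es.cat.shards(index='test-index', v=True))
# Any node in the list answers with the same cluster-wide view.
print(es.cat.nodes(v=True))
```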
Deploying the ES distributed cluster
- Node inventory
Hostname | IP | Software |
---|---|---|
centos7-node1 | 192.168.56.11 | java-1.8.0-openjdk java-1.8.0-openjdk-devel elasticsearch7.6 |
centos7-node2 | 192.168.56.12 | java-1.8.0-openjdk java-1.8.0-openjdk-devel elasticsearch7.6 |
centos7-node3 | 192.168.56.13 | java-1.8.0-openjdk java-1.8.0-openjdk-devel elasticsearch7.6 |
- For the software and base-environment preparation, refer to the first subsection
```
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-x86_64.rpm
yum -y localinstall elasticsearch-7.6.2-x86_64.rpm
```
- JVM configuration (all nodes)
```
[root@centos7-node1 ~]# vim /etc/elasticsearch/jvm.options
-Xms200M
-Xmx200M
```
Securing the ES cluster
- Create the certificate for inter-node communication
```
[root@centos7-node1 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil ca        # just press Enter twice
[root@centos7-node1 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /usr/share/elasticsearch/elastic-stack-ca.p12    # just press Enter three times
```
- Copy the certificate to every ES server
```
[root@centos7-node1 ~]# cp /usr/share/elasticsearch/elastic-certificates.p12 /etc/elasticsearch/elastic-certificates.p12
# scp the certificate to the other two nodes
[root@centos7-node1 ~]# scp /usr/share/elasticsearch/elastic-certificates.p12 192.168.56.12:/etc/elasticsearch/elastic-certificates.p12
[root@centos7-node1 ~]# scp /usr/share/elasticsearch/elastic-certificates.p12 192.168.56.13:/etc/elasticsearch/elastic-certificates.p12
```
- Fix the certificate ownership (on all three nodes)
```
[root@centos7-node1 ~]# chown elasticsearch:elasticsearch /etc/elasticsearch/elastic-certificates.p12
[root@centos7-node1 ~]# ssh 192.168.56.12 "chown elasticsearch:elasticsearch /etc/elasticsearch/elastic-certificates.p12"
[root@centos7-node1 ~]# ssh 192.168.56.13 "chown elasticsearch:elasticsearch /etc/elasticsearch/elastic-certificates.p12"
```
- Elasticsearch configuration
node.master marks nodes that are eligible to act as master (coordinating/aggregating the cluster).
node.data marks nodes that store and query data; they carry the heavier load, so in production it is recommended to separate the master and data roles.
```
###### centos7-node1 configuration
[root@centos7-node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: cropy
node.name: node1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.56.11
http.port: 9200
discovery.seed_hosts: ["192.168.56.11","192.168.56.12","192.168.56.13"]
cluster.initial_master_nodes: ["192.168.56.11","192.168.56.12","192.168.56.13"]
xpack.security.enabled: true
xpack.monitoring.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12

###### centos7-node2 configuration
[root@centos7-node2 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: cropy
node.name: node2
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.56.12
http.port: 9200
discovery.seed_hosts: ["192.168.56.11","192.168.56.12","192.168.56.13"]
cluster.initial_master_nodes: ["192.168.56.11","192.168.56.12","192.168.56.13"]
xpack.security.enabled: true
xpack.monitoring.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12

###### centos7-node3 configuration
[root@centos7-node3 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: cropy
node.name: node3
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.56.13
http.port: 9200
discovery.seed_hosts: ["192.168.56.11","192.168.56.12","192.168.56.13"]
cluster.initial_master_nodes: ["192.168.56.11","192.168.56.12","192.168.56.13"]
xpack.security.enabled: true
xpack.monitoring.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
```
- Start the services
```
[root@centos7-node1 ~]# systemctl enable elasticsearch && systemctl restart elasticsearch
[root@centos7-node1 ~]# tail -f /var/log/elasticsearch/cropy.log    # watch the cluster log (it should end with a YELLOW status)
........
[2020-11-14T21:39:53,107][INFO ][o.e.c.r.a.AllocationService] [node1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.security-7][0]]]).
[root@centos7-node2 ~]# systemctl enable elasticsearch && systemctl restart elasticsearch
[root@centos7-node3 ~]# systemctl enable elasticsearch && systemctl restart elasticsearch
```
- Check the status
```
systemctl status elasticsearch
ps -ef | grep elasticsearch
netstat -tanlp | grep 9200
[root@centos7-node1 ~]# curl -u elastic:elastic http://192.168.56.11:9200/_cat/nodes?v
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.56.11           60          92   1    0.00    0.01     0.05 dilm      *      node1
192.168.56.12           45          93   1    0.00    0.01     0.05 dilm      -      node2
192.168.56.13           60          94   1    0.05    0.03     0.05 dilm      -      node3
[root@centos7-node1 ~]# curl -u elastic:elastic http://192.168.56.11:9200/_cat/indices?v
```
- Important note
When X-Pack security is enabled on the cluster, the passwords only need to be set on one node; until they are set, the cluster cannot be accessed.
```
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive    # reference command
```
Packet capture of the secured cluster traffic
- Install the required tools (on all cluster nodes)
yum -y install ngrep tcpdump
- Test
```
[root@centos7-node1 ~]# ngrep -d ens33 port 9200
interface: ens33 (192.168.56.0/255.255.255.0)
filter: ( port 9200 ) and ((ip || ip6) || (vlan && (ip || ip6)))
#
T 192.168.56.1:50907 -> 192.168.56.11:9200 [AF]  #1
......
##
T 192.168.56.1:50907 -> 192.168.56.11:9200 [A]  #3
......
```
Capture on port 9300 (the transport port):
```
[root@centos7-node1 ~]# ngrep -d ens33 port 9300
```
Basic ES Database Operations
ES concepts
- Index: analogous to a database; an index is created automatically on first write and can be created per day (see the sketch after this list)
- Document: analogous to table rows; the data stored inside ES
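As a small illustration of the index/document analogy, the sketch below writes one document into a per-day index with the Python client that is installed later in this guide; the index name and the document fields are made up for the example:

```python
#!/usr/bin/python3
# Sketch: an index is created automatically on the first write; the name here carries the day.
from datetime import date
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://elastic:elastic@192.168.56.11:9200'])

index_name = "nginx-logs-{}".format(date.today().isoformat())   # e.g. nginx-logs-2020-11-14
doc = {"server_name": "www.example.com", "IP": "192.0.2.10"}     # one document (one "row" of data)

es.index(index=index_name, body=doc)        # the index is created if it does not exist yet
print(es.cat.indices(index=index_name, v=True))
```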
Basic operations
- Via curl: relatively cumbersome
```
### Write a document with an explicit ID (1 is the id)
[root@centos7-node1 ~]# curl -u elastic:elastic -X PUT http://192.168.56.11:9200/test-index/_doc/1 -H 'Content-Type: application/json' -d '{"name":"zhangsan","age":30}'
### Query data
[root@centos7-node1 ~]# curl -u elastic:elastic http://192.168.56.13:9200/test-index/_search?q=*
[root@centos7-node1 ~]# curl -u elastic:elastic http://192.168.56.13:9200/test-index/_doc/1
### Write a document with an auto-generated ID (no id after _doc/)
[root@centos7-node1 ~]# curl -u elastic:elastic -X POST http://192.168.56.11:9200/test-index/_doc -H 'Content-Type: application/json' -d '{"name":"lisi","age":33}'
### Update data (the document with id 1)
[root@centos7-node1 ~]# curl -u elastic:elastic -X POST http://192.168.56.11:9200/test-index/_update/1 -H 'Content-Type: application/json' -d '{"doc": {"age":100}}'
### Delete data
[root@centos7-node1 ~]# curl -u elastic:elastic -X DELETE http://192.168.56.11:9200/test-index/_doc/1    # delete a single document
[root@centos7-node1 ~]# curl -u elastic:elastic -X DELETE http://192.168.56.11:9200/test-index           # delete the index and all its data
### Test cluster data replication
[root@centos7-node1 ~]# curl -u elastic:elastic -X POST http://192.168.56.12:9200/test-index/_doc -H 'Content-Type: application/json' -d '{"name": "wanghui", "age": 29}'
```
- Kibana: provides a simplified console for these operations
Kibana Deployment and Usage in Practice
Kibana is used for data visualization and for simplifying ES operations.
Installing and configuring Kibana
- Install it on one of the ES nodes
```
[root@centos7-node1 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-x86_64.rpm
[root@centos7-node1 ~]# yum -y localinstall kibana-7.6.2-x86_64.rpm
```
- Configure Kibana
```
[root@centos7-node1 ~]# cp /etc/kibana/kibana.yml /usr/local/src/
[root@centos7-node1 ~]# vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.56.11"
elasticsearch.hosts: ["http://192.168.56.11:9200","http://192.168.56.12:9200","http://192.168.56.13:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "elastic"
logging.dest: "/tmp/kibaba.log"
```
- Start Kibana
```
[root@centos7-node1 ~]# systemctl start kibana && systemctl enable kibana
[root@centos7-node1 ~]# tail -f /tmp/kibaba.log    # check the startup log
....
{"type":"log","@timestamp":"2020-11-14T15:07:29Z","tags":["listening","info"],"pid":13907,"message":"Server running at http://192.168.56.11:5601"}
```
- Access Kibana at http://192.168.56.11:5601
  The username and password are both elastic (set earlier during the ES deployment).
Simplifying ES operations with Kibana
- Simple queries
```
GET /
GET /_cat/nodes?v
GET /_cat/indices?v
```
- Inserting data
```
# Insert a document
PUT /nginx-logs/_doc/1
{
  "server_name": "www.baidu.com",
  "IP": "101.11.203.12"
}

## Query data
GET /nginx-logs/_doc/1
GET /nginx-logs/_search?q=*

# Insert a document with an auto-generated id
POST /nginx-logs/_doc
{
  "server_name": "tianyancha.com",
  "IP": "180.21.33.41"
}

# Query all documents
GET /nginx-logs/_search?q=*

# Update a document
POST /nginx-logs/_update/1
{
  "doc": {
    "IP": "111.23.33.23"
  }
}

# Update all documents
POST /nginx-logs/_update_by_query
{
  "script": {
    "source": "ctx._source['IP']='111.23.133.123'"
  },
  "query": {
    "match_all": {}
  }
}
GET /nginx-logs/_search?q=*

# Add a field
POST /nginx-logs/_update_by_query
{
  "script": {
    "source": "ctx._source['city']='beijing'"
  },
  "query": {
    "match_all": {}
  }
}

# Delete a document
DELETE /nginx-logs/_doc/1

# Insert data into other indices
POST /nginx-logs1/_doc
{
  "server_name": "tianyancha.com",
  "IP": "180.21.33.41"
}
POST /nginx-logs2/_doc
{
  "server_name": "tianyancha.com",
  "IP": "180.21.33.41"
}

# List indices
GET /_cat/indices?v

# Delete indices
DELETE /nginx-logs*
```
Elasticsearch Templates and Python Operations
Configuring index shards and replicas
Shard and replica counts: with three ES nodes there can be at most two replicas of each shard, since the remaining copy must hold the primary data.
```
# Set shards and replicas
PUT /nginx-log
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  }
}
GET /_cat/indices?v

# Get shard allocation information
GET /nginx-log/_search_shards

# Insert a document
POST /nginx-logs/_doc
{
  "server_name": "tianyancha.com",
  "IP": "180.21.33.41"
}

# Find which shard a document lives on (routing is the document ID)
GET /nginx-log/_search_shards?routing=GUmH0XUBiqEQwQWjL5hD
```
Once an index has been created its shard count cannot be changed, but the replica count can.
```
# Change the number of replicas
PUT /nginx-log/_settings
{
  "number_of_replicas": 1
}
```
Index templates
```
# Get index templates
GET /_template

# Create a simple index template
PUT _template/test-nginx
{
  "index_patterns": ["nginx*"],
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 2
  }
}

# Insert data
POST /nginx-logs/_doc
{
  "server_name": "www.tianyancha.com",
  "IP": "180.21.33.41"
}
POST /nginx-logs/_doc/1
{
  "server_name": "shanghai.tianyancha.com",
  "IP": "180.21.33.42"
}
POST /nginx-logs/_doc/2
{
  "server_name": "shanghai.tianyancha.com",
  "IP": "180.21.33.42"
}
```
Operating the ES cluster from Python
Environment preparation
- Install python36
[root@centos7-node2 ~]# yum -y install python36 python36-devel
- Upgrade pip3
[root@centos7-node2 ~]# pip3 install --upgrade pip -i https://mirrors.aliyun.com/pypi/simple
- Install the Python client for ES 7.6
[root@centos7-node2 ~]# pip3 install elasticsearch==7.6.0 -i https://mirrors.aliyun.com/pypi/simple
- Check that the elasticsearch module is installed
```
[root@centos7-node2 ~]# python3
Python 3.6.8 (default, Apr  2 2020, 13:34:55)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import elasticsearch
```
Using Python against the ES cluster
- Add data
```
[root@centos7-node2 ~]# vim add_data.py
#!/usr/bin/python3
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://elastic:elastic@192.168.56.11:9200',
                    'http://elastic:elastic@192.168.56.12:9200',
                    'http://elastic:elastic@192.168.56.13:9200'])
body = {"server_name": "https://baidu.com", "IP": "120.99.220.23"}
es.index(index='nginx-logs', body=body)
print("insert data success!!")
```
- Query data
```
[root@centos7-node2 ~]# cat search_data.py
#!/usr/bin/python3
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://elastic:elastic@192.168.56.11:9200',
                    'http://elastic:elastic@192.168.56.12:9200',
                    'http://elastic:elastic@192.168.56.13:9200'])
print(es.search(index='nginx-logs'))

[root@centos7-node2 ~]# python3 search_data.py    # query the data
{'took': 2, 'timed_out': False, '_shards': {'total': 1, 'successful': 1, 'skipped': 0, 'failed': 0}, 'hits': {'total': {'value': 4, 'relation': 'eq'}, 'max_score': 1.0, 'hits': [{'_index': 'nginx-logs', '_type': '_doc', '_id': 'GUmH0XUBiqEQwQWjL5hD', '_score': 1.0, '_source': {'server_name': 'tianyancha.com', 'IP': '180.21.33.41'}}, {'_index': 'nginx-logs', '_type': '_doc', '_id': 'GkmQ0XUBiqEQwQWjDpiD', '_score': 1.0, '_source': {'server_name': 'www.tianyancha.com', 'IP': '180.21.33.41'}}, {'_index': 'nginx-logs', '_type': '_doc', '_id': '1', '_score': 1.0, '_source': {'server_name': 'shanghai.tianyancha.com', 'IP': '180.21.33.42'}}, {'_index': 'nginx-logs', '_type': '_doc', '_id': 'BA-e0XUB4OFcF5NOHHFn', '_score': 1.0, '_source': {'server_name': 'https://baidu.com', 'IP': '120.99.220.23'}}]}}
```
- Delete an index
```
[root@centos7-node2 ~]# vim delete_index.py
#!/usr/bin/python3
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://elastic:elastic@192.168.56.11:9200',
                    'http://elastic:elastic@192.168.56.12:9200',
                    'http://elastic:elastic@192.168.56.13:9200'])
print(es.indices.delete(index='nginx-logs'))
```
- Insert data in a loop
```
#!/usr/bin/python3
from elasticsearch import Elasticsearch
import time

es = Elasticsearch(['http://elastic:elastic@192.168.56.11:9200',
                    'http://elastic:elastic@192.168.56.12:9200',
                    'http://elastic:elastic@192.168.56.13:9200'])
for i in range(1, 10000):
    body = {"server_name": "https://{0}.baidu.com".format(i), "IP": "120.99.220.23", "count": i}
    es.index(index='nginx-logs', body=body)
    time.sleep(0.1)
    print("insert {0}".format(i))
```
Practical Logstash Skills
Installing Logstash and basic usage
Official docs: https://www.elastic.co/guide/en/logstash/7.6/index.html
Logstash features
- Filters and processes logs
- Can also be used for log collection (though it is usually not used this way)
- Supported inputs: standard input, log files, etc.
- Supported outputs: standard output, Elasticsearch, etc.
Installing and deploying Logstash
- Node: 192.168.56.14 (centos7-node4)
- Basic environment
- CentOS 7
- Time synchronization
- yum repositories
```
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel ntpdate
ntpdate time.windows.com
```
- Install Logstash
Reference: https://www.elastic.co/guide/en/logstash/7.6/installing-logstash.html
```
[root@centos7-node4 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.2.rpm
[root@centos7-node4 ~]# yum localinstall logstash-7.6.2.rpm -y
```
- Configure the Logstash JVM parameters
```
[root@centos7-node4 ~]# vim /etc/logstash/jvm.options
-Xms200M
-Xmx200M
```
- Configure Logstash
```
[root@centos7-node4 ~]# cp /etc/logstash/logstash-sample.conf /etc/logstash/conf.d/logstash.conf
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  stdin {}
}
output {
  stdout {
    codec => rubydebug
  }
}
```
- Start and test Logstash
```
[root@centos7-node4 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
....
INFO ] 2019-11-09 05:17:20.482 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
logstash
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
      "@version" => "1",
       "message" => "logstash",
          "host" => "centos7-node4",
    "@timestamp" => 2019-11-08T21:17:41.189Z
}
hello word
{
      "@version" => "1",
       "message" => "hello word",
          "host" => "centos7-node4",
    "@timestamp" => 2019-11-08T21:17:47.686Z
....
```
Reading a log file with Logstash
- Environment preparation: install nginx
```
[root@centos7-node4 ~]# yum -y install nginx
[root@centos7-node4 ~]# vim /usr/lib/systemd/system/nginx.service    # adjust the nginx unit file
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
# Nginx will fail to start if /run/nginx.pid already exists but has the wrong
# SELinux context. This might happen when running `nginx -t` from the cmdline.
# https://bugzilla.redhat.com/show_bug.cgi?id=1268621
ExecStartPre=/usr/bin/rm -f /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target

[root@centos7-node4 ~]# systemctl start nginx && systemctl enable nginx    # start the service
[root@centos7-node4 ~]# curl localhost
[root@centos7-node4 ~]# tail /var/log/nginx/access.log
```
- Logstash configuration
```
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/nginx/access.log"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
```
- Test Logstash
```
[root@centos7-node4 ~]# systemctl restart logstash
[root@centos7-node4 ~]# tail -f /var/log/logstash/logstash-plain.log
[2019-11-09T05:29:03,147][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2019-11-09T05:29:03,205][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"93219590-7b15-41b6-b262-c2c4077278f9", :path=>"/var/lib/logstash/uuid"}
[2019-11-09T05:29:05,203][INFO ][org.reflections.Reflections] Reflections took 54 ms to scan 1 urls, producing 20 keys and 40 values
[2019-11-09T05:29:06,349][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-11-09T05:29:06,375][INFO ][logstash.javapipeline
```
At this point Logstash cannot read the log because of a permissions problem on the nginx log directory, so loosen the permissions:
[root@centos7-node4 ~]# chmod 775 -R /var/log/nginx/
- Watch the messages log (new log lines must be arriving for anything to show up)
[root@centos7-node4 ~]# tail -f /var/log/messages
Sending log content from Logstash to ES
Notes:
Logstash can read logs and ship them to ES.
However, Logstash is heavyweight when used as a log collector; this will be optimized later.
Hands-on
- Clean out the earlier data in the ES cluster
- Configure Logstash to send the logs to ES
```
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/nginx/access.log"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.56.11:9200","192.168.56.12:9200","192.168.56.13:9200"]
    user => "elastic"
    password => "elastic"
    index => "nginx-logs-%{+YYYY-MM-dd}"
  }
}
```
- Restart Logstash
```
[root@centos7-node4 ~]# systemctl restart logstash
[root@centos7-node4 ~]# ps -ef | grep logstash
```
- Hit http://192.168.56.14 several times (this is where my nginx runs)
- Check in Kibana whether the index exists
- Add the index to Kibana
- Inspect the data being written
Extracting Nginx Log Fields with Regex in Logstash
Why extract fields from the nginx log?
- A whole log line cannot be analyzed as one blob; individual fields have to be extracted
- To find which IP generates the most traffic
- To analyze the nginx response status codes (a query sketch follows this list)
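Once remote_addr and status exist as separate fields, both questions reduce to a terms aggregation. A rough sketch with the Python client from earlier; the nginx-logs-* index pattern and the field names mirror the grok configuration below, and the use of the .keyword sub-field assumes the default dynamic mapping:

```python
#!/usr/bin/python3
# Sketch: top client IPs and status-code counts over the extracted nginx fields.
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://elastic:elastic@192.168.56.11:9200'])

query = {
    "size": 0,
    "aggs": {
        "top_ips": {"terms": {"field": "remote_addr.keyword", "size": 5}},
        "status_codes": {"terms": {"field": "status"}}
    }
}
resp = es.search(index="nginx-logs-*", body=query)

for bucket in resp["aggregations"]["top_ips"]["buckets"]:
    print("IP", bucket["key"], "->", bucket["doc_count"], "requests")
for bucket in resp["aggregations"]["status_codes"]["buckets"]:
    print("status", bucket["key"], "->", bucket["doc_count"])
```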
Default nginx log format and configuration
- Log sample
192.168.56.1 - - [09/Nov/2019:05:24:08 +0800] "GET / HTTP/1.1" 200 4833 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" "-"
- Log format configuration
```
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
```
Grok, the log extraction workhorse
A working knowledge of regular expressions is needed; use Kibana's Grok Debugger to validate the extraction.
- Hand-written regex extraction
- Built-in pattern extraction (much simpler)
[root@centos7-node4 ~]# cat /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
- Common regular expression symbols
  - `.`   any single character
  - `*`   the preceding character repeated zero or more times
  - `[abc]`   any one character inside the brackets
  - `[^abc]`   any character not inside the brackets
  - `[0-9]`   a digit
  - `[a-z]`   a lowercase letter
  - `[A-Z]`   an uppercase letter
  - `[a-zA-Z]`   any letter
  - `[a-zA-Z0-9]`   any letter or digit
  - `[^0-9]`   a non-digit
  - `^xxx`   starts with xxx
  - `xxx$`   ends with xxx
  - `\s`   a whitespace character
  - `\S`   a non-whitespace character
  - `\d`   a digit
Kibana log extraction walkthrough
- Extended (grok) patterns, which build on top of ordinary regular expressions:
```
%{IP:remote_addr} - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER:http_version}" %{NUMBER:status} %{NUMBER:body_byte_sent} %{QS} %{QS:http_user_agent}
```
- Mixed pattern (plain regex combined with grok macros; a rough Python equivalent follows):
```
(?<remote_addr>\d+\.\d+\.\d+\.\d+) - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER:http_version}" %{NUMBER:status} %{NUMBER:body_byte_sent} %{QS} %{QS:http_user_agent}
```
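For intuition about what the grok macros expand to, here is a rough Python equivalent applied to the sample log line shown earlier; the named groups play the role of the grok field names, and the pattern is deliberately simplified for illustration:

```python
#!/usr/bin/python3
# Rough illustration of grok-style extraction: named capture groups over one nginx access-log line.
import re

line = ('192.168.56.1 - - [09/Nov/2019:05:24:08 +0800] "GET / HTTP/1.1" 200 4833 "-" '
        '"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
        'Chrome/83.0.4103.61 Safari/537.36" "-"')

pattern = (r'(?P<remote_addr>\d+\.\d+\.\d+\.\d+) - (?P<remote_user>\S+) '
           r'\[(?P<time_local>[^\]]+)\] "(?P<method>\w+) (?P<request>\S+) HTTP/(?P<http_version>[\d.]+)" '
           r'(?P<status>\d+) (?P<body_byte_sent>\d+) "(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"')

m = re.match(pattern, line)
if m:
    print(m.groupdict())   # each named group becomes a field, which is what a grok capture produces
```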
Writing grok-parsed logs to ES with Logstash
- Configure Logstash with the grok filter
```
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/nginx/access.log"
  }
}
filter {
  grok {
    match => {
      "message" => '%{IP:remote_addr} - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER}" %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS} %{QS:http_user_agent}'
    }
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.56.11:9200","192.168.56.12:9200","192.168.56.13:9200"]
    user => "elastic"
    password => "elastic"
    index => "nginx-logs-%{+YYYY-MM-dd}"
  }
}
```
- Restart Logstash
```
[root@centos7-node4 ~]# systemctl restart logstash
[root@centos7-node4 ~]# tail -f /var/log/logstash/logstash-plain.log
[root@centos7-node4 ~]# ps -ef | grep logstash
```
- Access nginx (192.168.56.14)
Simulate user traffic against nginx:
```
while true; do
  curl 192.168.56.14/wanghui666
  curl 127.0.0.1
  sleep 2
done
```
- Check Kibana
Handling the exclamation-mark warning in Kibana
The exclamation mark appears when the index mapping has changed (new log fields show up) after the Kibana index pattern was created.
- Refresh the Kibana index pattern
- Operations on Kibana index patterns do not touch the underlying data; deleting and recreating a pattern is safe
- View the index
Handling and Transforming Special Fields in Logstash
- Strip the quotes from a field
```
mutate {
  gsub => ["http_user_agent",'"',""]
}
```
- Convert numeric strings to integers
```
mutate {
  convert => { "status" => "integer" }
  convert => { "body_bytes_sent" => "integer" }
}
```
- The complete Logstash configuration:
```
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/nginx/access.log"
  }
}
filter {
  grok {
    match => {
      "message" => '%{IP:remote_addr} - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER}" %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS} %{QS:http_user_agent}'
    }
    remove_field => ["message"]
  }
  mutate {
    gsub => ["http_user_agent",'"',""]
    convert => { "status" => "integer" }
    convert => { "body_bytes_sent" => "integer" }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.56.11:9200","192.168.56.12:9200","192.168.56.13:9200"]
    user => "elastic"
    password => "elastic"
    index => "nginx-logs-%{+YYYY-MM-dd}"
  }
}
```
- Restart Logstash
[root@centos7-node4 ~]# kill -1 $(ps -ef | grep logstash | grep -v grep | awk '{print $2}')
- Access nginx
- Check the result in Kibana
Replacing the @timestamp in Logstash
The problem: @timestamp records when Logstash processed the event, not the time written in the log line itself.
The fix proceeds as follows:
- Delete the index
- Modify the Logstash configuration
```
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => {
      "message" => '%{IP:remote_addr} - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER}" %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS} %{QS:http_user_agent}'
    }
    remove_field => ["message"]
  }
  mutate {
    gsub => ["http_user_agent",'"',""]
    convert => { "status" => "integer" }
    convert => { "body_bytes_sent" => "integer" }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.56.11:9200","192.168.56.12:9200","192.168.56.13:9200"]
    user => "elastic"
    password => "elastic"
    index => "nginx-logs-%{+YYYY-MM-dd}"
  }
}
```
- Restart Logstash
```
[root@centos7-node4 ~]# kill -1 $(ps -ef | grep logstash | grep -v grep | awk '{print $2}')
[root@centos7-node4 ~]# tail -f /var/log/messages
```
- The problem becomes visible
Use the access time from the nginx log line to overwrite the timestamp shown in Kibana.
- Logstash configuration:
```
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => {
      "message" => '%{IP:remote_addr} - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER}" %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS} %{QS:http_user_agent}'
    }
    remove_field => ["message"]
  }
  date {
    match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]
    target => "@timestamp"
  }
  mutate {
    gsub => ["http_user_agent",'"',""]
    convert => { "status" => "integer" }
    convert => { "body_bytes_sent" => "integer" }
    remove_field => ["time_local"]
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.56.11:9200","192.168.56.12:9200","192.168.56.13:9200"]
    user => "elastic"
    password => "elastic"
    index => "nginx-logs-%{+YYYY-MM-dd}"
  }
}
```
- Delete the log index
- Restart Logstash
```
[root@centos7-node4 ~]# kill -1 $(ps -ef | grep logstash | grep -v grep | awk '{print $2}')
[root@centos7-node4 ~]# tail -f /var/log/messages
```
- View the result in Kibana
Note
If a log uses a different time format, the pattern given to the date filter must match it (a quick Python check is sketched below):
20/Feb/2019:14:50:06 -> dd/MMM/yyyy:HH:mm:ss
2016-08-24 18:05:39,830 -> yyyy-MM-dd HH:mm:ss,SSS
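Before putting a mapping like the ones above into the date filter, it can be sanity-checked in Python; the strptime directives below stand in for the Joda-style tokens and are an assumption made only for this check:

```python
#!/usr/bin/python3
# Quick check that a raw log timestamp parses with the intended format,
# with Python strptime directives standing in for the date-filter tokens.
from datetime import datetime

# dd/MMM/yyyy:HH:mm:ss Z   ->   %d/%b/%Y:%H:%M:%S %z
print(datetime.strptime("20/Feb/2019:14:50:06 +0800", "%d/%b/%Y:%H:%M:%S %z"))

# yyyy-MM-dd HH:mm:ss,SSS  ->   %Y-%m-%d %H:%M:%S,%f
print(datetime.strptime("2016-08-24 18:05:39,830", "%Y-%m-%d %H:%M:%S,%f"))
```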
Grok Extraction and Exception Handling in Logstash
- Append malformed lines to the nginx log
```
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
```
- The malformed entries show up in Kibana
- Route events that fail parsing to a separate index
```
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => {
      "message" => '%{IP:remote_addr} - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER}" %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS} %{QS:http_user_agent}'
    }
    remove_field => ["message"]
  }
  date {
    match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]
    target => "@timestamp"
  }
  mutate {
    gsub => ["http_user_agent",'"',""]
    convert => { "status" => "integer" }
    convert => { "body_bytes_sent" => "integer" }
    remove_field => ["time_local"]
  }
}
output {
  if "_grokparsefailure" not in [tags] and "_dateparsefailure" not in [tags] {
    elasticsearch {
      hosts => ["http://192.168.56.11:9200","192.168.56.12:9200","192.168.56.13:9200"]
      user => "elastic"
      password => "elastic"
      index => "nginx-logs-%{+YYYY-MM-dd}"
    }
  } else {
    elasticsearch {
      hosts => ["http://192.168.56.11:9200","192.168.56.12:9200","192.168.56.13:9200"]
      user => "elastic"
      password => "elastic"
      index => "nginx-err-logs-%{+YYYY-MM-dd}"
    }
  }
}
```
- Restart Logstash
```
[root@centos7-node4 ~]# kill -1 $(ps -ef | grep logstash | grep -v grep | awk '{print $2}')
[root@centos7-node4 ~]# tail -f /var/log/messages
```
- Write malformed lines again
```
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
[root@centos7-node4 ~]# echo "sdadsadsdsadsfac sadf fwekfwekwfadfaofcr" >> /var/log/nginx/access.log
```
- Add the new index to Kibana and inspect the data
Introduction to Kibana Visualizations
- Generate sample traffic (on the nginx host)
```
[root@centos7-node4 ~]# while true; do curl 192.168.56.14/cropy666; curl 127.0.0.1; sleep 2; done
```
- The home (Discover) area
  - view request volume over time, e.g. hits per minute
  - filter by a specific field
  - view statistics for a single field
- Creating Kibana visualizations: choose a terms aggregation to look at the corresponding data
  - create a pie chart: pie_remote_addr
  - create a data table: table_remote_addr
- Create a Kibana dashboard: cropy_dash
  - create the dashboard
  - add the visualizations to it
- Grafana is recommended for presentation dashboards
Analyzing Linux System Logs with Logstash
- The system log
```
[root@centos7-node4 ~]# cat /var/log/secure
Nov 21 20:47:54 centos7-node4 polkitd[712]: Registered Authentication Agent for unix-process:2314:2063304 (system bus name :1.177 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 21 20:47:56 centos7-node4 polkitd[712]: Unregistered Authentication Agent for unix-process:2314:2063304 (system bus name :1.177, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
```
- Change the syslog timestamp format so that it includes the year
```
[root@centos7-node4 ~]# vim /etc/rsyslog.conf
# Use default timestamp format
# $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$template tycformat,"%$NOW% %TIMESTAMP:8:15% %hostname% %syslogtag% %msg%\n"
$ActionFileDefaultTemplate tycformat

[root@centos7-node4 ~]# systemctl restart rsyslog
```
- Verify that it took effect
```
[root@centos7-node4 ~]# tail -f /var/log/secure
Nov 21 20:47:54 centos7-node4 polkitd[712]: Registered Authentication Agent for unix-process:2314:2063304 (system bus name :1.177 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 21 20:47:56 centos7-node4 polkitd[712]: Unregistered Authentication Agent for unix-process:2314:2063304 (system bus name :1.177, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 22 00:05:36 centos7-node4 polkitd[712]: Registered Authentication Agent for unix-process:5177:3249450 (system bus name :1.308 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
2020-11-22 00:05:36 centos7-node4 polkitd[712]: Unregistered Authentication Agent for unix-process:5177:3249450 (system bus name :1.308, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
2020-11-22 00:06:23 centos7-node4 sshd[5191]: Accepted password for root from 192.168.56.1 port 57635 ssh2
2020-11-22 00:06:23 centos7-node4 sshd[5191]: pam_unix(sshd:session): session opened for user root by (uid=0)
```
- Work out the grok pattern for the new format with Kibana's Grok Debugger
```
2020-11-22 00:06:23 centos7-node4 sshd[5191]: Accepted password for root from 192.168.56.1 port 57635 ssh2

%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE} %{NOTSPACE:procinfo}: (?<secinfo>.*)
```
- Collect the system log with Logstash (/var/log/messages works the same way)
```
[root@centos7-node4 ~]# chmod +r /var/log/secure
[root@centos7-node4 ~]# vim /etc/logstash/conf.d/logstash-sys.conf
input {
  file {
    path => "/var/log/secure"
  }
}
filter {
  grok {
    match => {
      "message" => '%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE} %{NOTSPACE:procinfo}: (?<secinfo>.*)'
    }
    remove_field => ["message"]
  }
  date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
    target => "@timestamp"
  }
  mutate {
    remove_field => ["timestamp"]
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.56.11:9200","192.168.56.12:9200","192.168.56.13:9200"]
    user => "elastic"
    password => "elastic"
    index => "system-secure-%{+YYYY.MM.dd}"
  }
}
```
- Restart Logstash
```
[root@centos7-node4 ~]# systemctl restart logstash
[root@centos7-node4 ~]# tail -f /var/log/messages
```
- Add the index to Kibana