Part 1. Elasticsearch setup and basic operations
1. Introduction to Elasticsearch
Elasticsearch (hereafter "ES") needs little introduction; the official tagline "You Know, for Search" sums up its core purpose. The underlying machinery, such as indices, clusters, and shards, is not the focus of this article and is covered well elsewhere.
Version differences (from the official Elasticsearch documentation):
Elasticsearch 5.6.0: multiple types per index are supported; setting index.mapping.single_type: true restricts an index to a single type
Elasticsearch 6.x: indices created in 6.x only allow a single type per index
Elasticsearch 7.x: specifying types in requests is deprecated; the type segment of request paths is fixed to _doc, effectively one built-in type that the docs describe as a virtual type, best understood as an endpoint in the spirit of _search or _source
Elasticsearch 8.x: specifying types in requests is no longer supported
2. Installation
a. Pull the image
docker pull elasticsearch:7.3.0
b. Run the container
docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name es01 elasticsearch:7.3.0
c. Adjust the configuration
Enter the container and edit /usr/share/elasticsearch/config/elasticsearch.yml:
network.host: 0.0.0.0
d. Restart the container
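To apply the change, restart the container and give ES a quick health check (es01 is the container name from step b; a single-node cluster should report yellow or green):
docker restart es01
curl http://localhost:9200/_cluster/health?pretty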
3. Creating indices and documents
REST API syntax
a. curl -X PUT http://host:port/index?pretty=true
b. curl -X PUT http://host:port/index/type/id -H 'Content-Type:application/json' -d 'json content' (on 7.x, use _doc for the type segment)
c. curl -X POST http://host:port/index/type -H 'Content-Type:application/json' -d 'json content' (on 7.x, use _doc for the type segment)
Note the difference between b and c: the HTTP methods differ, and b specifies an id while c does not. These are the two ways of creating a document: with a caller-supplied id or an ES-generated one (see the sketch below).
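As an illustration of c, a sketch that lets ES generate the id; the document body here is made up for the example:
curl -X POST http://localhost:9200/book/_doc?pretty=true -H 'Content-Type:application/json' -d '{"name":"effective java","ver":"3.0"}'
The response mirrors the PUT case below, except that _id is auto-generated.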
Create a document with id 1 under the book index:
curl -X PUT http://localhost:9200/book/_doc/1?pretty=true -H 'Content-Type:application/json' -d '{"name":"thinking in java","ver":"1.0"}'
{
"_index" : "book",
"_type" : "_doc",
"_id" : "1",
"_version" : 1,
"result" : "created",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 0,
"_primary_term" : 1
}
PUT-ing the same document again performs an update; the response then looks like this:
{
"_index" : "book",
"_type" : "_doc",
"_id" : "1",
"_version" : 2,
"result" : "updated",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 2,
"_primary_term" : 1
}
Now try to create a document with id 1 under a type named python in the same book index:
curl -X PUT http://localhost:9200/book/python/1?pretty=true -H 'Content-Type:application/json' -d '{"name":"python1","version":"1.0.0","description":"this is for python"}'
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "Rejecting mapping update to [book] as the final mapping would have more than 1 type: [_doc, python]"
}
],
"type" : "illegal_argument_exception",
"reason" : "Rejecting mapping update to [book] as the final mapping would have more than 1 type: [_doc, python]"
},
"status" : 400
}
The creation fails: versions 6.0 and later do not allow more than one type per index.
4. Querying documents
REST API syntax
curl http://host:port/index/_search (returns all documents in an index)
curl http://host:port/index/type/1 (fetches the document with id 1)
Query examples
curl http://localhost:9200/book/_search?pretty=true
{
"took" : 3,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "book",
"_type" : "_doc",
"_id" : "1",
"_score" : 1.0,
"_source" : {
"name" : "thinking in java",
"ver" : "1.0"
}
}
]
}
}
curl http://localhost:9200/book/_doc/1?pretty=true
{
"_index" : "book",
"_type" : "_doc",
"_id" : "1",
"_version" : 1,
"_seq_no" : 0,
"_primary_term" : 1,
"found" : true,
"_source" : {
"name" : "thinking in java",
"ver" : "1.0"
}
}
Conditional query syntax
a. Query condition
"query" : {
"term" : { "user" : "kimchy" }
}
b. Sorting
"sort":[
{ "post_date":{ "order":"asc" }},
{ "name":"desc" }
]
c. Result window (paging)
"from":0
"size":10
d. Field projection (source filtering)
"_source": {
"include": [ "filed1", "field2" ],
"exclude": [ "field3" ]
}
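Combining a through d into a single request against the book index above, as a sketch (it assumes the default dynamic mapping, which gives string fields a .keyword sub-field suitable for exact matching and sorting):
curl http://localhost:9200/book/_search?pretty=true -H 'Content-Type:application/json' -d '
{
  "query": { "term": { "ver.keyword": "1.0" } },
  "sort": [ { "ver.keyword": { "order": "asc" } } ],
  "from": 0,
  "size": 10,
  "_source": { "includes": [ "name", "ver" ] }
}'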
5. Deleting indices and documents
REST API syntax
curl -X DELETE http://host:port/index
curl -X DELETE http://host:port/index/type/record
Deletion examples
curl -X DELETE http://localhost:9200/book?pretty=true
curl -X DELETE http://localhost:9200/book/_doc/1?pretty=true
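For reference, successful deletions are acknowledged roughly as follows (abbreviated; exact bodies vary by version): the index deletion returns { "acknowledged" : true }, and the document deletion returns a body shaped like the creation response but with "result" : "deleted".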
6. Updating documents
REST API syntax
curl -X PUT http://host:port/index/type/id -H 'Content-Type:application/json' -d 'json content' (PUT-ing an existing document performs an update)
curl -X POST http://host:port/index/type/id/_update -H 'Content-Type:application/json' -d '{"doc":{"field":"value"}}' (partial update via the _update endpoint, pre-7.x path style)
curl -X POST http://host:port/index/_update/id -H 'Content-Type:application/json' -d '{"doc":{"field":"value"}}' (partial update via the _update endpoint, 7.x path style)
Update example
curl http://localhost:9200/book/_update/1?pretty=true -X POST -H 'Content-Type:application/json' -d '{"doc":{"name":"python programming"}}'
{
"_index" : "book",
"_type" : "_doc",
"_id" : "1",
"_version" : 3,
"result" : "updated",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 5,
"_primary_term" : 1
}
Part 2. Building an ELK log collection and analysis system
1. The log collection and analysis pipeline
Data input (collection) -> data processing -> data output (storage)
a. Collect log files, tail them in real time, and pick up changes: Filebeat
b. Log formatting: Logstash
c. Log storage and querying: Elasticsearch
d. Visualization: Kibana
2. Filebeat setup and configuration
Logstash can read files directly, but Filebeat sits in front of it as the collector because it is lightweight and needs no Java runtime, among other advantages.
a. Pull the image
docker pull prima/filebeat:6.4.2
b. Write the configuration file
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/shared_disk/tomcat_logs/*.log
#output.logstash:
#  enabled: true
#  hosts: ["localhost:5044"]
output.console:
  enabled: true
  pretty: true
This is a minimal configuration that prints collected entries to the console; more advanced options are beyond the scope of this article and can be found in the Filebeat documentation.
c. Start the container
docker run -d --name filebeat01 -v /home/shared_disk/filebeat/filebeat.yml:/filebeat.yml -v /home/shared_disk/tomcat_logs:/home/shared_disk/tomcat_logs prima/filebeat:6.4.2
Mind the mounted directory and file paths so that the log files are actually visible inside the container; two quick sanity checks are sketched below.
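A hedged sketch of those checks (it assumes the filebeat binary is on the container's PATH; filebeat test config is a standard Filebeat subcommand, and the test line simply reuses the sample message from the output shown next):
docker exec filebeat01 filebeat test config -c /filebeat.yml
echo '2019-08-23 06:51:34,030 [http-nio-8083-exec-9] INFO com.allen.dockerlearn.DockerLearnApplication - this is the message for test' >> /home/shared_disk/tomcat_logs/docker-learn-info.log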
Once a .log file under /home/shared_disk/tomcat_logs/ receives new entries, output like the following on the Filebeat console indicates that collection is working:
{
"@timestamp": "2019-08-23T06:51:37.889Z",
"@metadata": {
"beat": "filebeat",
"type": "doc",
"version": "6.4.2"
},
"source": "/home/shared_disk/tomcat_logs/docker-learn-info.log",
"offset": 47547,
"message": "2019-08-23 06:51:34,030 [http-nio-8083-exec-9] INFO com.allen.dockerlearn.DockerLearnApplication - this is the message for test",
"prospector": {
"type": "log"
},
"input": {
"type": "log"
},
"beat": {
"hostname": "efd575c7aa99",
"version": "6.4.2",
"name": "efd575c7aa99"
},
"host": {
"name": "efd575c7aa99"
}
}
3. Logstash setup and configuration
Logstash's job is to format the input data, convert types, add and remove fields, and so on, producing the shape the user wants and writing it to the configured destination.
a. Pull the image
docker pull logstash:7.3.0
b. Start the container
docker run -d -p 5044:5044 -p 9600:9600 -v /home/shared_disk/tomcat_logs/:/home/shared_disk/tomcat_logs/ --name logstash01 logstash:7.3.0
Note that since the input here is log files, permissions on the tomcat_logs directory matter: the container runs as the logstash user, and without read access it cannot use the mounted directory.
c. Inside the container, edit the configuration under /usr/share/logstash/config
Edit pipelines.yml:
- pipeline.id: logstash-1
  path.config: "/usr/share/logstash/config/*.conf"
Create my-logstash.conf:
input {
  file {
    path => "/home/shared_disk/tomcat_logs/*.log"
    type => "system"
    start_position => "beginning"
  }
}
filter {
}
output {
  stdout {
  }
}
d. Restart the container and watch its log; entries like the one below confirm the pipeline is reading the files.
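Concretely (logstash01 is the container name from step b):
docker restart logstash01
docker logs -f logstash01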
{
"path" => "/home/shared_disk/tomcat_logs/docker-learn-info.log",
"@timestamp" => 2019-08-23T09:32:57.085Z,
"host" => "d677fed8b760",
"@version" => "1",
"message" => "2019-08-23 09:32:56,976 [http-nio-8083-exec-3] INFO com.allen.dockerlearn.DockerLearnApplication - this is the message for test",
"type" => "system"
}
4. Kibana setup
a. Pull the image
docker pull kibana:7.3.0
b. Start the container
docker run -d -p 5601:5601 --name kibana01 kibana:7.3.0
c. Edit the configuration file
Edit /usr/share/kibana/config/kibana.yml:
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://192.168.16.84:9200" ]
d. Restart the container
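Concretely, restart and optionally confirm Kibana answers on its port (kibana01 is the name from step b; /api/status is Kibana's standard status endpoint):
docker restart kibana01
curl http://localhost:5601/api/status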
5. ELK integration
Building on the setup above, completing the integration only requires changing Filebeat's output and Logstash's input and output.
a. Filebeat output change
output.logstash:
  enabled: true
  hosts: ["192.168.16.84:5044"]
b. Logstash input change (in my-logstash.conf, replace the file input with a beats input):
beats {
  port => 5044
}
c. Logstash output change (replace the stdout output):
elasticsearch {
  hosts => "192.168.16.84:9200"
  index => "api-log"
}
d. Log formatting configuration
The events Logstash receives from Filebeat carry many fields, some added by Filebeat and some by Logstash itself, but the part we actually care about is message, i.e.:
"message" => "2019-08-23 09:32:56,976 [http-nio-8083-exec-3] INFO com.allen.dockerlearn.DockerLearnApplication - this is the message for test"
So the data needs further processing; add the following filter section to the Logstash configuration:
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:log_time}\s+\[%{DATA:thread_name}\]\s+%{LOGLEVEL:log_level}\s+%{DATA:class_name}\s+-\s+(?<log_info>(\w+\s*)*)"
    }
  }
  mutate {
    add_field => {
      "host_name" => "%{[host][name]}"
    }
    remove_field => ["source","beat","host","@version","@timestamp","prospector","input","tags","offset","message"]
  }
}
This filter extracts parts of message into fields named log_time, thread_name, log_level, class_name, and log_info, adds a host_name field populated from the original host.name, and removes source, beat, ... message and the other listed fields. The resulting documents look like this:
{
"thread_name" => "http-nio-8083-exec-1",
"log_time" => "2019-08-26 13:31:11,029",
"host_name" => "efd575c7aa99",
"log_info" => "this is the message for test",
"class_name" => "com.allen.dockerlearn.DockerLearnApplication",
"log_level" => "INFO"
}
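For reference, here is the fully assembled my-logstash.conf, stitched together from the fragments above (the IP address is the example value used throughout this article):
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:log_time}\s+\[%{DATA:thread_name}\]\s+%{LOGLEVEL:log_level}\s+%{DATA:class_name}\s+-\s+(?<log_info>(\w+\s*)*)"
    }
  }
  mutate {
    add_field => {
      "host_name" => "%{[host][name]}"
    }
    remove_field => ["source","beat","host","@version","@timestamp","prospector","input","tags","offset","message"]
  }
}
output {
  elasticsearch {
    hosts => "192.168.16.84:9200"
    index => "api-log"
  }
}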
That completes the ELK integration.
Open the Kibana home page and create an index pattern in the settings; the Discover tab then supports searching and inspecting logs, and the Visualize tab supports statistics and analysis.