Example: take three documents, remove their stop words, and build an inverted index over them.
Then query for the documents that contain 「搜索引擎」 (search engine).
The term dictionary is usually implemented as a B+ tree; a visualization of how a B+ tree is built is available at: B+ Tree Visualization
About B-trees and B+ trees
In a B+ tree, the internal nodes hold the index and the leaf nodes hold the data. Here, the term dictionary is the B+ tree index and the posting lists are the data; the two are combined into a single structure.
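As a minimal sketch of that combined structure (the term 「搜索引擎」 comes from the query above; the document IDs are purely hypothetical), each term in the dictionary points to the posting list of the documents that contain it:
# Illustration only: document IDs are made up
{
  "搜索引擎": [1, 3]
}
Answering the query for 「搜索引擎」 then amounts to looking the term up in the B+ tree dictionary and returning its posting list.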
Note: how are Chinese and English terms ordered inside the B+ tree index? By Unicode code point, or by pinyin?
ES stores documents as JSON; a document contains multiple fields, and each field has its own inverted index.
Analysis (分詞) is the process of converting text into a sequence of terms (or tokens); it can also be called text analysis, and in ES it is called Analysis.
An analyzer is the ES component dedicated to this work; it is made up of Character Filters, a Tokenizer, and Token Filters.
The components are called in that order: Character Filters first, then the Tokenizer, then the Token Filters.
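You can see all three stages working together through the _analyze API; the request below is only an illustrative sketch (the sample text is made up) that strips HTML with a character filter, splits words with the standard tokenizer, and lowercases them with a token filter.
# Sketch: char_filter -> tokenizer -> token filter, on made-up text
POST _analyze
{
  "char_filter": ["html_strip"],
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": ["<b>Hello</b> World"]
}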
ES provides an API for testing analysis, which makes it easy to verify tokenization results; the endpoint is _analyze. You can analyze text against a field of an existing index, or against an analyzer/tokenizer/filter combination specified directly in the request, as the examples below show.
# First index a document so that test_index has a username field
POST test_index/doc
{
"username": "whirly",
"age":22
}
# Analyze text with the analyzer configured for the username field
POST test_index/_analyze
{
"field": "username",
"text": ["hello world"]
}
# Analyze text by specifying a tokenizer and token filters directly
POST _analyze
{
"tokenizer": "standard",
"filter": ["lowercase"],
"text": ["Hello World"]
}
ES ships with a number of built-in analyzers, including the Standard, Simple, Whitespace, Stop, Keyword, Pattern, and Language analyzers.
Example: the Stop Analyzer, which removes stop words:
POST _analyze
{
"analyzer": "stop",
"text": ["The 2 QUICK Brown Foxes jumped over the lazy dog's bone."]
}
Result:
{
"tokens": [
{
"token": "quick",
"start_offset": 6,
"end_offset": 11,
"type": "word",
"position": 1
},
{
"token": "brown",
"start_offset": 12,
"end_offset": 17,
"type": "word",
"position": 2
},
{
"token": "foxes",
"start_offset": 18,
"end_offset": 23,
"type": "word",
"position": 3
},
{
"token": "jumped",
"start_offset": 24,
"end_offset": 30,
"type": "word",
"position": 4
},
{
"token": "over",
"start_offset": 31,
"end_offset": 35,
"type": "word",
"position": 5
},
{
"token": "lazy",
"start_offset": 40,
"end_offset": 44,
"type": "word",
"position": 7
},
{
"token": "dog",
"start_offset": 45,
"end_offset": 48,
"type": "word",
"position": 8
},
{
"token": "s",
"start_offset": 49,
"end_offset": 50,
"type": "word",
"position": 9
},
{
"token": "bone",
"start_offset": 51,
"end_offset": 55,
"type": "word",
"position": 10
}
]
}
# Run this command from the Elasticsearch installation directory, then restart ES
bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.3.0/elasticsearch-analysis-ik-6.3.0.zip
# If the installation fails because of a slow network, download the zip package first, change the command below to its actual path, run it, then restart ES
bin/elasticsearch-plugin install file:///path/to/elasticsearch-analysis-ik-6.3.0.zip
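As an optional sanity check (not part of the installation itself), you can list the plugins loaded on each node via the _cat API and confirm the IK plugin shows up:
# Lists the plugins installed on every node
GET _cat/plugins?v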
POST _analyze
{
"analyzer": "ik_smart",
"text": ["公安部:各地校車將享最高路權"]
}
# Result
{
"tokens": [
{
"token": "公安部",
"start_offset": 0,
"end_offset": 3,
"type": "CN_WORD",
"position": 0
},
{
"token": "各地",
"start_offset": 4,
"end_offset": 6,
"type": "CN_WORD",
"position": 1
},
{
"token": "校車",
"start_offset": 6,
"end_offset": 8,
"type": "CN_WORD",
"position": 2
},
{
"token": "將",
"start_offset": 8,
"end_offset": 9,
"type": "CN_CHAR",
"position": 3
},
{
"token": "享",
"start_offset": 9,
"end_offset": 10,
"type": "CN_CHAR",
"position": 4
},
{
"token": "最高",
"start_offset": 10,
"end_offset": 12,
"type": "CN_WORD",
"position": 5
},
{
"token": "路",
"start_offset": 12,
"end_offset": 13,
"type": "CN_CHAR",
"position": 6
},
{
"token": "權",
"start_offset": 13,
"end_offset": 14,
"type": "CN_CHAR",
"position": 7
}
]
}
POST _analyze
{
"analyzer": "ik_max_word",
"text": ["公安部:各地校車將享最高路權"]
}
# Result
{
"tokens": [
{
"token": "公安部",
"start_offset": 0,
"end_offset": 3,
"type": "CN_WORD",
"position": 0
},
{
"token": "公安",
"start_offset": 0,
"end_offset": 2,
"type": "CN_WORD",
"position": 1
},
{
"token": "部",
"start_offset": 2,
"end_offset": 3,
"type": "CN_CHAR",
"position": 2
},
{
"token": "各地",
"start_offset": 4,
"end_offset": 6,
"type": "CN_WORD",
"position": 3
},
{
"token": "校車",
"start_offset": 6,
"end_offset": 8,
"type": "CN_WORD",
"position": 4
},
{
"token": "將",
"start_offset": 8,
"end_offset": 9,
"type": "CN_CHAR",
"position": 5
},
{
"token": "享",
"start_offset": 9,
"end_offset": 10,
"type": "CN_CHAR",
"position": 6
},
{
"token": "最高",
"start_offset": 10,
"end_offset": 12,
"type": "CN_WORD",
"position": 7
},
{
"token": "路",
"start_offset": 12,
"end_offset": 13,
"type": "CN_CHAR",
"position": 8
},
{
"token": "權",
"start_offset": 13,
"end_offset": 14,
"type": "CN_CHAR",
"position": 9
}
]
}
ik_max_word: splits the text at the finest granularity; for example, 「中華人民共和國國歌」 is split into 「中華人民共和國, 中華人民, 中華, 華人, 人民共和國, 人民, 人, 民, 共和國, 共和, 和, 國國, 國歌」, exhausting every possible combination.
ik_smart: performs the coarsest-grained split; for example, 「中華人民共和國國歌」 is split into 「中華人民共和國, 國歌」.
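With the IK plugin installed you can reproduce this comparison yourself; only the request is shown here, because the exact token list depends on the IK version and its dictionary.
# Request only; swap ik_max_word for ik_smart to see the coarse-grained split
POST _analyze
{
  "analyzer": "ik_max_word",
  "text": ["中華人民共和國國歌"]
}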
When the built-in analysis cannot meet your needs, you can define a custom analysis chain by combining your own Character Filters, Tokenizer, and Token Filters.
# Character Filter example: html_strip removes HTML tags before tokenization
POST _analyze
{
"tokenizer": "keyword",
"char_filter": ["html_strip"],
"text": ["<p>I'm so <b>happy</b>!</p>"]
}
# Result
{
"tokens": [
{
"token": """ I'm so happy! """,
"start_offset": 0,
"end_offset": 32,
"type": "word",
"position": 0
}
]
}
# Tokenizer example: path_hierarchy splits a file path into its hierarchy
POST _analyze
{
"tokenizer": "path_hierarchy",
"text": ["/path/to/file"]
}
# Result
{
"tokens": [
{
"token": "/path",
"start_offset": 0,
"end_offset": 5,
"type": "word",
"position": 0
},
{
"token": "/path/to",
"start_offset": 0,
"end_offset": 8,
"type": "word",
"position": 0
},
{
"token": "/path/to/file",
"start_offset": 0,
"end_offset": 13,
"type": "word",
"position": 0
}
]
}
# Token Filter example: stop, lowercase and ngram filters applied after the standard tokenizer
POST _analyze
{
"text": [
"a Hello World!"
],
"tokenizer": "standard",
"filter": [
"stop",
"lowercase",
{
"type": "ngram",
"min_gram": 4,
"max_gram": 4
}
]
}
# Result
{
"tokens": [
{
"token": "hell",
"start_offset": 2,
"end_offset": 7,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "ello",
"start_offset": 2,
"end_offset": 7,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "worl",
"start_offset": 8,
"end_offset": 13,
"type": "<ALPHANUM>",
"position": 2
},
{
"token": "orld",
"start_offset": 8,
"end_offset": 13,
"type": "<ALPHANUM>",
"position": 2
}
]
}
A custom analyzer is configured in the index settings, where you define the char_filter, tokenizer, filter, and analyzer sections.
Example of a custom analyzer:
PUT test_index_1
{
"settings": {
"analysis": {
"analyzer": {
"my_custom_analyzer": {
"type": "custom",
"tokenizer": "standard",
"char_filter": [
"html_strip"
],
"filter": [
"uppercase",
"asciifolding"
]
}
}
}
}
}
POST test_index_1/_analyze
{
"analyzer": "my_custom_analyzer",
"text": ["<p>I'm so <b>happy</b>!</p>"]
}
# Result
{
"tokens": [
{
"token": "I'M",
"start_offset": 3,
"end_offset": 11,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "SO",
"start_offset": 12,
"end_offset": 14,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "HAPPY",
"start_offset": 18,
"end_offset": 27,
"type": "<ALPHANUM>",
"position": 2
}
]
}
Analysis is used at the following two times:
- Index time: when a document is created or updated, its text fields are analyzed with the configured analyzer.
- Query time (search time): when a query is executed, the query string is analyzed before it is matched against the inverted index.
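A field can even use different analyzers at these two times. The mapping below is only a sketch: my_index, the title field, and the choice of IK analyzers are illustrative assumptions, not taken from the original text. It analyzes documents with ik_max_word at index time and queries with ik_smart at search time.
# Sketch only: index/field names and analyzer choices are illustrative
PUT my_index
{
  "mappings": {
    "doc": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart"
        }
      }
    }
  }
}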
For more content, visit my personal website: laijianfeng.org
You are welcome to follow my WeChat official account.