Search is a common requirement in the big data field. Splunk and the ELK stack are the leaders in the commercial and open-source arenas respectively. This article implements basic data search in a modest amount of Python code, to illustrate the underlying principles of big data search.
The Bloom Filter Algorithm
The first step is to implement a Bloom filter.
A Bloom filter is a common algorithm in the big data field. Its purpose is to filter out elements that cannot be in the target set: if a search term does not exist in our data at all, the filter can report its absence very quickly.
Let's look at the Bloom filter code:
class Bloomfilter(object):
    """
    A Bloom filter is a probabilistic data-structure that trades space for accuracy
    when determining if a value is in a set. It can tell you if a value was possibly
    added, or if it was definitely not added, but it can't tell you for certain that
    it was added.
    """
    def __init__(self, size):
        """Setup the BF with the appropriate size"""
        self.values = [False] * size
        self.size = size

    def hash_value(self, value):
        """Hash the value provided and scale it to fit the BF size"""
        return hash(value) % self.size

    def add_value(self, value):
        """Add a value to the BF"""
        h = self.hash_value(value)
        self.values[h] = True

    def might_contain(self, value):
        """Check if the value might be in the BF"""
        h = self.hash_value(value)
        return self.values[h]

    def print_contents(self):
        """Dump the contents of the BF for debugging purposes"""
        print(self.values)
- The basic data structure is an array (in effect a bitmap, recording presence with 1/0). It starts out empty, so every slot is initialized to False. In real use the array is made very large, to keep the false-positive rate low.
- A hash function decides which position, i.e. array index, a value is recorded at.
- When a value is added to the Bloom filter, its hash is computed and the corresponding slot is set to True.
- When checking whether a value exists, or rather has been indexed, we simply look at the True/False state of the slot its hash points to.
By now you can see that if the Bloom filter returns False, the value has definitely never been indexed, whereas if it returns True, the value may or may not have been indexed. Using a Bloom filter in the search path lets many searches with no hits return early, which improves efficiency.
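The version above uses a single hash function, so distinct values collide easily. Production Bloom filters typically use several independent hash functions and report "possibly present" only when every corresponding bit is set, which drives the false-positive rate down. Here is a minimal sketch of that idea; the salted-md5 scheme for deriving the k positions is an illustrative assumption, not how any particular system computes its hashes:

```python
import hashlib

class MultiHashBloomfilter(object):
    """Bloom filter with k hash functions; a value 'might be present'
    only if all k of its bits are set."""
    def __init__(self, size, num_hashes=3):
        self.values = [False] * size
        self.size = size
        self.num_hashes = num_hashes

    def _positions(self, value):
        # Derive k deterministic positions by salting md5 with an index.
        for i in range(self.num_hashes):
            digest = hashlib.md5(('%d:%s' % (i, value)).encode('utf-8')).hexdigest()
            yield int(digest, 16) % self.size

    def add_value(self, value):
        for pos in self._positions(value):
            self.values[pos] = True

    def might_contain(self, value):
        # Every one of the k bits must be set; a single clear bit
        # proves the value was never added.
        return all(self.values[pos] for pos in self._positions(value))

bf3 = MultiHashBloomfilter(64, num_hashes=3)
for term in ['dog', 'fish', 'cat']:
    bf3.add_value(term)
print(bf3.might_contain('dog'))   # True: it was added
```

A value that was added is always reported as possibly present; only the False answer is guaranteed, exactly as with the single-hash version.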
Let's see how this code runs:
bf = Bloomfilter(10)
bf.add_value('dog')
bf.add_value('fish')
bf.add_value('cat')
bf.print_contents()
bf.add_value('bird')
bf.print_contents()
# Note: contents are unchanged after adding bird - it collides
for term in ['dog', 'fish', 'cat', 'bird', 'duck', 'emu']:
    print('{}: {} {}'.format(term, bf.hash_value(term), bf.might_contain(term)))
The result (these values come from a Python 2 run; Python 3 randomizes string hashes per process unless PYTHONHASHSEED is fixed, so the exact slots will differ between runs):
[False, False, False, False, True, True, False, False, False, True]
[False, False, False, False, True, True, False, False, False, True]
dog: 5 True
fish: 4 True
cat: 9 True
bird: 9 True
duck: 5 True
emu: 8 False
First we create a Bloom filter with a capacity of 10.
Then we add the three objects 'dog', 'fish' and 'cat'; the filter's contents are as shown in the first line above.
Then we add the object 'bird'; the filter's contents do not change, because 'bird' happens to hash to the same slot as 'cat'.
Finally we check whether each of the objects ('dog', 'fish', 'cat', 'bird', 'duck', 'emu') has been indexed. 'duck' returns True even though it was never added, because its hash happens to be the same as that of 'dog', while 'emu' returns False.
Word Segmentation
The next step is to implement segmentation. The goal of segmentation is to split our text data into the smallest searchable units, i.e. terms. Here we focus on English: Chinese segmentation involves natural language processing and is fairly complex, while English can essentially be split on whitespace and punctuation.
Here is the segmentation code:
def major_segments(s):
    """
    Perform major segmenting on a string. Split the string by all of the major
    breaks, and return the set of everything found. The breaks in this implementation
    are single characters, but in Splunk proper they can be multiple characters.
    A set is used because ordering doesn't matter, and duplicates are bad.
    """
    major_breaks = ' '
    last = -1
    results = set()

    # enumerate() will give us (0, s[0]), (1, s[1]), ...
    for idx, ch in enumerate(s):
        if ch in major_breaks:
            segment = s[last+1:idx]
            results.add(segment)

            last = idx

    # The last character may not be a break so always capture
    # the last segment (which may end up being "", but yolo)
    segment = s[last+1:]
    results.add(segment)

    return results
Major segmentation
Major segmentation splits terms on spaces. A real segmenter supports other delimiters as well; Splunk's default major breakers, for example, include the following, and users can also define their own:
] < > ( ) { } | ! ; , ' " * \n \r \s \t & ? + %21 %26 %2526 %3B %7C %20 %2B %3D -- %2520 %5D %5B %3A %0A %2C %28 %29
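Generalizing major_segments to a whole set of single-character breakers is a small change. The sketch below is one way to do it; the breaker set used here is a small illustrative subset, not Splunk's actual configuration:

```python
def major_segments_multi(s, major_breaks=' ]<>(){}|!;,'):
    """Split s on any of the breaker characters and return the set of pieces."""
    last = -1
    results = set()
    for idx, ch in enumerate(s):
        if ch in major_breaks:
            # Capture the run of characters since the previous breaker.
            results.add(s[last+1:idx])
            last = idx
    # The tail after the final breaker (possibly the whole string).
    results.add(s[last+1:])
    return results

print(sorted(major_segments_multi('key=(a,b)')))   # ['', 'a', 'b', 'key=']
```

Note that adjacent breakers produce empty-string segments, which a real implementation would likely discard.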
def minor_segments(s):
    """
    Perform minor segmenting on a string. This is like major
    segmenting, except it also captures from the start of the
    input to each break.
    """
    minor_breaks = '_.'
    last = -1
    results = set()

    for idx, ch in enumerate(s):
        if ch in minor_breaks:
            segment = s[last+1:idx]
            results.add(segment)

            segment = s[:idx]
            results.add(segment)

            last = idx

    segment = s[last+1:]
    results.add(segment)
    results.add(s)

    return results
Minor segmentation
Minor segmentation follows the same logic as major segmentation, except that it also adds the prefix from the start of the input up to each break. For example, the minor segmentation of "1.2.3.4" yields 1, 2, 3, 4, 1.2 and 1.2.3, plus the full string 1.2.3.4 itself.
def segments(event):
    """Simple wrapper around major_segments / minor_segments"""
    results = set()
    for major in major_segments(event):
        for minor in minor_segments(major):
            results.add(minor)
    return results
The segmentation logic, then, is to run major segmentation on the text first, then run minor segmentation on each major segment, and return all the resulting terms.
Let's see how this code runs:
for term in segments('src_ip = 1.2.3.4'):
    print(term)

The output (the iteration order of a set will vary):

src
1.2
1.2.3.4
src_ip
3
1
1.2.3
ip
2
=
4
Search
Good. With segmentation and the Bloom filter as our two building blocks, we can now implement search.
Here is the code:
class Splunk(object):
    def __init__(self):
        self.bf = Bloomfilter(64)
        self.terms = {}  # Dictionary of term to set of events
        self.events = []

    def add_event(self, event):
        """Adds an event to this object"""

        # Generate a unique ID for the event, and save it
        event_id = len(self.events)
        self.events.append(event)

        # Add each term to the bloomfilter, and track the event by each term
        for term in segments(event):
            self.bf.add_value(term)

            if term not in self.terms:
                self.terms[term] = set()
            self.terms[term].add(event_id)

    def search(self, term):
        """Search for a single term, and yield all the events that contain it"""

        # In Splunk this runs in O(1), and is likely to be in filesystem cache (memory)
        if not self.bf.might_contain(term):
            return

        # In Splunk this probably runs in O(log N) where N is the number of terms in the tsidx
        if term not in self.terms:
            return

        for event_id in sorted(self.terms[term]):
            yield self.events[event_id]
Splunk represents an indexed collection with search capability.
- Each collection contains a Bloom filter, an inverted term table (a dictionary), and an array storing all the events.
- When an event is added to the index, the following happens:
  - A unique id is generated for the event; here it is simply the sequence number.
  - The event is segmented, and each term is added to the Bloom filter and to the inverted table, i.e. the mapping from each term to the ids of the events containing it. Note that one term may correspond to several events, so the values of the inverted table are sets. The inverted table is the core of the vast majority of search engines.
- When a term is searched, the following happens:
  - Check the Bloom filter; if it says the term is absent, return immediately.
  - Check the term table; if the searched term is not in it, return immediately.
  - Find all the matching event ids in the inverted table and yield the corresponding events.
Let's run it and see:
s = Splunk()
s.add_event('src_ip = 1.2.3.4')
s.add_event('src_ip = 5.6.7.8')
s.add_event('dst_ip = 1.2.3.4')

for event in s.search('1.2.3.4'):
    print(event)
print('-')
for event in s.search('src_ip'):
    print(event)
print('-')
for event in s.search('ip'):
    print(event)

The output:

src_ip = 1.2.3.4
dst_ip = 1.2.3.4
-
src_ip = 1.2.3.4
src_ip = 5.6.7.8
-
src_ip = 1.2.3.4
src_ip = 5.6.7.8
dst_ip = 1.2.3.4
Pretty neat, isn't it?
More Complex Searches
Going one step further, we want to combine terms with And and Or in our searches to express more complex logic.
Here is the code:
class SplunkM(object):
    def __init__(self):
        self.bf = Bloomfilter(64)
        self.terms = {}  # Dictionary of term to set of events
        self.events = []

    def add_event(self, event):
        """Adds an event to this object"""

        # Generate a unique ID for the event, and save it
        event_id = len(self.events)
        self.events.append(event)

        # Add each term to the bloomfilter, and track the event by each term
        for term in segments(event):
            self.bf.add_value(term)
            if term not in self.terms:
                self.terms[term] = set()

            self.terms[term].add(event_id)

    def search_all(self, terms):
        """Search for an AND of all terms"""

        # Start with the universe of all events...
        results = set(range(len(self.events)))

        for term in terms:
            # If a term isn't present at all then we can stop looking
            if not self.bf.might_contain(term):
                return
            if term not in self.terms:
                return

            # Drop events that don't match from our results
            results = results.intersection(self.terms[term])

        for event_id in sorted(results):
            yield self.events[event_id]

    def search_any(self, terms):
        """Search for an OR of all terms"""
        results = set()

        for term in terms:
            # If a term isn't present, we skip it, but don't stop
            if not self.bf.might_contain(term):
                continue
            if term not in self.terms:
                continue

            # Add these events to our results
            results = results.union(self.terms[term])

        for event_id in sorted(results):
            yield self.events[event_id]
Python sets' intersection and union operations make it very easy to support And (intersection of the matching event sets) and Or (union of them).
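The set operations at the heart of search_all and search_any can be seen in isolation. In this sketch the inverted-table entries are made up for illustration:

```python
# Hypothetical inverted-table entries: term -> set of event ids.
src_ip_events = {0, 1}   # events containing 'src_ip'
dst_ip_events = {2}      # events containing 'dst_ip'
quad_events = {0, 2}     # events containing '1.2.3.4'

# AND: only events that contain both terms survive the intersection.
print(sorted(src_ip_events & quad_events))    # [0]

# OR: events that contain either term accumulate in the union.
print(sorted(src_ip_events | dst_ip_events))  # [0, 1, 2]
```

This is also why the AND search starts from the universe of all event ids and intersects downward, while the OR search starts from the empty set and unions upward.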
The result of running it is as follows:
s = SplunkM()
s.add_event('src_ip = 1.2.3.4')
s.add_event('src_ip = 5.6.7.8')
s.add_event('dst_ip = 1.2.3.4')

for event in s.search_all(['src_ip', '5.6']):
    print(event)
print('-')
for event in s.search_any(['src_ip', 'dst_ip']):
    print(event)

The output:

src_ip = 5.6.7.8
-
src_ip = 1.2.3.4
src_ip = 5.6.7.8
dst_ip = 1.2.3.4
Summary
The code above is meant only to illustrate the basic principles of big data search: the Bloom filter, segmentation, and the inverted table. It is still a long way from a real, usable search engine. All of the material comes from Splunk .conf2017; if you are interested, you can watch the video and slides at the links below.
Video:
https://conf.splunk.com/files/2017/recordings/a-trip-through-the-splunk-data-ingestion-and-retrieval-pipeline.mp4
Slides:
https://conf.splunk.com/files/2017/slides/a-trip-through-the-splunk-data-ingestion-and-retrieval-pipeline.pdf
| Author: Naughty
| Source: OSChina (開源中國)
This article was originally shared via the WeChat official account DataScience (DataScienceTeam).