I recently took a data mining course that covered the chapter on frequent pattern mining, which introduces three algorithms: Apriori, FP-Growth, and Eclat. Because the three algorithms perform differently on different kinds of data, this post compares their efficiency under varying conditions, so as to work out which situations suit each algorithm.
The principles behind each algorithm have already been covered in great detail in earlier blog posts, so I won't repeat them here; the links below give thorough introductions to each:
Apriori explained in detail: http://www.cnblogs.com/90zeng/p/apriori.html
FP-Growth explained in detail: http://www.cnblogs.com/datahunter/p/3903413.html
Eclat explained in detail: http://www.cnblogs.com/catkins/p/5270484.html
The implementations given in those posts are not consistent with one another, and while implementing the FP-Growth algorithm from "Machine Learning in Action" I found a bug: when building the FP-tree, the step that sorts each transaction's items by their support in headTable is written incorrectly. On dense data there are many items with identical support, which the book's code does not account for, so items with equal support end up in a somewhat random order; the tree is then built incorrectly and the final set of frequent itemsets comes out too small. I corrected this by adding a secondary sort on the item itself whenever supports tie, after which the FP-tree is built correctly. First, the Apriori implementation:
# -*- coding: utf-8 -*-
'''
@author: Infaraway
@time: 2017/4/15 12:54
@Function:
'''


def init_c1(data_set_dict, min_support):
    freq_dic = {}
    for trans in data_set_dict:
        for item in trans:
            freq_dic[item] = freq_dic.get(item, 0) + data_set_dict[trans]
    # prune the initial candidates: drop items below the minimum support right away
    c1 = [[k] for (k, v) in freq_dic.iteritems() if v >= min_support]
    c1.sort()
    return map(frozenset, c1)


def scan_data(data_set, ck, min_support, freq_items):
    """
    Count the support of every candidate in ck over the data set (pruning step)
    :param data_set:
    :param ck:
    :param min_support: minimum support count
    :param freq_items: dict collecting the itemsets that meet the threshold
    :return:
    """
    ss_cnt = {}
    # one full pass over the data set per call
    for trans in data_set:
        for item in ck:
            # a candidate gains support from every transaction it is a subset of
            if item.issubset(trans):
                ss_cnt[item] = ss_cnt.get(item, 0) + 1
    ret_list = []
    for key in ss_cnt:
        support = ss_cnt[key]  # support count of this itemset
        if support >= min_support:
            ret_list.insert(0, key)  # keep the itemsets that reach minimum support
            freq_items[key] = support
    return ret_list


def apriori_gen(lk, k):
    """
    Generate new candidate itemsets from the frequent itemsets Lk (join step)
    :param lk: list of frequent itemsets
    :param k: number of items each new candidate should contain
    :return: list of candidate itemsets
    """
    ret_list = []
    for i in range(len(lk)):
        for j in range(i + 1, len(lk)):
            l1 = list(lk[i])[:k - 2]
            l2 = list(lk[j])[:k - 2]
            l1.sort()
            l2.sort()
            if l1 == l2:
                ret_list.append(lk[i] | lk[j])  # union of the two itemsets
    return ret_list


def apriori_zc(data_set, data_set_dict, min_support=5):
    """
    The Apriori procedure
    :param data_set: list of transactions
    :param min_support: minimum support count, default 5
    :return:
    """
    c1 = init_c1(data_set_dict, min_support)
    data = map(set, data_set)  # turn each transaction into a set so scan_data can test subsets
    freq_items = {}
    l1 = scan_data(data, c1, min_support, freq_items)  # build the initial frequent itemsets
    l = [l1]
    # every itemset in L1 has one element, so the next candidates have two: k = 2
    k = 2
    while len(l[k - 2]) > 0:
        ck = apriori_gen(l[k - 2], k)
        lk = scan_data(data, ck, min_support, freq_items)
        l.append(lk)
        k += 1  # the candidate size grows by one each round
    return freq_items
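As a quick smoke test, apriori_zc can be run on a toy data set (the transactions below are invented for illustration; note that min_support is an absolute count here, not a ratio):

if __name__ == '__main__':
    transactions = [['a', 'b', 'c'], ['a', 'b'], ['a', 'c'], ['b', 'c'], ['a', 'b', 'c']]
    # apriori_zc wants both forms: the raw transaction list and {frozenset: count}
    data_set_dict = {}
    for t in transactions:
        key = frozenset(t)
        data_set_dict[key] = data_set_dict.get(key, 0) + 1
    freq = apriori_zc(transactions, data_set_dict, min_support=3)
    for itemset, support in sorted(freq.iteritems(), key=lambda kv: kv[1], reverse=True):
        print list(itemset), support  # expect a, b, c with support 4 and ab, ac, bc with 3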
1) The FP_Growth file:
In the create_tree() function, the sorting line from "Machine Learning in Action" is replaced with:
##############################################################################################
# the fixed version of the sorting code from Machine Learning in Action:
ordered_items = [v[0] for v in sorted(local_data.items(), key=lambda kv: (-kv[1], kv[0]))]
##############################################################################################
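To see concretely what the tie-break buys, here is a minimal sketch (item names and counts are invented) contrasting the book's one-key sort with the corrected two-key sort:

# support counts for one transaction's items, as looked up in the header table
local_data = {'x': 5, 'b': 3, 'c': 3}

# the book sorts on support alone; 'b' and 'c' tie, so their relative order falls
# back to dict iteration order and need not be consistent across transactions,
# which can split what should be a single tree path into two branches
book_order = [v[0] for v in sorted(local_data.items(), key=lambda p: p[1], reverse=True)]

# the fix breaks ties on the item name, so equal-support items always line up
fixed_order = [v[0] for v in sorted(local_data.items(), key=lambda kv: (-kv[1], kv[0]))]

print book_order   # 'b' and 'c' in whatever order the dict yields them
print fixed_order  # always ['x', 'b', 'c']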
# -*- coding: utf-8 -*-
"""
@author: Infaraway
@time: 2017/4/15 16:07
@Function:
"""
from DataMining.Unit6_FrequentPattern.FP_Growth.TreeNode import treeNode


def create_tree(data_set, min_support=1):
    """
    Build the FP-tree
    :param data_set: data set, as a dict {frozenset(transaction): count}
    :param min_support: minimum support count
    :return:
    """
    freq_items = {}  # item frequencies
    for trans in data_set:  # first pass over the data set
        for item in trans:
            freq_items[item] = freq_items.get(item, 0) + data_set[trans]

    # build the header table, keeping only items that meet the minimum support
    header_table = {k: v for (k, v) in freq_items.iteritems() if v >= min_support}

    # no frequent items at all
    if len(header_table) == 0:
        return None, None
    for k in header_table:
        header_table[k] = [header_table[k], None]  # each entry also points into the tree
    ret_tree = treeNode('Null Set', 1, None)  # root node

    # second pass over the data set: grow the tree
    for trans, count in data_set.items():
        local_data = {}
        for item in trans:
            if header_table.get(item, 0):
                local_data[item] = header_table[item][0]
        if len(local_data) > 0:
            ##############################################################################################
            # the fixed sorting code (replaces the version in Machine Learning in Action):
            # descending support, with ties broken by item name so the order is deterministic
            ordered_items = [v[0] for v in sorted(local_data.items(), key=lambda kv: (-kv[1], kv[0]))]
            ##############################################################################################
            update_tree(ordered_items, ret_tree, header_table, count)  # populate tree with ordered freq itemset
    return ret_tree, header_table


def update_tree(items, in_tree, header_table, count):
    '''
    :param items: ordered frequent items of one transaction
    :param in_tree: node currently being extended
    :param header_table:
    :param count: weight of this transaction
    :return:
    '''
    if items[0] in in_tree.children:  # the first item already has a child node
        in_tree.children[items[0]].increase(count)  # just increment its count
    else:  # add items[0] as a new child of in_tree
        in_tree.children[items[0]] = treeNode(items[0], count, in_tree)
        if header_table[items[0]][1] is None:  # first node for this item: hook it into the header table
            header_table[items[0]][1] = in_tree.children[items[0]]
        else:
            update_header(header_table[items[0]][1], in_tree.children[items[0]])
    if len(items) > 1:  # recurse with the remaining ordered items
        update_tree(items[1::], in_tree.children[items[0]], header_table, count)


def update_header(node_test, target_node):
    '''
    :param node_test: head of the node-link chain
    :param target_node: node to append to the end of the chain
    :return:
    '''
    while node_test.node_link is not None:  # do not use recursion to traverse a linked list!
        node_test = node_test.node_link
    node_test.node_link = target_node


def ascend_tree(leaf_node, pre_fix_path):
    '''
    Climb through the parent pointers to collect the path to the root
    :param leaf_node:
    :param pre_fix_path:
    :return:
    '''
    if leaf_node.parent is not None:
        pre_fix_path.append(leaf_node.name)
        ascend_tree(leaf_node.parent, pre_fix_path)


def find_pre_fix_path(base_pat, tree_node):
    '''
    Build the prefix paths (conditional pattern base)
    :param base_pat: the frequent item
    :param tree_node: first node for this item in the FP-tree
    :return:
    '''
    cond_pats = {}  # conditional pattern base
    while tree_node is not None:
        pre_fix_path = []
        ascend_tree(tree_node, pre_fix_path)
        if len(pre_fix_path) > 1:
            cond_pats[frozenset(pre_fix_path[1:])] = tree_node.count
        tree_node = tree_node.node_link
    return cond_pats


def mine_tree(in_tree, header_table, min_support, pre_fix, freq_items):
    '''
    Mine the frequent itemsets
    :param in_tree:
    :param header_table:
    :param min_support:
    :param pre_fix:
    :param freq_items:
    :return:
    '''
    # order the header table entries by ascending support, for the bottom-up traversal
    bigL = [v[0] for v in sorted(header_table.items(), key=lambda p: p[1][0])]
    for base_pat in bigL:  # start from the bottom of the header table
        new_freq_set = pre_fix.copy()
        new_freq_set.add(base_pat)
        if len(new_freq_set) > 0:
            freq_items[frozenset(new_freq_set)] = header_table[base_pat][0]
        cond_patt_bases = find_pre_fix_path(base_pat, header_table[base_pat][1])
        my_cond_tree, my_head = create_tree(cond_patt_bases, min_support)
        if my_head is not None:  # mine the conditional FP-tree recursively
            mine_tree(my_cond_tree, my_head, min_support, new_freq_set, freq_items)


def fp_growth(data_set, min_support=1):
    my_fp_tree, my_header_tab = create_tree(data_set, min_support)
    freq_items = {}
    mine_tree(my_fp_tree, my_header_tab, min_support, set([]), freq_items)
    return freq_items
2) The treeNode class file:
# -*- coding: utf-8 -*-
'''
@author: Infaraway
@time: 2017/3/31 0:14
@Function:
'''


class treeNode:
    def __init__(self, name_value, num_occur, parent_node):
        self.name = name_value     # item name held by this node
        self.count = num_occur     # occurrence count
        self.node_link = None      # link to the next node holding the same item, None by default
        self.parent = parent_node  # pointer to the parent node
        self.children = {}         # children: item name -> pointer to the child node

    def increase(self, num_occur):
        """
        Increase this node's occurrence count
        :param num_occur: amount to add
        :return:
        """
        self.count += num_occur

    def disp(self, ind=1):
        print ' ' * ind, self.name, ' ', self.count
        for child in self.children.values():
            child.disp(ind + 1)
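With both files in place, fp_growth can be sanity-checked on the same kind of toy data as before (invented transactions; fp_growth takes the dict form {frozenset(transaction): count} and should report exactly the same frequent itemsets as apriori_zc):

if __name__ == '__main__':
    transactions = [['a', 'b', 'c'], ['a', 'b'], ['a', 'c'], ['b', 'c'], ['a', 'b', 'c']]
    data_set_dict = {}
    for t in transactions:
        key = frozenset(t)
        data_set_dict[key] = data_set_dict.get(key, 0) + 1
    for itemset, support in fp_growth(data_set_dict, min_support=3).iteritems():
        print list(itemset), support  # the same six itemsets as the Apriori run above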
Finally, the Eclat implementation:

# -*- coding: utf-8 -*-
"""
@author: Infaraway
@time: 2017/4/15 19:33
@Function:
"""


def eclat(prefix, items, min_support, freq_items):
    while items:
        # pop an item and check whether prefix + item is frequent
        key, item = items.pop()
        key_support = len(item)
        if key_support >= min_support:
            freq_items[frozenset(sorted(prefix + [key]))] = key_support
            suffix = []  # candidates extending the current prefix
            for other_key, other_item in items:
                new_item = item & other_item  # intersect the tid-sets
                if len(new_item) >= min_support:
                    suffix.append((other_key, new_item))
            eclat(prefix + [key], sorted(suffix, key=lambda it: len(it[1]), reverse=True),
                  min_support, freq_items)
    return freq_items


def eclat_zc(data_set, min_support=1):
    """
    The Eclat procedure
    :param data_set: list of transactions
    :param min_support: minimum support count
    :return:
    """
    # invert the data into the vertical layout: item -> set of transaction ids
    data = {}
    trans_num = 0
    for trans in data_set:
        trans_num += 1
        for item in trans:
            if item not in data:
                data[item] = set()
            data[item].add(trans_num)
    freq_items = {}
    freq_items = eclat([], sorted(data.items(), key=lambda it: len(it[1]), reverse=True),
                       min_support, freq_items)
    return freq_items
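And the same toy transactions through eclat_zc, which takes the plain transaction list (it builds the vertical item -> tid-set layout itself) and returns the identical itemset -> support mapping:

if __name__ == '__main__':
    transactions = [['a', 'b', 'c'], ['a', 'b'], ['a', 'c'], ['b', 'c'], ['a', 'b', 'c']]
    for itemset, support in eclat_zc(transactions, min_support=3).iteritems():
        print list(itemset), support  # again: a, b, c (support 4) and ab, ac, bc (support 3)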
With that, all three algorithms share the same calling convention and return format, and we can move on to the experiments. We judge the efficiency of the three algorithms by varying the minimum support threshold and the size of the data:
First, we wrap the three algorithms behind a uniform test interface:
# fp_growth, apriori_zc and eclat_zc come from the implementation files above
import time


def test_fp_growth(minSup, dataSetDict, dataSet):
    freqItems = fp_growth(dataSetDict, minSup)
    freqItems = sorted(freqItems.iteritems(), key=lambda item: item[1])
    return freqItems


def test_apriori(minSup, dataSetDict, dataSet):
    freqItems = apriori_zc(dataSet, dataSetDict, minSup)
    freqItems = sorted(freqItems.iteritems(), key=lambda item: item[1])
    return freqItems


def test_eclat(minSup, dataSetDict, dataSet):
    freqItems = eclat_zc(dataSet, minSup)
    freqItems = sorted(freqItems.iteritems(), key=lambda item: item[1])
    return freqItems
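The experiments below also call a loadDblpData helper that the original post never shows. Based purely on how it is invoked (a file handle, a delimiter, and a maximum transaction count) and on the (dataSetDict, dataSet) pair the algorithms expect, a plausible reconstruction might look like this; the name and behavior are my assumptions, not the author's code:

def loadDblpData(data_file, delimiter, data_num):
    # hypothetical reconstruction: read at most data_num lines, split each line on
    # the delimiter, and return both the {frozenset(transaction): count} dict used
    # by fp_growth/apriori_zc and the plain transaction list used by eclat_zc
    data_set_dict = {}
    data_set = []
    for i, line in enumerate(data_file):
        if i >= data_num:
            break
        trans = [item for item in line.strip().split(delimiter) if item]
        if not trans:
            continue
        data_set.append(trans)
        key = frozenset(trans)
        data_set_dict[key] = data_set_dict.get(key, 0) + 1
    return data_set_dict, data_set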
Next, the experiment that varies the minimum support threshold:
def do_experiment_min_support():
    data_name = 'unixData8_pro.txt'
    x_name = "Min_Support"
    data_num = 1500
    minSup = data_num / 6

    dataSetDict, dataSet = loadDblpData(open("dataSet/" + data_name), ',', data_num)
    step = minSup / 5  # the experiment's knob: how much the threshold drops per round
    all_time = []
    x_value = []
    for k in range(5):
        x_value.append(minSup)
        if minSup < 0:
            break
        time_fp = 0
        time_et = 0
        time_ap = 0
        freqItems_fp = {}
        freqItems_eclat = {}
        freqItems_ap = {}
        for i in range(10):  # average each algorithm over 10 runs
            ticks0 = time.time()
            freqItems_fp = test_fp_growth(minSup, dataSetDict, dataSet)
            time_fp += time.time() - ticks0
            ticks0 = time.time()
            freqItems_eclat = test_eclat(minSup, dataSetDict, dataSet)
            time_et += time.time() - ticks0
            ticks0 = time.time()
            freqItems_ap = test_apriori(minSup, dataSetDict, dataSet)
            time_ap += time.time() - ticks0
        print "minSup :", minSup, " data_num :", data_num, \
            " freqItems_fp:", len(freqItems_fp), " freqItems_eclat:", len(freqItems_eclat), \
            " freqItems_ap:", len(freqItems_ap)
        print "fp_growth:", time_fp / 10, " eclat:", time_et / 10, " apriori:", time_ap / 10
        minSup -= step
        use_time = [time_fp / 10, time_et / 10, time_ap / 10]
        all_time.append(use_time)

    y_value = []
    for i in range(len(all_time[0])):  # transpose: one series per algorithm
        tmp = []
        for j in range(len(all_time)):
            tmp.append(all_time[j][i])
        y_value.append(tmp)
    plot_pic(x_value, y_value, data_name, x_name)
    return x_value, y_value
Then the experiment that varies the data size:
def do_experiment_data_size():
    data_name = 'kosarakt.txt'
    x_name = "Data_Size"
    data_num = 200000

    step = data_num / 5  # the experiment's knob: how much the data size shrinks per round
    all_time = []
    x_value = []
    for k in range(5):
        minSup = data_num * 0.010
        dataSetDict, dataSet = loadDblpData(open("dataSet/" + data_name), ' ', data_num)
        x_value.append(data_num)
        if data_num < 0:
            break
        time_fp = 0
        time_et = 0
        time_ap = 0
        freqItems_fp = {}
        freqItems_eclat = {}
        freqItems_ap = {}
        runs = 2  # average over 2 runs here (the Apriori run is commented out below)
        for i in range(runs):
            ticks0 = time.time()
            freqItems_fp = test_fp_growth(minSup, dataSetDict, dataSet)
            time_fp += time.time() - ticks0
            ticks0 = time.time()
            freqItems_eclat = test_eclat(minSup, dataSetDict, dataSet)
            time_et += time.time() - ticks0
            # ticks0 = time.time()
            # freqItems_ap = test_apriori(minSup, dataSetDict, dataSet)
            # time_ap += time.time() - ticks0
        print "minSup :", minSup, " data_num :", data_num, \
            " freqItems_fp:", len(freqItems_fp), " freqItems_eclat:", len(freqItems_eclat), \
            " freqItems_ap:", len(freqItems_ap)
        print "fp_growth:", time_fp / runs, " eclat:", time_et / runs, " apriori:", time_ap / runs
        data_num -= step
        use_time = [time_fp / runs, time_et / runs, time_ap / runs]
        all_time.append(use_time)

    y_value = []
    for i in range(len(all_time[0])):  # transpose: one series per algorithm
        tmp = []
        for j in range(len(all_time)):
            tmp.append(all_time[j][i])
        y_value.append(tmp)
    plot_pic(x_value, y_value, data_name, x_name)
    return x_value, y_value
To make the results easier to inspect, we also plot the timings returned by the experiments:
# -*- coding: utf-8 -*-
"""
@author: Infaraway
@time: 2017/4/16 20:48
@Function:
"""

import matplotlib.pyplot as plt


def plot_pic(x_value, y_value, title, x_name):
    # one curve per algorithm
    plt.plot(x_value, y_value[0], 'r', label='FP-Growth')
    plt.plot(x_value, y_value[1], 'g', label='Eclat')
    # plt.plot(x_value, y_value[2], 'b', label='Apriori')
    plt.title(title)
    plt.xlabel(x_name)
    plt.ylabel('run time (s)')
    plt.legend(loc='upper right')
    plt.show()
Finally, the two experiments are driven from a single entry point:
if __name__ == '__main__':
    # pick the experiment to run:
    x_value, y_value = do_experiment_min_support()
    # x_value, y_value = do_experiment_data_size()
In the experiments we examine the efficiency of the three algorithms from the following angles: the size of the data set, the minimum support threshold, the length of individual transactions, and the density of the patterns.
1) Effect of data size

Data set: unixData8, size varied from 900 to 1500
[Figures: run time at Min_support = 1/30 and at Min_support = 1/20]
Data set: kosarakt, size varied from 6000 to 10000
[Figures: run time at Min_support = 1/50, 1/80, and 1/100]
Conclusion: in general, the larger the data set, the worse Apriori's efficiency, because the algorithm must scan the database many times and the cost of each scan grows with the amount of data.
2) Effect of the minimum support threshold

Data set: unixData8, support varied from 4% to 20%
[Figures: run time at Data_size = 500, 1000, and 1500]
Data set: kosarakt, support varied from 1% to 2%
[Figures: run time at Data_size = 3000, 5000, and 10000]
Conclusion: the lower the minimum support threshold, the more frequent itemsets appear and the slower all three algorithms run; Apriori suffers the most, since each additional candidate-generation round costs further database scans (see the overall summary below).
3) Long transactions

Data set: movieItem, DataSize = 943
Characteristics: individual transactions are long, with many containing as many as 500 items (though the frequent patterns themselves are not long)
[Figures: run time at Min_support = 1/4, 1/6, and 1/8]
Conclusion: on data sets with long transactions, FP-Growth degrades sharply, because long transactions make the FP-tree deep and generate a large number of conditional subproblems (see the overall summary below).
4) Dense patterns

Data set: movieItem
Characteristics: the transactions are highly similar to one another (yielding a very large number of fairly long frequent patterns)
[Figures: run time at Min_support = 0.8 and Min_support = 0.9]
Conclusion: on pattern-dense data sets, which produce very many and fairly long patterns, all three algorithms slow down. FP-Growth builds a deep FP-tree that spawns many subproblems; Eclat has to perform a huge number of set intersections and consumes a great deal of memory; and Apriori needs even more passes over the database, making it the slowest of the three.
To summarize the experiments: Apriori is the least efficient, because it scans the database many times. FP-Growth performs poorly on long-transaction data, since long transactions make the tree deep and the number of subproblems to solve explodes, so its efficiency drops off quickly. Eclat is the most efficient, but because our implementation is recursive, it places a huge burden on the system when the data set is large, so it is unsuitable for very large data. A technique called diffsets can remedy this shortcoming of Eclat, but it is beyond the scope of this post.
All code and data for these experiments can be downloaded here: http://pan.baidu.com/s/1jHAT7cq (password: 21pb)