TF-IDF (Term Frequency / Inverse Document Frequency) is a very important measure of term importance in information retrieval; it quantifies how much information a keyword \(w\) provides for a query (which can be viewed as a document). Term Frequency (TF) is the frequency with which keyword \(w\) occurs in document \(D_i\):
\[ TF_{w,D_i}= \frac {count(w)} {\left| D_i \right|} \]
where \(count(w)\) is the number of occurrences of keyword \(w\), and \(\left| D_i \right|\) is the total number of words in document \(D_i\). Inverse Document Frequency (IDF) reflects how common a keyword is: the more common a word (i.e., the more documents contain it), the lower its IDF; conversely, the rarer the word, the higher its IDF. IDF is defined as:
\[ IDF_w=\log \frac {N}{\sum_{i=1}^N I(w,D_i)} \]
where \(N\) is the total number of documents and \(I(w,D_i)\) indicates whether document \(D_i\) contains keyword \(w\): 1 if it does, 0 otherwise. If word \(w\) appears in none of the documents, the denominator of the IDF formula becomes 0; the IDF therefore needs to be smoothed:
\[ IDF_w=\log \frac {N}{1+\sum_{i=1}^N I(w,D_i)} \]
The TF-IDF value of keyword \(w\) in document \(D_i\) is:
\[ \mathrm{TF\text{-}IDF}_{w,D_i}=TF_{w,D_i} \times IDF_w \]
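To make the formulas concrete, here is a minimal pure-Python sketch; the toy corpus and the helper names (`tf`, `idf_smooth`, `tf_idf`) are illustrative assumptions, not part of the original post:

```python
import math

# Toy corpus: each document is a list of tokens (made-up example data).
docs = [
    ["java", "python", "machine", "learning", "java"],
    ["sales", "marketing", "client", "negotiation"],
    ["python", "data", "analysis", "reporting"],
]

def tf(word, doc):
    """TF: occurrences of `word` in `doc` divided by the total number of words in `doc`."""
    return doc.count(word) / len(doc)

def idf_smooth(word, docs):
    """Smoothed IDF: log(N / (1 + number of docs containing `word`))."""
    contains = sum(1 for d in docs if word in d)
    return math.log(len(docs) / (1 + contains))

def tf_idf(word, doc, docs):
    return tf(word, doc) * idf_smooth(word, docs)

# "java" occurs twice in doc 0 (TF = 0.4) and appears in only 1 of 3 docs (IDF = log(3/2)).
print(tf_idf("java", docs[0], docs))  # ≈ 0.162
```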
From the definitions above, a keyword's TF-IDF value for a document is high when the keyword occurs frequently in that document (high TF) and appears in few documents overall (high IDF); a common word that occurs in most documents gets a low IDF and therefore a low TF-IDF.
《TF-IDF模型的機率解釋》 (A Probabilistic Interpretation of the TF-IDF Model) gives a mathematical explanation of TF-IDF from a probabilistic point of view; "The Vector Space Model of text" is a hands-on TF-IDF tutorial that covers the basic computation, normalization, and how to compute a TF-IDF matrix with scikit-learn (sklearn).
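As a quick companion to those references, below is a minimal sketch of computing a TF-IDF matrix with scikit-learn on a made-up corpus. Note that `TfidfVectorizer` uses its own smoothed IDF variant (log((1+N)/(1+df)) + 1) and L2-normalizes each row by default, so its numbers differ slightly from the formulas above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "java python machine learning",
    "sales marketing client",
    "python data analysis",
]

vectorizer = TfidfVectorizer()             # defaults: smooth_idf=True, norm='l2'
matrix = vectorizer.fit_transform(corpus)  # sparse matrix of shape (n_docs, n_terms)

vocab = vectorizer.vocabulary_             # term -> column index
print(matrix[2, vocab["python"]])          # TF-IDF weight of "python" in the third doc
```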
I recently ran into a requirement: mining industry keywords. For example, keywords of the IT industry include Java, Python, machine learning, and so on. TF-IDF is a natural fit for keyword extraction: the larger a word's TF-IDF value, the more likely it is a keyword. So the question becomes: how do we apply the TF-IDF model here?
To mine keywords we first need data, so we crawled job-posting data for 20 industries from a recruitment website and then segmented the text into words. We found that industry keywords are domain-specific: a keyword of one industry generally does not also belong to several other industries. We therefore treat each industry's segmentation result as one large doc, so the total number of docs is 20; we compute the TF-IDF matrix with sklearn and take each industry's top words.
When applying the model this way, because the number of docs is small, some common words such as 「認真負責」 (conscientious and responsible) and 「崗位」 (position) show up among the top words. To filter out such common words, two measures are taken (both appear in the code below):

- during segmentation, use jieba.analyse.extract_tags instead of a plain jieba.cut, so that each line is already reduced to keyword candidates and generic words are largely dropped;
- set the max_df parameter of sklearn's TfidfVectorizer (0.3 here), so that terms appearing in more than 30% of the docs are discarded as corpus-specific stop words; a small sketch of this behaviour follows.
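For the second measure, here is a small sketch of how max_df behaves (the mini corpus is made up): with a float value, any term whose document frequency exceeds that fraction of the corpus is dropped from the vocabulary as a corpus-specific stop word.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# "崗位" appears in every doc, so it behaves like a corpus-specific stop word here.
docs = [
    "崗位 java python 開發",
    "崗位 銷售 客戶 溝通",
    "崗位 數據 分析 算法",
]

# max_df=0.5: ignore terms that appear in more than 50% of the documents.
vectorizer = TfidfVectorizer(max_df=0.5)
vectorizer.fit(docs)
print("崗位" in vectorizer.vocabulary_)    # False: filtered out by max_df
print("python" in vectorizer.vocabulary_)  # True: appears in only 1 of 3 docs
```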
Segmentation uses jieba; if the segmentation quality is not satisfactory, Baidu Baike entry titles can be used as a custom user dictionary (see the load_userdict sketch after the full listing). The TF-IDF computation relies on sklearn, and the per-row top-n of the matrix is obtained with numpy. The full code is as follows:
```python
# -*- coding: utf-8 -*-
# @Time   : 2016/9/6
# @Author : rain
import codecs
import os

import jieba.analyse
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

base_path = "./resources/corpus/"
seg_path = "./resources/segmented/"


def segment():
    """Word segmentation: extract keyword candidates from every corpus file."""
    if not os.path.exists(seg_path):  # make sure the output directory exists
        os.makedirs(seg_path)
    for txt in os.listdir(base_path):
        whole_base = os.path.join(base_path, txt)
        whole_seg = os.path.join(seg_path, txt)
        with codecs.open(whole_base, 'r', 'utf-8') as fr, codecs.open(whole_seg, 'w', 'utf-8') as fw:
            for line in fr:
                # seg_list = jieba.cut(line.strip())
                seg_list = jieba.analyse.extract_tags(line.strip(), topK=20, withWeight=False, allowPOS=())
                fw.write(" ".join(seg_list) + "\n")  # newline keeps words of adjacent lines apart


def read_doc_list():
    """Read the segmented docs: one file per industry, one big doc per industry."""
    trade_list = []
    doc_list = []
    for txt in os.listdir(seg_path):
        trade_list.append(txt.split(".")[0])
        with codecs.open(os.path.join(seg_path, txt), "r", "utf-8") as fr:
            doc_list.append(fr.read().replace('\n', ' '))
    return trade_list, doc_list


def tfidf_top(trade_list, doc_list, max_df, topn):
    """Compute the TF-IDF matrix and return the top-n words of each industry."""
    vectorizer = TfidfVectorizer(max_df=max_df)
    matrix = vectorizer.fit_transform(doc_list)
    feature_dict = {v: k for k, v in vectorizer.vocabulary_.items()}  # index -> feature_name
    top_n_matrix = np.argsort(-matrix.todense())[:, :topn]  # top tf-idf words for each row
    df = pd.DataFrame(np.vectorize(feature_dict.get)(top_n_matrix), index=trade_list)  # convert matrix to df
    return df


segment()
tl, dl = read_doc_list()
tdf = tfidf_top(tl, dl, max_df=0.3, topn=500)
tdf.to_csv("./resources/keywords.txt", header=False, encoding='utf-8')
```
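As mentioned above, if jieba's default segmentation misses domain terms, a custom user dictionary can be loaded before segmenting. A minimal sketch; the dictionary path and its entries are hypothetical:

```python
import jieba
import jieba.analyse

# user_dict.txt holds one entry per line in the form "word [freq] [pos]", e.g.:
#   機器學習 10 n
#   深度學習 10 n
# (the file name and entries above are hypothetical examples)
jieba.load_userdict("./resources/user_dict.txt")

print(jieba.analyse.extract_tags("負責機器學習與深度學習算法的研發", topK=5))
```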