CRF stands for conditional random field (條件隨機場). It is a model of the conditional probability distribution of one set of output random variables given another set of input random variables; its defining assumption is that the output random variables form a Markov random field.
The simplest commonly used form is the CRF defined on a linear chain, called a linear-chain conditional random field. Linear-chain CRFs can be applied to sequence labeling problems, and the named entity recognition (NER) task addressed in this article can be solved exactly as a sequence labeling problem. In the conditional probability model P(Y|X), Y is the output variable, representing the label (or state) sequence, and X is the input variable, representing the observation sequence to be labeled. During learning, the conditional probability model P(Y|X) is estimated from the training data by maximum likelihood or regularized maximum likelihood; during prediction, given an input sequence x, we find the output sequence y* that maximizes the conditional probability P(y|x).
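For reference, a linear-chain CRF defines this conditional probability explicitly. In the standard notation (the feature functions $f_k$ and weights $\lambda_k$ are generic placeholders, following textbook presentations such as Statistical Learning Methods):

```latex
P(y \mid x) = \frac{1}{Z(x)} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, x, t) \Big),
\qquad
Z(x) = \sum_{y'} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y'_{t-1}, y'_t, x, t) \Big)
```

Prediction then solves $y^{*} = \arg\max_{y} P(y \mid x)$, which can be computed efficiently with the Viterbi algorithm.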
Named Entity Recognition (NER) is a fundamental building block for information extraction, question answering, syntactic parsing, machine translation, and other applications, and plays an important role in moving natural language processing toward practical use. Generally speaking, the NER task is to identify named entities in text belonging to three broad categories (entities, times, and numbers) and seven subcategories (person names, organization names, place names, times, dates, currencies, and percentages). Several families of algorithms are commonly used to implement NER.
This article will not cover the theory and training algorithms of conditional random fields in detail; for those, see the book Statistical Learning Methods (《統計學習方法》). Instead, we will use an existing CRF implementation, the CRF++ toolkit, to build a named entity recognizer. For NER with deep learning, see the article: NLP入門(五)用深度學習實現命名實體識別(NER).
CRF++ is a well-known open-source conditional random field toolkit, written in C++, and among the best-performing CRF tools available. In my view, its most valuable feature is its support for feature templates, which automatically generate a family of feature functions so that we do not have to write them by hand; our job is reduced to choosing informative features, such as part-of-speech tags. For details on CRF++, see: http://taku910.github.io/crfpp/ .
CRF++ can be installed in both Windows and Linux environments. For installation on Linux, see the article: CRFPP/CRF++編譯安裝與部署 . On Windows, CRF++ requires no installation: simply download and unzip the CRF++0.58 package; a download link is given at: https://blog.csdn.net/lilong117194/article/details/81160265 .
Taking the NER corpus used in this article as an example, the CRF++ training data (first 20 lines; sentences are separated by blank lines) looks like this:
```
played	VBD	O
on	IN	O
Monday	NNP	O
(	(	O
home	NN	O
team	NN	O
in	IN	O
CAPS	NNP	O
)	)	O
:	:	O
American	NNP	B-MISC
League	NNP	I-MISC
Cleveland	NNP	B-ORG
2	CD	O
DETROIT	NNP	B-ORG
1	CD	O
BALTIMORE	VB	B-ORG
```
Note that the separator between tokens and labels must be the tab character \t; otherwise you will get the error feature_index.cpp(86) [max_size == size] inconsistent column size.
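Before training, it can save time to check the data format programmatically. The sketch below (`check_columns` is a hypothetical helper of my own, not part of CRF++) verifies that every non-empty line splits into the same number of columns, which is exactly the invariant whose violation triggers the "inconsistent column size" error:

```python
def check_columns(lines, sep='\t'):
    """Return the common column count, or raise ValueError if rows disagree.

    `lines` is an iterable of strings; blank lines (the sentence
    separators in CRF++ data files) are ignored.
    """
    ncols = None
    for i, line in enumerate(lines, start=1):
        if not line.strip():
            continue  # blank line = sentence boundary
        n = len(line.rstrip('\n').split(sep))
        if ncols is None:
            ncols = n
        elif n != ncols:
            raise ValueError("line %d has %d columns, expected %d" % (i, n, ncols))
    return ncols

# Two well-formed rows plus a sentence boundary:
print(check_columns(["played\tVBD\tO", "on\tIN\tO", ""]))  # → 3
```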
The template file is the key to using CRF++: it automatically generates a family of feature functions so that we do not have to write them ourselves, and feature functions are one of the core concepts of the CRF algorithm. A simple template file looks like this:
```
# Unigram
U00:%x[-2,0]
U01:%x[0,1]
U02:%x[0,0]
U03:%x[1,0]
U04:%x[2,0]
U05:%x[-2,0]/%x[-1,0]/%x[0,0]
U06:%x[-1,0]/%x[0,0]/%x[1,0]
U07:%x[0,0]/%x[1,0]/%x[2,0]
U08:%x[-1,0]/%x[0,0]
U09:%x[0,0]/%x[1,0]

# Bigram
B
```
It is worth understanding the template syntax here. In a macro of the form T**:%x[#,#], T is the template type, and the two "#" values are the row offset and the column offset relative to the current token. There are two template types: Unigram templates (prefix U) and Bigram templates (prefix B).
Suppose the line

```
home	NN	O
```

is the CURRENT TOKEN, within the context:

```
played	VBD	O
on	IN	O
Monday	NNP	O
(	(	O
home	NN	O	<< CURRENT TOKEN
team	NN	O
in	IN	O
CAPS	NNP	O
)	)	O
:	:	O
```
Then %x[#,#] expands according to the following rules:
| template | expanded feature |
|---|---|
| %x[0,0] | home |
| %x[0,1] | NN |
| %x[-1,0] | ( |
| %x[-2,1] | NNP |
| %x[0,0]/%x[0,1] | home/NN |
| ABC%x[0,1]123 | ABCNN123 |
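To make the %x[row,col] rule concrete, here is a small sketch (my own illustration, not CRF++ internals) that expands a template macro against the token window above:

```python
import re

# The tagged window from the example; each row is (token, POS).
rows = [("played", "VBD"), ("on", "IN"), ("Monday", "NNP"), ("(", "("),
        ("home", "NN"), ("team", "NN"), ("in", "IN"), ("CAPS", "NNP"),
        (")", ")"), (":", ":")]
cur = 4  # index of the current token, "home"

def expand(template, rows, cur):
    """Replace every %x[row,col] with the value at that relative offset."""
    def repl(m):
        r, c = int(m.group(1)), int(m.group(2))
        return rows[cur + r][c]
    return re.sub(r'%x\[(-?\d+),(-?\d+)\]', repl, template)

print(expand("%x[0,0]", rows, cur))          # → home
print(expand("%x[0,1]", rows, cur))          # → NN
print(expand("%x[-1,0]", rows, cur))         # → (
print(expand("%x[-2,1]", rows, cur))         # → NNP
print(expand("%x[0,0]/%x[0,1]", rows, cur))  # → home/NN
print(expand("ABC%x[0,1]123", rows, cur))    # → ABCNN123
```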
Taking "U01:%x[0,1]" as an example, the feature functions it generates on this corpus look like:
```
func1 = if (output = O and feature="U01:NN") return 1 else return 0
func2 = if (output = O and feature="U01:N") return 1 else return 0
func3 = if (output = O and feature="U01:NNP") return 1 else return 0
....
```
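As an illustration (a sketch of my own, not CRF++ internals), such an indicator feature function can be written as:

```python
def make_unigram_feature(target_output, target_feature):
    """Build an indicator function like the func1/func2/func3 examples above:
    it fires (returns 1) only for one specific (output, feature) pair."""
    def func(output, feature):
        return 1 if (output == target_output and feature == target_feature) else 0
    return func

func1 = make_unigram_feature('O', 'U01:NN')
print(func1('O', 'U01:NN'))      # → 1
print(func1('B-ORG', 'U01:NN'))  # → 0
```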
The general format of the CRF++ training command is:
```
crf_learn -f 3 -c 4.0 template train.data model -t
```
Here, template is the template file and train.data is the training corpus; -t means that a readable model.txt file is produced in addition to the model file. The other options are as follows:
- -f, --freq=INT: only use features occurring at least INT times (default 1)
- -m, --maxiter=INT: maximum number of LBFGS iterations (default 10k)
- -c, --cost=FLOAT: the cost parameter; too large a value overfits (default 1.0)
- -e, --eta=FLOAT: termination criterion (default 0.0001)
- -C, --convert: convert a text model to binary format
- -t, --textmodel: also write a text model file, for debugging
- -a, --algorithm=(CRF|MIRA): choice of training algorithm (default CRF-L2)
- -p, --thread=INT: number of threads (default 1); use multiple CPUs to reduce training time
- -H, --shrinking-size=INT: number of shrinking-size iterations (default 20)
- -v, --version: show version number and exit
- -h, --help: show help and exit
During training, CRF++ prints progress information whose fields mean the following:
- iter: iteration count; training stops when it reaches maxiter
- terr: tag error rate
- serr: sentence error rate
- obj: current objective value; training is complete when it converges to a fixed value
- diff: relative difference from the previous objective value; training is complete when it falls below eta
Once the model is trained, we can use it to predict on new data. The prediction command has the format:
```
crf_test -m model NER_predict.data > predict.txt
```
Here, -m model says to use the model we just trained, NER_predict.data is the data file to predict on, and > predict.txt redirects the predictions into predict.txt.
Next, we will use CRF++ to build an English named entity recognizer.
The corpus used for NER in this project (file name train.txt, 42,000 lines in total; only the first 15 lines are shown here, and the full corpus can be downloaded from the Github address at the end of this article) looks like this:
```
played on Monday ( home team in CAPS ) :
VBD IN NNP ( NN NN IN NNP ) :
O O O O O O O O O O

American League
NNP NNP
B-MISC I-MISC

Cleveland 2 DETROIT 1
NNP CD NNP CD
B-ORG O B-ORG O

BALTIMORE 12 Oakland 11 ( 10 innings )
VB CD NNP CD ( CD NN )
B-ORG O B-ORG O O O O O

TORONTO 5 Minnesota 3
TO CD NNP CD
B-ORG O B-ORG O

......
```
A few words on the structure of this corpus: it has 42,000 lines, grouped three lines per sentence. The first line of each group is the English sentence, the second the part-of-speech tag of each word, and the third the NER annotation. There are four entity classes: PER (person), LOC (location), ORG (organization), and MISC. In the tags, B- marks the first word of an entity, I- marks a continuation word inside an entity, and O marks words outside any entity.
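The three-line grouping and the BIO tagging scheme can be decoded as in the following sketch (my own illustration; the project's actual train/test split script is given below):

```python
def read_groups(lines):
    """Yield (words, postags, tags) triples from the 3-line-per-sentence corpus."""
    for i in range(0, len(lines) - 2, 3):
        yield (lines[i].split(), lines[i + 1].split(), lines[i + 2].split())

def bio_spans(words, tags):
    """Group BIO tags into (entity_text, entity_type) spans."""
    spans, cur_words, cur_type = [], [], None
    for w, t in zip(words, tags):
        if t.startswith('B-'):
            if cur_words:
                spans.append((' '.join(cur_words), cur_type))
            cur_words, cur_type = [w], t[2:]
        elif t.startswith('I-') and cur_words:
            cur_words.append(w)
        else:  # an O tag closes any open entity span
            if cur_words:
                spans.append((' '.join(cur_words), cur_type))
            cur_words, cur_type = [], None
    if cur_words:
        spans.append((' '.join(cur_words), cur_type))
    return spans

lines = ["American League", "NNP NNP", "B-MISC I-MISC"]
for words, postags, tags in read_groups(lines):
    print(bio_spans(words, tags))  # → [('American League', 'MISC')]
```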
First we split the corpus into a training set and a test set at a ratio of 9:1, with the following Python code:
```python
# -*- coding: utf-8 -*-

# Path containing the NER corpus train.txt
dir = "/Users/Shared/CRF_4_NER/CRF_TEST"

with open("%s/train.txt" % dir, "r") as f:
    sents = [line.strip() for line in f.readlines()]

# Train/test split ratio of 9:1
RATIO = 0.9
train_num = int((len(sents) // 3) * RATIO)

# Write the training set
with open("%s/NER_train.data" % dir, "w") as g:
    for i in range(train_num):
        words = sents[3 * i].split('\t')
        postags = sents[3 * i + 1].split('\t')
        tags = sents[3 * i + 2].split('\t')
        for word, postag, tag in zip(words, postags, tags):
            g.write(word + '\t' + postag + '\t' + tag + '\n')
        g.write('\n')

# Write the test set, starting right after the last training sentence
with open("%s/NER_test.data" % dir, "w") as h:
    for i in range(train_num, len(sents) // 3):
        words = sents[3 * i].split('\t')
        postags = sents[3 * i + 1].split('\t')
        tags = sents[3 * i + 2].split('\t')
        for word, postag, tag in zip(words, postags, tags):
            h.write(word + '\t' + postag + '\t' + tag + '\n')
        h.write('\n')

print('OK!')
```
Running this program produces NER_train.data, the training set, and NER_test.data, the test set. The first 20 lines of NER_train.data (tab-separated) look like this:
```
played	VBD	O
on	IN	O
Monday	NNP	O
(	(	O
home	NN	O
team	NN	O
in	IN	O
CAPS	NNP	O
)	)	O
:	:	O
American	NNP	B-MISC
League	NNP	I-MISC
Cleveland	NNP	B-ORG
2	CD	O
DETROIT	NNP	B-ORG
1	CD	O
BALTIMORE	VB	B-ORG
```
The template file we use has the following content:
```
# Unigram
U00:%x[-2,0]
U01:%x[-1,0]
U02:%x[0,0]
U03:%x[1,0]
U04:%x[2,0]
U05:%x[-1,0]/%x[0,0]
U06:%x[0,0]/%x[1,0]
U10:%x[-2,1]
U11:%x[-1,1]
U12:%x[0,1]
U13:%x[1,1]
U14:%x[2,1]
U15:%x[-2,1]/%x[-1,1]
U16:%x[-1,1]/%x[0,1]
U17:%x[0,1]/%x[1,1]
U18:%x[1,1]/%x[2,1]
U20:%x[-2,1]/%x[-1,1]/%x[0,1]
U21:%x[-1,1]/%x[0,1]/%x[1,1]
U22:%x[0,1]/%x[1,1]/%x[2,1]

# Bigram
B
```
Next we train on this data with the command:
```
crf_learn -c 3.0 template NER_train.data model -t
```
During training, CRF++ prints the progress fields described above. On my machine, training ran for 193 iterations, took 490.32 seconds, and finished with a tag error rate of 0.00004 and a sentence error rate of 0.00056.
Next, we evaluate the model's predictive performance on the test set. The prediction command is:
```
crf_test -m model NER_test.data > result.txt
```
We use a Python script to compute the prediction accuracy:
```python
# -*- coding: utf-8 -*-

dir = "/Users/Shared/CRF_4_NER/CRF_TEST"

with open("%s/result.txt" % dir, "r") as f:
    sents = [line.strip() for line in f.readlines() if line.strip()]

total = len(sents)
print(total)

# A token is counted as correct when the predicted tag (last column)
# matches the gold tag (second-to-last column).
count = 0
for sent in sents:
    words = sent.split()
    if words[-1] == words[-2]:
        count += 1

print("Accuracy: %.4f" % (count / total))  # 0.9706
```
The output is:

```
21487
Accuracy: 0.9706
```
So the token-level accuracy on the test set is as high as 0.9706, which is quite good.
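One caveat worth keeping in mind: token-level accuracy is flattered by the many O tags, since most tokens are not entities at all. A quick way to see the effect is to also compute accuracy restricted to tokens whose gold tag is not O. The sketch below uses made-up toy rows, not the real result.txt:

```python
def token_accuracy(rows, skip_o=False):
    """rows: (gold, predicted) tag pairs; optionally ignore gold-O tokens."""
    pairs = [(g, p) for g, p in rows if not (skip_o and g == 'O')]
    return sum(g == p for g, p in pairs) / len(pairs)

# Toy example: 4 easy O tokens predicted correctly, 1 entity tag missed.
rows = [('O', 'O'), ('O', 'O'), ('O', 'O'), ('O', 'O'), ('B-ORG', 'O')]
print(token_accuracy(rows))               # → 0.8
print(token_accuracy(rows, skip_o=True))  # → 0.0
```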
Finally, let's run named entity recognition on new data and see how the model performs. The Python code is:
```python
# -*- coding: utf-8 -*-

import os
import nltk

dir = "/Users/Shared/CRF_4_NER/CRF_TEST"

sentence = "Venezuelan opposition leader and self-proclaimed interim president Juan Guaidó said Thursday he will return to his country by Monday, and that a dialogue with President Nicolas Maduro won't be possible without discussing elections."
# sentence = "Real Madrid's season on the brink after 3-0 Barcelona defeat"
# sentence = "British artist David Hockney is known as a voracious smoker, but the habit got him into a scrape in Amsterdam on Wednesday."
# sentence = "India is waiting for the release of an pilot who has been in Pakistani custody since he was shot down over Kashmir on Wednesday, a goodwill gesture which could defuse the gravest crisis in the disputed border region in years."
# sentence = "Instead, President Donald Trump's second meeting with North Korean despot Kim Jong Un ended in a most uncharacteristic fashion for a showman commander in chief: fizzle."
# sentence = "And in a press conference at the Civic Leadership Academy in Queens, de Blasio said the program is already working."
# sentence = "The United States is a founding member of the United Nations, World Bank, International Monetary Fund."

# Tokenize and POS-tag the sentence
default_wt = nltk.word_tokenize
words = default_wt(sentence)
print(words)
postags = nltk.pos_tag(words)
print(postags)

# Write the tokens in CRF++ format, with a dummy 'O' tag column
with open("%s/NER_predict.data" % dir, 'w', encoding='utf-8') as f:
    for item in postags:
        f.write(item[0] + '\t' + item[1] + '\tO\n')
print("write successfully!")

os.chdir(dir)
os.system("crf_test -m model NER_predict.data > predict.txt")
print("get predict file!")

# Read the prediction file predict.txt
with open("%s/predict.txt" % dir, 'r', encoding='utf-8') as f:
    sents = [line.strip() for line in f.readlines() if line.strip()]

tokens = []
preds = []
for sent in sents:
    cols = sent.split()
    tokens.append(cols[0])
    preds.append(cols[-1])

# Drop tokens whose predicted NER tag is O
ner_reg_list = []
for token, tag in zip(tokens, preds):
    if tag != 'O':
        ner_reg_list.append((token, tag))

# Print the model's NER results, merging B-/I- tags into entity spans
print("NER results:")
ner_type_dict = {'PER': 'PERSON: ',
                 'LOC': 'LOCATION: ',
                 'ORG': 'ORGANIZATION: ',
                 'MISC': 'MISC: '}
for i, item in enumerate(ner_reg_list):
    if item[1].startswith('B'):
        end = i + 1
        while end <= len(ner_reg_list) - 1 and ner_reg_list[end][1].startswith('I'):
            end += 1
        ner_type = item[1].split('-')[1]
        print(ner_type_dict[ner_type],
              ' '.join(w for w, _ in ner_reg_list[i:end]))
```
The recognition result is:

```
MISC: Venezuelan
PERSON: Juan Guaidó
PERSON: Nicolas Maduro
```
One tag here looks debatable: one might expect Venezuelan to be LOC rather than MISC, although under the CoNLL-2003 annotation scheme nationality adjectives such as "Venezuelan" are in fact tagged MISC. Let's try the model on some more new data:
Input sentence 1:
Real Madrid's season on the brink after 3-0 Barcelona defeat
Recognition result 1:
ORGANIZATION: Real Madrid
LOCATION: Barcelona
Input sentence 2:
British artist David Hockney is known as a voracious smoker, but the habit got him into a scrape in Amsterdam on Wednesday.
Recognition result 2:
MISC: British
PERSON: David Hockney
LOCATION: Amsterdam
Input sentence 3:
India is waiting for the release of an pilot who has been in Pakistani custody since he was shot down over Kashmir on Wednesday, a goodwill gesture which could defuse the gravest crisis in the disputed border region in years.
Recognition result 3:
LOCATION: India
LOCATION: Pakistani
LOCATION: Kashmir
Input sentence 4:
Instead, President Donald Trump's second meeting with North Korean despot Kim Jong Un ended in a most uncharacteristic fashion for a showman commander in chief: fizzle.
Recognition result 4:
PERSON: Donald Trump
PERSON: Kim Jong Un
Input sentence 5:
And in a press conference at the Civic Leadership Academy in Queens, de Blasio said the program is already working.
Recognition result 5:
ORGANIZATION: Civic Leadership Academy
LOCATION: Queens
PERSON: de Blasio
Input sentence 6:
The United States is a founding member of the United Nations, World Bank, International Monetary Fund.
Recognition result 6:
LOCATION: United States
ORGANIZATION: United Nations
PERSON: World Bank
ORGANIZATION: International Monetary Fund
These examples hold some pleasant surprises: the model correctly identified the people Donald Trump and Kim Jong Un. There are also shortcomings, such as tagging World Bank as a person rather than an organization. Overall, though, the recognition quality is satisfying.
最近因爲工做繁忙,無暇顧及博客。但轉念一想,技術輸出也是比較重要的,須要長期堅持下去~
The Github address for this project is: https://github.com/percent4/CRF_4_NER . With the May Day holiday approaching, I wish everyone a pleasant break~