Classification task: how to tell whether a customer's emotion is positive or negative in customer-service conversations

Dialogue Emotion Recognition

Dialogue emotion recognition aims to identify the user's emotion in intelligent-dialogue scenarios. It helps businesses get a fuller picture of the product experience and monitor customer-service quality, and it applies to chat, customer service, and many other scenarios.

For example, in smart-speaker or in-car scenarios, recognizing the user's emotion makes it possible to respond with appropriate reassurance and improve the interaction experience. In intelligent customer-service scenarios, it can be used to analyze service quality and cut the cost of manual quality inspection, and it also helps businesses better gauge conversation quality and improve user satisfaction. You can try it out on Baidu's AI open platform.

As the figure above shows, for a user's utterance (usually text produced by speech recognition), the model estimates the probability of each emotion class and outputs the final class. In this case there are three emotion classes: negative (0), neutral (1), and positive (2), so this is a three-class short-text classification problem.

Let's run the example first to get a feel for the model's output!

In[1]
# First, unpack the dataset and the pretrained model
!cd /home/aistudio/data/data12605/ && unzip -qo data.zip
!cd /home/aistudio/work/ && tar -zxf emotion_detection_textcnn-1.0.0.tar.gz

# Inspect the data to be predicted
!cat /home/aistudio/data/data12605/data/infer.txt
靠 你 真是 說 廢話 
服務 態度 好 差 啊	
你 寫 過 黃山 奇石 嗎
一個一個 慢慢來
謝謝 服務 很 好
In[2]
# Update the config to use the pretrained model
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./textcnn' run.sh 
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json

# Run inference and inspect the results
!cd /home/aistudio/work/ && sh run.sh infer
Load model from ./textcnn
Final infer result:
0	0.992887	0.003744	0.003369
0	0.677892	0.229147	0.092961
1	0.001657	0.997380	0.000963
1	0.003413	0.990708	0.005880
2	0.014995	0.104129	0.880875
[infer] elapsed time: 0.014017 s
 

Hands-on Training

This section introduces the evaluation metrics commonly used for classification, how to prepare the data, and how to define a classification model, and then walks through training, evaluating, and running inference with the dialogue emotion recognition model.

Evaluation Metrics

The evaluation metrics commonly used for classification models are Accuracy, Precision, Recall, and F1.

  • Accuracy = number of correctly classified samples / total number of samples.
  • Precision = number of samples predicted positive that are actually positive / number of samples predicted positive.
  • Recall = number of samples predicted positive that are actually positive / number of samples labeled positive.
  • F1 = 2 * Precision * Recall / (Precision + Recall), the harmonic mean of Precision and Recall.
  • The higher these metrics, the better the model.

Take binary classification as an example: the class of interest is treated as the positive class and the other as the negative class. Given a test set, the model's predictions on it fall into four cases, which form the confusion matrix below:
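                      Predicted positive      Predicted negative
Actually positive     TP (true positive)      FN (false negative)
Actually negative     FP (false positive)     TN (true negative)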

Accuracy is therefore defined as $acc = \frac{TP+TN}{TP+FN+FP+TN}$

Precision is defined as $P = \frac{TP}{TP+FP}$

Recall is defined as $R = \frac{TP}{TP+FN}$

F1 is defined as $F1 = \frac{2PR}{P+R}$

For multi-class problems, macro-averaging and micro-averaging are used instead. Macro-averaging computes each metric per class and then takes the arithmetic mean across classes; micro-averaging first averages the confusion-matrix entries to obtain mean TP, FP, TN, and FN, and then computes the metrics from those averages.

In this case study we mainly use macro-averaging.
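To make macro-averaging concrete, here is a minimal NumPy sketch (an illustration only, not the project's evaluation code) that computes per-class precision, recall, and F1 and then takes their arithmetic mean:

import numpy as np

def macro_prf(y_true, y_pred, num_classes=3):
    """Macro-averaged precision, recall, and F1 for a multi-class task."""
    precisions, recalls, f1s = [], [], []
    for c in range(num_classes):
        tp = int(np.sum((y_pred == c) & (y_true == c)))
        fp = int(np.sum((y_pred == c) & (y_true != c)))
        fn = int(np.sum((y_pred != c) & (y_true == c)))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(f)
    # macro-averaging: arithmetic mean of the per-class metrics
    return np.mean(precisions), np.mean(recalls), np.mean(f1s)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(macro_prf(y_true, y_pred))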

Tip: when classes are imbalanced, Accuracy is a badly flawed metric. For example, if only 10 out of 10,000 emails are spam (a 0.1% spam rate), a model that labels every email as non-spam still reaches over 99% accuracy, yet it is useless. In such cases, Precision, Recall, and F1 should be used instead.

 

Data Preparation

To train a classification model, you generally need three datasets: a training set (train.txt), a validation set (dev.txt), and a test set (test.txt).

  • Training set: the data used to fit the model's parameters; the model adjusts its parameters directly on this set to improve classification.
  • Validation set (also called the development set): used during training to check the model's state and convergence. It is typically used for hyperparameter tuning: whichever hyperparameter setting performs best on the validation set is kept.
  • Test set: used to compute the model's evaluation metrics and verify its generalization ability.

Tip: the test data should generally not overlap with the training data, so that it genuinely measures how well the model works.

Here we provide a labeled, word-segmented chatbot dataset with the following directory structure:

.
├── train.txt   # training set
├── dev.txt     # validation set
├── test.txt    # test set
├── infer.txt   # data to predict
├── vocab.txt   # vocabulary

Each line has two columns separated by a tab ('\t'): the first column is the emotion class (0 = negative, 1 = neutral, 2 = positive) and the second column is space-segmented Chinese text, as in the example below. The files are UTF-8 encoded.

label   text_a
0   誰 罵人 了 ? 我 歷來 不 罵人 , 我 罵 的 都 不是 人 , 你 是 人 嗎 ?
1   我 有事 等會兒 就 回來 和 你 聊
2   我 見到 你 很 高興 謝謝 你 幫 我
 
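As an illustration of this format (a minimal sketch, not the project's reader.py), the snippet below parses such a file into (word-id list, label) pairs; the vocabulary layout and the <unk> token are assumptions here:

def load_vocab(vocab_path):
    """Load the vocabulary, assuming one token per line; the line index is used as the word id."""
    with open(vocab_path, encoding="utf8") as f:
        return {line.rstrip("\n").split("\t")[0]: idx for idx, line in enumerate(f)}

def read_examples(data_path, vocab):
    """Yield (word_ids, label) pairs from the tab-separated, space-tokenized file."""
    unk_id = vocab.get("<unk>", 0)  # assumption: an <unk> entry exists for out-of-vocabulary words
    with open(data_path, encoding="utf8") as f:
        for line in f:
            cols = line.rstrip("\n").split("\t")
            if len(cols) != 2 or cols[0] == "label":  # skip the header and malformed lines
                continue
            label, text = cols
            yield [vocab.get(tok, unk_id) for tok in text.split()], int(label)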

Choosing a Classification Model

Traditional machine-learning classifiers rely on many hand-crafted features such as word counts, text length, and part-of-speech tags. With the rise of deep learning, many classification models have proven effective and come into wide use, including BOW, CNN, RNN, and BiLSTM. Their common trait is that no manual feature engineering is needed; instead, representations are learned on top of word embeddings.

Here we take the CNN model as an example to show how to define the network with PaddlePaddle; the models are described in more detail in the Concepts section.

The network is configured as follows: the input dict_dim is the vocabulary size and class_dim is the number of classes, which is 3 here.

In[3]
import paddle
import paddle.fluid as fluid

# Define the CNN model
# class_dim is the number of classes; win_size is the convolution window size
def cnn_net(data, label, dict_dim, emb_dim=128, hid_dim=128, hid_dim2=96, class_dim=3, win_size=3, is_prediction=False):
    """ Conv net """
    # embedding layer
    emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])

    # convolution layer
    conv_3 = fluid.nets.sequence_conv_pool(
        input=emb,
        num_filters=hid_dim,
        filter_size=win_size,
        act="tanh",
        pool_type="max")

    # fully connected layer
    fc_1 = fluid.layers.fc(input=[conv_3], size=hid_dim2)
    # softmax layer
    prediction = fluid.layers.fc(input=[fc_1], size=class_dim, act="softmax")
    if is_prediction:
        return prediction
    cost = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_cost = fluid.layers.mean(x=cost)
    acc = fluid.layers.accuracy(input=prediction, label=label)

    return avg_cost, prediction
 

Once the network structure is defined, you still need to define the training and inference programs, the optimizer, and the data feeder. To keep things easy to follow, the whole training/evaluation/inference workflow is wrapped in the run.sh script.
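For orientation, the sketch below shows roughly how such a training program could be wired up with the Fluid API used above; it is a simplified illustration under assumptions (a dummy reader, Adagrad with a made-up learning rate, a fixed vocabulary size), not the actual contents of run.sh or run_classifier.py:

import paddle
import paddle.fluid as fluid

def dummy_train_reader():
    # stands in for reader.py: yields (word_ids, label) pairs
    samples = [([1, 2, 3], 0), ([4, 5], 2), ([6, 7, 8, 9], 1)]
    for word_ids, label in samples:
        yield word_ids, label

# input placeholders: a variable-length word-id sequence (LoD level 1) and its label
data = fluid.layers.data(name="words", shape=[1], dtype="int64", lod_level=1)
label = fluid.layers.data(name="label", shape=[1], dtype="int64")
avg_cost, prediction = cnn_net(data, label, dict_dim=240465)  # cnn_net as defined above

optimizer = fluid.optimizer.Adagrad(learning_rate=0.002)  # placeholder optimizer settings
optimizer.minimize(avg_cost)

place = fluid.CUDAPlace(0) if fluid.core.is_compiled_with_cuda() else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

feeder = fluid.DataFeeder(feed_list=[data, label], place=place)
batched_reader = paddle.batch(dummy_train_reader, batch_size=64)

for epoch in range(3):
    for batch in batched_reader():
        loss_val, = exe.run(fluid.default_main_program(),
                            feed=feeder.feed(batch),
                            fetch_list=[avg_cost])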

Model Training

With the example dataset, run the command below to train on the training set (train.txt) and validate on the validation set (dev.txt).

In[6]
# Update the config to select the cnn model
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"cnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i 's#"init_checkpoint":.*$#"init_checkpoint":"",#' config.json

# Change the directory where trained checkpoints are saved
!cd /home/aistudio/work/ && sed -i '6c CKPT_PATH=./save_models/cnn' run.sh

# Train the model
!cd /home/aistudio/work/ && sh run.sh train
Num train examples: 9655
Max train steps: 756
W1026 03:20:44.030146   271 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1026 03:20:44.033725   271 device_context.cc:267] device: 0, cuDNN Version: 7.3.
step: 200, avg loss: 0.350119, avg acc: 0.875000, speed: 172.452770 steps/s
[dev evaluation] avg loss: 0.319571, avg acc: 0.874074, elapsed time: 0.048704 s
step: 400, avg loss: 0.296501, avg acc: 0.890625, speed: 65.212972 steps/s
[dev evaluation] avg loss: 0.230635, avg acc: 0.914815, elapsed time: 0.073480 s
step: 600, avg loss: 0.319913, avg acc: 0.875000, speed: 63.960171 steps/s
[dev evaluation] avg loss: 0.176513, avg acc: 0.938889, elapsed time: 0.054020 s
step: 756, avg loss: 0.168574, avg acc: 0.947368, speed: 70.363845 steps/s
[dev evaluation] avg loss: 0.144825, avg acc: 0.948148, elapsed time: 0.056827 s
 

After training finishes, model directories named step_xxx are created under ./save_models/cnn.

Model Evaluation

Using the trained checkpoint step_756, run the command below to evaluate the model we just trained on the test set (test.txt).

In[7]
# Make sure the model type is CNN
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"cnn_net",#' config.json
# Use the cnn model we just trained
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./save_models/cnn/step_756' run.sh 

# Evaluate the model
!cd /home/aistudio/work/ && sh run.sh eval
Load model from ./save_models/cnn/step_756
Final test result:
[test evaluation] accuracy: 0.866924, macro precision: 0.790397, recall: 0.714859, f1: 0.743252, elapsed time: 0.048996 s
 

Model Inference

Using a trained model, you can run inference on an unlabeled dataset (infer.txt) to obtain the predicted class and the probability of each label.

In[8]
# Inspect the data to be predicted
!cat /home/aistudio/data/data12605/data/infer.txt

# Use the cnn model we just trained
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"cnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./save_models/cnn/step_756' run.sh 

# Run inference
!cd /home/aistudio/work/ && sh run.sh infer
靠 你 真是 說 廢話 
服務 態度 好 差 啊	
你 寫 過 黃山 奇石 嗎
一個一個 慢慢來
謝謝 服務 很 好 
Load model from ./save_models/cnn/step_756
Final infer result:
0	0.969470	0.000457	0.030072
0	0.434887	0.183004	0.382110
1	0.000057	0.999915	0.000028
1	0.000312	0.999080	0.000607
2	0.164429	0.002141	0.833429
[infer] elapsed time: 0.009522 s
 

Concepts

CNN: Convolutional Neural Network

Convolutional neural networks (CNNs) were first applied in computer vision. A CNN is usually built from several convolution-plus-pooling layers, followed by fully connected layers for classification. The convolution layers extract features from low level to high level, while the pooling layers perform downsampling, filtering out unimportant high-frequency information. (Downsampling is a common operation in image processing.)

What is convolution?

The green grid represents the input image, for example a black-and-white picture in which 0 is a black pixel and 1 is a white pixel. The yellow grid is the convolution kernel, also called a filter or feature detector. Convolution takes a weighted sum of a local patch of pixels as the response for that patch, extracting a particular feature of the image.

The convolution operation slides this yellow matrix over the image, moving right and down with a fixed stride, to produce a feature representation of the whole image.

For example, suppose the green input matrix in the figure represents a face and the yellow matrix represents an eye. Convolution then amounts to matching the eye template against the face: when the yellow matrix lines up with the eye region of the green matrix, the response is large, and the better the match, the larger the value.

What is pooling?

The convolution above actually computes a lot of overlapping, redundant information. Pooling filters the convolved features, keeping the key information and discarding noise; max pooling and mean pooling are the most common choices.
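As a toy illustration of the two operations (unrelated to the project code), the NumPy sketch below slides a 2x2 kernel over a small image with stride 1 and then applies 2x2 max pooling to the resulting feature map:

import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2D convolution (cross-correlation) with a single kernel."""
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # weighted sum of the local pixels
    return out

def max_pool2d(feature_map, size=2):
    """Non-overlapping max pooling: keep only the strongest response in each window."""
    out_h, out_w = feature_map.shape[0] // size, feature_map.shape[1] // size
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = feature_map[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

image = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.ones((2, 2))
feature_map = conv2d(image, kernel)   # 3x3 map of local responses
print(max_pool2d(feature_map))        # 1x1 result after 2x2 max pooling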

Text Convolutional Neural Network

What we use here is a text CNN. The input query is first represented as a sequence of word vectors; convolution is then applied to that sequence, producing a feature map. Max pooling over time is applied to the feature map, yielding, for each kernel, one feature for the whole sentence. Finally, the features from all kernels are concatenated into a fixed-length vector representation of the text. For text classification, connecting this vector to a softmax layer completes the model.
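To make max pooling over time concrete, here is a small NumPy sketch (an illustration only, not the Fluid implementation above): each filter looks at a 3-word window of the embedding sequence, and the maximum response over all window positions becomes that filter's sentence-level feature.

import numpy as np

seq_len, emb_dim, num_filters, win = 7, 8, 4, 3
emb = np.random.randn(seq_len, emb_dim)               # word-vector sequence of one sentence
filters = np.random.randn(num_filters, win * emb_dim)

# feature map: one response per filter at every window position
feature_map = np.stack([
    np.tanh(filters @ emb[t:t + win].reshape(-1))     # a window of `win` word vectors, flattened
    for t in range(seq_len - win + 1)
])                                                     # shape: (seq_len - win + 1, num_filters)

sentence_feature = feature_map.max(axis=0)             # max pooling over time -> (num_filters,)
print(sentence_feature.shape)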

In practice, multiple kernels are used to process the sentence. Kernels with the same window size are stacked into a matrix so the computation can be carried out more efficiently. Kernels with different window sizes can also be combined, as in the figure above, where different colors denote kernels of different sizes.

 

Advanced Usage

To improve the model, we generally either switch to a more expressive architecture or fine-tune from a pretrained model. Below we first experiment with a TextCNN model that uses kernels of several window sizes, and then show how to fine-tune from a pretrained model.

TextCNN Experiment

The bag-of-words (BOW) model ignores word order, grammar, and syntax, which is a clear limitation. For ordinary short-text classification, the text CNN described above is therefore widely used: it takes word order into account, maps the text into a low-dimensional semantic space, and learns the representation and classifier end to end, giving a significant improvement over traditional methods.

In[9]
# Define the TextCNN model
# class_dim is the number of classes; win_sizes is the list of convolution window sizes
def textcnn_net(data, label, dict_dim, emb_dim=128, hid_dim=128, hid_dim2=96, class_dim=3, win_sizes=None, is_prediction=False):
    """ Textcnn_net """
    if win_sizes is None:
        win_sizes = [1, 2, 3]

    # embedding layer
    emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])

    # convolution layer
    convs = []
    for win_size in win_sizes:
        conv_h = fluid.nets.sequence_conv_pool(
            input=emb,
            num_filters=hid_dim,
            filter_size=win_size,
            act="tanh",
            pool_type="max")
        convs.append(conv_h)
    convs_out = fluid.layers.concat(input=convs, axis=1)

    # fully connected layer
    fc_1 = fluid.layers.fc(input=[convs_out], size=hid_dim2, act="tanh")
    # softmax layer
    prediction = fluid.layers.fc(input=[fc_1], size=class_dim, act="softmax")
    if is_prediction:
        return prediction

    cost = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_cost = fluid.layers.mean(x=cost)
    acc = fluid.layers.accuracy(input=prediction, label=label)
    return avg_cost, prediction
 

Here we update the configuration (model type, initial checkpoint, and checkpoint directory), then train and evaluate the model.

In[10]
# Switch the model to TextCNN
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i 's#"init_checkpoint":.*$#"init_checkpoint":"",#' config.json
# Change the checkpoint directory
!cd /home/aistudio/work/ && sed -i '6c CKPT_PATH=./save_models/textcnn' run.sh

# Train the model
!cd /home/aistudio/work/ && sh run.sh train
Num train examples: 9655
Max train steps: 756
W1026 03:21:31.529520   339 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1026 03:21:31.533326   339 device_context.cc:267] device: 0, cuDNN Version: 7.3.
step: 200, avg loss: 0.212591, avg acc: 0.921875, speed: 104.244460 steps/s
[dev evaluation] avg loss: 0.284517, avg acc: 0.897222, elapsed time: 0.069697 s
step: 400, avg loss: 0.367220, avg acc: 0.812500, speed: 53.107965 steps/s
[dev evaluation] avg loss: 0.195091, avg acc: 0.932407, elapsed time: 0.080681 s
step: 600, avg loss: 0.242331, avg acc: 0.921875, speed: 52.311775 steps/s
[dev evaluation] avg loss: 0.139668, avg acc: 0.955556, elapsed time: 0.082921 s
step: 756, avg loss: 0.052051, avg acc: 1.000000, speed: 58.723846 steps/s
[dev evaluation] avg loss: 0.111066, avg acc: 0.962963, elapsed time: 0.082778 s
In[11]
# Use the textcnn model trained above
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./save_models/textcnn/step_756' run.sh

# Evaluate the model
!cd /home/aistudio/work/ && sh run.sh eval
Load model from ./save_models/textcnn/step_756
Final test result:
[test evaluation] accuracy: 0.878496, macro precision: 0.797653, recall: 0.754163, f1: 0.772353, elapsed time: 0.082577 s
 

Comparison of Evaluation Results

Model / Metric    Accuracy   Precision   Recall   F1
CNN               0.8717     0.8110      0.7178   0.7484
TextCNN           0.8784     0.7970      0.7786   0.7873
 

Fine-tuning from a Pretrained TextCNN

By changing the init_checkpoint setting in config.json (see the sed command below), you can load a pretrained model and fine-tune it.

In[13]
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json
# Start from the pretrained textcnn model
!cd /home/aistudio/work/ && sed -i 's#"init_checkpoint":.*$#"init_checkpoint":"./textcnn",#' config.json
# Lower the learning rate and change the checkpoint directory
!cd /home/aistudio/work/ && sed -i 's#"lr":.*$#"lr":0.0001,#' config.json
!cd /home/aistudio/work/ && sed -i '6c CKPT_PATH=./save_models/textcnn_finetune' run.sh

# Train the model
!cd /home/aistudio/work/ && sh run.sh train
Num train examples: 9655
Max train steps: 756
W1026 03:23:05.350819   418 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1026 03:23:05.354846   418 device_context.cc:267] device: 0, cuDNN Version: 7.3.
Load model from ./textcnn
step: 200, avg loss: 0.184450, avg acc: 0.953125, speed: 103.065345 steps/s
[dev evaluation] avg loss: 0.170050, avg acc: 0.937037, elapsed time: 0.074731 s
step: 400, avg loss: 0.166738, avg acc: 0.921875, speed: 47.727028 steps/s
[dev evaluation] avg loss: 0.132444, avg acc: 0.954630, elapsed time: 0.081669 s
step: 600, avg loss: 0.076735, avg acc: 0.984375, speed: 53.387034 steps/s
[dev evaluation] avg loss: 0.103549, avg acc: 0.963889, elapsed time: 0.081754 s
step: 756, avg loss: 0.061593, avg acc: 0.947368, speed: 57.990719 steps/s
[dev evaluation] avg loss: 0.086959, avg acc: 0.971296, elapsed time: 0.080616 s
In[14]
# Update the config to use the fine-tuned model from above
!cd /home/aistudio/work/ && sed -i 's#"model_type":.*$#"model_type":"textcnn_net",#' config.json
!cd /home/aistudio/work/ && sed -i '7c MODEL_PATH=./save_models/textcnn_finetune/step_756' run.sh
# Evaluate the model
!cd /home/aistudio/work/ && sh run.sh eval
Load model from ./save_models/textcnn_finetune/step_756
Final test result:
[test evaluation] accuracy: 0.893925, macro precision: 0.829668, recall: 0.812613, f1: 0.820883, elapsed time: 0.083944 s
 

Comparison of Evaluation Results

Model / Metric      Accuracy   Precision   Recall   F1
CNN                 0.8717     0.8110      0.7178   0.7484
TextCNN             0.8784     0.7970      0.7786   0.7873
TextCNN-finetune    0.8977     0.8315      0.8240   0.8277

As the table shows, fine-tuning from a pretrained model yields better classification results.

 

Fine-tuning with ERNIE

Here we first download the pretrained ERNIE model and then run the run_ernie.sh script, which loads ERNIE and fine-tunes it.

In[3]
!cd /home/aistudio/work/ && mkdir -p pretrain_models/ernie
%cd /home/aistudio/work/pretrain_models/ernie
# Download the pretrained ERNIE model
!wget --no-check-certificate https://baidu-nlp.bj.bcebos.com/ERNIE_stable-1.0.1.tar.gz -O ERNIE_stable-1.0.1.tar.gz
!tar -zxvf ERNIE_stable-1.0.1.tar.gz && rm ERNIE_stable-1.0.1.tar.gz
/home/aistudio/work/pretrain_models/ernie
--2020-02-27 20:17:41--  https://baidu-nlp.bj.bcebos.com/ERNIE_stable-1.0.1.tar.gz
Resolving baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)... 182.61.200.195, 182.61.200.229
Connecting to baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)|182.61.200.195|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 374178867 (357M) [application/x-gzip]
Saving to: ‘ERNIE_stable-1.0.1.tar.gz’

ERNIE_stable-1.0.1. 100%[===================>] 356.84M  61.3MB/s    in 8.9s    

2020-02-27 20:17:50 (40.1 MB/s) - ‘ERNIE_stable-1.0.1.tar.gz’ saved [374178867/374178867]

params/
params/encoder_layer_5_multi_head_att_key_fc.w_0
params/encoder_layer_0_post_ffn_layer_norm_scale
params/encoder_layer_0_post_att_layer_norm_bias
params/encoder_layer_0_multi_head_att_value_fc.w_0
params/sent_embedding
params/encoder_layer_11_multi_head_att_query_fc.w_0
params/encoder_layer_8_ffn_fc_0.w_0
params/encoder_layer_5_ffn_fc_1.w_0
params/encoder_layer_6_ffn_fc_1.b_0
params/encoder_layer_5_post_ffn_layer_norm_bias
params/encoder_layer_10_multi_head_att_output_fc.b_0
params/encoder_layer_4_ffn_fc_0.w_0
params/encoder_layer_4_post_ffn_layer_norm_bias
params/encoder_layer_3_ffn_fc_1.b_0
params/encoder_layer_0_multi_head_att_value_fc.b_0
params/encoder_layer_11_post_att_layer_norm_bias
params/encoder_layer_3_multi_head_att_key_fc.w_0
params/encoder_layer_10_multi_head_att_output_fc.w_0
params/encoder_layer_5_ffn_fc_1.b_0
params/encoder_layer_10_multi_head_att_value_fc.w_0
params/encoder_layer_6_multi_head_att_query_fc.w_0
params/encoder_layer_8_post_att_layer_norm_bias
params/encoder_layer_2_multi_head_att_output_fc.w_0
params/encoder_layer_1_multi_head_att_key_fc.w_0
params/encoder_layer_4_multi_head_att_key_fc.w_0
params/encoder_layer_6_post_ffn_layer_norm_bias
params/encoder_layer_9_post_ffn_layer_norm_bias
params/encoder_layer_11_post_ffn_layer_norm_scale
params/encoder_layer_6_multi_head_att_value_fc.b_0
params/encoder_layer_9_ffn_fc_0.w_0
params/encoder_layer_2_post_ffn_layer_norm_scale
params/encoder_layer_1_multi_head_att_query_fc.w_0
params/encoder_layer_1_post_ffn_layer_norm_bias
params/next_sent_3cls_fc.w_0
params/encoder_layer_9_multi_head_att_key_fc.w_0
params/encoder_layer_7_multi_head_att_value_fc.w_0
params/encoder_layer_10_ffn_fc_0.b_0
params/encoder_layer_2_multi_head_att_value_fc.w_0
params/encoder_layer_8_post_ffn_layer_norm_scale
params/encoder_layer_3_multi_head_att_output_fc.w_0
params/encoder_layer_2_multi_head_att_query_fc.w_0
params/encoder_layer_11_multi_head_att_query_fc.b_0
params/encoder_layer_1_ffn_fc_0.w_0
params/encoder_layer_8_multi_head_att_value_fc.w_0
params/word_embedding
params/mask_lm_trans_layer_norm_bias
params/encoder_layer_8_multi_head_att_query_fc.w_0
params/encoder_layer_1_multi_head_att_query_fc.b_0
params/encoder_layer_5_ffn_fc_0.b_0
params/encoder_layer_3_multi_head_att_key_fc.b_0
params/encoder_layer_7_ffn_fc_1.b_0
params/encoder_layer_2_post_att_layer_norm_bias
params/encoder_layer_8_post_att_layer_norm_scale
params/encoder_layer_2_ffn_fc_1.b_0
params/encoder_layer_11_post_ffn_layer_norm_bias
params/encoder_layer_6_multi_head_att_key_fc.b_0
params/mask_lm_trans_layer_norm_scale
params/encoder_layer_11_multi_head_att_key_fc.b_0
params/encoder_layer_5_post_ffn_layer_norm_scale
params/encoder_layer_0_ffn_fc_0.b_0
params/encoder_layer_9_multi_head_att_key_fc.b_0
params/encoder_layer_9_post_att_layer_norm_scale
params/encoder_layer_7_post_ffn_layer_norm_scale
params/encoder_layer_4_ffn_fc_0.b_0
params/encoder_layer_9_multi_head_att_value_fc.w_0
params/pos_embedding
params/mask_lm_trans_fc.w_0
params/encoder_layer_4_multi_head_att_value_fc.b_0
params/encoder_layer_4_multi_head_att_query_fc.w_0
params/encoder_layer_5_multi_head_att_value_fc.w_0
params/encoder_layer_3_ffn_fc_1.w_0
params/encoder_layer_9_post_att_layer_norm_bias
params/accuracy_0.tmp_0
params/encoder_layer_3_post_att_layer_norm_bias
params/encoder_layer_7_multi_head_att_output_fc.b_0
params/encoder_layer_7_ffn_fc_1.w_0
params/encoder_layer_11_multi_head_att_output_fc.b_0
params/encoder_layer_0_multi_head_att_key_fc.w_0
params/encoder_layer_6_ffn_fc_0.w_0
params/encoder_layer_5_multi_head_att_query_fc.w_0
params/encoder_layer_10_post_att_layer_norm_scale
params/encoder_layer_2_ffn_fc_1.w_0
params/encoder_layer_6_multi_head_att_key_fc.w_0
params/encoder_layer_9_ffn_fc_1.w_0
params/encoder_layer_10_ffn_fc_0.w_0
params/pre_encoder_layer_norm_bias
params/encoder_layer_1_ffn_fc_0.b_0
params/encoder_layer_1_post_att_layer_norm_scale
params/encoder_layer_9_post_ffn_layer_norm_scale
params/encoder_layer_9_multi_head_att_query_fc.w_0
params/encoder_layer_2_multi_head_att_query_fc.b_0
params/tmp_51
params/encoder_layer_11_ffn_fc_1.w_0
params/encoder_layer_7_multi_head_att_query_fc.b_0
params/encoder_layer_11_multi_head_att_key_fc.w_0
params/encoder_layer_8_multi_head_att_key_fc.w_0
params/encoder_layer_5_multi_head_att_value_fc.b_0
params/encoder_layer_6_post_att_layer_norm_scale
params/encoder_layer_5_ffn_fc_0.w_0
params/encoder_layer_4_multi_head_att_query_fc.b_0
params/encoder_layer_10_post_att_layer_norm_bias
params/encoder_layer_3_post_att_layer_norm_scale
params/encoder_layer_6_ffn_fc_1.w_0
params/mask_lm_out_fc.b_0
params/encoder_layer_3_ffn_fc_0.w_0
params/encoder_layer_6_ffn_fc_0.b_0
params/encoder_layer_1_post_att_layer_norm_bias
params/encoder_layer_6_multi_head_att_query_fc.b_0
params/encoder_layer_3_ffn_fc_0.b_0
params/encoder_layer_2_post_att_layer_norm_scale
params/encoder_layer_7_ffn_fc_0.w_0
params/encoder_layer_8_ffn_fc_1.w_0
params/encoder_layer_11_multi_head_att_output_fc.w_0
params/encoder_layer_9_multi_head_att_value_fc.b_0
params/encoder_layer_3_multi_head_att_output_fc.b_0
params/encoder_layer_9_multi_head_att_output_fc.w_0
params/encoder_layer_4_multi_head_att_value_fc.w_0
params/encoder_layer_4_ffn_fc_1.w_0
params/encoder_layer_5_post_att_layer_norm_scale
params/encoder_layer_3_post_ffn_layer_norm_bias
params/encoder_layer_2_multi_head_att_value_fc.b_0
params/encoder_layer_5_multi_head_att_key_fc.b_0
params/encoder_layer_0_ffn_fc_1.w_0
params/encoder_layer_0_post_ffn_layer_norm_bias
params/encoder_layer_11_ffn_fc_0.b_0
params/pooled_fc.b_0
params/encoder_layer_2_multi_head_att_output_fc.b_0
params/encoder_layer_8_multi_head_att_value_fc.b_0
params/encoder_layer_5_multi_head_att_output_fc.w_0
params/encoder_layer_1_ffn_fc_1.w_0
params/encoder_layer_2_ffn_fc_0.b_0
params/encoder_layer_5_multi_head_att_output_fc.b_0
params/encoder_layer_3_multi_head_att_query_fc.w_0
params/encoder_layer_0_ffn_fc_1.b_0
params/encoder_layer_7_multi_head_att_key_fc.w_0
params/encoder_layer_1_multi_head_att_output_fc.w_0
params/encoder_layer_1_multi_head_att_output_fc.b_0
params/encoder_layer_6_post_ffn_layer_norm_scale
params/encoder_layer_2_multi_head_att_key_fc.b_0
params/encoder_layer_7_ffn_fc_0.b_0
params/encoder_layer_11_ffn_fc_0.w_0
params/encoder_layer_1_ffn_fc_1.b_0
params/encoder_layer_10_multi_head_att_key_fc.w_0
params/reduce_mean_0.tmp_0
params/encoder_layer_7_post_ffn_layer_norm_bias
params/encoder_layer_10_multi_head_att_value_fc.b_0
params/@LR_DECAY_COUNTER@
params/encoder_layer_8_multi_head_att_key_fc.b_0
params/encoder_layer_4_post_ffn_layer_norm_scale
params/encoder_layer_10_post_ffn_layer_norm_bias
params/encoder_layer_9_ffn_fc_1.b_0
params/encoder_layer_3_multi_head_att_value_fc.b_0
params/encoder_layer_6_multi_head_att_value_fc.w_0
params/encoder_layer_8_multi_head_att_query_fc.b_0
params/encoder_layer_8_ffn_fc_1.b_0
params/encoder_layer_4_post_att_layer_norm_bias
params/encoder_layer_0_post_att_layer_norm_scale
params/encoder_layer_0_multi_head_att_query_fc.w_0
params/encoder_layer_0_multi_head_att_output_fc.b_0
params/encoder_layer_4_multi_head_att_output_fc.b_0
params/encoder_layer_8_ffn_fc_0.b_0
params/pre_encoder_layer_norm_scale
params/encoder_layer_11_ffn_fc_1.b_0
params/encoder_layer_8_multi_head_att_output_fc.b_0
params/encoder_layer_10_multi_head_att_query_fc.b_0
params/encoder_layer_1_multi_head_att_key_fc.b_0
params/encoder_layer_6_multi_head_att_output_fc.b_0
params/mask_lm_trans_fc.b_0
params/encoder_layer_9_multi_head_att_output_fc.b_0
params/encoder_layer_7_multi_head_att_value_fc.b_0
params/encoder_layer_10_multi_head_att_key_fc.b_0
params/encoder_layer_8_multi_head_att_output_fc.w_0
params/encoder_layer_2_multi_head_att_key_fc.w_0
params/encoder_layer_10_multi_head_att_query_fc.w_0
params/encoder_layer_0_multi_head_att_query_fc.b_0
params/encoder_layer_11_multi_head_att_value_fc.w_0
params/pooled_fc.w_0
params/encoder_layer_3_multi_head_att_value_fc.w_0
params/encoder_layer_0_multi_head_att_key_fc.b_0
params/encoder_layer_3_multi_head_att_query_fc.b_0
params/encoder_layer_11_multi_head_att_value_fc.b_0
params/next_sent_3cls_fc.b_0
params/encoder_layer_2_ffn_fc_0.w_0
params/encoder_layer_1_multi_head_att_value_fc.w_0
params/encoder_layer_7_multi_head_att_query_fc.w_0
params/encoder_layer_3_post_ffn_layer_norm_scale
params/encoder_layer_1_post_ffn_layer_norm_scale
params/encoder_layer_6_post_att_layer_norm_bias
params/encoder_layer_4_multi_head_att_output_fc.w_0
params/encoder_layer_6_multi_head_att_output_fc.w_0
params/encoder_layer_7_multi_head_att_output_fc.w_0
params/encoder_layer_10_ffn_fc_1.b_0
params/encoder_layer_11_post_att_layer_norm_scale
params/encoder_layer_4_post_att_layer_norm_scale
params/encoder_layer_5_multi_head_att_query_fc.b_0
params/encoder_layer_4_multi_head_att_key_fc.b_0
params/encoder_layer_4_ffn_fc_1.b_0
params/encoder_layer_0_ffn_fc_0.w_0
params/encoder_layer_7_multi_head_att_key_fc.b_0
params/encoder_layer_5_post_att_layer_norm_bias
params/encoder_layer_9_ffn_fc_0.b_0
params/encoder_layer_1_multi_head_att_value_fc.b_0
params/encoder_layer_10_post_ffn_layer_norm_scale
params/encoder_layer_2_post_ffn_layer_norm_bias
params/encoder_layer_7_post_att_layer_norm_bias
params/encoder_layer_10_ffn_fc_1.w_0
params/encoder_layer_0_multi_head_att_output_fc.w_0
params/encoder_layer_9_multi_head_att_query_fc.b_0
params/encoder_layer_8_post_ffn_layer_norm_bias
params/encoder_layer_7_post_att_layer_norm_scale
vocab.txt
ernie_config.json
In[3]
# Fine-tune on top of the ERNIE model
!cd /home/aistudio/work/ && sh run_ernie.sh train
-----------  Configuration Arguments -----------
batch_size: 32
data_dir: None
dev_set: /home/aistudio/data/data12605/data/dev.txt
do_infer: False
do_lower_case: True
do_train: True
do_val: True
epoch: 3
ernie_config_path: ./pretrain_models/ernie/ernie_config.json
infer_set: None
init_checkpoint: ./pretrain_models/ernie/params
label_map_config: None
lr: 2e-05
max_seq_len: 64
num_labels: 3
random_seed: 1
save_checkpoint_dir: ./save_models/ernie
save_steps: 500
skip_steps: 50
task_name: None
test_set: None
train_set: /home/aistudio/data/data12605/data/train.txt
use_cuda: True
use_paddle_hub: False
validation_steps: 50
verbose: True
vocab_path: ./pretrain_models/ernie/vocab.txt
------------------------------------------------
attention_probs_dropout_prob: 0.1
hidden_act: relu
hidden_dropout_prob: 0.1
hidden_size: 768
initializer_range: 0.02
max_position_embeddings: 513
num_attention_heads: 12
num_hidden_layers: 12
type_vocab_size: 2
vocab_size: 18000
------------------------------------------------
Device count: 1
Num train examples: 9655
Max train steps: 906
Traceback (most recent call last):
  File "run_ernie_classifier.py", line 433, in <module>
    main(args)
  File "run_ernie_classifier.py", line 247, in main
    pyreader_name='train_reader')
  File "/home/aistudio/work/ernie_code/ernie.py", line 32, in ernie_pyreader
    src_ids = fluid.data(name='1', shape=[-1, args.max_seq_len, 1], dtype='int64')
AttributeError: module 'paddle.fluid' has no attribute 'data'
In[19]
# Evaluate the model
!cd /home/aistudio/work/ && sh run_ernie.sh eval
-----------  Configuration Arguments -----------
batch_size: 32
data_dir: None
dev_set: None
do_infer: False
do_lower_case: True
do_train: False
do_val: True
epoch: 10
ernie_config_path: ./pretrain_models/ernie/ernie_config.json
infer_set: None
init_checkpoint: ./save_models/ernie/step_907
label_map_config: None
lr: 0.002
max_seq_len: 64
num_labels: 3
random_seed: 0
save_checkpoint_dir: checkpoints
save_steps: 10000
skip_steps: 10
task_name: None
test_set: /home/aistudio/data/data12605/data/test.txt
train_set: None
use_cuda: True
use_paddle_hub: False
validation_steps: 1000
verbose: True
vocab_path: ./pretrain_models/ernie/vocab.txt
------------------------------------------------
attention_probs_dropout_prob: 0.1
hidden_act: relu
hidden_dropout_prob: 0.1
hidden_size: 768
initializer_range: 0.02
max_position_embeddings: 513
num_attention_heads: 12
num_hidden_layers: 12
type_vocab_size: 2
vocab_size: 18000
------------------------------------------------
W1026 03:28:54.923435   539 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1026 03:28:54.927536   539 device_context.cc:267] device: 0, cuDNN Version: 7.3.
Load model from ./save_models/ernie/step_907
Final validation result:
[test evaluation] accuracy: 0.908390, macro precision: 0.840080, recall: 0.875447, f1: 0.856447, elapsed time: 1.859051 s
 

Comparison of Evaluation Results

Model / Metric      Accuracy   Precision   Recall   F1
CNN                 0.8717     0.8110      0.7178   0.7484
TextCNN             0.8784     0.7970      0.7786   0.7873
TextCNN-finetune    0.8977     0.8315      0.8240   0.8277
ERNIE-finetune      0.9054     0.8424      0.8588   0.8491

As the table shows, fine-tuning from ERNIE yields an even larger improvement.

 

Supplementary Material

The following supplementary material will help you understand the project in more detail.

Main code layout of this project:

.
├── config.json             # configuration file
├── config.py               # interface for reading the configuration
├── inference_model.py      # script to save an inference_model for online deployment
├── nets.py                 # neural network architectures
├── reader.py               # data reading interface
├── run_classifier.py       # main entry point: training, inference, evaluation
├── run.sh                  # script for running training, inference, evaluation
├── tokenizer/              # word segmentation tool
├── utils.py                # miscellaneous utility functions

You can run the command below to see descriptions of all parameters. If you want to inspect the parameter values used in the steps above, uncomment the line of code at line 418 of run_classifier.py (remove the #) and run through the steps again.

In[20]
# Show all parameters and their descriptions
!cd /home/aistudio/work/ && python run_classifier.py -h
usage: run_classifier.py [-h] [--do_train DO_TRAIN] [--do_val DO_VAL]
                         [--do_infer DO_INFER]
                         [--do_save_inference_model DO_SAVE_INFERENCE_MODEL]
                         [--model_type {bow_net,cnn_net,lstm_net,bilstm_net,gru_net,textcnn_net}]
                         [--num_labels NUM_LABELS]
                         [--init_checkpoint INIT_CHECKPOINT]
                         [--save_checkpoint_dir SAVE_CHECKPOINT_DIR]
                         [--inference_model_dir INFERENCE_MODEL_DIR]
                         [--data_dir DATA_DIR] [--vocab_path VOCAB_PATH]
                         [--vocab_size VOCAB_SIZE] [--lr LR] [--epoch EPOCH]
                         [--use_cuda USE_CUDA] [--batch_size BATCH_SIZE]
                         [--skip_steps SKIP_STEPS] [--save_steps SAVE_STEPS]
                         [--validation_steps VALIDATION_STEPS]
                         [--random_seed RANDOM_SEED] [--verbose VERBOSE]
                         [--task_name TASK_NAME] [--enable_ce ENABLE_CE]

optional arguments:
  -h, --help            show this help message and exit

Running type options:

  --do_train DO_TRAIN   Whether to perform training. Default: False.
  --do_val DO_VAL       Whether to perform evaluation. Default: False.
  --do_infer DO_INFER   Whether to perform inference. Default: False.
  --do_save_inference_model DO_SAVE_INFERENCE_MODEL
                        Whether to perform save inference model. Default:
                        False.

Model config options:

  --model_type {bow_net,cnn_net,lstm_net,bilstm_net,gru_net,textcnn_net}
                        Model type to run the task. Default: textcnn_net.
  --num_labels NUM_LABELS
                        Number of labels for classification Default: 3.
  --init_checkpoint INIT_CHECKPOINT
                        Init checkpoint to resume training from. Default:
                        ./textcnn.
  --save_checkpoint_dir SAVE_CHECKPOINT_DIR
                        Directory path to save checkpoints Default: .
  --inference_model_dir INFERENCE_MODEL_DIR
                        Directory path to save inference model Default:
                        ./inference_model.

Data config options:

  --data_dir DATA_DIR   Directory path to training data. Default:
                        /home/aistudio/data/data12605/data.
  --vocab_path VOCAB_PATH
                        Vocabulary path. Default:
                        /home/aistudio/data/data12605/data/vocab.txt.
  --vocab_size VOCAB_SIZE
                        Vocabulary size. Default: 240465.

Training config options:

  --lr LR               The Learning rate value for training. Default: 0.0001.
  --epoch EPOCH         Number of epoches for training. Default: 10.
  --use_cuda USE_CUDA   If set, use GPU for training. Default: False.
  --batch_size BATCH_SIZE
                        Total examples' number in batch for training. Default:
                        64.
  --skip_steps SKIP_STEPS
                        The steps interval to print loss. Default: 10.
  --save_steps SAVE_STEPS
                        The steps interval to save checkpoints. Default: 1000.
  --validation_steps VALIDATION_STEPS
                        The steps interval to evaluate model performance.
                        Default: 1000.
  --random_seed RANDOM_SEED
                        Random seed. Default: 0.

Logging options:

  --verbose VERBOSE     Whether to output verbose log Default: False.
  --task_name TASK_NAME
                        The name of task to perform emotion detection Default:
                        emotion_detection.
  --enable_ce ENABLE_CE
                        If set, run the task with continuous evaluation logs.
                        Default: False.

Customize options:
 

Word segmentation preprocessing: if your query data needs to be segmented, you can use the tokenizer tool; the exact command is shown below.

In[21]
# Unpack the tokenizer package and segment the test data
!cd /home/aistudio/work/ && unzip -qo tokenizer.zip
!cd /home/aistudio/work/tokenizer && python tokenizer.py --test_data_dir test.txt.utf8 > new_query.txt

# Inspect the segmentation results
!cd /home/aistudio/work/tokenizer && cat new_query.txt
我 是 中國 人
百度 是 一家 人工智能 公司
國家博物館 將 閉關
巴薩 5 - 1 晉級 歐冠 八強
c羅 帽子戲法 , 尤文 實現 史詩級 逆轉

Click the link to try this project hands-on on AI Studio: https://aistudio.baidu.com/aistudio/projectdetail/121630

Download and installation commands

## Installation command for the CPU version
pip install -f https://paddlepaddle.org.cn/pip/oschina/cpu paddlepaddle

## Installation command for the GPU version
pip install -f https://paddlepaddle.org.cn/pip/oschina/gpu paddlepaddle-gpu

>> Visit the PaddlePaddle website to learn more
