1 Overview
This text classification series will have around ten posts, covering text classification based on word2vec pre-trained word vectors as well as classification based on the latest pre-trained models (ELMo, BERT, etc.). The series covers the following models:
textCNN model
charCNN model
Bi-LSTM + Attention model
RCNN model
The Jupyter notebook code is all in the textClassifier repository, and the Python code is in text_classfier under NLP-Project.
2 Dataset
The dataset is the IMDB movie review dataset. There are three data files under the /data/rawData directory: unlabeledTrainData.tsv, labeledTrainData.tsv, and testData.tsv. Text classification requires labeled data (labeledTrainData). The preprocessing is the same as in Text Classification in Practice (1): word2vec Pre-trained Word Vectors; the preprocessed file is /data/preprocess/labeledTrain.csv.
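As a quick sanity check, the preprocessed file can be inspected like this (a minimal sketch; it assumes the "review" and "sentiment" columns that the Dataset class below reads):

import pandas as pd

# Peek at the preprocessed data; the Dataset class later reads the
# "review" column (whitespace-tokenized text) and the "sentiment" column (0/1 label).
df = pd.read_csv("../data/preProcess/labeledTrain.csv")
print(df.shape)
print(df[["review", "sentiment"]].head())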
3 Transformer Model Structure
The Transformer model comes from the paper Attention Is All You Need; see that earlier post for a detailed introduction to the Transformer. The structure of the Transformer model is shown in the figure below:
The Transformer has two parts: the Encoder and the Decoder. For text classification only the Encoder is used; the Decoder is a generative model, mainly used for natural language generation.
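The core operation inside each encoder block is multi-head scaled dot-product attention. Following the paper, it can be written as:

$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

$$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(\mathrm{head}_1,\dots,\mathrm{head}_h)W^{O},\qquad \mathrm{head}_i=\mathrm{Attention}(QW_i^{Q},KW_i^{K},VW_i^{V})$$

In the implementation below, the projections are applied first with tf.layers.dense and the result is then split into heads with tf.split, which is equivalent.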
4 Parameter Configuration
import os
import csv
import time
import datetime
import random
import json
import warnings
from collections import Counter
from math import sqrt

import gensim
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score, accuracy_score, precision_score, recall_score

warnings.filterwarnings("ignore")
# Configuration parameters
class TrainingConfig(object):
    epoches = 10
    evaluateEvery = 100
    checkpointEvery = 100
    learningRate = 0.001


class ModelConfig(object):
    embeddingSize = 200

    filters = 128  # number of filters in the inner 1-D convolution; the outer convolution's filter count should
                   # equal the input dimension, so that each sub-layer's output dimension matches its input dimension

    numHeads = 8  # number of attention heads
    numBlocks = 1  # number of transformer blocks
    epsilon = 1e-8  # small constant in the LayerNorm denominator
    keepProp = 0.9  # dropout keep probability in multi-head attention

    dropoutKeepProb = 0.5  # dropout keep probability of the fully connected layer
    l2RegLambda = 0.0


class Config(object):
    sequenceLength = 200  # roughly the mean length of all sequences
    batchSize = 128

    dataSource = "../data/preProcess/labeledTrain.csv"

    stopWordSource = "../data/english"

    numClasses = 1  # set to 1 for binary classification, or to the number of classes for multi-class

    rate = 0.8  # proportion of the data used as the training set

    training = TrainingConfig()

    model = ModelConfig()


# Instantiate the configuration object
config = Config()
5 Generating Training Data
1) Load the data, split each sentence into word tokens, and remove low-frequency words and stop words.
2) Map words to indices, build a word-to-index vocabulary, and save it in JSON format so it can be reused at inference time. (Note: some words may not appear in the pre-trained word2vec vocabulary; such words are represented directly as UNK. A small example of the resulting mapping is shown after this list.)
3) Read the word vectors from the pre-trained word2vec model and pass them into the model as the initialization of the embedding matrix.
4) Split the dataset into a training set and an evaluation set.
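For reference, the word2idx.json and label2idx.json files written by _genVocabulary below always start with the two special tokens PAD and UNK, followed by the remaining vocabulary sorted by descending frequency. A hypothetical excerpt (the actual words and order depend on the corpus):

# Hypothetical excerpt of the saved mappings; "PAD" and "UNK" are inserted first, so they always get indices 0 and 1.
word2idx = {"PAD": 0, "UNK": 1, "movie": 2, "film": 3, "one": 4}   # ... truncated
label2idx = {0: 0, 1: 1}   # sentiment labels mapped to class indices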
# Data preprocessing class: builds the training and evaluation sets
class Dataset(object):
    def __init__(self, config):
        self.config = config
        self._dataSource = config.dataSource
        self._stopWordSource = config.stopWordSource

        self._sequenceLength = config.sequenceLength  # every input sequence is padded/truncated to this fixed length
        self._embeddingSize = config.model.embeddingSize
        self._batchSize = config.batchSize
        self._rate = config.rate

        self._stopWordDict = {}

        self.trainReviews = []
        self.trainLabels = []

        self.evalReviews = []
        self.evalLabels = []

        self.wordEmbedding = None

        self.labelList = []

    def _readData(self, filePath):
        """
        Read the dataset from the csv file
        """
        df = pd.read_csv(filePath)

        if self.config.numClasses == 1:
            labels = df["sentiment"].tolist()
        elif self.config.numClasses > 1:
            labels = df["rate"].tolist()

        review = df["review"].tolist()
        reviews = [line.strip().split() for line in review]

        return reviews, labels

    def _labelToIndex(self, labels, label2idx):
        """
        Convert labels to index representation
        """
        labelIds = [label2idx[label] for label in labels]
        return labelIds

    def _wordToIndex(self, reviews, word2idx):
        """
        Convert words to indices
        """
        reviewIds = [[word2idx.get(item, word2idx["UNK"]) for item in review] for review in reviews]
        return reviewIds

    def _genTrainEvalData(self, x, y, word2idx, rate):
        """
        Generate the training and evaluation sets
        """
        reviews = []
        for review in x:
            if len(review) >= self._sequenceLength:
                reviews.append(review[:self._sequenceLength])
            else:
                reviews.append(review + [word2idx["PAD"]] * (self._sequenceLength - len(review)))

        trainIndex = int(len(x) * rate)

        trainReviews = np.asarray(reviews[:trainIndex], dtype="int64")
        trainLabels = np.array(y[:trainIndex], dtype="float32")

        evalReviews = np.asarray(reviews[trainIndex:], dtype="int64")
        evalLabels = np.array(y[trainIndex:], dtype="float32")

        return trainReviews, trainLabels, evalReviews, evalLabels

    def _genVocabulary(self, reviews, labels):
        """
        Build the word embedding and the word-to-index mapping; the full dataset can be used here
        """
        allWords = [word for review in reviews for word in review]

        # Remove stop words
        subWords = [word for word in allWords if word not in self.stopWordDict]

        wordCount = Counter(subWords)  # count word frequencies
        sortWordCount = sorted(wordCount.items(), key=lambda x: x[1], reverse=True)

        # Remove low-frequency words
        words = [item[0] for item in sortWordCount if item[1] >= 5]

        vocab, wordEmbedding = self._getWordEmbedding(words)
        self.wordEmbedding = wordEmbedding

        word2idx = dict(zip(vocab, list(range(len(vocab)))))

        uniqueLabel = list(set(labels))
        label2idx = dict(zip(uniqueLabel, list(range(len(uniqueLabel)))))
        self.labelList = list(range(len(uniqueLabel)))

        # Save the word-to-index mapping as json so it can be loaded directly at inference time
        with open("../data/wordJson/word2idx.json", "w", encoding="utf-8") as f:
            json.dump(word2idx, f)

        with open("../data/wordJson/label2idx.json", "w", encoding="utf-8") as f:
            json.dump(label2idx, f)

        return word2idx, label2idx

    def _getWordEmbedding(self, words):
        """
        Look up the pre-trained word2vec vectors for the words in our dataset
        """
        wordVec = gensim.models.KeyedVectors.load_word2vec_format("../word2vec/word2Vec.bin", binary=True)
        vocab = []
        wordEmbedding = []

        # Add "PAD" and "UNK"
        vocab.append("PAD")
        vocab.append("UNK")
        wordEmbedding.append(np.zeros(self._embeddingSize))
        wordEmbedding.append(np.random.randn(self._embeddingSize))

        for word in words:
            try:
                vector = wordVec.wv[word]
                vocab.append(word)
                wordEmbedding.append(vector)
            except:
                print(word + " is not in the word2vec vocabulary")

        return vocab, np.array(wordEmbedding)

    def _readStopWord(self, stopWordPath):
        """
        Read the stop words
        """
        with open(stopWordPath, "r") as f:
            stopWords = f.read()
            stopWordList = stopWords.splitlines()
            # Store the stop words as a dict so that lookups are faster
            self.stopWordDict = dict(zip(stopWordList, list(range(len(stopWordList)))))

    def dataGen(self):
        """
        Initialize the training and evaluation sets
        """
        # Initialize the stop words
        self._readStopWord(self._stopWordSource)

        # Load the dataset
        reviews, labels = self._readData(self._dataSource)

        # Build the word-to-index mapping and the word embedding matrix
        word2idx, label2idx = self._genVocabulary(reviews, labels)

        # Convert labels and sentences to indices
        labelIds = self._labelToIndex(labels, label2idx)
        reviewIds = self._wordToIndex(reviews, word2idx)

        # Build the training and evaluation sets
        trainReviews, trainLabels, evalReviews, evalLabels = self._genTrainEvalData(reviewIds, labelIds, word2idx, self._rate)
        self.trainReviews = trainReviews
        self.trainLabels = trainLabels

        self.evalReviews = evalReviews
        self.evalLabels = evalLabels


data = Dataset(config)
data.dataGen()
6 Generating Batch Data
Batches are fed to the model with a Python generator (a generator avoids loading all of the batch data into memory at once).
# Output batch datasets
def nextBatch(x, y, batchSize):
    """
    Generate batches, yielded from a generator
    """
    perm = np.arange(len(x))
    np.random.shuffle(perm)
    x = x[perm]
    y = y[perm]

    numBatches = len(x) // batchSize

    for i in range(numBatches):
        start = i * batchSize
        end = start + batchSize
        batchX = np.array(x[start: end], dtype="int64")
        batchY = np.array(y[start: end], dtype="float32")

        yield batchX, batchY
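Consuming the generator is then just a for loop over one epoch of shuffled batches, as the training loop in section 9 does. A minimal sketch:

# Minimal usage sketch: iterate over the shuffled training batches for one epoch.
for batchX, batchY in nextBatch(data.trainReviews, data.trainLabels, config.batchSize):
    print(batchX.shape, batchY.shape)   # (128, 200) (128,)
    break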
7 Transformer Model
Some practical notes on using the Transformer model:
1) In my experiments, a fixed one-hot position embedding works better than the sine/cosine position embedding proposed in the paper. A possible reason is that the latter is passed in as a trainable value in this implementation, which increases the model's complexity and hurts performance on a small dataset (the IMDB training set here has 20,000 examples). A vectorized sketch of the one-hot position embedding is shown after this list.
2) The mask may not be needed: adding or removing it made essentially no difference to the results. It might help on other tasks or datasets, but the paper does not require a mask in the encoder; masking is mostly used in the decoder.
3) The number of Transformer layers can be tuned to the size of your dataset; on a small dataset a single block is usually enough.
4) Apply dropout regularization to the sub-layers, mainly the multi-head attention layer. Because the feed-forward layer here is implemented with convolutions, leaving dropout out of it should be fine; if the feed-forward layer is implemented with fully connected layers, add dropout there as well.
5) On small datasets the Transformer is not necessarily better than Bi-LSTM + Attention; on IMDB it actually performs worse.
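As mentioned in note 1), the fixed one-hot position embedding built below with nested loops can also be written in a vectorized form. A minimal equivalent sketch (the function name is mine, not part of the original code):

import numpy as np

def fixedPositionEmbeddingVectorized(batchSize, sequenceLen):
    # np.eye(sequenceLen) is the one-hot position matrix for a single example;
    # np.tile repeats it for every example in the batch.
    onehot = np.eye(sequenceLen, dtype="float32")
    return np.tile(onehot[np.newaxis, :, :], (batchSize, 1, 1))

It produces the same [batchSize, sequenceLen, sequenceLen] array as the fixedPositionEmbedding function below.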
# Build the fixed one-hot position embedding
def fixedPositionEmbedding(batchSize, sequenceLen):
    embeddedPosition = []
    for batch in range(batchSize):
        x = []
        for step in range(sequenceLen):
            a = np.zeros(sequenceLen)
            a[step] = 1
            x.append(a)
        embeddedPosition.append(x)

    return np.array(embeddedPosition, dtype="float32")


# Model definition
class Transformer(object):
    """
    Transformer Encoder for text classification
    """
    def __init__(self, config, wordEmbedding):

        # Model inputs
        self.inputX = tf.placeholder(tf.int32, [None, config.sequenceLength], name="inputX")
        self.inputY = tf.placeholder(tf.int32, [None], name="inputY")

        self.dropoutKeepProb = tf.placeholder(tf.float32, name="dropoutKeepProb")
        self.embeddedPosition = tf.placeholder(tf.float32, [None, config.sequenceLength, config.sequenceLength], name="embeddedPosition")

        self.config = config

        # L2 loss
        l2Loss = tf.constant(0.0)

        # Word embedding layer. The position embedding can be defined in two ways: either pass in a fixed one-hot
        # representation and concatenate it with the word vectors, which performs better on this dataset, or
        # implement it as in the paper, which actually performs worse here, possibly because it increases the
        # model's complexity and does not suit a small dataset.
        with tf.name_scope("embedding"):

            # Initialize the embedding matrix with the pre-trained word vectors
            self.W = tf.Variable(tf.cast(wordEmbedding, dtype=tf.float32, name="word2vec"), name="W")
            # Map the input word indices to word vectors; shape [batch_size, sequence_length, embedding_size]
            self.embedded = tf.nn.embedding_lookup(self.W, self.inputX)
            self.embeddedWords = tf.concat([self.embedded, self.embeddedPosition], -1)

        with tf.name_scope("transformer"):
            for i in range(config.model.numBlocks):
                with tf.name_scope("transformer-{}".format(i + 1)):

                    # shape [batch_size, sequence_length, embedding_size + sequence_length]
                    multiHeadAtt = self._multiheadAttention(rawKeys=self.inputX, queries=self.embeddedWords,
                                                            keys=self.embeddedWords)
                    # shape [batch_size, sequence_length, embedding_size + sequence_length]
                    self.embeddedWords = self._feedForward(multiHeadAtt,
                                                           [config.model.filters, config.model.embeddingSize + config.sequenceLength])

            outputs = tf.reshape(self.embeddedWords, [-1, config.sequenceLength * (config.model.embeddingSize + config.sequenceLength)])

        outputSize = outputs.get_shape()[-1].value

#         # Alternative: sinusoidal position embedding added to the word embedding, as in the paper
#         with tf.name_scope("wordEmbedding"):
#             self.W = tf.Variable(tf.cast(wordEmbedding, dtype=tf.float32, name="word2vec"), name="W")
#             self.wordEmbedded = tf.nn.embedding_lookup(self.W, self.inputX)

#         with tf.name_scope("positionEmbedding"):
#             print(self.wordEmbedded)
#             self.positionEmbedded = self._positionEmbedding()

#         self.embeddedWords = self.wordEmbedded + self.positionEmbedded

#         with tf.name_scope("transformer"):
#             for i in range(config.model.numBlocks):
#                 with tf.name_scope("transformer-{}".format(i + 1)):

#                     # shape [batch_size, sequence_length, embedding_size]
#                     multiHeadAtt = self._multiheadAttention(rawKeys=self.wordEmbedded, queries=self.embeddedWords,
#                                                             keys=self.embeddedWords)
#                     # shape [batch_size, sequence_length, embedding_size]
#                     self.embeddedWords = self._feedForward(multiHeadAtt, [config.model.filters, config.model.embeddingSize])

#             outputs = tf.reshape(self.embeddedWords, [-1, config.sequenceLength * (config.model.embeddingSize)])

#         outputSize = outputs.get_shape()[-1].value

        with tf.name_scope("dropout"):
            outputs = tf.nn.dropout(outputs, keep_prob=self.dropoutKeepProb)

        # Output of the fully connected layer
        with tf.name_scope("output"):
            outputW = tf.get_variable(
                "outputW",
                shape=[outputSize, config.numClasses],
                initializer=tf.contrib.layers.xavier_initializer())

            outputB = tf.Variable(tf.constant(0.1, shape=[config.numClasses]), name="outputB")
            l2Loss += tf.nn.l2_loss(outputW)
            l2Loss += tf.nn.l2_loss(outputB)
            self.logits = tf.nn.xw_plus_b(outputs, outputW, outputB, name="logits")

            if config.numClasses == 1:
                self.predictions = tf.cast(tf.greater_equal(self.logits, 0.0), tf.float32, name="predictions")
            elif config.numClasses > 1:
                self.predictions = tf.argmax(self.logits, axis=-1, name="predictions")

        # Compute the cross-entropy loss
        with tf.name_scope("loss"):

            if config.numClasses == 1:
                losses = tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits,
                                                                 labels=tf.cast(tf.reshape(self.inputY, [-1, 1]),
                                                                                dtype=tf.float32))
            elif config.numClasses > 1:
                losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.logits, labels=self.inputY)

            self.loss = tf.reduce_mean(losses) + config.model.l2RegLambda * l2Loss

    def _layerNormalization(self, inputs, scope="layerNorm"):
        # LayerNorm differs from BatchNorm
        epsilon = self.config.model.epsilon

        inputsShape = inputs.get_shape()  # [batch_size, sequence_length, embedding_size]

        paramsShape = inputsShape[-1:]

        # LayerNorm computes the mean and variance of the input over the last dimension,
        # whereas BatchNorm normalizes over all samples in the batch.
        # mean and variance both have shape [batch_size, sequence_len, 1]
        mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)

        beta = tf.Variable(tf.zeros(paramsShape))

        gamma = tf.Variable(tf.ones(paramsShape))
        normalized = (inputs - mean) / ((variance + epsilon) ** .5)

        outputs = gamma * normalized + beta

        return outputs

    def _multiheadAttention(self, rawKeys, queries, keys, numUnits=None, causality=False, scope="multiheadAttention"):
        # rawKeys is only used for computing the mask, because keys already has the position embedding added
        # and therefore no longer contains zeros at the padded positions

        numHeads = self.config.model.numHeads
        keepProp = self.config.model.keepProp

        if numUnits is None:  # if no value is passed in, use the last dimension of the input, i.e. the embedding size
            numUnits = queries.get_shape().as_list()[-1]

        # tf.layers.dense can apply a (non-)linear projection to multi-dimensional tensors. When computing
        # self-attention, Q, K and V must all be projected; this is the per-head weight projection from the
        # paper's Multi-Head Attention, except that here we project first and split afterwards, which is equivalent.
        # Q, K, V all have shape [batch_size, sequence_length, embedding_size]
        Q = tf.layers.dense(queries, numUnits, activation=tf.nn.relu)
        K = tf.layers.dense(keys, numUnits, activation=tf.nn.relu)
        V = tf.layers.dense(keys, numUnits, activation=tf.nn.relu)

        # Split the last dimension into numHeads pieces and concatenate them along the first dimension
        # Q_, K_, V_ all have shape [batch_size * numHeads, sequence_length, embedding_size/numHeads]
        Q_ = tf.concat(tf.split(Q, numHeads, axis=-1), axis=0)
        K_ = tf.concat(tf.split(K, numHeads, axis=-1), axis=0)
        V_ = tf.concat(tf.split(V, numHeads, axis=-1), axis=0)

        # Dot product between keys and queries; shape [batch_size * numHeads, queries_len, key_len],
        # where the last two dimensions are the sequence lengths of the queries and keys
        similary = tf.matmul(Q_, tf.transpose(K_, [0, 2, 1]))

        # Scale the dot product by the square root of the key dimension
        scaledSimilary = similary / (K_.get_shape().as_list()[-1] ** 0.5)

        # The input sequences contain padding tokens, which should not contribute to the final result. In
        # principle, when the padded positions are all zeros, the computed weights are also zero; but once the
        # position embedding is added they are no longer zero, so the mask has to be built from the values
        # before the position embedding is added. The queries contain padding too, but the model output should
        # only depend on the real inputs, and in self-attention queries = keys, so as long as one side is zero
        # the resulting weight is zero.
        # See https://github.com/Kyubyong/transformer/issues/3 for a discussion of the key mask.

        # Use tf.tile to expand the tensor; shape [batch_size * numHeads, keys_len], keys_len = sequence length of keys
        keyMasks = tf.tile(rawKeys, [numHeads, 1])

        # Add a dimension and expand; shape [batch_size * numHeads, queries_len, keys_len]
        keyMasks = tf.tile(tf.expand_dims(keyMasks, 1), [1, tf.shape(queries)[1], 1])

        # tf.ones_like creates a tensor of ones with the same shape as scaledSimilary, scaled to a very large negative value
        paddings = tf.ones_like(scaledSimilary) * (-2 ** (32 + 1))

        # tf.where(condition, x, y): the elements of condition are booleans; True positions are taken from x and
        # False positions from y, so condition, x and y must have the same shape. Below, positions where
        # keyMasks equals 0 are replaced by paddings.
        maskedSimilary = tf.where(tf.equal(keyMasks, 0), paddings, scaledSimilary)  # shape [batch_size * numHeads, queries_len, key_len]

        # Causal masking attends only to the left context; it appears in the Transformer decoder. For text
        # classification only the Transformer encoder is needed; the decoder is a generative model used for
        # language generation.
        if causality:
            diagVals = tf.ones_like(maskedSimilary[0, :, :])  # [queries_len, keys_len]
            tril = tf.contrib.linalg.LinearOperatorTriL(diagVals).to_dense()  # [queries_len, keys_len]
            masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(maskedSimilary)[0], 1, 1])  # [batch_size * numHeads, queries_len, keys_len]

            paddings = tf.ones_like(masks) * (-2 ** (32 + 1))
            maskedSimilary = tf.where(tf.equal(masks, 0), paddings, maskedSimilary)  # [batch_size * numHeads, queries_len, keys_len]

        # Softmax to get the attention weights; shape [batch_size * numHeads, queries_len, keys_len]
        weights = tf.nn.softmax(maskedSimilary)

        # Weighted sum of the values; shape [batch_size * numHeads, sequence_length, embedding_size/numHeads]
        outputs = tf.matmul(weights, V_)

        # Recombine the heads into the original shape [batch_size, sequence_length, embedding_size]
        outputs = tf.concat(tf.split(outputs, numHeads, axis=0), axis=2)
        outputs = tf.nn.dropout(outputs, keep_prob=keepProp)

        # Residual connection around each sub-layer, i.e. H(x) = F(x) + x
        outputs += queries
        # Layer normalization
        outputs = self._layerNormalization(outputs)
        return outputs

    def _feedForward(self, inputs, filters, scope="multiheadAttention"):
        # The feed-forward sub-layer is implemented with a convolutional network

        # Inner layer
        params = {"inputs": inputs, "filters": filters[0], "kernel_size": 1,
                  "activation": tf.nn.relu, "use_bias": True}
        outputs = tf.layers.conv1d(**params)

        # Outer layer
        params = {"inputs": outputs, "filters": filters[1], "kernel_size": 1,
                  "activation": None, "use_bias": True}

        # A 1-D convolution is used here; the kernel is actually still two-dimensional, but only the height
        # needs to be specified because the width matches the embedding size.
        # shape [batch_size, sequence_length, embedding_size]
        outputs = tf.layers.conv1d(**params)

        # Residual connection
        outputs += inputs

        # Normalization
        outputs = self._layerNormalization(outputs)

        return outputs

    def _positionEmbedding(self, scope="positionEmbedding"):
        # Build the sinusoidal position embedding (the paper-style alternative)
        batchSize = self.config.batchSize
        sequenceLen = self.config.sequenceLength
        embeddingSize = self.config.model.embeddingSize

        # Position indices, expanded to every sample in the batch
        positionIndex = tf.tile(tf.expand_dims(tf.range(sequenceLen), 0), [batchSize, 1])

        # First part of the embedding for each position, based on the sine and cosine functions
        positionEmbedding = np.array([[pos / np.power(10000, (i - i % 2) / embeddingSize) for i in range(embeddingSize)]
                                      for pos in range(sequenceLen)])

        # Wrap with sin for even indices and cos for odd indices
        positionEmbedding[:, 0::2] = np.sin(positionEmbedding[:, 0::2])
        positionEmbedding[:, 1::2] = np.cos(positionEmbedding[:, 1::2])

        # Convert positionEmbedding to a tensor
        positionEmbedding_ = tf.cast(positionEmbedding, dtype=tf.float32)

        # Look up to get a 3-D tensor [batchSize, sequenceLen, embeddingSize]
        positionEmbedded = tf.nn.embedding_lookup(positionEmbedding_, positionIndex)

        return positionEmbedded
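For reference, the sinusoidal encoding implemented in _positionEmbedding corresponds to the formulas from the paper:

$$PE_{(pos,\,2i)}=\sin\!\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right),\qquad PE_{(pos,\,2i+1)}=\cos\!\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right)$$

where pos is the position in the sequence and i indexes the embedding dimensions; in the code, (i - i % 2) / embeddingSize plays the role of 2i / d_model.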
8 Defining the Metric Functions
""" 定義各種性能指標 """ def mean(item: list) -> float: """ 計算列表中元素的平均值 :param item: 列表對象 :return: """ res = sum(item) / len(item) if len(item) > 0 else 0 return res def accuracy(pred_y, true_y): """ 計算二類和多類的準確率 :param pred_y: 預測結果 :param true_y: 真實結果 :return: """ if isinstance(pred_y[0], list): pred_y = [item[0] for item in pred_y] corr = 0 for i in range(len(pred_y)): if pred_y[i] == true_y[i]: corr += 1 acc = corr / len(pred_y) if len(pred_y) > 0 else 0 return acc def binary_precision(pred_y, true_y, positive=1): """ 二類的精確率計算 :param pred_y: 預測結果 :param true_y: 真實結果 :param positive: 正例的索引表示 :return: """ corr = 0 pred_corr = 0 for i in range(len(pred_y)): if pred_y[i] == positive: pred_corr += 1 if pred_y[i] == true_y[i]: corr += 1 prec = corr / pred_corr if pred_corr > 0 else 0 return prec def binary_recall(pred_y, true_y, positive=1): """ 二類的召回率 :param pred_y: 預測結果 :param true_y: 真實結果 :param positive: 正例的索引表示 :return: """ corr = 0 true_corr = 0 for i in range(len(pred_y)): if true_y[i] == positive: true_corr += 1 if pred_y[i] == true_y[i]: corr += 1 rec = corr / true_corr if true_corr > 0 else 0 return rec def binary_f_beta(pred_y, true_y, beta=1.0, positive=1): """ 二類的f beta值 :param pred_y: 預測結果 :param true_y: 真實結果 :param beta: beta值 :param positive: 正例的索引表示 :return: """ precision = binary_precision(pred_y, true_y, positive) recall = binary_recall(pred_y, true_y, positive) try: f_b = (1 + beta * beta) * precision * recall / (beta * beta * precision + recall) except: f_b = 0 return f_b def multi_precision(pred_y, true_y, labels): """ 多類的精確率 :param pred_y: 預測結果 :param true_y: 真實結果 :param labels: 標籤列表 :return: """ if isinstance(pred_y[0], list): pred_y = [item[0] for item in pred_y] precisions = [binary_precision(pred_y, true_y, label) for label in labels] prec = mean(precisions) return prec def multi_recall(pred_y, true_y, labels): """ 多類的召回率 :param pred_y: 預測結果 :param true_y: 真實結果 :param labels: 標籤列表 :return: """ if isinstance(pred_y[0], list): pred_y = [item[0] for item in pred_y] recalls = [binary_recall(pred_y, true_y, label) for label in labels] rec = mean(recalls) return rec def multi_f_beta(pred_y, true_y, labels, beta=1.0): """ 多類的f beta值 :param pred_y: 預測結果 :param true_y: 真實結果 :param labels: 標籤列表 :param beta: beta值 :return: """ if isinstance(pred_y[0], list): pred_y = [item[0] for item in pred_y] f_betas = [binary_f_beta(pred_y, true_y, beta, label) for label in labels] f_beta = mean(f_betas) return f_beta def get_binary_metrics(pred_y, true_y, f_beta=1.0): """ 獲得二分類的性能指標 :param pred_y: :param true_y: :param f_beta: :return: """ acc = accuracy(pred_y, true_y) recall = binary_recall(pred_y, true_y) precision = binary_precision(pred_y, true_y) f_beta = binary_f_beta(pred_y, true_y, f_beta) return acc, recall, precision, f_beta def get_multi_metrics(pred_y, true_y, labels, f_beta=1.0): """ 獲得多分類的性能指標 :param pred_y: :param true_y: :param labels: :param f_beta: :return: """ acc = accuracy(pred_y, true_y) recall = multi_recall(pred_y, true_y, labels) precision = multi_precision(pred_y, true_y, labels) f_beta = multi_f_beta(pred_y, true_y, labels, f_beta) return acc, recall, precision, f_beta
9 Training the Model
During training we write TensorBoard summaries and use two different ways of saving the model (checkpoint files and a SavedModel pb file).
# Train the model

# Training and evaluation sets
trainReviews = data.trainReviews
trainLabels = data.trainLabels
evalReviews = data.evalReviews
evalLabels = data.evalLabels

wordEmbedding = data.wordEmbedding
labelList = data.labelList

embeddedPosition = fixedPositionEmbedding(config.batchSize, config.sequenceLength)

# Define the computation graph
with tf.Graph().as_default():

    session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
    session_conf.gpu_options.allow_growth = True
    session_conf.gpu_options.per_process_gpu_memory_fraction = 0.9  # limit GPU memory usage

    sess = tf.Session(config=session_conf)

    # Define the session
    with sess.as_default():
        transformer = Transformer(config, wordEmbedding)

        globalStep = tf.Variable(0, name="globalStep", trainable=False)
        # Define the optimizer, passing in the learning rate
        optimizer = tf.train.AdamOptimizer(config.training.learningRate)
        # Compute the gradients and the corresponding variables
        gradsAndVars = optimizer.compute_gradients(transformer.loss)
        # Apply the gradients to the variables to build the training op
        trainOp = optimizer.apply_gradients(gradsAndVars, global_step=globalStep)

        # Write summaries for TensorBoard
        gradSummaries = []
        for g, v in gradsAndVars:
            if g is not None:
                tf.summary.histogram("{}/grad/hist".format(v.name), g)
                tf.summary.scalar("{}/grad/sparsity".format(v.name), tf.nn.zero_fraction(g))

        outDir = os.path.abspath(os.path.join(os.path.curdir, "summarys"))
        print("Writing to {}\n".format(outDir))

        lossSummary = tf.summary.scalar("loss", transformer.loss)
        summaryOp = tf.summary.merge_all()

        trainSummaryDir = os.path.join(outDir, "train")
        trainSummaryWriter = tf.summary.FileWriter(trainSummaryDir, sess.graph)

        evalSummaryDir = os.path.join(outDir, "eval")
        evalSummaryWriter = tf.summary.FileWriter(evalSummaryDir, sess.graph)

        # Saver for checkpoint files
        saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)

        # The other way of saving the model: export a SavedModel pb file
        savedModelPath = "../model/transformer/savedModel"
        if os.path.exists(savedModelPath):
            os.rmdir(savedModelPath)
        builder = tf.saved_model.builder.SavedModelBuilder(savedModelPath)

        sess.run(tf.global_variables_initializer())

        def trainStep(batchX, batchY):
            """
            Training step
            """
            feed_dict = {
                transformer.inputX: batchX,
                transformer.inputY: batchY,
                transformer.dropoutKeepProb: config.model.dropoutKeepProb,
                transformer.embeddedPosition: embeddedPosition
            }
            _, summary, step, loss, predictions = sess.run(
                [trainOp, summaryOp, globalStep, transformer.loss, transformer.predictions],
                feed_dict)

            if config.numClasses == 1:
                acc, recall, prec, f_beta = get_binary_metrics(pred_y=predictions, true_y=batchY)
            elif config.numClasses > 1:
                acc, recall, prec, f_beta = get_multi_metrics(pred_y=predictions, true_y=batchY, labels=labelList)

            trainSummaryWriter.add_summary(summary, step)

            return loss, acc, prec, recall, f_beta

        def devStep(batchX, batchY):
            """
            Evaluation step
            """
            feed_dict = {
                transformer.inputX: batchX,
                transformer.inputY: batchY,
                transformer.dropoutKeepProb: 1.0,
                transformer.embeddedPosition: embeddedPosition
            }
            summary, step, loss, predictions = sess.run(
                [summaryOp, globalStep, transformer.loss, transformer.predictions],
                feed_dict)

            if config.numClasses == 1:
                acc, recall, prec, f_beta = get_binary_metrics(pred_y=predictions, true_y=batchY)
            elif config.numClasses > 1:
                acc, recall, prec, f_beta = get_multi_metrics(pred_y=predictions, true_y=batchY, labels=labelList)

            trainSummaryWriter.add_summary(summary, step)

            return loss, acc, prec, recall, f_beta

        for i in range(config.training.epoches):
            # Train the model
            print("start training model")
            for batchTrain in nextBatch(trainReviews, trainLabels, config.batchSize):
                loss, acc, prec, recall, f_beta = trainStep(batchTrain[0], batchTrain[1])

                currentStep = tf.train.global_step(sess, globalStep)
                print("train: step: {}, loss: {}, acc: {}, recall: {}, precision: {}, f_beta: {}".format(
                    currentStep, loss, acc, recall, prec, f_beta))
                if currentStep % config.training.evaluateEvery == 0:
                    print("\nEvaluation:")

                    losses = []
                    accs = []
                    f_betas = []
                    precisions = []
                    recalls = []

                    for batchEval in nextBatch(evalReviews, evalLabels, config.batchSize):
                        loss, acc, precision, recall, f_beta = devStep(batchEval[0], batchEval[1])
                        losses.append(loss)
                        accs.append(acc)
                        f_betas.append(f_beta)
                        precisions.append(precision)
                        recalls.append(recall)

                    time_str = datetime.datetime.now().isoformat()
                    print("{}, step: {}, loss: {}, acc: {}, precision: {}, recall: {}, f_beta: {}".format(
                        time_str, currentStep, mean(losses), mean(accs), mean(precisions), mean(recalls), mean(f_betas)))

                if currentStep % config.training.checkpointEvery == 0:
                    # The other way of saving: write a checkpoint file
                    path = saver.save(sess, "../model/Transformer/model/my-model", global_step=currentStep)
                    print("Saved model checkpoint to {}\n".format(path))

        inputs = {"inputX": tf.saved_model.utils.build_tensor_info(transformer.inputX),
                  "keepProb": tf.saved_model.utils.build_tensor_info(transformer.dropoutKeepProb)}

        outputs = {"predictions": tf.saved_model.utils.build_tensor_info(transformer.predictions)}

        prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
            inputs=inputs, outputs=outputs,
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
        legacy_init_op = tf.group(tf.tables_initializer(), name="legacy_init_op")
        builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING],
                                             signature_def_map={"predict": prediction_signature},
                                             legacy_init_op=legacy_init_op)

        builder.save()
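For completeness, a checkpoint saved above can later be restored for inference by rebuilding the graph with the same Transformer class and loading the latest weights. A minimal sketch under these assumptions: config and wordEmbedding are built as in section 5, and reviewIds is a hypothetical array of already indexed and padded reviews:

# Hypothetical inference sketch: restore the latest checkpoint and predict one batch.
with tf.Graph().as_default():
    sess = tf.Session()
    with sess.as_default():
        transformer = Transformer(config, wordEmbedding)
        saver = tf.train.Saver()
        # Load the weights written by the checkpoint saver during training
        saver.restore(sess, tf.train.latest_checkpoint("../model/Transformer/model/"))

        feed_dict = {
            transformer.inputX: reviewIds[:config.batchSize],    # indexed, padded reviews (hypothetical input)
            transformer.dropoutKeepProb: 1.0,                    # no dropout at inference time
            transformer.embeddedPosition: fixedPositionEmbedding(config.batchSize, config.sequenceLength)
        }
        predictions = sess.run(transformer.predictions, feed_dict)

The exported SavedModel can be served similarly, but note that the signature built above only exposes inputX and keepProb, so the embeddedPosition placeholder would still have to be fed by tensor name.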