Possible causes include:
- The learning rate may be too large
- The batch size is too small
- The sample distribution is uneven
- No regularization has been added
- The dataset is small
One very important cause is that the data was not shuffled before being split:
import numpy as np

# Build an index array, shuffle it with a fixed seed for reproducibility,
# then reorder data and labels together so they stay aligned
index = np.arange(data.shape[0])
np.random.seed(1024)
np.random.shuffle(index)
data = data[index]
labels = labels[index]
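Once shuffled, the split itself can be a simple slice; a minimal sketch (the 80/20 ratio is an arbitrary illustration, not from the original post):

split = int(0.8 * data.shape[0])  # hypothetical 80/20 train/test ratio
train_data, test_data = data[:split], data[split:]
train_labels, test_labels = labels[:split], labels[split:]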
Cause: the input training data was not normalized.
Solution: pass the input values through the function below to normalize them.
# Min-max normalization: rescale values into the [0, 1] range
def data_in_one(inputdata):
    inputdata = (inputdata - inputdata.min()) / (inputdata.max() - inputdata.min())
    return inputdata
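One caveat worth adding here: to avoid leaking test-set statistics, the min and max should be computed on the training split only and then reused on the test split; a minimal sketch, assuming the train/test variables from the split above:

# Compute statistics on training data only, then apply them to both splits
train_min, train_max = train_data.min(), train_data.max()
train_data = (train_data - train_min) / (train_max - train_min)
test_data = (test_data - train_min) / (train_max - train_min)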
- train loss keeps falling, test loss keeps falling: the network is still learning;
- train loss keeps falling, test loss levels off: the network is overfitting;
- train loss levels off, test loss keeps falling: there is definitely something wrong with the dataset;
- train loss levels off, test loss levels off: learning has hit a bottleneck; reduce the learning rate or the batch size;
- train loss keeps rising, test loss keeps rising: the network architecture is poorly designed, the training hyperparameters are set improperly, the dataset needs cleaning, and so on.
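To tell which of these cases applies, it helps to plot both curves from the History object that Keras's fit() returns; a minimal sketch (variable names, epochs, and batch_size are illustrative):

import matplotlib.pyplot as plt

history = model.fit(train_data, train_labels,
                    validation_data=(test_data, test_labels),
                    epochs=50, batch_size=32)  # placeholder values

# Compare the two loss curves to diagnose which case above applies
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='test loss')
plt.legend()
plt.show()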
Generally speaking, higher accuracy corresponds to lower loss, but this is not absolute; after all, they measure two different things, so in practice we can fine-tune the trade-off between them.
As for how many epochs to set, we can register a callback and keep the model with the highest validation accuracy as the best one, as shown in the sketch below.
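For example, a minimal sketch with Keras's ModelCheckpoint callback (the file name is illustrative; older Keras versions expose the metric as 'val_acc', newer ones as 'val_accuracy'):

from keras.callbacks import ModelCheckpoint

# Save only the weights that achieve the best validation accuracy so far
checkpoint = ModelCheckpoint('best_model.hdf5',  # hypothetical output path
                             monitor='val_acc',   # 'val_accuracy' in newer Keras
                             save_best_only=True,
                             mode='max')
model.fit(train_data, train_labels,
          validation_data=(test_data, test_labels),
          epochs=50, callbacks=[checkpoint])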
As for BN and dropout: they are really two completely different things. BN targets the data distribution, while dropout optimizes from the model-structure side, so the two can be used together. BN not only helps prevent overfitting, it also mitigates problems like vanishing gradients and speeds up model convergence; however, with BN added, each training step often becomes a bit slower.
Example code:
……
from keras.layers import Concatenate, Dropout
……
concatenate = Concatenate(axis=2)([blstm, embedding_layer])
concatenate = Dropout(rate=0.1)(concatenate)
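The snippet above only shows Dropout; below is a hedged sketch of using BatchNormalization and Dropout together (the tensor x and the layer placement are illustrative, not from the original code):

from keras.layers import BatchNormalization, Dropout

# Normalize activations first, then apply dropout; the two address different
# problems (data distribution vs. co-adaptation) and can coexist
x = BatchNormalization()(x)  # x is a hypothetical intermediate tensor
x = Dropout(rate=0.1)(x)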
The code below is the augmentation I applied to my own experimental data; it may serve as a reference. First, the structure of my dataset is as follows.
The essays table in my database has 7 columns, and each row is one data sample: the first column AUTHID is the sample ID, TEXT is the text content, and the remaining columns are the text's labels. For text augmentation, a fairly reasonable way to enlarge the dataset is to cyclically shift the sentences of each text, which preserves the overall structure of the text as much as possible. The code below reads the sample information from the essays table, cyclically shifts each text, and writes the result into the table_augment table.
Example code:
#!/usr/bin/python
# -*- coding:utf8 -*-
from sqlalchemy import create_engine  # MySQL ORM interface, better than MySQLdb
import pandas as pd
import spacy  # an NLP library like NLTK, but more industrial
import json

TO_SQL = 'table_augment'
READ_SQL_TABLE = 'essays'

nlp = spacy.load('en')  # load the spaCy English model once, not once per text


def cut_sentences(df):
    all_text_name = df["AUTHID"]  # pandas.Series: all text names (the "#AUTHID" column in essays)
    all_text = df["TEXT"]         # pandas.Series: all texts (the "TEXT" column in essays)
    all_label_cEXT = df["cEXT"]
    all_label_cNEU = df["cNEU"]
    all_label_cAGR = df["cAGR"]
    all_label_cCON = df["cCON"]
    all_label_cOPN = df["cOPN"]
    all_number = all_text_name.index[-1]  # indices run from 0 to len(all_text_name)-1
    for i in range(0, all_number + 1, 1):
        print("start to deal with text ", i, " ...")
        text = all_text[i]            # str: one text in all_text
        text_name = all_text_name[i]  # str: its name in all_text_name
        test_doc = nlp(text)
        cut_sentence = []
        for sent in test_doc.sents:  # each sentence in the text
            # sent is a spacy.tokens.span.Span, not a string, so we use
            # Span.text to get its unicode form
            cut_sentence.append(sent.text)
        line_number = len(cut_sentence)
        for iterator in range(line_number):
            if iterator != 0:
                # rotate the sentence list by one position per iteration
                cut_sentence = cut_sentence[1:] + cut_sentence[:1]
            cut_sentence_json = json.dumps(cut_sentence)
            input_data_dic = {'text_name': str(iterator) + "_" + text_name,
                              'line_number': line_number,
                              'line_text': cut_sentence_json,
                              'cEXT': all_label_cEXT[i],
                              'cNEU': all_label_cNEU[i],
                              'cAGR': all_label_cAGR[i],
                              'cCON': all_label_cCON[i],
                              'cOPN': all_label_cOPN[i]}
            input_data = pd.DataFrame(input_data_dic, index=[i],
                                      columns=['text_name', 'line_number', 'line_text',
                                               'cEXT', 'cNEU', 'cAGR', 'cCON', 'cOPN'])
            # DataFrame.index would be inserted into the table by default;
            # we don't want it, so we set index=False (default is True)
            input_data.to_sql(TO_SQL, engine, if_exists='append', index=False, chunksize=100)
        print("text ", i, " finished")


if __name__ == '__main__':
    engine = create_engine('mysql+pymysql://root:root@localhost:3306/personality_1?charset=utf8',
                           echo=True, convert_unicode=True)
    df = pd.read_sql_table(READ_SQL_TABLE, engine, chunksize=5)  # read essays in chunks
    for df_iter in df:
        cut_sentences(df_iter)
具體來說就是model.load人家訓練好的weight.hdf5,而後在這個基礎上繼續訓練。具體能夠見以後的博文中的斷點訓練。
- Reduce the learning rate. Already covered above, not repeated here.
- Increase the batch_size appropriately. Already covered above, not repeated here.
- Try a different optimizer. Already covered above, not repeated here.
- Keras's EarlyStopping() callback. Already covered above, not repeated here (see the sketch after this list).
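For EarlyStopping, a minimal sketch (the monitored metric and patience value are illustrative choices, not from the original post):

from keras.callbacks import EarlyStopping

# Stop training once validation loss has not improved for 5 consecutive epochs
early_stopping = EarlyStopping(monitor='val_loss', patience=5)
model.fit(train_data, train_labels,
          validation_data=(test_data, test_labels),
          epochs=100, callbacks=[early_stopping])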
Regularization means adding a regularization term to the objective or cost function when optimizing it; common choices include L1 and L2 regularization.
Code snippet:
from keras import regularizers
……
out = TimeDistributed(Dense(hidden_dim_2,
                            activation="relu",
                            kernel_regularizer=regularizers.l1_l2(0.01, 0.01),
                            activity_regularizer=regularizers.l1_l2(0.01, 0.01)))(concatenate)
……
dense = Dense(200,
              activation="relu",
              kernel_regularizer=regularizers.l1_l2(0.01, 0.01),
              activity_regularizer=regularizers.l1_l2(0.01, 0.01))(dense)
Further reading:
https://blog.csdn.net/mrgiovanni/article/details/52167016