keras

Keras is a deep learning framework that provides a concise, friendly interface; it can run on top of the TensorFlow, Theano, or CNTK backends.

1. Sequential model

The Sequential model is for linear, single-path networks, where the data flows straight through from input to output.

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])

Layers can also be added one by one with add():

model = Sequential()
model.add(Dense(32, input_shape=(784,)))
model.add(Activation('relu'))
  • The first layer must be given the input shape; subsequent layers infer their shapes automatically. The batch size is not included: for data shaped B x H x W x C, input_shape=(H, W, C) is enough.

  • Use compile() to configure training. It takes an optimizer, a loss, and evaluation metrics; optimizer and loss accept either a string or an object, while the metrics must be passed as a list, e.g. metrics=['accuracy'].

  • Finally, call fit() to train the model. The simplest form is model.fit(data, labels), where the data are numpy.ndarrays; other arguments can also be given, e.g. epochs=100, batch_size=64, validation_split=0.1, shuffle=True, verbose=1, initial_epoch=0. Note that when shuffle and validation_split are used together, the split is performed first and shuffling happens afterwards on the train and validation parts separately, so take care that the validation slice does not end up containing only a single class (see the sketch below).
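
As a minimal end-to-end sketch of compile() and fit() (the toy data below is hypothetical):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation

# hypothetical toy data: 1000 samples with 784 features, 10 one-hot classes
x_train = np.random.random((1000, 784))
y_train = np.eye(10)[np.random.randint(0, 10, size=1000)]

model = Sequential()
model.add(Dense(32, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dense(10, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          epochs=10, batch_size=64,
          validation_split=0.1, shuffle=True, verbose=1)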

2. Functional model

Complex architectures such as multi-output models, directed acyclic graphs, or models with shared layers cannot be expressed with the Sequential model; for these you need the functional model. The functional API can of course also express everything a Sequential model can.

Use Input to declare the input shape, apply layers one after another to obtain the final output, and then pass the network's inputs and outputs to Model().

from keras.layers import Input, Dense
from keras.models import Model

# This returns a tensor
inputs = Input(shape=(784,))

# a layer instance is callable on a tensor, and returns a tensor
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

# This creates a model that includes
# the Input layer and three Dense layers
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels)  # starts training

Multi-input and multi-output models:

import keras
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

# Headline input: meant to receive sequences of 100 integers, between 1 and 10000.
# Note that we can name any layer by passing it a "name" argument.
main_input = Input(shape=(100,), dtype='int32', name='main_input')

# This embedding layer will encode the input sequence
# into a sequence of dense 512-dimensional vectors.
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)

# A LSTM will transform the vector sequence into a single vector,
# containing information about the entire sequence
lstm_out = LSTM(32)(x)

auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)

auxiliary_input = Input(shape=(5,), name='aux_input')
x = keras.layers.concatenate([lstm_out, auxiliary_input])

# We stack a deep densely-connected network on top
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)

# And finally we add the main logistic regression layer
main_output = Dense(1, activation='sigmoid', name='main_output')(x)

# 兩個輸入,兩個輸出
model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              loss_weights=[1., 0.2])
model.fit([headline_data, additional_data], [labels, labels],
          epochs=50, batch_size=32)

Because the input and output layers were given names above, the corresponding arguments can also be passed as dictionaries:

# At compile time, configure the loss for each output and the relative weight of each loss
model.compile(optimizer='rmsprop',
              loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
              loss_weights={'main_output': 1., 'aux_output': 0.2})

# At fit time, pass the input data and the output labels (by name) so the model can be trained
model.fit({'main_input': headline_data, 'aux_input': additional_data},
          {'main_output': labels, 'aux_output': labels},
          epochs=50, batch_size=32)

Shared layers:

Task: decide whether two Weibo posts were written by the same person. Two inputs are fed in at once, and we want one shared layer to process both:

from keras.layers import Input, Conv2D

a = Input(shape=(32, 32, 3))
b = Input(shape=(64, 64, 3))

conv = Conv2D(16, (3, 3), padding='same')
conved_a = conv(a)

# Only one input so far, the following will work:
assert conv.input_shape == (None, 32, 32, 3)

conved_b = conv(b)
# now the `.input_shape` property wouldn't work, but this does:

When the same conv layer is used to convolve both inputs, use the following two calls to refer to its different nodes:

assert conv.get_input_shape_at(0) == (None, 32, 32, 3)
assert conv.get_input_shape_at(1) == (None, 64, 64, 3)
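
Output shapes can be queried per node in the same way; with the 16-filter, padding='same' convolution above, the shapes are:

assert conv.get_output_shape_at(0) == (None, 32, 32, 16)
assert conv.get_output_shape_at(1) == (None, 64, 64, 16)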

Inception module:

It merges several parallel convolution branches.

import keras
from keras.layers import Conv2D, MaxPooling2D, Input

input_img = Input(shape=(256, 256, 3))

tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)

tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)

tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input_img)
tower_3 = Conv2D(64, (1, 1), padding='same', activation='relu')(tower_3)

output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=3)  # channel axis for channels_last data

Residual connection:

import keras
from keras.layers import Conv2D, Input

# input tensor for a 3-channel 256x256 image
x = Input(shape=(256, 256, 3))
# 3x3 conv with 3 output channels (same as input channels)
y = Conv2D(3, (3, 3), padding='same')(x)
# this returns x + y.
z = keras.layers.add([x, y])

Shared vision model:

import keras
from keras.layers import Conv2D, MaxPooling2D, Input, Dense, Flatten
from keras.models import Model

# First, define the vision modules
digit_input = Input(shape=(27, 27, 1))
x = Conv2D(64, (3, 3))(digit_input)
x = Conv2D(64, (3, 3))(x)
x = MaxPooling2D((2, 2))(x)
out = Flatten()(x)

vision_model = Model(digit_input, out)

# Then define the tell-digits-apart model
digit_a = Input(shape=(27, 27, 1))
digit_b = Input(shape=(27, 27, 1))

# The vision model will be shared, weights and all
out_a = vision_model(digit_a)
out_b = vision_model(digit_b)

concatenated = keras.layers.concatenate([out_a, out_b])
out = Dense(1, activation='sigmoid')(concatenated)

classification_model = Model([digit_a, digit_b], out)

Visual question answering model:

import keras
from keras.layers import Conv2D, MaxPooling2D, Flatten
from keras.layers import Input, LSTM, Embedding, Dense
from keras.models import Model, Sequential

# First, let's define a vision model using a Sequential model.
# This model will encode an image into a vector.
vision_model = Sequential()
vision_model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)))
vision_model.add(Conv2D(64, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(128, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(256, (3, 3), activation='relu'))
vision_model.add(Conv2D(256, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Flatten())

# Now let's get a tensor with the output of our vision model:
image_input = Input(shape=(224, 224, 3))
encoded_image = vision_model(image_input)

# Next, let's define a language model to encode the question into a vector.
# Each question will be at most 100 words long,
# and we will index words as integers from 1 to 9999.
question_input = Input(shape=(100,), dtype='int32')
embedded_question = Embedding(input_dim=10000, output_dim=256, input_length=100)(question_input)
encoded_question = LSTM(256)(embedded_question)

# Let's concatenate the question vector and the image vector:
merged = keras.layers.concatenate([encoded_question, encoded_image])

# And let's train a logistic regression over 1000 words on top:
output = Dense(1000, activation='softmax')(merged)

# This is our final model:
vqa_model = Model(inputs=[image_input, question_input], outputs=output)

# The next stage would be training this model on actual data.

Video question answering model:

By adding an LSTM on top of the previous vision model, the same approach extends to video.

from keras.layers import TimeDistributed

video_input = Input(shape=(100, 224, 224, 3))
# This is our video encoded via the previously trained vision_model (weights are reused)
encoded_frame_sequence = TimeDistributed(vision_model)(video_input)  # the output will be a sequence of vectors
encoded_video = LSTM(256)(encoded_frame_sequence)  # the output will be a vector

# This is a model-level representation of the question encoder, reusing the same weights as before:
question_encoder = Model(inputs=question_input, outputs=encoded_question)

# Let's use it to encode the question:
video_question_input = Input(shape=(100,), dtype='int32')
encoded_video_question = question_encoder(video_question_input)

# And this is our video question answering model:
merged = keras.layers.concatenate([encoded_video, encoded_video_question])
output = Dense(1000, activation='softmax')(merged)
video_qa_model = Model(inputs=[video_input, video_question_input], outputs=output)

API:

model.compile() provides loss_weights for weighting multiple losses, e.g. loss_weights=[0.8, 0.2].

compile(self, optimizer, loss, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None)

In model.fit(), class_weight gives different classes different weights in the loss; steps_per_epoch is the number of iterations in one epoch, and when left as None it defaults to (nb_samples + batch_size - 1) // batch_size; validation_data supplies a validation set and takes precedence over validation_split. An example follows the signature below.

fit(self, x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
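
For instance, class_weight and validation_data can be combined like this (x_train, y_train, x_val, y_val are hypothetical arrays and the class weights are illustrative):

model.fit(x_train, y_train,
          epochs=20, batch_size=32,
          class_weight={0: 1.0, 1: 5.0},   # up-weight the rarer class 1
          validation_data=(x_val, y_val))  # takes precedence over validation_split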

fit_generator() trains on batches produced one at a time by a Python generator. The generator runs in parallel with the model for efficiency; for example, it allows real-time data preprocessing on the CPU while the model trains on the GPU. Key arguments:

  • steps_per_epoch: number of iterations in one epoch
  • workers: number of processes used to generate data
  • max_q_size: maximum size of the generator queue

fit_generator(self, generator, steps_per_epoch, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, class_weight=None, max_q_size=10, workers=1, pickle_safe=False, initial_epoch=0)

Example:

def generate_arrays_from_file(path):
    while 1:
        f = open(path)
        for line in f:
            # create numpy arrays of input data
            # and labels, from each line in the file
            x1, x2, y = process_line(line)
            yield ({'input_1': x1, 'input_2': x2}, {'output': y})
        f.close()

model.fit_generator(generate_arrays_from_file('/my_file.txt'),
        steps_per_epoch=10000, epochs=10)

predict_generator takes much the same arguments as fit_generator; you pass in a generator and steps. A usage sketch follows the signature.

predict_generator(self, generator, steps, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)
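
A small sketch, assuming a hypothetical trained model and a test array x_test, with a batch size of 32:

def predict_batches(x, batch_size=32):
    # yield input batches only; predict_generator expects no labels
    for i in range(0, len(x), batch_size):
        yield x[i:i + batch_size]

steps = (len(x_test) + 32 - 1) // 32   # same formula as the steps_per_epoch default
predictions = model.predict_generator(predict_batches(x_test), steps=steps)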

Lambda layer:
Custom layers can be defined with keras.layers.core.Lambda.

from keras import backend as K
from keras.layers.core import Lambda
from keras.models import Model

def lambda_ctc_func(x):
    y_pred, labels, pred_len, label_len = x
    y_pred = y_pred[:, :, 0, :]  # drop the singleton dimension before computing CTC
    return K.ctc_batch_cost(labels, y_pred, pred_len, label_len)

# build_model() and the Input tensors labels, pred_len, label_len are defined elsewhere
input_tensor, y_pred = build_model(args.img_size[0], args.num_channels)
loss_out = Lambda(lambda_ctc_func, name='ctc_loss')([y_pred, labels, pred_len, label_len])

model = Model(inputs=[input_tensor, labels, pred_len, label_len], outputs=loss_out)
model.compile(optimizer='adam',  # compile() requires an optimizer; any choice works here
              loss={'ctc_loss': lambda labels, loss_out: loss_out})

The loss function given to compile() normally receives two arguments, the ground-truth label and the network output, and computes the loss from them. Here, however, the loss has already been computed inside the Lambda layer, so the network output loss_out is itself the final loss value, and the lambda expression simply returns it as the loss.

When this network is later used for inference, the Lambda loss layer has no weights to load, so the new model only needs to expose the earlier output y_pred:

model = Model(inputs=input_tensor, outputs=y_pred)
model.load_weights('model.h5')

BatchNormalization layer:

This layer re-normalizes the activations of the previous layer on each batch, so that its output has a mean close to 0 and a standard deviation close to 1.

keras.layers.normalization.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None)
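
A small sketch of a common placement, between a Dense layer and its activation (the layer sizes are arbitrary):

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers.normalization import BatchNormalization

model = Sequential()
model.add(Dense(64, input_shape=(784,)))
model.add(BatchNormalization())             # re-normalize the pre-activations per batch
model.add(Activation('relu'))
model.add(Dense(10, activation='softmax'))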

Effects of the BN layer:

  1. Speeds up convergence
  2. Helps control overfitting, so Dropout and regularization can be reduced or omitted
  3. Makes the network less sensitive to weight initialization
  4. Allows larger learning rates

ImageDataGenerator:

keras.preprocessing.image.ImageDataGenerator(featurewise_center=False,
    samplewise_center=False,
    featurewise_std_normalization=False,
    samplewise_std_normalization=False,
    zca_whitening=False,
    zca_epsilon=1e-6,
    rotation_range=0.,
    width_shift_range=0.,
    height_shift_range=0.,
    shear_range=0.,
    zoom_range=0.,
    channel_shift_range=0.,
    fill_mode='nearest',
    cval=0.,
    horizontal_flip=False,
    vertical_flip=False,
    rescale=None,
    preprocessing_function=None,
    data_format=K.image_data_format())

rotation_range: range of random rotation angles; zoom_range: range of random zoom. A usage sketch follows.
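
A hedged usage sketch (x_train and y_train are hypothetical image arrays and labels; the augmentation parameters are illustrative):

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255,
                             rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.2,
                             horizontal_flip=True)

# flow() yields augmented batches indefinitely, so fit_generator needs steps_per_epoch
model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32,
                    epochs=10)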
