TensorFlow 2 Tutorial 20: Autoencoders

  An autoencoder has two main components: an encoder and a decoder.

  The encoder compresses the input into a small "code" (typically, the dimensionality of the encoder's output is much smaller than that of its input).

  The decoder then expands that code back into an output with the same dimensionality as the encoder's input.

  In other words, an autoencoder learns to "reconstruct" its input while learning a compact representation of the data (the "code").

  1. Load the data

  from tensorflow import keras
  from tensorflow.keras import layers

  (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

  # flatten the 28x28 images into 784-dim vectors and scale pixels to [0, 1]
  x_train = x_train.reshape((-1, 28*28)) / 255.0
  x_test = x_test.reshape((-1, 28*28)) / 255.0

  print(x_train.shape, ' ', y_train.shape)
  print(x_test.shape, ' ', y_test.shape)

  (60000, 784) (60000,)

  (10000, 784) (10000,)

  2. A simple autoencoder

  code_dim = 32

  inputs = layers.Input(shape=(x_train.shape[1],), name='inputs')

  code = layers.Dense(code_dim, activation='relu', name='code')(inputs)

  # sigmoid keeps each reconstructed pixel in [0, 1], matching the binary cross-entropy loss used below
  outputs = layers.Dense(x_train.shape[1], activation='sigmoid', name='outputs')(code)

  auto_encoder = keras.Model(inputs, outputs)

  auto_encoder.summary()

  Model: "model"

  _________________________________________________________________

  Layer (type) Output Shape Param #

  =================================================================

  inputs (InputLayer) [(None, 784)] 0

  _________________________________________________________________

  code (Dense) (None, 32) 25120

  _________________________________________________________________

  outputs (Dense) (None, 784) 25872

  =================================================================

  Total params: 50,992

  Trainable params: 50,992

  Non-trainable params: 0

  _________________________________________________________________

  keras.utils.plot_model(auto_encoder, show_shapes=True)

  

[png: auto_encoder architecture diagram from plot_model]

 

  encoder = keras.Model(inputs,code)

  keras.utils.plot_model(encoder, show_shapes=True)

  

[png: encoder architecture diagram from plot_model]

 

  decoder_input = keras.Input((code_dim,))

  decoder_output = auto_encoder.layers[-1](decoder_input)

  decoder = keras.Model(decoder_input, decoder_output)

  keras.utils.plot_model(decoder, show_shapes=True)

  

[png: decoder architecture diagram from plot_model]
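
  Since encoder and decoder are built from the same layers as auto_encoder and share its weights, composing them should reproduce the full model's output. A minimal sanity-check sketch (the sample size of 10 is an arbitrary choice):

  import numpy as np

  # encode then decode a few test images and compare against the end-to-end model;
  # this works even before training, since the weights are shared either way
  sample = x_test[:10]
  full = auto_encoder.predict(sample)
  composed = decoder.predict(encoder.predict(sample))
  print(np.allclose(full, composed))  # expected: True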


  auto_encoder.compile(optimizer='adam',
                       loss='binary_crossentropy')

  Train the model

  %%time

  history = auto_encoder.fit(x_train, x_train, batch_size=64, epochs=100, validation_split=0.1)

  Train on 54000 samples, validate on 6000 samples

  Epoch 1/100

  Epoch 100/100

  54000/54000 [==============================] - 2s 45us/sample - loss: 0.6715 - val_loss: 0.6688

  CPU times: user 6min 53s, sys: 23.2 s, total: 7min 16s

  Wall time: 4min 24s

  encoded = encoder.predict(x_test)

  decoded = decoder.predict(encoded)

  import matplotlib.pyplot as plt

  plt.figure(figsize=(10, 4))
  n = 5
  for i in range(n):
      # top row: original test image
      ax = plt.subplot(2, n, i+1)
      plt.imshow(x_test[i].reshape(28, 28))
      plt.gray()
      ax.get_xaxis().set_visible(False)
      ax.get_yaxis().set_visible(False)
      # bottom row: reconstruction from the 32-dim code
      ax = plt.subplot(2, n, n+i+1)
      plt.imshow(decoded[i].reshape(28, 28))
      plt.gray()
      ax.get_xaxis().set_visible(False)
      ax.get_yaxis().set_visible(False)
  plt.show()
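
  Alongside the visual comparison, a rough quantitative check is to evaluate the mean reconstruction loss on the test set, using the binary cross-entropy the model was compiled with:

  # average reconstruction loss over the test set
  test_loss = auto_encoder.evaluate(x_test, x_test, verbose=0)
  print('Test reconstruction loss:', test_loss)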
