Andrew Ng Deep Learning Notes — Course 4, Week 2, Assignment 1

This week introduces a new framework, Keras. It is a relatively high-level framework, and compared with lower-level frameworks it imposes more restrictions.

Things to keep in mind when using Keras:

1. Keras uses a different variable-naming convention from the numpy and TensorFlow code we wrote before. Instead of creating a new variable at each step of forward propagation (e.g. X, Z1, A1, Z2, A2, ...) to pass values between layers, in Keras we keep overwriting a single variable X and do not store per-layer results; only the latest value is needed. The one exception is X_input, which we keep separate because it is the input data and we need it at the final model-creation step.
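To make this concrete, here is a minimal sketch of the pattern (a toy fully-connected network for illustration only, not the assignment's model):

# A toy two-layer network illustrating the convention: X is reassigned at
# every step, and only X_input is kept separate for the final Model() call.
from keras.layers import Input, Dense, Activation
from keras.models import Model

X_input = Input((64,))                    # the input placeholder, kept separate
X = Dense(32)(X_input)                    # X is overwritten at each step...
X = Activation('relu')(X)                 # ...instead of introducing Z1, A1, Z2, A2, ...
X = Dense(1, activation='sigmoid')(X)
model = Model(inputs=X_input, outputs=X)  # X_input is needed again here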

 

Keras tutorial - the Happy House

Welcome to the first assignment of week 2. In this assignment, you will:

  1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
  2. See how you can, in a couple of hours, build a deep learning algorithm.

Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models.

In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!

In [8]:
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline

Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...).

 

1 - The Happy House

For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.

Figure 1: the Happy House

As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy.

You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled.

Run the following code to normalize the dataset and learn about its shapes.

In [9]:
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)
 

Details of the "Happy" dataset:

  • Images are of shape (64,64,3)
  • Training: 600 pictures
  • Test: 150 pictures

It is now time to solve the "Happy" Challenge.

 

2 - Building a model in Keras

Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.

Here is an example of a model in Keras:

def model(input_shape):
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)
    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)
    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)
    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)
    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')
    return model

Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as X, Z1, A1, Z2, A2, etc. for the computations for the different layers, in Keras code each line above just reassigns X to a new value using X = .... In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable X. The only exception was X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above).

Exercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take the initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), and Dropout().

Note: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying them to.
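As a concrete check, here is how the suggested architecture transforms a (64, 64, 3) input; this is just the standard output-size formula (n + 2p - f)/s + 1 applied layer by layer, nothing specific to this assignment:

# Shape bookkeeping for the suggested architecture on a (64, 64, 3) input:
# ZeroPadding2D((3, 3)):        64 + 2*3 = 70         -> (70, 70, 3)
# Conv2D(32, (7, 7), stride 1): (70 - 7)/1 + 1 = 64   -> (64, 64, 32)
# BatchNormalization, ReLU:     shapes unchanged      -> (64, 64, 32)
# MaxPooling2D((2, 2)):         64 / 2 = 32           -> (32, 32, 32)
# Flatten():                    32 * 32 * 32 = 32768  -> (32768,)
# Dense(1):                                           -> (1,)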

In [10]:
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
    """
    Implementation of the HappyModel.

    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    """

    ### START CODE HERE ###
    # Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and try out other
    # network architectures as well.

    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)
    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)
    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)
    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)
    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')
    ### END CODE HERE ###

    return model

You have now built a function to describe your model. To train and test this model, there are four steps in Keras:

  1. Create the model by calling the function above
  2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
  3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
  4. Test the model on test data by calling model.evaluate(x = ..., y = ...)

If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.

Exercise: Implement step 1, i.e. create the model.

In [15]:
### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train.shape[1:])
### END CODE HERE ###

Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem.

In [16]:
### START CODE HERE ### (1 line)
happyModel.compile(optimizer="Adam", loss="binary_crossentropy", metrics=["accuracy"])
### END CODE HERE ###

Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.

In [19]:
### START CODE HERE ### (1 line)
happyModel.fit(x=X_train, y=Y_train, epochs=30, batch_size=64)
### END CODE HERE ###
Epoch 1/30
600/600 [==============================] - 11s - loss: 0.0487 - acc: 0.9867    
Epoch 2/30
600/600 [==============================] - 11s - loss: 0.0492 - acc: 0.9867    
Epoch 3/30
600/600 [==============================] - 11s - loss: 0.0406 - acc: 0.9917    
Epoch 4/30
600/600 [==============================] - 11s - loss: 0.0602 - acc: 0.9817    
Epoch 5/30
600/600 [==============================] - 11s - loss: 0.0604 - acc: 0.9800    
Epoch 6/30
600/600 [==============================] - 11s - loss: 0.0567 - acc: 0.9800    
Epoch 7/30
600/600 [==============================] - 11s - loss: 0.0381 - acc: 0.9867    
Epoch 8/30
600/600 [==============================] - 11s - loss: 0.0363 - acc: 0.9917    
Epoch 9/30
600/600 [==============================] - 11s - loss: 0.0314 - acc: 0.9917    
Epoch 10/30
600/600 [==============================] - 11s - loss: 0.0401 - acc: 0.9883    
Epoch 11/30
600/600 [==============================] - 11s - loss: 0.0263 - acc: 0.9933    
Epoch 12/30
600/600 [==============================] - 11s - loss: 0.0340 - acc: 0.9933    
Epoch 13/30
600/600 [==============================] - 11s - loss: 0.0201 - acc: 0.9967    
Epoch 14/30
600/600 [==============================] - 11s - loss: 0.0229 - acc: 0.9950    
Epoch 15/30
600/600 [==============================] - 11s - loss: 0.0359 - acc: 0.9917    
Epoch 16/30
600/600 [==============================] - 11s - loss: 0.0252 - acc: 0.9900    
Epoch 17/30
600/600 [==============================] - 11s - loss: 0.0213 - acc: 0.9967    
Epoch 18/30
600/600 [==============================] - 11s - loss: 0.0229 - acc: 0.9933    
Epoch 19/30
600/600 [==============================] - 11s - loss: 0.0171 - acc: 0.9950    
Epoch 20/30
600/600 [==============================] - 11s - loss: 0.0478 - acc: 0.9800    
Epoch 21/30
600/600 [==============================] - 11s - loss: 0.0262 - acc: 0.9917    
Epoch 22/30
600/600 [==============================] - 11s - loss: 0.0158 - acc: 0.9950    
Epoch 23/30
600/600 [==============================] - 11s - loss: 0.0187 - acc: 0.9967    
Epoch 24/30
600/600 [==============================] - 11s - loss: 0.0142 - acc: 0.9967    
Epoch 25/30
600/600 [==============================] - 11s - loss: 0.0146 - acc: 0.9983    
Epoch 26/30
600/600 [==============================] - 11s - loss: 0.0135 - acc: 0.9950    
Epoch 27/30
600/600 [==============================] - 11s - loss: 0.0126 - acc: 0.9983    
Epoch 28/30
600/600 [==============================] - 11s - loss: 0.0121 - acc: 0.9967    
Epoch 29/30
600/600 [==============================] - 11s - loss: 0.0113 - acc: 0.9983    
Epoch 30/30
600/600 [==============================] - 11s - loss: 0.0066 - acc: 0.9983    
Out[19]:
<keras.callbacks.History at 0x7fda9c9c2e48>
 

Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
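If you do want to start over from scratch, a simple approach (a sketch, not required by the assignment) is to re-create and re-compile the model, which reinitializes its weights:

# Sketch: re-creating and re-compiling the model reinitializes its weights,
# so the next fit() starts from scratch instead of continuing.
happyModel = HappyModel(X_train.shape[1:])
happyModel.compile(optimizer="Adam", loss="binary_crossentropy", metrics=["accuracy"])
happyModel.fit(x=X_train, y=Y_train, epochs=30, batch_size=64)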

Exercise: Implement step 4, i.e. test/evaluate the model.

In [22]:
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x=X_test, y=Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
150/150 [==============================] - 1s     

Loss = 0.108516698082
Test Accuracy = 0.966666664282
 

If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.

To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.
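For instance, a quick screening loop might look like the following sketch (the candidates dict is hypothetical; add your own builder functions as you design them):

# Hypothetical quick screen: train each candidate briefly, then compare test metrics.
candidates = {'happy_v1': HappyModel}  # add more builder functions as you design them
for name, build in candidates.items():
    m = build(X_train.shape[1:])
    m.compile(optimizer="Adam", loss="binary_crossentropy", metrics=["accuracy"])
    m.fit(x=X_train, y=Y_train, epochs=3, batch_size=16, verbose=0)
    loss, acc = m.evaluate(x=X_test, y=Y_test, verbose=0)
    print(name + ": loss = " + str(loss) + ", acc = " + str(acc))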

If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it:

  • Try using blocks of CONV->BATCHNORM->RELU such as:
    X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer. (A combined sketch follows this list.)
  • You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
  • Change your optimizer. We find Adam works well.
  • If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)
  • Run on more epochs, until you see the train accuracy plateauing.
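Putting several of these tips together, one possible deeper variant looks like the following sketch (an illustrative architecture of our own; the filter counts and layer names are arbitrary, and it is not the graded solution):

def DeeperHappyModel(input_shape):
    # Stack CONV -> BN -> RELU blocks, each followed by MAXPOOL, until the
    # spatial dimensions are small and the channel count is large.
    X_input = Input(input_shape)
    X = X_input
    for i, filters in enumerate([8, 16, 32]):
        X = Conv2D(filters, (3, 3), strides=(1, 1), padding='same', name='conv' + str(i))(X)
        X = BatchNormalization(axis=3, name='bn' + str(i))(X)
        X = Activation('relu')(X)
        X = MaxPooling2D((2, 2), name='max_pool' + str(i))(X)  # halves height and width
    # After three poolings, a (64, 64, 3) input becomes (8, 8, 32)
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)
    return Model(inputs=X_input, outputs=X, name='DeeperHappyModel')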

Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results.

Note: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.

 

3 - Conclusion

Congratulations, you have solved the Happy House challenge!

Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here.

 

What we would like you to remember from this assignment:

  • Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras?
  • Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.

 

 

4 - Test with your own image (Optional)

Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:

1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!

The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!

In [23]:
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
[[ 0.]]
 
 

5 - Other useful functions in Keras (Optional)

Two other basic features of Keras that you'll find useful are:

  • model.summary(): prints the details of your layers in a table with the sizes of its inputs/outputs
  • plot_model(): plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.

Run the following code.

In [25]:
happyModel.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_5 (InputLayer)         (None, 64, 64, 3)         0         
_________________________________________________________________
zero_padding2d_4 (ZeroPaddin (None, 70, 70, 3)         0         
_________________________________________________________________
conv0 (Conv2D)               (None, 64, 64, 32)        4736      
_________________________________________________________________
bn0 (BatchNormalization)     (None, 64, 64, 32)        128       
_________________________________________________________________
activation_1 (Activation)    (None, 64, 64, 32)        0         
_________________________________________________________________
max_pool (MaxPooling2D)      (None, 32, 32, 32)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 32768)             0         
_________________________________________________________________
fc (Dense)                   (None, 1)                 32769     
=================================================================
Total params: 37,633
Trainable params: 37,569
Non-trainable params: 64
_________________________________________________________________
In [26]:
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
Out[26]:
[Model graph: input_5 (InputLayer) → zero_padding2d_4 (ZeroPadding2D) → conv0 (Conv2D) → bn0 (BatchNormalization) → activation_1 (Activation) → max_pool (MaxPooling2D) → flatten_1 (Flatten) → fc (Dense)]
-------------------------------------------------------------------------------------- Chinese version (translated) --------------------------------------------------------------------------------------
Translated from: https://blog.csdn.net/u013733326/article/details/80250818
 

Data downloads

  • Download 1: The data used in this article has been uploaded to Baidu Netdisk [click to download (15.97MB)]. Please download it before you start, or copy the data code at the bottom of this article.

  • Download 2 (paid): For the residual-network part the author put a lot of effort into training the ResNet weights; this part is a paid download (using C-coins): https://download.csdn.net/download/u013733326/10403196


[The author's Python version: 3.6.2]


1 - Getting started with Keras - smile recognition

In this assignment we will:
1. Learn a high-level neural-network framework that can run on top of several lower-level frameworks, including TensorFlow and CNTK.
2. See how to build a deep learning algorithm in a couple of hours.

  Why use Keras? Keras was developed so that deep learning engineers can build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. The key point is that Keras lets you go from idea to result with the least possible delay. However, Keras is more restrictive than the lower-level frameworks, so some very complex models can be implemented in TensorFlow but only with greater difficulty in Keras. That said, Keras works fine for many common models.

import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
import kt_utils
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
Note: As you can see, we have imported a lot of functions from Keras; you can use them just by calling them directly, e.g. X = Input(...) or X = ZeroPadding2D(...).

1.1 - Task description

  For your next vacation you decide to spend a week with five of your friends. It is a very nice house with lots to do nearby, but the most important benefit is that everyone in the house feels happy, so anyone who wants to enter must prove their current state of happiness.

  As a deep learning expert, to make sure the "happy before entering" rule is strictly enforced, you will build an algorithm that uses pictures from the front-door camera to check whether a person is happy; the door opens only when the person is happy.

[Image: happy-house]

Figure 1: the Happy House

 

You have gathered pictures of your friends and yourself taken by the front-door camera. The dataset is already labeled.

[Image: house-members.png]

Let's first load the dataset:

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = kt_utils.load_dataset()

# Normalize image vectors
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.

# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T

print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))

Output:

number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)

Details of the dataset:

  • Image shape: (64, 64, 3)
  • Training set size: 600
  • Test set size: 150

1.2 - Building a model with Keras

Keras is very good for rapid prototyping; it can produce an excellent model in a very short time. For example:

def model(input_shape):
    """
    Model outline
    """
    # Define a placeholder tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-padding: pad the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)

    # Apply a CONV -> BN -> RELU block to X
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)

    # Max-pooling layer
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # Flatten the volume into a vector + fully-connected layer
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create the model. This creates a model instance that we can use to train and test.
    model = Model(inputs=X_input, outputs=X, name='HappyModel')

    return model

  Note: Keras uses a different variable-naming convention from the numpy and TensorFlow code we used before. Instead of creating a new variable at each step of forward propagation (e.g. X, Z1, A1, Z2, A2, ...) to pass values between layers, in Keras we overwrite a single X and do not keep per-layer results; only the latest value is needed. The one exception is X_input, which is kept separate because it is the input data and is needed at the final model-creation step.

def HappyModel(input_shape):
    """
    Implementation of a smile-detection model

    Arguments:
    input_shape -- shape of the input data

    Returns:
    model -- the created Keras Model() instance
    """
    # You can refer to the outline above
    X_input = Input(input_shape)

    # Zero-padding: pad the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)

    # Apply a CONV -> BN -> RELU block to X
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)

    # Max-pooling layer
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # Flatten the volume into a vector + fully-connected layer
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create the model instance that we can use to train and test
    model = Model(inputs=X_input, outputs=X, name='HappyModel')

    return model

Now that the model is designed, to train and test it we need to:

  1. Create a model instance.
  2. Compile the model: model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
  3. Train the model: model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
  4. Evaluate the model: model.evaluate(x = ..., y = ...)

For more information about model.compile(), model.fit(), and model.evaluate(), refer to the official Keras documentation.

# Create a model instance
happy_model = HappyModel(X_train.shape[1:])

# Compile the model
happy_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=['accuracy'])

# Train the model
# Note: this step takes roughly 6-10 minutes.
happy_model.fit(X_train, Y_train, epochs=40, batch_size=50)

# Evaluate the model
preds = happy_model.evaluate(X_test, Y_test, batch_size=32, verbose=1, sample_weight=None)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))

Output:

Epoch 1/40
600/600 [==============================] - 12s 19ms/step - loss: 2.2593 - acc: 0.5667
Epoch 2/40
600/600 [==============================] - 9s 16ms/step - loss: 0.5355 - acc: 0.7917
Epoch 3/40
600/600 [==============================] - 10s 17ms/step - loss: 0.3252 - acc: 0.8650
Epoch 4/40
600/600 [==============================] - 10s 17ms/step - loss: 0.2038 - acc: 0.9250
Epoch 5/40
600/600 [==============================] - 10s 16ms/step - loss: 0.1664 - acc: 0.9333
...
Epoch 38/40
600/600 [==============================] - 10s 17ms/step - loss: 0.0173 - acc: 0.9950
Epoch 39/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0365 - acc: 0.9883
Epoch 40/40
600/600 [==============================] - 12s 19ms/step - loss: 0.0291 - acc: 0.9900
150/150 [==============================] - 3s 21ms/step
Loss = 0.407454126676
Test Accuracy = 0.840000001589

An accuracy above 75% counts as acceptable here. If your accuracy is not above 75%, you can try changing the model:

X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)

  • You can use a max-pooling layer after each such block; it will reduce the height and width dimensions.
  • Change the optimizer (Adam is used here and works well).
  • If the model struggles to run and you hit out-of-memory issues, lower the batch_size (12 is usually a good compromise).
  • Run more epochs, until you see good results.

Even if you have already reached 75% accuracy, feel free to keep tuning your model for even better results.

1.3 - Summary

This task is done; you can try it out at your own house!

1.4 - Test with your own image

A model trained on this data may or may not work on your own pictures, but feel free to give it a try:

# A random picture found online (will be removed upon request)
img_path = 'images/smile.jpeg'

img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happy_model.predict(x))

Output:

[[ 1.]]

img_path = 'images/my_image.jpg'

img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happy_model.predict(x))

Output:

[[ 0.]]

1.5 - Other useful functions

  • model.summary(): prints the size details of each of your layers
  • plot_model(): plots the model layout

happy_model.summary()

Output:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         (None, 64, 64, 3)         0         
_________________________________________________________________
zero_padding2d_2 (ZeroPaddin (None, 70, 70, 3)         0         
_________________________________________________________________
conv0 (Conv2D)               (None, 64, 64, 32)        4736      
_________________________________________________________________
bn0 (BatchNormalization)     (None, 64, 64, 32)        128       
_________________________________________________________________
activation_2 (Activation)    (None, 64, 64, 32)        0         
_________________________________________________________________
max_pool (MaxPooling2D)      (None, 32, 32, 32)        0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 32768)             0         
_________________________________________________________________
fc (Dense)                   (None, 1)                 32769     
=================================================================
Total params: 37,633
Trainable params: 37,569
Non-trainable params: 64
_________________________________________________________________

Let's plot the model graph:

Pitfalls:
1. Download and install the Windows version of Graphviz and add it to your PATH environment variable; the author used E:\Anaconda3\Lib\site-packages\Graphviz\bin, but this varies per machine.
2. Install pydot-ng and graphviz: pip install pydot-ng and pip install graphviz (or pip install pydot plus pip install graphviz).
3. Restart Jupyter Notebook.
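Before plotting, one quick way to verify that Graphviz is actually reachable from Python (a small sanity check of our own, not from the original post):

# Sanity check: plot_model needs the Graphviz 'dot' binary on the PATH.
import shutil
print(shutil.which('dot'))  # prints the path to 'dot', or None if Graphviz is not reachable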

%matplotlib inline
plot_model(happy_model, to_file='happy_model.png')
SVG(model_to_dot(happy_model).create(prog='dot', format='svg'))

[Image: happy_model.png]
