2.3 Convolutional Neural Networks: CNNs in Practice

4.1.3 CNNs in Practice

  • Image classification with a neural network

    TensorFlow's wrapper functions simplify the process of defining a model.

    # (3072, 10)
    w = tf.get_variable('w', [x.get_shape()[-1], 10],
                        initializer=tf.random_normal_initializer(0, 1))
    # (10, )
    b = tf.get_variable('b', [10],
                        initializer=tf.constant_initializer(0.0))
    # [None, 3072] * [3072, 10] = [None, 10]
    y_ = tf.matmul(x, w) + b

    The code above can be replaced with a single API call:

    y_ = tf.layers.dense(x, 10)

    `dense` is a fully connected layer API, and we can use it to build deeper models.

    The following code builds a neural network with three hidden layers: the first two have 100 neurons each, the third has 50, and all use the ReLU activation.

    # activation selects the activation function
    hidden1 = tf.layers.dense(x, 100, activation=tf.nn.relu)
    hidden2 = tf.layers.dense(hidden1, 100, activation=tf.nn.relu)
    hidden3 = tf.layers.dense(hidden2, 50, activation=tf.nn.relu)
    
    y_ = tf.layers.dense(hidden3, 10)
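Under the hood, `tf.layers.dense` is exactly the `matmul` + bias pattern shown earlier. A minimal NumPy sketch (my own `dense` helper, not a TensorFlow API) traces the shapes through the three hidden layers:

```python
import numpy as np

def dense(x, units, activation=None, rng=np.random.default_rng(0)):
    """Minimal NumPy stand-in for tf.layers.dense: activation(x @ w + b)."""
    w = rng.normal(0, 1, size=(x.shape[-1], units))  # weight matrix
    b = np.zeros(units)                              # bias, initialized to 0
    y = x @ w + b
    return np.maximum(y, 0) if activation == 'relu' else y

x = np.ones((4, 3072))                            # batch of 4 flattened 32x32x3 images
hidden1 = dense(x, 100, activation='relu')        # (4, 100)
hidden2 = dense(hidden1, 100, activation='relu')  # (4, 100)
hidden3 = dense(hidden2, 50, activation='relu')   # (4, 50)
y_ = dense(hidden3, 10)                           # (4, 10) logits, no activation
print(y_.shape)  # (4, 10)
```

Each call consumes the previous layer's output, so the last dimension shrinks from 3072 to 10 class scores.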

    After modifying the code and rerunning the test, the model reaches about 50% accuracy:

    [Train] Step: 500, loss: 2.14457, acc: 0.25000
    [Train] Step: 1000, loss: 1.38850, acc: 0.45000
    [Train] Step: 1500, loss: 1.48442, acc: 0.45000
    [Train] Step: 2000, loss: 1.30306, acc: 0.70000
    [Train] Step: 2500, loss: 1.81453, acc: 0.35000
    [Train] Step: 3000, loss: 1.25715, acc: 0.55000
    [Train] Step: 3500, loss: 1.24998, acc: 0.55000
    [Train] Step: 4000, loss: 1.52799, acc: 0.45000
    [Train] Step: 4500, loss: 1.40961, acc: 0.40000
    [Train] Step: 5000, loss: 1.29267, acc: 0.65000
    (10000, 3072)
    (10000,)
    [Test ] Step: 5000, acc: 0.46650
    [Train] Step: 5500, loss: 1.61286, acc: 0.20000
    [Train] Step: 6000, loss: 1.14901, acc: 0.45000
    [Train] Step: 6500, loss: 1.59980, acc: 0.60000
    [Train] Step: 7000, loss: 1.80693, acc: 0.40000
    [Train] Step: 7500, loss: 1.60266, acc: 0.45000
    [Train] Step: 8000, loss: 1.46613, acc: 0.65000
    [Train] Step: 8500, loss: 1.77019, acc: 0.45000
    [Train] Step: 9000, loss: 1.57591, acc: 0.50000
    [Train] Step: 9500, loss: 0.96180, acc: 0.75000
    [Train] Step: 10000, loss: 1.08688, acc: 0.70000
    (10000, 3072)
    (10000,)
    [Test ] Step: 10000, acc: 0.49750
  • Image classification with a convolutional neural network

    Just as fully connected layers have an API, so do convolution and pooling:

    # conv1: feature map (the layer's output image)
    conv1 = tf.layers.conv2d(x_image,
                             32, # output channel number
                             (3, 3), # kernel size
                             padding = 'same', # 'same' keeps the output size unchanged; 'valid' applies no padding
                             activation = tf.nn.relu,
                             name = 'conv1'
                             )
    # 16*16
    pooling1 = tf.layers.max_pooling2d(conv1,
                                       (2, 2), # kernel size
                                       (2, 2), # stride
                                       name = 'pool1' # naming the layer makes the printed graph readable
                                      )

    This builds one convolutional layer and one pooling layer. We can copy the pair twice more, adjusting the inputs, to get three convolutional layers and three pooling layers.
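The `# 16*16` comment above comes from straightforward output-size arithmetic: a 'same' convolution at stride 1 preserves the 32×32 input, and 2×2 max pooling with stride 2 halves it. A quick sketch (my own helper functions, mirroring the `tf.layers` size semantics):

```python
import math

def conv2d_out(size, kernel, stride=1, padding='same'):
    """Spatial output size of a convolution, per tf.layers.conv2d semantics."""
    if padding == 'same':
        return math.ceil(size / stride)               # padded to preserve size / stride
    return math.ceil((size - kernel + 1) / stride)    # 'valid': no padding

def pool_out(size, kernel, stride):
    """Spatial output size of max pooling with 'valid' padding (the default)."""
    return (size - kernel) // stride + 1

s = conv2d_out(32, 3, padding='same')  # 32: 'same' keeps the size at stride 1
s = pool_out(s, 2, 2)                  # 16: 2x2 pooling with stride 2 halves it
print(s)  # 16
```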

    Finally, add a fully connected layer at the end:

    conv2 = tf.layers.conv2d(pooling1,
                             32, # output channel number
                             (3, 3), # kernel size
                             padding = 'same', # 'same' keeps the output size unchanged; 'valid' applies no padding
                             activation = tf.nn.relu,
                             name = 'conv2'
                             )
    # 8*8
    pooling2 = tf.layers.max_pooling2d(conv2,
                                       (2, 2), # kernel size
                                       (2, 2), # stride
                                       name = 'pool2' # naming the layer makes the printed graph readable
                                      )
    
    conv3 = tf.layers.conv2d(pooling2,
                             32, # output channel number
                             (3, 3), # kernel size
                             padding = 'same', # 'same' keeps the output size unchanged; 'valid' applies no padding
                             activation = tf.nn.relu,
                             name = 'conv3'
                             )
    # 4*4*32
    pooling3 = tf.layers.max_pooling2d(conv3,
                                       (2, 2), # kernel size
                                       (2, 2), # stride
                                       name = 'pool3' # naming the layer makes the printed graph readable
                                      )
    
    # [None, 4*4*32]: flatten the 3-D feature map into a matrix
    flatten = tf.layers.flatten(pooling3)
    y_ = tf.layers.dense(flatten, 10)
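The flatten size follows from the same arithmetic: each conv/pool block keeps 32 channels while halving the spatial size, so three blocks take 32×32 down to 4×4 and `flatten` yields 4·4·32 = 512 features per image. A quick check:

```python
# Trace the shape through the three conv/pool blocks above.
size, channels = 32, 3     # input: 32x32 RGB image
for _ in range(3):
    channels = 32          # each 'same' conv keeps the size, outputs 32 channels
    size //= 2             # each 2x2, stride-2 max pool halves the size
flat = size * size * channels
print(size, flat)  # 4 512  -> flatten produces [None, 4*4*32] = [None, 512]
```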

    That completes the CNN structure. As you can see, once data preparation and evaluation are in place, building the network with TensorFlow's APIs is quite straightforward.

    With the convolutional network, after 10,000 training steps the test accuracy reaches about 69%:

    [Train] Step: 500, loss: 1.24817, acc: 0.60000
    [Train] Step: 1000, loss: 1.24423, acc: 0.50000
    [Train] Step: 1500, loss: 1.15608, acc: 0.55000
    [Train] Step: 2000, loss: 0.89077, acc: 0.85000
    [Train] Step: 2500, loss: 0.91770, acc: 0.60000
    [Train] Step: 3000, loss: 1.09620, acc: 0.55000
    [Train] Step: 3500, loss: 0.83352, acc: 0.70000
    [Train] Step: 4000, loss: 1.00452, acc: 0.60000
    [Train] Step: 4500, loss: 1.13865, acc: 0.60000
    [Train] Step: 5000, loss: 0.63163, acc: 0.85000
    (10000, 3072)
    (10000,)
    [Test ] Step: 5000, acc: 0.64850
    [Train] Step: 5500, loss: 1.29329, acc: 0.55000
    [Train] Step: 6000, loss: 1.14539, acc: 0.65000
    [Train] Step: 6500, loss: 0.48069, acc: 0.80000
    [Train] Step: 7000, loss: 1.02633, acc: 0.65000
    [Train] Step: 7500, loss: 0.93267, acc: 0.70000
    [Train] Step: 8000, loss: 0.97426, acc: 0.70000
    [Train] Step: 8500, loss: 0.97432, acc: 0.75000
    [Train] Step: 9000, loss: 0.84112, acc: 0.70000
    [Train] Step: 9500, loss: 0.79695, acc: 0.70000
    [Train] Step: 10000, loss: 0.64198, acc: 0.80000
    (10000, 3072)
    (10000,)
    [Test ] Step: 10000, acc: 0.69950