The idea behind VGGNet is to deepen the network and to replace 5*5 convolution kernels with stacked 3*3 kernels: two stacked 3*3 convolutions cover the same receptive field as one 5*5 convolution but use fewer parameters.
We will not use 1*1 convolution kernels here.
We can reuse the data-processing and testing code from the previous convolutional network and only modify the convolutional layers.
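As a quick sanity check of the parameter savings (my own sketch, not part of the original post; the channel counts are arbitrary example values), the weight counts can be compared directly:

# Rough weight count (ignoring biases) for the same receptive field:
# one 5*5 conv vs. two stacked 3*3 convs. Channel counts are example values.
in_ch, out_ch = 32, 32

params_5x5 = 5 * 5 * in_ch * out_ch
params_two_3x3 = 3 * 3 * in_ch * out_ch + 3 * 3 * out_ch * out_ch

print("one 5*5 conv:   %d weights" % params_5x5)       # 25600
print("two 3*3 convs:  %d weights" % params_two_3x3)   # 18432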
# conv1: neuron maps / feature maps, i.e. the output images
conv1_1 = tf.layers.conv2d(x_image,
                           32,       # output channel number
                           (3, 3),   # kernel size
                           padding='same',  # 'same' keeps the output size unchanged, 'valid' means no padding
                           activation=tf.nn.relu,
                           name='conv1_1')
conv1_2 = tf.layers.conv2d(conv1_1,
                           32,       # output channel number
                           (3, 3),   # kernel size
                           padding='same',
                           activation=tf.nn.relu,
                           name='conv1_2')
# 16 * 16 after pooling1
pooling1 = tf.layers.max_pooling2d(conv1_2,
                                   (2, 2),  # kernel size
                                   (2, 2),  # stride
                                   name='pool1')  # naming the layer gives the printed graph meaningful node names
conv2_1 = tf.layers.conv2d(pooling1,
                           32,       # output channel number
                           (3, 3),   # kernel size
                           padding='same',
                           activation=tf.nn.relu,
                           name='conv2_1')
conv2_2 = tf.layers.conv2d(conv2_1,
                           32,       # output channel number
                           (3, 3),   # kernel size
                           padding='same',
                           activation=tf.nn.relu,
                           name='conv2_2')
# 8 * 8 after pooling2
pooling2 = tf.layers.max_pooling2d(conv2_2,
                                   (2, 2),  # kernel size
                                   (2, 2),  # stride
                                   name='pool2')
conv3_1 = tf.layers.conv2d(pooling2,
                           32,       # output channel number
                           (3, 3),   # kernel size
                           padding='same',
                           activation=tf.nn.relu,
                           name='conv3_1')
conv3_2 = tf.layers.conv2d(conv3_1,
                           32,       # output channel number
                           (3, 3),   # kernel size
                           padding='same',
                           activation=tf.nn.relu,
                           name='conv3_2')
# 4 * 4 * 32 after pooling3
pooling3 = tf.layers.max_pooling2d(conv3_2,
                                   (2, 2),  # kernel size
                                   (2, 2),  # stride
                                   name='pool3')
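The fully connected output, the loss, and the training loop are reused unchanged from the previous post and are not repeated here. A minimal sketch of what that reused part looks like (an assumption on my side; the placeholders x and y and the batch-feeding session loop come from the earlier code):

# Sketch of the reused part (assumed, not shown in the original post):
# flatten the 4*4*32 maps, add a fully connected output layer, train with Adam.
flatten = tf.layers.flatten(pooling3)   # [None, 4 * 4 * 32]
y_ = tf.layers.dense(flatten, 10)       # 10 CIFAR-10 classes
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
predict = tf.argmax(y_, 1)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predict, y), tf.float64))
with tf.name_scope('train_op'):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

The session loop that feeds batches via feed_dict is likewise the same as before and is omitted.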
Training for 10,000 steps reaches roughly 70% accuracy (72.7% on the test set in the log below):
[Train] Step: 500, loss: 1.92473, acc: 0.45000
[Train] Step: 1000, loss: 1.49288, acc: 0.35000
[Train] Step: 1500, loss: 1.30839, acc: 0.55000
[Train] Step: 2000, loss: 1.41633, acc: 0.40000
[Train] Step: 2500, loss: 1.10951, acc: 0.60000
[Train] Step: 3000, loss: 1.15743, acc: 0.65000
[Train] Step: 3500, loss: 0.93834, acc: 0.70000
[Train] Step: 4000, loss: 0.76699, acc: 0.80000
[Train] Step: 4500, loss: 0.71109, acc: 0.70000
[Train] Step: 5000, loss: 0.75763, acc: 0.75000
(10000, 3072)
(10000,)
[Test ] Step: 5000, acc: 0.67500
[Train] Step: 5500, loss: 0.98661, acc: 0.65000
[Train] Step: 6000, loss: 1.43098, acc: 0.50000
[Train] Step: 6500, loss: 0.86575, acc: 0.70000
[Train] Step: 7000, loss: 0.80474, acc: 0.65000
[Train] Step: 7500, loss: 0.60132, acc: 0.85000
[Train] Step: 8000, loss: 0.66683, acc: 0.80000
[Train] Step: 8500, loss: 0.56874, acc: 0.85000
[Train] Step: 9000, loss: 0.68185, acc: 0.70000
[Train] Step: 9500, loss: 0.83302, acc: 0.70000
[Train] Step: 10000, loss: 0.87228, acc: 0.70000
(10000, 3072)
(10000,)
[Test ] Step: 10000, acc: 0.72700
Let's first review the ResNet architecture.
ResNet passes the input through a convolution layer, then a pooling layer, and then a series of residual blocks.
After each group of residual blocks there may be a downsampling step.
Downsampling means the max pooling we saw earlier, or a convolution layer with stride 2.
The ResNet above has four downsampling stages, but because the images in our experiment are only 32*32 and already quite small, we will not downsample as many times, nor start with a max pooling layer.
One problem can arise during downsampling: a residual block is the sum of two branches, a convolution branch and an identity branch. If the convolution branch downsamples, the two branches end up with different dimensions and the element-wise addition fails. So an extra step is needed: when the convolutions downsample, the identity branch must be downsampled as well. This is done with a pooling layer (average pooling in the code below), followed by zero-padding along the channel dimension to match the doubled number of channels.
First, define the residual block.
""" x是輸入數據,output_channel 是輸出通道數 爲了不降採樣帶來的數據損失,咱們會在降採樣的時候講output_channel翻倍 因此這裏若是output_channel是input_channel的二倍,則說明須要降採樣 """
def residual_block(x, output_channel):
"""residual connection implementation"""
input_channel = x.get_shape().as_list()[-1]
if input_channel * 2 == output_channel:
increase_dim = True
strides = (2, 2)
elif input_channel == output_channel:
increase_dim = False
strides = (1, 1)
else:
raise Exception("input channel can't match output channel")
conv1 = tf.layers.conv2d(x,
output_channel,
(3,3),
strides = strides,
padding = 'same',
activation = tf.nn.relu,
name = 'conv1')
conv2 = tf.layers.conv2d(conv1,
output_channel,
(3,3),
strides = (1,1),
padding = 'same',
activation = tf.nn.relu,
name = 'conv2')
# 處理另外一個分支(恆等變換)
if increase_dim:
# 須要降採樣
# [None,image_width,image_height,channel] -> [,,,channel*2]
pooled_x = tf.layers.average_pooling2d(x,
(2,2), # pooling 核
(2,2), # strides strides = pooling 不重疊
padding = 'valid' # 這裏圖像大小是32*32,都能除盡,padding是什麼沒有關係
)
# average_pooling2d使得圖的大小變化了,可是output_channel仍是不匹配,下面修改output_channel
padded_x = tf.pad(pooled_x,
[[0,0],
[0,0],
[0,0],
[input_channel // 2,input_channel //2]])
else:
padded_x = x
output_x = conv2 + padded_x
return output_x
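As a quick sanity check (my own illustration, not part of the original post; the names demo_in and demo_block are arbitrary), a downsampling block should halve the spatial size and double the channel count:

# Sanity check (illustrative only): feed a [None, 32, 32, 32] placeholder
# through a downsampling residual block and inspect the static shape.
demo_in = tf.placeholder(tf.float32, [None, 32, 32, 32])
with tf.variable_scope('demo_block'):
    demo_out = residual_block(demo_in, 64)
print(demo_out.get_shape().as_list())  # [None, 16, 16, 64]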
Next, define the residual network.
It starts with one convolution layer, then builds the residual blocks in a loop, then applies a global pooling, and finally a fully connected layer produces the output.
Global pooling works like ordinary pooling except that its kernel size equals the width and height of the feature map, so each feature map is reduced to a single number.
def res_net(x,
            num_residual_blocks,
            num_filter_base,
            class_num):
    """Residual network implementation.

    Args:
    - x: input tensor
    - num_residual_blocks: number of residual blocks per stage, eg: [3, 4, 6, 3]
    - num_filter_base: initial number of channels
    - class_num: number of classes
    """
    # how many downsampling stages are needed
    num_subsampling = len(num_residual_blocks)
    layers = []
    # x: [None, image_width, image_height, channel]
    # input_size: [image_width, image_height, channel]
    input_size = x.get_shape().as_list()[1:]
    with tf.variable_scope('conv0'):
        conv0 = tf.layers.conv2d(x,
                                 num_filter_base,
                                 (3, 3),
                                 strides=(1, 1),
                                 activation=tf.nn.relu,
                                 padding='same',
                                 name='conv0')
        layers.append(conv0)
    # eg: num_subsampling = 4, sample_id in [0, 1, 2, 3]
    for sample_id in range(num_subsampling):
        for i in range(num_residual_blocks[sample_id]):
            with tf.variable_scope("conv%d_%d" % (sample_id, i)):
                conv = residual_block(
                    layers[-1],
                    num_filter_base * (2 ** sample_id))  # channels double at each stage
                layers.append(conv)
    multiplier = 2 ** (num_subsampling - 1)
    assert layers[-1].get_shape().as_list()[1:] \
        == [input_size[0] / multiplier,
            input_size[1] / multiplier,
            num_filter_base * multiplier]
    with tf.variable_scope('fc'):
        # layers[-1].shape: [None, width, height, channel]
        global_pool = tf.reduce_mean(layers[-1], [1, 2])  # global average pooling
        logits = tf.layers.dense(global_pool, class_num)  # fully connected output
        layers.append(logits)
    return layers[-1]
Then use the residual network.
x = tf.placeholder(tf.float32, [None, 3072])
y = tf.placeholder(tf.int64, [None])
# reshape the flat vector into a 3-channel image
x_image = tf.reshape(x, [-1, 3, 32, 32])
# 32 * 32
x_image = tf.transpose(x_image, perm=[0, 2, 3, 1])
y_ = res_net(x_image, [2, 3, 2], 32, 10)
# cross entropy
# y_ -> softmax probabilities
# y  -> one-hot labels
# loss = -sum(y * log(y_))
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
predict = tf.argmax(y_, 1)
# correct_prediction is a bool vector, eg: [1, 0, 1, 1, 1, 0, 0, 0]
correct_prediction = tf.equal(predict, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))
with tf.name_scope('train_op'):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
Training this network for 7,000 steps reaches about 67% accuracy. It comes out lower than VGG because many optimizations are not used here; a properly optimized residual network can reach about 94% accuracy on CIFAR-10.
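One example of the kind of optimization used for ResNet on CIFAR-10 (my illustration, not part of this post) is simple data augmentation, namely padding, random cropping, and random horizontal flips:

# Illustrative only: common CIFAR-10 data augmentation.
# `image` is assumed to be a single [32, 32, 3] float tensor.
def augment(image):
    image = tf.image.resize_image_with_crop_or_pad(image, 40, 40)  # pad 4 pixels per side
    image = tf.random_crop(image, [32, 32, 3])                     # random 32*32 crop
    image = tf.image.random_flip_left_right(image)                 # random horizontal flip
    return image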