What is a Residual Network (ResNet)?

1. Residuals

In statistics, a residual is the difference between an observed value and the estimated (fitted) value. In ensemble learning, base models can be fitted to the residuals to make the ensemble more accurate; in deep learning, layers are likewise used to fit residuals and thereby strengthen deep neural networks. Here I pick two algorithms, Gradient Boosting and ResNet, to give a more intuitive feel for how fitting residuals works.
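
As a minimal numerical illustration of that definition (the observed and fitted values below are made up), a residual is simply the observation minus the fit:

import numpy as np

y_observed = np.array([3.0, 5.0, 7.5])    # hypothetical observed values
y_fitted   = np.array([2.8, 5.4, 7.1])    # hypothetical fitted values from some model
residual   = y_observed - y_fitted        # what the next base model would be trained on
print(residual)                           # [ 0.2 -0.4  0.4]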

2. Gradient Boosting

A Gradient Boosting model can roughly be summarized in three steps:

  1. Train a base learner Tree_1 (here a decision tree) to fit the labels of the data.
  2. Then train a base learner Tree_2 whose input is the data and whose target is the difference (the residual) between the labels and the predictions of the previous base learner Tree_1. In short, this step uses a base learner to learn the residual.
  3. Finally, sum the outputs of all base learners to make the final prediction.

The code below performs only three rounds of residual fitting; the last line shows the ensemble-learning aspect, combining several base learners into one composite model.

from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2)
tree_reg3.fit(X, y3)
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))

In fact, the code above is equivalent to calling sklearn's GradientBoostingRegressor ensemble API with the number of base learners n_estimators set to 3:

from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0)
gbrt.fit(X, y)
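
As a quick sanity check (a sketch that assumes X, y and X_new are defined as in the snippets above), the two approaches should produce essentially the same predictions, because learning_rate=1.0 makes each tree fit the full residual of the previous stage:

import numpy as np

manual_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
gbrt_pred = gbrt.predict(X_new)
print(np.allclose(manual_pred, gbrt_pred))  # expected to print True (up to floating point and tie-breaking)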

A vivid way to picture Gradient Boosting: it is like shooting repeatedly at the same target. If the previous shot landed a bit to the right, the next one aims a bit further left; by gradually adjusting the aim, the deviation between the arrows and the bullseye shrinks until the bullseye is hit. This is also why boosting, as an ensemble method, reduces model bias.

The role of residual networks:

1) Why does residual learning work so well? Compared with other work, deep residual learning uses a much deeper network structure; moreover, residual learning is precisely what allows the network to become deeper. Why is network depth so important?

Answer: each layer of a neural network is generally thought to extract features at a different level of abstraction (low, middle and high level). The deeper the network, the more levels of information it can extract, and the more combinations of information across those levels it can form.

2) Why, before residual networks, did the deepest networks stop at GoogLeNet's 22 layers, while ResNet can reach 152 layers, or even 1000?

Answer: the main obstacles deep learning faces as networks get deeper are vanishing and exploding gradients. The traditional remedies are careful weight initialization (normalized initialization) and batch normalization. These do tame the gradients and allow greater depth, but they bring another problem: degradation. As the depth increases, the error rate goes up again. Residual connections were designed to solve this degradation problem, and in doing so they also help with the gradient problem and improve overall network performance.

The basic structure of a residual network:

The input is added onto the output of the stacked layers. For a stack of a few layers with input x, denote the features it learns as H(x). We now ask it to learn the residual F(x) = H(x) - x instead, so the originally desired mapping becomes F(x) + x. If the residual is 0, the stacked layers simply perform an identity mapping, so at the very least the network's performance will not degrade. In practice the residual is never exactly 0, so the stacked layers still learn new features on top of the input, which gives better performance.
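
In the same notation, a standard way to see why the shortcut also eases the gradient problem mentioned above (this derivation is the usual textbook argument, not part of the original post) is that the Jacobian of H(x) = F(x) + x always contains an identity term, so the gradient reaching the block input cannot be completely squashed by the convolutional branch:

\[
\frac{\partial \mathcal{L}}{\partial x}
  = \frac{\partial \mathcal{L}}{\partial H(x)}\,\frac{\partial H(x)}{\partial x}
  = \frac{\partial \mathcal{L}}{\partial H(x)}\left(\frac{\partial F(x)}{\partial x} + I\right)
\]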

The code below shows the concrete operations of a residual block more clearly:

  1. Take the input x.
  2. Pass x through three convolution layers to get the output m.
  3. Add the original input x and the output m.

This sum is the overall output of the residual block. The whole process uses the three convolution layers to fit m, the residual between the block's output and its input.

from keras.layers import Conv2D
from keras.layers import add

def residual_block(x, f=32, r=4):
    """
    residual block
    :param x: the input tensor
    :param f: the number of output filters (must match the channel count of x so the addition works)
    :param r: the bottleneck reduction ratio; the first two convolutions use f // r filters
    :return: the block output, add([x, m])
    """
    m = Conv2D(f // r, 1, padding='same')(x)   # 1x1 convolution, reduce channels to f // r
    m = Conv2D(f // r, 3, padding='same')(m)   # 3x3 convolution
    m = Conv2D(f, 1, padding='same')(m)        # 1x1 convolution, back to f channels
    return add([x, m])
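
A quick shape check for the block above (a sketch; the 32x32 spatial size is made up, and the channel count of the input must equal f for the final addition to work):

from keras.layers import Input
from keras.models import Model

inp = Input(shape=(32, 32, 32))            # height x width x channels; channels must equal f
out = residual_block(inp, f=32, r=4)
model = Model(inputs=inp, outputs=out)
model.summary()                            # the output shape matches the input shape, as required by add()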

The residual idea in ResNet is to strip away the shared main body so that small variations stand out, letting the model concentrate on learning those small variations. This is almost identical to the idea we discussed earlier in Gradient Boosting, where a base learner is trained to learn the residual.

 

3. The slim library

Before building the residual network, let's first get familiar with the slim library.

First, let's look at how a layer, for example a convolutional layer, is written in raw TensorFlow:

input = ...
with tf.name_scope('conv1_1') as scope:
  kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
                                           stddev=1e-1), name='weights')
  conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
  biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                       trainable=True, name='biases')
  bias = tf.nn.bias_add(conv, biases)
  conv1 = tf.nn.relu(bias, name=scope)

 

And the slim version:

input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')

But that is not the main attraction, since TensorFlow already offers simple implementations of most layers; what is really appealing are slim's repeat and stack operations.

Suppose we define three identical convolutional layers:

net = ...
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

slim's repeat operation cuts down the amount of code:

net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

stack handles the case where the kernel sizes or the numbers of outputs differ from layer to layer.

Suppose we define three FC layers:

x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

Using the stack operation:

slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

The same goes for convolutional layers:

# the plain way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')
# the compact way:
slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')

With these tools, defining a VGG takes only a dozen or so lines of code.

def vgg16(inputs):
  with slim.arg_scope([slim.conv2d, slim.fully_connected],
                      activation_fn=tf.nn.relu,
                      weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                      weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
    net = slim.max_pool2d(net, [2, 2], scope='pool1')
    net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
    net = slim.max_pool2d(net, [2, 2], scope='pool3')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
    net = slim.max_pool2d(net, [2, 2], scope='pool4')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
    net = slim.max_pool2d(net, [2, 2], scope='pool5')
    net = slim.fully_connected(net, 4096, scope='fc6')
    net = slim.dropout(net, 0.5, scope='dropout6')
    net = slim.fully_connected(net, 4096, scope='fc7')
    net = slim.dropout(net, 0.5, scope='dropout7')
    net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
  return net

There is not much more to say about that, so let's look at training with a classic network directly.

import tensorflow as tf
import tensorflow.contrib.slim as slim
vgg = tf.contrib.slim.nets.vgg
# Load the images and labels.
images, labels = ... 
# Create the model.
predictions, _ = vgg.vgg_16(images) 
# Define the loss functions and get the total loss.
loss = slim.losses.softmax_cross_entropy(predictions, labels)
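
From here the rest of the training loop can be driven by TF-Slim's helpers (a sketch following the TF-Slim README pattern; the optimizer, learning rate and log directory are placeholders of my own choosing):

total_loss = slim.losses.get_total_loss()                        # sums the cross-entropy and any regularization losses
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
slim.learning.train(train_op, logdir='/tmp/vgg_train')           # runs the training loop and writes checkpoints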

[Note] slim's convolutional layers use SAME padding by default, which means the spatial size after the convolution (with stride 1) is the same as before it. Residual networks need exactly this property.
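
A quick check of that claim (a sketch, assuming a TF 1.x environment with tf.contrib.slim; the 32x32x3 input shape is made up):

import tensorflow as tf
import tensorflow.contrib.slim as slim

x = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = slim.conv2d(x, 16, [3, 3], scope='same_check')  # slim.conv2d defaults: stride=1, padding='SAME'
print(y.get_shape().as_list())                      # [None, 32, 32, 16] -- the spatial size is preserved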

 

4. The residual network model

Writing the residual network unit:

import tensorflow as tf
import tensorflow.contrib.slim as slim
 
def resnet_block(inputs, ksize, num_outputs, i):
    with tf.variable_scope('res_unit' + str(i)) as scope:
        # pre-activation residual unit: BN -> ELU -> conv, applied twice
        part1 = slim.batch_norm(inputs, activation_fn=None)
        part2 = tf.nn.elu(part1)
        part3 = slim.conv2d(part2, num_outputs, [ksize, ksize], activation_fn=None)
        part4 = slim.batch_norm(part3, activation_fn=None)
        part5 = tf.nn.elu(part4)
        part6 = slim.conv2d(part5, num_outputs, [ksize, ksize], activation_fn=None)
        # shortcut connection: add the block input back onto the convolutional branch
        output = part6 + inputs
        return output
 
def resnet(X_input, ksize, num_outputs, num_classes, num_blocks):
    # stem convolution
    layer1 = slim.conv2d(X_input, num_outputs, [ksize, ksize], normalizer_fn=slim.batch_norm, scope='conv_0')
    # stack of residual units
    for i in range(num_blocks):
        layer1 = resnet_block(layer1, ksize, num_outputs, i + 1)
    # project to num_classes channels, then global average pooling over height and width
    top = slim.conv2d(layer1, num_classes, [ksize, ksize], normalizer_fn=slim.batch_norm, activation_fn=None, scope='conv_top')
    top = tf.reduce_mean(top, [1, 2])
    # return logits; the softmax is applied inside the cross-entropy loss during training
    output = slim.layers.flatten(top)
    return output

 

5. Training the network

import tensorflow as tf
import tensorflow.contrib.slim as slim
from scrips import config
from scrips import resUnit
from scrips import read_tfrecord
from scrips import convert2onehot
import numpy as np
 
log_dir = config.log_dir
model_dir = config.model_dir
IMG_W = config.IMG_W
IMG_H = config.IMG_H
IMG_CHANNELS = config.IMG_CHANNELS
NUM_CLASSES = config.NUM_CLASSES
BATCH_SIZE = config.BATCH_SIZE
 
tf.reset_default_graph()
 
X_input = tf.placeholder(shape=[None,IMG_W,IMG_H,IMG_CHANNELS],dtype=tf.float32,name='input')
y_label = tf.placeholder(shape=[None,NUM_CLASSES],dtype=tf.int32)
 
#*****************************************************************************************************
output = resUnit.resnet(X_input,3,64,NUM_CLASSES,5)
#*****************************************************************************************************
 
#loss and accuracy
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_label, logits=output))
train_step = tf.train.AdamOptimizer(config.lr).minimize(loss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y_label, 1), tf.argmax(output, 1)), tf.float32))
 
#tensor board
tf.summary.scalar('loss',loss)
tf.summary.scalar('accuracy',accuracy)
 
# read the data and the corresponding labels from the tfrecord file
image, label = read_tfrecord.read_and_decode(config.tfrecord_dir,IMG_W,IMG_H,IMG_CHANNELS)
image_batches, label_batches = tf.train.shuffle_batch([image, label], batch_size=BATCH_SIZE, capacity=2000,min_after_dequeue=1000)
 
# train the network
init = tf.global_variables_initializer()
saver = tf.train.Saver()
 
with tf.Session() as sess: 
    ckpt = tf.train.get_checkpoint_state(model_dir)  # restore weights from an intermediate checkpoint if one exists
    if ckpt and ckpt.model_checkpoint_path:
        print('Restore model from ',end='')
        print(ckpt.model_checkpoint_path) 
        saver.restore(sess,ckpt.model_checkpoint_path) 
        if (ckpt.model_checkpoint_path.split('-')[-1]).isdigit():
            global_step = int(ckpt.model_checkpoint_path.split('-')[-1])
            print('Restore step at #',end='')
            print(global_step)
        else:
            global_step = 0
    else:
        global_step = 0
        sess.run(init)
 
    tensor_board_writer = tf.summary.FileWriter(log_dir,tf.get_default_graph())
    merged = tf.summary.merge_all() 
    #sess.graph.finalize() 
    threads = tf.train.start_queue_runners(sess=sess) 
    while True:
        try:
            global_step += 1
            X_train, y_train = sess.run([image_batches, label_batches])
            y_train_onehot = convert2onehot.one_hot(y_train,NUM_CLASSES)
            feed_dict = {X_input: X_train, y_label: y_train_onehot}
            [_, temp_loss, temp_accuracy,summary] = sess.run([train_step, loss, accuracy,merged], feed_dict=feed_dict) 
            tensor_board_writer.add_summary(summary,global_step) 
            if global_step % config.display == 0:
                print('step at #{},'.format(global_step), end=' ')
                print('train loss: {:.5f}'.format(temp_loss), end=' ')
                print('train accuracy: {:.2f}%'.format(temp_accuracy * 100))
            if global_step % config.snapshot== 0:
                saver.save(sess,model_dir+'/model.ckpt',global_step)
        except:
            tensor_board_writer.close()
            break