The Classic Convolutional Neural Network: LeNet-5



   LeNet was proposed in the 1990s, but given the compute power and memory capacity available at the time, truly large-scale application of such networks only became practical around 2010. LeNet-5 is the deep neural network architecture LeCun proposed in 1998; it contains 7 layers in total (excluding the input layer): 2 convolutional layers, 2 pooling layers, and 3 fully connected layers (in the original paper the first fully connected layer is called a convolutional layer). The network structure is shown in the figure below [2]:

 

  The input is the well-known MNIST [1] handwritten digit dataset: 32x32x1 grayscale images. In the paper, convolutional layers are denoted Cx, pooling (subsampling) layers Sx, and fully connected layers Fx, where x is the layer index. The 7 layers are analyzed below:

  • C1(32x32x1 => 28x28x6)

  Uses 6 kernels of size 5x5x1 with no zero padding; the output size is (N-P+1)=(32-5+1)=28, giving dimensions of 28x28x6:

    • Trainable parameters: each kernel is 5*5 plus one bias, and there are 6 kernels, so (5*5+1)*6=156
    • Connections: count them from the feature points of the output feature map. Each feature point is obtained by multiplying an input patch with a kernel, costing 1*5*5+1 operations, and there are 28*28 feature points per map, so the number of connections is (28*28*6)*(5*5+1)=122304
  • S2(28x28x6 => 14x14x6)

  Uses average pooling with a 2x2x6 window and strides=2 (max pooling is by far the most common choice today, so it is substituted in the code below); the output size is floor((N-P)/strides+1)=floor((28-2)/2+1)=14, giving dimensions of 14x14x6:

    • Trainable parameters (each pooled feature map has two trainable parameters: one coefficient and one bias): (1+1)*6=12
    • Connections: (14*14*6)*(2*2+1)=5880
  • C3(14x14x6 => 10x10x16)

  Uses 16 kernels of size 5x5 with no zero padding; the output size is (14-5+1)=10, giving dimensions of 10x10x16:

    • Trainable parameters: C3 uses a partial (incomplete) connection scheme; feature map 0 connects to feature maps 0, 1, 2 of the previous layer, and the remaining connections follow the table in [2]. This gives (5x5x3+1)x6 + (5x5x4+1)x6 + (5x5x4+1)x3 + (5x5x6+1)x1 = 1516 trainable parameters
    • Connections: 1516x10x10=151600
  • S4(10x10x16 => 5x5x16)

  Uses the same structure as S2.

    • Trainable parameters: (1+1)*16=32
    • Connections: (5*5*16)*(2*2+1)=2000
  • C5/F5(5x5x16 => 120)

  Since the input is already 5x5, the 5x5 kernel used here makes this layer equivalent to a fully connected one; the output is 120 logits.

    • Trainable parameters/connections: (5*5*16+1)*120=48120
  • F6(120 => 84)

  Input size 120, output size 84.

    • Parameters/connections: (120+1)*84=10164
  • F7(84 => 10)

  The original paper uses RBF units that compute Euclidean distances (see the paper for details); this article uses a softmax classifier instead.
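As a sanity check, the per-layer sizes and parameter counts above can be reproduced in a few lines of Python. Note that (84+1)*10 = 850 is used for the softmax output adopted here in place of the paper's RBF layer, so the total is for this article's variant, not the original:

```python
# output sizes: conv (no padding) N-K+1; pooling floor((N-K)/stride + 1)
assert 32 - 5 + 1 == 28          # C1
assert (28 - 2) // 2 + 1 == 14   # S2
assert 14 - 5 + 1 == 10          # C3
assert (10 - 2) // 2 + 1 == 5    # S4

params = {
    'C1': (5 * 5 * 1 + 1) * 6,      # 156
    'S2': (1 + 1) * 6,              # 12: one coefficient + one bias per map
    # C3 partial connectivity: 6 maps see 3 inputs, 6 see 4, 3 see 4, 1 sees all 6
    'C3': (5*5*3 + 1)*6 + (5*5*4 + 1)*6 + (5*5*4 + 1)*3 + (5*5*6 + 1)*1,  # 1516
    'S4': (1 + 1) * 16,             # 32
    'C5': (5 * 5 * 16 + 1) * 120,   # 48120
    'F6': (120 + 1) * 84,           # 10164
    'F7': (84 + 1) * 10,            # 850 (softmax head, not the paper's RBF)
}
print(sum(params.values()))         # 60850 trainable parameters in total
```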

 The three key ideas behind CNNs:

  CNNs use sparse connectivity, weight sharing, and downsampling to tackle the parameter explosion and overfitting problems of fully connected neural networks.

  • Receptive field / sparse connectivity

  The region of the previous layer's feature map that a feature point maps back to is called its receptive field. The value of a feature point depends only on its receptive field in the previous layer, and on no other feature point.

  • Weight sharing

  Compared with full connectivity, weight sharing effectively reduces the parameter explosion caused by adding more layers.

  • Downsampling

  Convolutional layers are in essence feature extractors: the earliest layers capture fine details such as horizontal or vertical edges, and as depth and the number of feature maps grow, downsampling effectively reduces the resolution of the input feature maps. This keeps overly precise detail features from dominating the training result, which is what makes CNNs good at capturing translation invariance and more robust.
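The growth of the receptive field through the stack can be tracked with a small sketch (rf += (k-1)*jump, where jump is the cumulative stride); by C5 a single unit already sees the entire 32x32 input:

```python
# receptive-field growth through LeNet-5's conv/pool stack
layers = [('C1', 5, 1), ('S2', 2, 2), ('C3', 5, 1), ('S4', 2, 2), ('C5', 5, 1)]
rf, jump = 1, 1                     # start from a single input pixel
for name, k, stride in layers:
    rf += (k - 1) * jump            # widen by the kernel, scaled by cumulative stride
    jump *= stride
    print(name, rf)                 # C1:5  S2:6  C3:14  S4:16  C5:32
```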
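To make the savings from weight sharing concrete, compare C1 with a hypothetical dense layer mapping the same 32x32 input to the same 28x28x6 output (the dense layer here is an illustration, not part of LeNet-5):

```python
# dense: every output unit has its own weight per input pixel, plus a bias
dense_params = (32 * 32 + 1) * (28 * 28 * 6)   # 4,821,600
# C1: six shared 5x5 kernels, one bias each
conv_params = (5 * 5 + 1) * 6                  # 156
print(dense_params, conv_params, dense_params // conv_params)
```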

 

Reading the MNIST dataset


  The MNIST dataset has 60,000 training samples and 10,000 test samples and can be downloaded from [1]. The files are binary, stored big-endian. The training image file format:

TRAINING SET IMAGE FILE (train-images-idx3-ubyte):

[offset] [type]          [value]          [description] 
0000     32 bit integer  0x00000803(2051) magic number (MSB first) 
0004     32 bit integer  60000            number of images 
0008     32 bit integer  28               number of rows 
0012     32 bit integer  28               number of columns 
0016     unsigned byte   ??               pixel 
0017     unsigned byte   ??               pixel 
........ 
xxxx     unsigned byte   ??               pixel 

  The training label file format:

TRAINING SET LABEL FILE (train-labels-idx1-ubyte):

[offset] [type]          [value]          [description] 
0000     32 bit integer  0x00000801(2049) magic number (MSB first) 
0004     32 bit integer  60000            number of items 
0008     unsigned byte   ??               label 
0009     unsigned byte   ??               label 
........ 
xxxx     unsigned byte   ??               label

  The test image and label files share the same formats. Python's struct library is used here to read the files and return the data sizes plus the data as numpy arrays. The Python code is as follows:

#readMNIST.py
#encoding:utf-8
import numpy as np
import struct

#read the mnist image file
def read_mnist_image_data(file_name):
    with open(file_name, 'rb') as file:
        buff = file.read()  #read the whole file at once
    header_format = '>IIII'  #big-endian: magic, count, rows, columns
    buf_pointer = 0
    magic_number, number_of_images, number_of_rows, number_of_columns = \
        struct.unpack_from(header_format, buff, buf_pointer)
    print("magic_number:%d,number_of_images:%d,number_of_rows:%d,number_of_columns:%d"
          % (magic_number, number_of_images, number_of_rows, number_of_columns))
    buf_pointer += struct.calcsize(header_format)
    images = []  #image data
    image_byte_size = number_of_rows * number_of_columns
    data_format = ">%dB" % image_byte_size  #one image worth of unsigned bytes
    for idx in range(number_of_images):
        image = struct.unpack_from(data_format, buff, buf_pointer)
        buf_pointer += struct.calcsize(data_format)  #advance the offset
        image = np.array(image).reshape(number_of_rows, number_of_columns, 1)
        images.append(image)
    #return the image data
    return number_of_images, number_of_rows, number_of_columns, \
        np.asarray(images, np.ubyte)

#read the mnist label file
def read_mnist_label_data(file_name):
    with open(file_name, 'rb') as file:
        buff = file.read()  #read the whole file at once
    header_format = '>II'  #big-endian: magic, count
    buf_pointer = 0
    magic_number, number_of_labels = struct.unpack_from(header_format, buff, buf_pointer)
    print("magic_number:%d,number_of_labels:%d" % (magic_number, number_of_labels))
    buf_pointer += struct.calcsize(header_format)
    labels = []  #label data
    data_format = ">B"
    for idx in range(number_of_labels):
        label = struct.unpack_from(data_format, buff, buf_pointer)
        buf_pointer += struct.calcsize(data_format)  #advance the offset
        labels.append(label[0])  #unpack_from returns a tuple
    return number_of_labels, np.asarray(labels, np.ubyte)
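The big-endian header handling can be checked in isolation by round-tripping a synthetic header with struct (the values below mirror the train-images file, no download needed):

```python
import struct

# pack an idx3-style header: magic 2051, 60000 images of 28x28
header = struct.pack('>IIII', 2051, 60000, 28, 28)
magic, n_images, rows, cols = struct.unpack_from('>IIII', header, 0)
print(magic, n_images, rows, cols)     # 2051 60000 28 28
print(struct.calcsize('>IIII'))        # pixel data starts at offset 16
```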

 

Implementing LeNet-5 with TensorFlow 1.0.0


   config.py holds the configuration parameters, such as the train/validation split (ratio) and whether to train a model or use an already trained one (isTrain).

#config.py
#coding:utf-8
#tensorflow 1.0.0

#train a model, or load the trained one
isTrain = False
#train vs. validation ratio
ratio = 0.8
#training set files
train_images_file = "./mnist-database/train-images-idx3-ubyte"
train_labels_file = "./mnist-database/train-labels-idx1-ubyte"
#test set files
valid_images_file = "./mnist-database/t10k-images-idx3-ubyte"
valid_labels_file = "./mnist-database/t10k-labels-idx1-ubyte"
#checkpoint
checkpoint_dir = './trained_model/'
model_name = 'model-mnist.ckpt'

   The TensorFlow implementation of LeNet-5 follows. The main differences from the original paper are:

  (1) Max pooling, now the most popular choice, is used instead of average pooling;

  (2) S2 and C3 are fully connected in the usual CNN sense, without the paper's partial connection table;

  (3) The activation function is the popular relu rather than sigmoid, which speeds up convergence;

  (4) The paper's output classifier is replaced by the sparse softmax cross-entropy loss tf.losses.sparse_softmax_cross_entropy. When a classification problem has exactly one correct answer, this function speeds up the cross-entropy computation; its first argument is the forward-pass result without the softmax layer, and the second is the ground-truth labels of the training data.
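What tf.losses.sparse_softmax_cross_entropy computes can be reproduced in plain NumPy: a numerically stable log-softmax over the logits, then the mean negative log-likelihood of the integer labels (the logits below are made up for illustration):

```python
import numpy as np

def sparse_softmax_xent(logits, labels):
    # stable log-softmax, then pick out the log-probability of the true class
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.5, 0.1], [0.2, 3.0, 0.3]])
labels = np.array([0, 1])   # one correct class per example, no one-hot needed
print(sparse_softmax_xent(logits, labels))
```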

#mnist_Lenet-5.py
#encoding:utf-8
import tensorflow as tf
import numpy as np
from readMNIST import *
from config import *

#mnist image dimensions
w = 28
h = 28
c = 1

#read the training images and labels
train_images_nums, train_images_rows, train_images_cols, train_images = \
    read_mnist_image_data(train_images_file)
train_labels_nums, train_labels = read_mnist_label_data(train_labels_file)
#to evaluate on the test set instead, read valid_images_file/valid_labels_file here

data = train_images
label = train_labels
nums = train_images_nums

#shuffle the data
if isTrain:
    arr = np.arange(nums)
    np.random.shuffle(arr)
    data = data[arr]
    label = label[arr]

x_train = []
y_train = []
x_val = []
y_val = []

#split into training and validation sets (by convention, 80%/20%)
if isTrain:
    s = int(nums * ratio)
    x_train = data[:s]
    y_train = label[:s]
    x_val = data[s:]
    y_val = label[s:]
else:
    x_train = data

#-----------------build the network----------------------
#placeholders
x = tf.placeholder(tf.float32, shape=[None, w, h, c], name='x')
y_ = tf.placeholder(tf.int32, shape=[None, ], name='y_')

#layer 1: convolution (28 --> 28)
conv1 = tf.layers.conv2d(
    inputs=x,
    filters=6,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu,
    kernel_initializer=tf.truncated_normal_initializer(stddev=0.01))
#layer 2: max pooling (28 --> 14)
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
#layer 3: convolution (14 --> 10)
conv2 = tf.layers.conv2d(
    inputs=pool1,
    filters=16,
    kernel_size=[5, 5],
    padding="valid",
    activation=tf.nn.relu,
    kernel_initializer=tf.truncated_normal_initializer(stddev=0.01))
#layer 4: max pooling (10 --> 5)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)

#flatten to a 1-D vector before the fully connected layers
re1 = tf.reshape(pool2, [-1, 5 * 5 * 16])

#fully connected layers; only these are L2-regularized
dense1 = tf.layers.dense(inputs=re1, units=120, activation=tf.nn.relu,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
dense2 = tf.layers.dense(inputs=dense1, units=84, activation=tf.nn.relu,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
logits = tf.layers.dense(inputs=dense2, units=10, activation=None,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
#---------------------------end of network---------------------------

loss = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=logits)
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits, 1), tf.int32), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

#yield the data in mini-batches
def minibatches(inputs=None, targets=None, batch_size=None, shuffle=False):
    assert len(inputs) == len(targets)
    if shuffle:
        indices = np.arange(len(inputs))
        np.random.shuffle(indices)
    for start_idx in range(0, len(inputs) - batch_size + 1, batch_size):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batch_size]
        else:
            excerpt = slice(start_idx, start_idx + batch_size)
        yield inputs[excerpt], targets[excerpt]

#train and test; n_epoch can be set larger
n_epoch = 20
batch_size = 64
sess = tf.InteractiveSession()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())

if isTrain:
    for epoch in range(n_epoch):
        print("epoch %d" % epoch)
        #training
        train_loss, train_acc, n_batch = 0, 0, 0
        for x_train_a, y_train_a in minibatches(x_train, y_train, batch_size, shuffle=True):
            _, err, ac = sess.run([train_op, loss, acc],
                                  feed_dict={x: x_train_a, y_: y_train_a})
            train_loss += err; train_acc += ac; n_batch += 1
        print("   train loss: %f" % (train_loss / n_batch))
        print("   train acc: %f" % (train_acc / n_batch))
        print("\n")
        if epoch == n_epoch - 1:
            print("Saving trained model as ckpt format!")
            save_path = saver.save(sess, checkpoint_dir + model_name)
        #validation
        val_loss, val_acc, n_batch = 0, 0, 0
        for x_val_a, y_val_a in minibatches(x_val, y_val, batch_size, shuffle=False):
            err, ac = sess.run([loss, acc], feed_dict={x: x_val_a, y_: y_val_a})
            val_loss += err; val_acc += ac; n_batch += 1
        print("   validation loss: %f" % (val_loss / n_batch))
        print("   validation acc: %f" % (val_acc / n_batch))
        print("\n")
else:
    saver.restore(sess, checkpoint_dir + model_name)
    result = sess.run(logits, feed_dict={x: data})
    class_index = np.argmax(result, axis=1)
    correct_nums = 0
    cnt_idx = 0
    for idx in class_index:
        #the labels are the digits 0-9, which line up with the 10 softmax outputs
        if idx == label[cnt_idx]:
            correct_nums += 1
        cnt_idx += 1
    correct_pro = correct_nums / (len(result) * 1.0)
    print("Input number of mnist set is %d, correct num:%d, correct proportion:%f"
          % (len(result), correct_nums, correct_pro))
    print("\n")
sess.close()

   Final results: 99.54% accuracy on the training set and 98.53% on the validation set.

 

References


[1] The MNIST database: http://yann.lecun.com/exdb/mnist/

[2] LeCun et al., 1998: Gradient-Based Learning Applied to Document Recognition

[3] Md Zahangir Alom et al., 2018: The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches
