Image size and number of parameters:
The previous chapters all worked on small image patches; this chapter deals with large images. The difference matters: small images (e.g. 8x8 patches, or MNIST's 28x28) can use a fully connected layout, with the input layer wired directly to every hidden unit. For large images this becomes very expensive: a 96x96 image already needs 96*96 input units, and learning just 100 features in that single layer requires 96*96*100 parameters (W, b), so training would take hundreds to tens of thousands of times longer than before. The remedy is a locally connected network: for images, each hidden unit is connected only to a small contiguous region of the input image.
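A quick back-of-the-envelope version of that comparison (just the arithmetic from the paragraph above; the 100-unit hidden layer and the 8x8 receptive field are the example numbers, and the second count additionally assumes the 8x8 weights are shared across positions, as convolution does):

imageDim  = 96;                                    % large image is 96 x 96
patchDim  = 8;                                     % local receptive field is 8 x 8
numHidden = 100;                                   % number of features to learn

fullyConnectedWeights = imageDim^2 * numHidden     % 96*96*100 = 921,600 weights (plus 100 biases)
sharedLocalWeights    = patchDim^2 * numHidden     %  8*8*100  =   6,400 weights (plus 100 biases)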
This leads to the idea of convolution:
Convolution:
Natural images have an inherent property of stationarity: the statistics of one part of the image are the same as those of any other part. This means that features learned on one part of the image can also be used on other parts, so the same learned features can be applied at every position of the image.
Concretely, take a small block sampled at random from a large image, say an 8x8 patch, and learn some features from it. We can then use the features learned from that 8x8 sample as detectors and apply them anywhere in the image: in particular, we can convolve the features learned from the 8x8 sample with the original large image, obtaining an activation value for each feature at every position of the large image.
The lecture notes give a concrete example, which makes this easiest to understand:
Suppose you have learned features on 8x8 samples drawn from a 96x96 image, and that this was done by a sparse autoencoder with 100 hidden units. To obtain the convolved features, you run the convolution over every 8x8 block of the 96x96 image: extract the 8x8 regions whose top-left corners are (1,1), (1,2), ..., all the way to (89,89), and run each extracted region through the trained sparse autoencoder to get its feature activations. In this example you end up with 100 sets of convolved features, each containing 89x89 values. The animated figure in the notes shows this nicely; I don't know how to embed it here...
Finally, a summary of the convolution procedure:
Given a large r x c image, call it xlarge. First train a sparse autoencoder on small a x b samples xsmall extracted from the large images, obtaining k features (k is the number of hidden units). Then, for every a x b block of xlarge, compute the feature activations fs; doing this at every position is the convolution, and it yields a (r-a+1) x (c-b+1) x k array of convolved features.
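A minimal sketch of just the dimensions involved, on made-up data. It treats each learned feature as a plain linear a x b filter and leaves out the whitening, mean subtraction and sigmoid that the real exercise applies; see cnnConvolve.m further below for the full version:

r = 96; c = 96;           % large image size
a = 8;  b = 8;            % patch size
k = 100;                  % number of learned features

xlarge  = rand(r, c);     % stand-in for one large grayscale image
filters = rand(a, b, k);  % stand-in for the k learned feature patches

convolved = zeros(r - a + 1, c - b + 1, k);
for f = 1:k
    % rot90(.,2) pre-flips the filter so conv2 computes cross-correlation;
    % 'valid' keeps only positions where the filter lies fully inside the image
    convolved(:, :, f) = conv2(xlarge, rot90(filters(:, :, f), 2), 'valid');
end
size(convolved)           % 89 x 89 x 100, i.e. (r-a+1) x (c-b+1) x k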
Pooling:
After obtaining features via convolution, the next step is to use them for classification. In principle one could feed all of the extracted features into a classifier such as softmax, but the computation is enormous. For example, for a 96x96-pixel image, suppose we have learned 400 features over 8x8 inputs. Each feature yields a (96 - 8 + 1) * (96 - 8 + 1) = 7921-value convolved map, so with 400 features every example ends up with 89^2 * 400 = 3,168,400 convolved features. Training a classifier on inputs of over 3 million features is quite unwise and is also very prone to over-fitting.
Hence pooling (rendered in Chinese as 「池化」, though the English word is arguably the more evocative of the two). The idea is simply to take the mean or the maximum over a region of the feature map and let that single value represent the region: taking the mean is mean pooling, taking the maximum is max pooling. The animated figure in the notes illustrates this well too, but again I don't know how to put a gif here...
Why is pooling allowed? For the same reason we used convolved features in the first place: images have a kind of stationarity, meaning a feature that is useful in one region of the image is very likely to be useful in other regions as well. So to describe a large image, a natural idea is to aggregate statistics of the features over different locations, and the mean or the maximum is exactly such an aggregate statistic.
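A minimal mean-pooling sketch on made-up data, using the sizes from the exercise below (imageDim = 64 and patchDim = 8 give a 57x57 convolved map, and poolDim = 19 gives a 3x3 pooled map):

convolvedDim = 57;            % 64 - 8 + 1
poolDim = 19;                 % pooling region size; here it divides convolvedDim exactly
fm = rand(convolvedDim);      % stand-in for one convolved feature map

numBlocks = convolvedDim / poolDim;        % 3 blocks per dimension
pooled = zeros(numBlocks);
for pr = 1:numBlocks
    for pc = 1:numBlocks
        block = fm((pr-1)*poolDim+1 : pr*poolDim, (pc-1)*poolDim+1 : pc*poolDim);
        pooled(pr, pc) = mean(block(:));   % mean pooling; max(block(:)) would give max pooling
    end
end
size(pooled)                  % 3 x 3: 57*57 = 3249 values shrink to 9 per feature map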
In addition, if the pooling regions are chosen as contiguous areas of the image and we only pool features produced by the same (replicated) hidden units, then the pooled units are translation invariant: even if the image undergoes a small translation, the (pooled) features stay the same. (A small question of mine: if so, is the invariance only guaranteed for shifts within a region the size of the pooling block? The toy sketch below suggests as much.) In many tasks (e.g. object detection, audio recognition) we prefer translation-invariant features, because the label of an example (image) does not change when the image is translated. For instance, if you take an MNIST digit and shift it to the left or right, you still expect the classifier to label it as the same digit regardless of where it ends up.
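A toy illustration of that point (hypothetical numbers, using max pooling because the effect is easiest to see): a shift that keeps the active response inside the same pooling region leaves the pooled value unchanged.

% One 4x4 pooling region; the feature responds at a single location.
fm        = zeros(4, 4); fm(2, 2) = 1;        % original position
fmShifted = zeros(4, 4); fmShifted(2, 3) = 1; % shifted right by one pixel, still inside the region

max(fm(:))          % 1
max(fmShifted(:))   % 1 -- the pooled value is identical, so the pooled feature is invariant
                    %      to this small shift; a shift that crosses into a different
                    %      pooling region would change the result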
Exercise:
Below is the exercise from the notes. It reuses the result of the previous chapter's exercise (i.e. the first step of the convolution pipeline: the sparse autoencoder that computes the k features of xsmall).
The main code is as follows:
Main program: cnnExercise.m
%% CS294A/CS294W Convolutional Neural Networks Exercise

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  convolutional neural networks exercise. In this exercise, you will only
%  need to modify cnnConvolve.m and cnnPool.m. You will not need to modify
%  this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageDim = 64;         % image dimension
imageChannels = 3;     % number of channels (rgb, so 3)

patchDim = 8;          % patch dimension
numPatches = 50000;    % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units
outputSize = visibleSize;   % number of output units
hiddenSize = 400;           % number of hidden units

epsilon = 0.1;         % epsilon for ZCA whitening

poolDim = 19;          % dimension of pooling region

%%======================================================================
%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn
%  features from color patches. If you have completed the linear decoder
%  execise, use the features that you have obtained from that exercise,
%  loading them into optTheta. Recall that we have to keep around the
%  parameters used in whitening (i.e., the ZCA whitening matrix and the
%  meanPatch)

% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with
% the optimal parameters:

%optTheta = zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
%ZCAWhite = zeros(visibleSize, visibleSize);
%meanPatch = zeros(visibleSize, 1);
load STL10Features.mat;

% --------------------------------------------------------------------

% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

displayColorNetwork( (W*ZCAWhite)');

%%======================================================================
%% STEP 2: Implement and test convolution and pooling
%  In this step, you will implement convolution and pooling, and test them
%  on a small part of the data set to ensure that you have implemented
%  these two functions correctly. In the next step, you will actually
%  convolve and pool the features with the STL10 images.

%% STEP 2a: Implement convolution
%  Implement convolution in the function cnnConvolve in cnnConvolve.m

% Note that we have to preprocess the images in the exact same way
% we preprocessed the patches before we can obtain the feature activations.

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels

%% Use only the first 8 images for testing
convImages = trainImages(:, :, :, 1:8);

% NOTE: Implement cnnConvolve in cnnConvolve.m first!
convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);

%% STEP 2b: Checking your convolution
%  To ensure that you have convolved the features correctly, we have
%  provided some code to compare the results of your convolution with
%  activations from the sparse autoencoder

% For 1000 random points
for i = 1:1000
    featureNum = randi([1, hiddenSize]);
    imageNum = randi([1, 8]);
    imageRow = randi([1, imageDim - patchDim + 1]);
    imageCol = randi([1, imageDim - patchDim + 1]);

    patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
    patch = patch(:);
    patch = patch - meanPatch;
    patch = ZCAWhite * patch;

    features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch);

    if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
        fprintf('Convolved feature does not match activation from autoencoder\n');
        fprintf('Feature Number    : %d\n', featureNum);
        fprintf('Image Number      : %d\n', imageNum);
        fprintf('Image Row         : %d\n', imageRow);
        fprintf('Image Column      : %d\n', imageCol);
        fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
        fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));
        error('Convolved feature does not match activation from autoencoder');
    end
end

disp('Congratulations! Your convolution code passed the test.');

%% STEP 2c: Implement pooling
%  Implement pooling in the function cnnPool in cnnPool.m

% NOTE: Implement cnnPool in cnnPool.m first!
pooledFeatures = cnnPool(poolDim, convolvedFeatures);

%% STEP 2d: Checking your pooling
%  To ensure that you have implemented pooling, we will use your pooling
%  function to pool over a test matrix and check the results.

testMatrix = reshape(1:64, 8, 8);
expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
                  mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];

testMatrix = reshape(testMatrix, 1, 1, 8, 8);

pooledFeatures = squeeze(cnnPool(4, testMatrix));

if ~isequal(pooledFeatures, expectedMatrix)
    disp('Pooling incorrect');
    disp('Expected');
    disp(expectedMatrix);
    disp('Got');
    disp(pooledFeatures);
else
    disp('Congratulations! Your pooling code passed the test.');
end

%%======================================================================
%% STEP 3: Convolve and pool with the dataset
%  In this step, you will convolve each of the features you learned with
%  the full large images to obtain the convolved features. You will then
%  pool the convolved features to obtain the pooled features for
%  classification.
%
%  Because the convolved features matrix is very large, we will do the
%  convolution and pooling 50 features at a time to avoid running out of
%  memory. Reduce this number if necessary

stepSize = 50;
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
load stlTestSubset.mat  % loads numTestImages, testImages, testLabels

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );

tic();

for convPart = 1:(hiddenSize / stepSize)

    featureStart = (convPart - 1) * stepSize + 1;
    featureEnd = convPart * stepSize;

    fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);
    Wt = W(featureStart:featureEnd, :);
    bt = b(featureStart:featureEnd);

    fprintf('Convolving and pooling train images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        trainImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
    toc();
    clear convolvedFeaturesThis pooledFeaturesThis;

    fprintf('Convolving and pooling test images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        testImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
    toc();

    clear convolvedFeaturesThis pooledFeaturesThis;

end

% You might want to save the pooled features since convolution and pooling takes a long time
save('cnnPooledFeatures.mat', 'pooledFeaturesTrain', 'pooledFeaturesTest');
toc();

%%======================================================================
%% STEP 4: Use pooled features for classification
%  Now, you will use your pooled features to train a softmax classifier,
%  using softmaxTrain from the softmax exercise.
%  Training the softmax classifer for 1000 iterations should take less than
%  10 minutes.

% Add the path to your softmax solution, if necessary
% addpath /path/to/solution/

% Setup parameters for softmax
softmaxLambda = 1e-4;
numClasses = 4;
% Reshape the pooledFeatures to form an input vector for softmax
softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,...
    numTrainImages);
softmaxY = trainLabels;

options = struct;
options.maxIter = 200;
softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,...
    numClasses, softmaxLambda, softmaxX, softmaxY, options);

%%======================================================================
%% STEP 5: Test classifer
%  Now you will test your trained classifer against the test images

softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
softmaxY = testLabels;

[pred] = softmaxPredict(softmaxModel, softmaxX);
acc = (pred(:) == softmaxY(:));
acc = sum(acc) / size(acc, 1);
fprintf('Accuracy: %2.3f%%\n', acc * 100);

% You should expect to get an accuracy of around 80% on the test images.
cnnConvolve.m
function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  patchDim - patch (feature) dimension
%  numFeatures - number of features
%  images - large images to convolve with, matrix in the form
%           images(r, c, channel, image number)
%  W, b - W, b for features from the sparse autoencoder
%  ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
%                        preprocessing
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)

patchSize = patchDim*patchDim;
numImages = size(images, 4);
imageDim = size(images, 1);
imageChannels = size(images, 3);

convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);

% Instructions:
%   Convolve every feature with every large image here to produce the
%   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1)
%   matrix convolvedFeatures, such that
%   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
% Expected running times:
%   Convolving with 100 images should take less than 3 minutes
%   Convolving with 5000 images should take around an hour
%   (So to save time when testing, you should convolve with less images, as
%   described earlier)

% -------------------- YOUR CODE HERE --------------------
% Precompute the matrices that will be used during the convolution. Recall
% that you need to take into account the whitening and mean subtraction
% steps
WT = W*ZCAWhite;
bT = b-WT*meanPatch;
% --------------------------------------------------------

convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);
for imageNum = 1:numImages
  for featureNum = 1:numFeatures

    % convolution of image with feature matrix for each channel
    convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
    for channel = 1:3

      % Obtain the feature (patchDim x patchDim) needed during the convolution
      % ---- YOUR CODE HERE ----
      %feature = zeros(8,8); % You should replace this
      offset = (channel-1)*patchSize;
      feature = reshape(WT(featureNum,(offset+1):(offset+patchSize)),patchDim,patchDim);
      % ------------------------

      % Flip the feature matrix because of the definition of convolution, as explained later
      feature = flipud(fliplr(squeeze(feature)));

      % Obtain the image
      im = squeeze(images(:, :, channel, imageNum));

      % Convolve "feature" with "im", adding the result to convolvedImage
      % be sure to do a 'valid' convolution
      % ---- YOUR CODE HERE ----
      convolveThisChannel = conv2(im,feature,'valid');
      convolvedImage = convolvedImage + convolveThisChannel;   % sum over the three channels:
                                                               % all three channels are used together as the detection criterion
      % ------------------------

    end

    % Subtract the bias unit (correcting for the mean subtraction as well)
    % Then, apply the sigmoid function to get the hidden activation
    % ---- YOUR CODE HERE ----
    convolvedImage = sigmoid(convolvedImage + bT(featureNum));
    % ------------------------

    % The convolved feature is the sum of the convolved values for all channels
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

end
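Two details in cnnConvolve.m are worth spelling out. First, the preprocessing can be folded into the weights: the autoencoder activation for a patch x is sigmoid(W*ZCAWhite*(x - meanPatch) + b) = sigmoid(WT*x + bT) with WT = W*ZCAWhite and bT = b - WT*meanPatch, which is why those two quantities are precomputed once outside the loops. Second, conv2 implements true convolution (it flips the kernel), so the feature is flipped beforehand to turn the operation into the cross-correlation we actually want. A tiny sanity check of that second point on made-up data (the variable names here are just for illustration):

im      = rand(10, 10);            % stand-in image channel
feature = rand(3, 3);              % stand-in feature patch

% Cross-correlation at the top-left position, written out explicitly
direct = sum(sum(feature .* im(1:3, 1:3)));

% Same value via conv2 with the pre-flipped kernel and 'valid' borders
viaConv = conv2(im, flipud(fliplr(feature)), 'valid');

abs(direct - viaConv(1, 1))        % ~0 up to floating-point error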
cnnPool.m
function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(featureNum, imageNum, poolRow, poolCol)
%

numImages = size(convolvedFeatures, 2);
numFeatures = size(convolvedFeatures, 1);
convolvedDim = size(convolvedFeatures, 3);

pooledFeatures = zeros(numFeatures, numImages, floor(convolvedDim / poolDim), floor(convolvedDim / poolDim));

% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Now pool the convolved features in regions of poolDim x poolDim,
%   to obtain the
%   numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim)
%   matrix pooledFeatures, such that
%   pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the
%   value of the featureNum feature for the imageNum image pooled over the
%   corresponding (poolRow, poolCol) pooling region
%   (see http://ufldl/wiki/index.php/Pooling )
%
%   Use mean pooling here.
% -------------------- YOUR CODE HERE --------------------
numBlocks = floor(convolvedDim/poolDim);   % number of pooling blocks per dimension (57/19 = 3 here);
                                           % for data of other sizes, presumably poolDim has to be
                                           % chosen so that it divides convolvedDim exactly?
for featureNum = 1:numFeatures
    for imageNum = 1:numImages
        for poolRow = 1:numBlocks
            for poolCol = 1:numBlocks
                features = convolvedFeatures(featureNum, imageNum, (poolRow-1)*poolDim+1:poolRow*poolDim, (poolCol-1)*poolDim+1:poolCol*poolDim);
                pooledFeatures(featureNum, imageNum, poolRow, poolCol) = mean(features(:));
            end
        end
    end
end

end
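The loop version above is the straightforward one. When convolvedDim is an exact multiple of poolDim (as in this exercise, 57 = 3 * 19), the same mean pooling for a single feature map can also be written with one reshape, which can be handy for cross-checking the loops; this is just an alternative sketch, not part of the exercise code:

fm = rand(57, 57);                      % stand-in for one convolved feature map
poolDim = 19;
n = size(fm, 1) / poolDim;              % number of pooling blocks per dimension (3)

% Split each axis into (poolDim x n) and average over the two poolDim axes
pooled = squeeze(mean(mean(reshape(fm, poolDim, n, poolDim, n), 1), 3));
size(pooled)                            % 3 x 3, identical to the loop version above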
Result:
Accuracy: 78.938%
This is close to the roughly 80% mentioned in the notes.
PS: lecture notes:
http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
http://deeplearning.stanford.edu/wiki/index.php/Pooling
http://deeplearning.stanford.edu/wiki/index.php/Exercise:Convolution_and_Pooling