Deep Learning Study Notes (4): Self-Taught Learning and Unsupervised Feature Learning

Continuing with the lecture notes: the next chapter is Self-Taught Learning and Unsupervised Feature Learning.

Meaning:

The name is fairly self-explanatory. Self-taught learning here means extracting features with an unsupervised method and then classifying with a supervised method, for example a sparse autoencoder followed by softmax regression.
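As a minimal sketch (condensed from the exercise code later in this post), the whole pipeline looks like this in MATLAB. Note that trainSparseAutoencoder is a hypothetical wrapper around the minFunc call shown in stlExercise.m below; the other functions are the ones used in the exercise:

% Self-taught learning pipeline, condensed:
% 1) learn features from unlabeled data (unsupervised),
% 2) encode the labeled data with the learned features,
% 3) train and evaluate a supervised softmax classifier on those features.
opttheta      = trainSparseAutoencoder(unlabeledData);   % hypothetical wrapper
trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, trainData);
softmaxModel  = softmaxTrain(hiddenSize, numLabels, 1e-4, trainFeatures, trainLabels, options);
testFeatures  = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, testData);
pred          = softmaxPredict(softmaxModel, testFeatures);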

Unsupervised feature learning comes in two flavors: self-taught learning and semi-supervised learning. Rather than their formal definitions, the simple example from the lecture notes explains the difference best:

Suppose we have a computer vision task: distinguishing images of cars from images of motorcycles; that is, each training example is either a car image or a motorcycle image. Where can we get a large amount of unlabeled data? The easiest way is probably to download a random image dataset from the internet, train a sparse autoencoder on this data, and obtain useful features from it. In this case, the unlabeled data comes from a completely different distribution than the labeled data (in the unlabeled set, some images may happen to contain cars or motorcycles, but not every image does). This setting is called self-taught learning.

In contrast, suppose we have a large amount of unlabeled image data in which every image is either a car or a motorcycle, and only the labels are missing (no one annotated whether each picture is a car or a motorcycle). We can also learn features from this unlabeled data. This setting, which requires the unlabeled examples to follow the same distribution as the labeled examples, is sometimes called semi-supervised learning. In practice, unlabeled data meeting this requirement is often unavailable (where would you find an image database in which every image is either a car or a motorcycle, just with the labels lost?), so self-taught learning is widely used for learning features from unlabeled datasets.
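The exercise below simulates the self-taught setting on MNIST: digits 0-4 form the labeled task, while digits 5-9, which are different classes and hence follow a different distribution, play the role of the unlabeled data:

% Split MNIST into a labeled task (digits 0-4) and an "unlabeled" pool
% (digits 5-9) whose distribution differs from the task's.
labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4);
unlabeledSet = find(mnistLabels >= 5);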

Exercise:

Below is the exercise from the lecture notes. The task is again recognizing the MNIST handwritten digits; the main procedure is to extract features with a sparse autoencoder and then classify with softmax regression.

I first ran it on a 32-bit machine and ran out of memory; it only worked after I switched to a 64-bit machine. The main code is as follows:
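If you hit the same out-of-memory error, one workaround (my own suggestion, not part of the official exercise) is to subsample the unlabeled set before Step 2 trains the autoencoder:

% Optional low-memory workaround (not in the official exercise):
% keep only a random subset of the unlabeled data. numSub is a
% hypothetical knob; shrink it until the run fits in memory.
numSub        = 10000;
idx           = randperm(size(unlabeledData, 2));
unlabeledData = unlabeledData(:, idx(1:numSub));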

stlExercise.m:

%% CS294A/CS294W Self-taught Learning Exercise

%  Instructions
%  ------------
% 
%  This file contains code that helps you get started on the
%  self-taught learning. You will need to complete code in feedForwardAutoencoder.m
%  You will also need to have implemented sparseAutoencoderCost.m and 
%  softmaxCost.m from previous exercises.
%
%% ======================================================================
%  STEP 0: Here we provide the relevant parameters values that will
%  allow your sparse autoencoder to get good filters; you do not need to 
%  change the parameters below.

inputSize  = 28 * 28;
numLabels  = 5;
hiddenSize = 200;
sparsityParam = 0.1; % desired average activation of the hidden units.
                     % (This was denoted by the Greek alphabet rho, which looks like a lower-case "p",
                     %  in the lecture notes). 
lambda = 3e-3;       % weight decay parameter       
beta = 3;            % weight of sparsity penalty term   
maxIter = 400;
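% For reference (not required by the exercise): sparsityParam (rho) and
% beta enter the autoencoder cost via a KL-divergence penalty on rhoHat,
% the mean activation of each hidden unit over the training set:
%
%   penalty = beta * sum( rho .* log(rho ./ rhoHat) ...
%                       + (1 - rho) .* log((1 - rho) ./ (1 - rhoHat)) );
%
% sparseAutoencoderCost.m adds this to the reconstruction and weight-decay
% terms, pushing each hidden unit's average activation toward rho.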

%% ======================================================================
%  STEP 1: Load data from the MNIST database
%
%  This loads our training and test data from the MNIST database files.
%  We have sorted the data for you so that you will not have to change it.

% Load MNIST database files
mnistData   = loadMNISTImages('mnist/train-images-idx3-ubyte');
mnistLabels = loadMNISTLabels('mnist/train-labels-idx1-ubyte');

% Simulate a labeled and an unlabeled set:
% digits 0-4 are treated as labeled, digits 5-9 as unlabeled.
labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4);
unlabeledSet = find(mnistLabels >= 5);

numTrain = round(numel(labeledSet)/2);
trainSet = labeledSet(1:numTrain);
testSet  = labeledSet(numTrain+1:end);

unlabeledData = mnistData(:, unlabeledSet);

trainData   = mnistData(:, trainSet);
trainLabels = mnistLabels(trainSet)' + 1; % Shift Labels to the Range 1-5

testData   = mnistData(:, testSet);
testLabels = mnistLabels(testSet)' + 1;   % Shift Labels to the Range 1-5

% Output Some Statistics
fprintf('# examples in unlabeled set: %d\n', size(unlabeledData, 2));
fprintf('# examples in supervised training set: %d\n\n', size(trainData, 2));
fprintf('# examples in supervised testing set: %d\n\n', size(testData, 2));

%% ======================================================================
%  STEP 2: Train the sparse autoencoder
%  This trains the sparse autoencoder on the unlabeled training
%  images. 

%  Randomly initialize the parameters
theta = initializeParameters(hiddenSize, inputSize);

%% ----------------- YOUR CODE HERE ----------------------
%  Find opttheta by running the sparse autoencoder on the unlabeled
%  training images (unlabeledData)

opttheta = theta; 
%  Use minFunc to minimize the function
addpath minFunc/
options.Method = 'lbfgs'; % Here, we use L-BFGS to optimize our cost
                          % function. Generally, for minFunc to work, you
                          % need a function pointer with two outputs: the
                          % function value and the gradient. In our problem,
                          % sparseAutoencoderCost.m satisfies this.
options.maxIter = 400;      % Maximum number of iterations of L-BFGS to run 
options.display = 'on';


[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                    inputSize, hiddenSize, ...
                                    lambda, sparsityParam, ...
                                    beta, unlabeledData), ...
                                theta, options);

%% -----------------------------------------------------
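% Note: theta/opttheta is the unrolled parameter vector in the order
% [W1(:); W2(:); b1; b2], which is why W1 occupies the first
% hiddenSize*inputSize entries below (and why feedForwardAutoencoder.m
% reads b1 starting at index 2*hiddenSize*visibleSize + 1).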
                          
% Visualize weights
W1 = reshape(opttheta(1:hiddenSize * inputSize), hiddenSize, inputSize);
display_network(W1');

%======================================================================
%% STEP 3: Extract Features from the Supervised Dataset
%  
%  You need to complete the code in feedForwardAutoencoder.m so that the 
%  following command will extract features from the data.

trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       trainData);

testFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       testData);
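% Each column of trainFeatures/testFeatures is now a hiddenSize-dimensional
% (200-d) encoding of one image; these replace the raw 28*28 pixels as the
% classifier's input.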

%======================================================================
%% STEP 4: Train the softmax classifier

softmaxModel = struct;  
%% ----------------- YOUR CODE HERE ----------------------
%  Use softmaxTrain.m from the previous exercise to train a multi-class
%  classifier. 

%  Use lambda = 1e-4 for the weight regularization for softmax

% You need to compute softmaxModel using softmaxTrain on trainFeatures and
% trainLabels
options.maxIter = 100;
softmax_lambda = 1e-4;
inputSize = hiddenSize;       % the features are 200-dimensional, unlike the 784-dimensional raw data
softmaxModel = softmaxTrain(inputSize, numLabels, softmax_lambda, ...
                            trainFeatures, trainLabels, options);


%% -----------------------------------------------------


%%======================================================================
%% STEP 5: Testing 

%% ----------------- YOUR CODE HERE ----------------------
% Compute Predictions on the test set (testFeatures) using softmaxPredict
% and softmaxModel
[pred] = softmaxPredict(softmaxModel, testFeatures);
acc = mean(testLabels(:) == pred(:));
fprintf('Test Accuracy: %f%%\n', acc * 100);

%% -----------------------------------------------------

% (note that we shift the labels by 1, so that digit 0 now corresponds to
%  label 1)
%
% Accuracy is the proportion of correctly classified images.
% The result for the reference implementation was:
%
% Accuracy: 98.3%

 

feedForwardAutoencoder.m:

function [activation] = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)

% theta: trained weights from the autoencoder
% visibleSize: the number of input units (28 * 28 = 784 in this exercise)
% hiddenSize: the number of hidden units (200 in this exercise)
% data: Our matrix containing the training data as columns.  So, data(:,i) is the i-th training example. 
  
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this 
% follows the notation convention of the lecture notes. 

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the activation of the hidden layer for the Sparse Autoencoder.
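% Hidden-layer activation, computed column-wise over all examples:
%   a = sigmoid(W1 * x + b1)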
activation = W1*data+repmat(b1,[1,size(data,2)]);
activation = sigmoid(activation);

%-------------------------------------------------------------------

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients.  This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)). 

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

The experimental results:

Final accuracy: the lecture notes and the code comments report an accuracy of about 98.3%, and my result was essentially the same.
