This post introduces a face recognition algorithm based on deep learning and sparse representation. The algorithm first extracts face features with a deep learning framework (VGG-Face); next, it reduces the dimensionality of those features with PCA; finally, it matches features with a sparse representation classifier. Recognition performance on the AR database is evaluated with a CMC curve, and code for the whole pipeline is provided at the end.
The experiments use the AR face database. Next, we extract face features with VGG-Face.
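As a rough illustration of the per-image preprocessing that the MATLAB code below performs (replicate the grayscale image into three channels, then subtract the model's mean image), here is a NumPy sketch. The 224x224 input size and the all-zero mean image are placeholder assumptions for this example, not values read from vgg-face.mat, and the imresize step is omitted:

```python
import numpy as np

def preprocess(gray_img, average_image):
    """Replicate a grayscale face into 3 channels and subtract the mean image.

    gray_img      : (H, W) float array, values in [0, 255]
    average_image : (H, W, 3) mean image from the pretrained model
    """
    # stack the single channel three times, as the MATLAB loop over k = 1:3 does
    im3 = np.repeat(gray_img[:, :, None], 3, axis=2)
    # subtract the per-pixel mean (bsxfun(@minus, ...) in MATLAB)
    return im3 - average_image

# toy example: uniform gray image, zero mean image (placeholders)
img = np.full((224, 224), 128.0)
avg = np.zeros((224, 224, 3))
out = preprocess(img, avg)
print(out.shape)   # (224, 224, 3)
```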
The features extracted by VGG-Face are 4096-dimensional; we then use PCA to reduce them to 128 dimensions.
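A minimal NumPy sketch of this PCA step (the MATLAB code below uses fastPCA plus a scaling helper; this version keeps only the core centering-and-projection idea):

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X (n_samples x n_features) onto the top-k principal axes."""
    mean = X.mean(axis=0)
    Xc = X - mean                      # center the data
    # SVD of the centered data: right singular vectors are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                       # (n_features, k) projection matrix
    return Xc @ W, W, mean

rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 4096))   # e.g. 200 gallery features, 4096-D
reduced, W, mean = pca_reduce(feats, 128)
print(reduced.shape)   # (200, 128)
```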
The database contains 100 subjects, each with 26 images.
Finally, we use a sparse representation classifier to identify each probe face.
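The idea behind sparse representation classification: code the probe feature as a sparse linear combination of the gallery dictionary, then assign the class whose atoms reconstruct it with the smallest residual. Below is a small Python sketch that uses ISTA to solve the L1 problem; the MATLAB code in this post calls an external sparse_coding routine instead, so treat this only as an illustration of the principle:

```python
import numpy as np

def ista(D, y, beta, n_iter=200):
    """Solve min_x 0.5*||D x - y||^2 + beta*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - beta / L, 0.0)  # soft threshold
    return x

def src_classify(D, labels, y, beta=0.1):
    """Return the label whose gallery atoms best reconstruct y."""
    x = ista(D, y, beta)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)        # keep only class-c coefficients
        residuals[c] = np.linalg.norm(D @ xc - y)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 6))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms, as in the post
labels = [0, 0, 0, 1, 1, 1]
y = D[:, 4]                                # probe identical to a class-1 atom
print(src_classify(D, labels, y))          # 1
```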
function cnn_vgg_faces()
%CNN_VGG_FACES Demonstrates how to use VGG-Face
clear all
clc
addpath PCA
run(fullfile(fileparts(mfilename('fullpath')),...
'..', 'matlab', 'vl_setupnn.m')) ;
net = load('data/models/vgg-face.mat') ;
list = dir('../data/AR');
C = 100;
img_list = list(3:end);
index = [1, 10];
%% Build the VGG-Face gallery dictionary
dictionary = [];
for i = 1:C
    disp(i)
    numEachGalImg(i) = 0;
    for j = 1:2
        im = imread(strcat('../data/AR/', img_list((i-1)*26+index(j)).name));
        im_ = single(im);  % note: 255 range
        im_ = imresize(im_, net.meta.normalization.imageSize(1:2));
        for k = 1:3   % replicate the grayscale image into 3 channels
            im1_(:,:,k) = im_;
        end
        im2_ = bsxfun(@minus, im1_, net.meta.normalization.averageImage);
        res = vl_simplenn(net, im2_);
        feature_p(:,j) = res(36).x(:);   % 4096-D deep feature
    end
    numEachGalImg(i) = numEachGalImg(i) + size(feature_p, 2);
    dictionary = [dictionary feature_p];
end
%% PCA dimensionality reduction
FaceContainer = double(dictionary');
[pcaFaces, W, meanVec] = fastPCA(FaceContainer, 128);   % 4096-D -> 128-D
X = pcaFaces;
[X, A0, B0] = scaling(X);
LFWparameter.mean = meanVec;
LFWparameter.A = A0;
LFWparameter.B = B0;
LFWparameter.V = W;
imfo = LFWparameter;
train_fea = (double(FaceContainer) - repmat(imfo.mean, size(FaceContainer, 1), 1)) * imfo.V;
dictionary = scaling(train_fea, 1, imfo.A, imfo.B);
for i = 1:size(dictionary, 1)
    dictionary(i,:) = dictionary(i,:) / norm(dictionary(i,:));   % unit-norm rows
end
dictionary = double(dictionary);
totalGalKeys = sum(numEachGalImg);
cumNumEachGalImg = [0; cumsum(numEachGalImg')];

%% Feature matching with sparse coding
% sparse coding parameters
if ~exist('opt_choice', 'var')
opt_choice = 1;
end
num_bases = 128;
beta = 0.4;
batch_size = size(dictionary, 1);
num_iters = 5;
if opt_choice==1
sparsity_func= 'L1';
epsilon = [];
elseif opt_choice==2
sparsity_func= 'epsL1';
epsilon = 0.01;
end
Binit = [];
fname_save = sprintf('../results/sc_%s_b%d_beta%g_%s', sparsity_func, num_bases, beta, datestr(now, 30));
AtA = dictionary * dictionary';
for i = 1:C
    fprintf('%s \n', num2str(i));
    tic
    % probe image: the 26th image of subject i
    im = imread(strcat('../data/AR/', img_list((i-1)*26+26).name));
    im_ = single(im);  % note: 255 range
    im_ = imresize(im_, net.meta.normalization.imageSize(1:2));
    for k = 1:3
        im1_(:,:,k) = im_;
    end
    im2_ = bsxfun(@minus, im1_, net.meta.normalization.averageImage);
    res = vl_simplenn(net, im2_);
    feature_p = res(36).x(:);
    feature_p = (double(feature_p)' - imfo.mean) * imfo.V;
    feature_p = scaling(feature_p, 1, imfo.A, imfo.B);
    feature_p = feature_p / norm(feature_p, 2);
    [B, S, stat] = sparse_coding(AtA, 0, dictionary', double(feature_p'), num_bases, beta, ...
        sparsity_func, epsilon, num_iters, batch_size, fname_save, Binit);
    for m = 1:length(numEachGalImg)
        AA = S(cumNumEachGalImg(m)+1:cumNumEachGalImg(m+1), :);
        X1 = dictionary(cumNumEachGalImg(m)+1:cumNumEachGalImg(m+1), :);
        recovery = X1' * AA;
        YY(m) = mean(sum((recovery' - double(feature_p)).^2));   % class-m reconstruction residual
    end
    score(:,i) = YY;
    toc
end
accuracy = calrank(score, 1:1, 'ascend');
fprintf('rank-1: %.0f%%\n', accuracy*100);
In this post, calrank computes the CMC curve; see http://blog.csdn.net/hlx371240/article/details/53482752 for its implementation.
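For reference, the rank-k computation behind a CMC curve can be sketched in Python as follows. For each probe, sort the per-class distances ascending and record the rank at which the true class appears; the rank-k rate is the fraction of probes whose true class falls within the top k. This is only an illustration of the idea, not the calrank implementation:

```python
import numpy as np

def cmc(score, true_ids):
    """score: (n_classes, n_probes) distance matrix, smaller = better match.
    true_ids: true class index of each probe.
    Returns the CMC curve: curve[k-1] = rank-k identification rate."""
    n_classes, n_probes = score.shape
    ranks = np.zeros(n_probes, dtype=int)
    for j in range(n_probes):
        order = np.argsort(score[:, j])                  # ascending: best match first
        ranks[j] = np.where(order == true_ids[j])[0][0]  # 0-based rank of true class
    return np.array([(ranks < k).mean() for k in range(1, n_classes + 1)])

score = np.array([[0.1, 0.5],
                  [0.4, 0.2],
                  [0.9, 0.3]])   # 3 classes, 2 probes
curve = cmc(score, [0, 2])      # probe 0 -> class 0, probe 1 -> class 2
print(curve[0])                 # 0.5 (rank-1 rate)
```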
The final rank-1 rate is 82%.
The full code is available as a download. Because vgg-face.mat is too large to include, download it yourself from the VGG website and place it in ../matconvnet-1.0-beta19\examples\data\models.