Least Squares Support Vector Machines and a MATLAB Implementation with a Pre-computed Kernel

1. Derivation of the Least Squares Support Vector Machine

Unlike the standard Support Vector Machine (SVM), the Least Squares Support Vector Machine (LSSVM) proposed by Suykens et al. makes three modifications on top of SVM:

  1. An error term e_{i} replaces the slack variable \xi_{i}
  2. Equality constraints replace the inequality constraints
  3. The quadratic loss \left(1-y_{i} f\left(\mathbf{x}_{i}\right)\right)^{2} replaces the hinge loss \left(1-y_{i} f\left(\mathbf{x}_{i}\right)\right)_{+}

This gives the LSSVM objective function, Eq. (1):

\begin{array}{cc}{\min _{\boldsymbol{w}, b, \boldsymbol{e}}} & {\frac{1}{2} \boldsymbol{w}^{T} \boldsymbol{w}+\frac{C}{2} \sum_{i=1}^{n} e_{i}^{2}} \\ { s.t. } & {y_{i}\left(\boldsymbol{w}^{T} \phi\left(\boldsymbol{x}_{i}\right)+b\right)=1-e_{i}, \quad i=1, \cdots, n}\end{array}

where \phi(\boldsymbol{x}_{i}) is the sample mapped into a Hilbert space; \boldsymbol{w} and b are the classifier parameters, the slope and offset of the separating hyperplane respectively; e_{i} is the error term; and C is the regularization parameter, which controls how heavily errors are penalized.

As with SVM, the method of Lagrange multipliers can be used to solve for the classifier parameters. Introducing Lagrange multipliers \alpha_{i} gives the Lagrangian, Eq. (2):

L(\boldsymbol{w}, b, \boldsymbol{e}, \boldsymbol{\alpha})=\frac{1}{2} \boldsymbol{w}^{T} \boldsymbol{w}+\frac{C}{2} \sum_{i=1}^{n} e_{i}^{2}-\sum_{i=1}^{n} \alpha_{i}\left(y_{i}\left(\boldsymbol{w}^{T} \phi\left(\boldsymbol{x}_{i}\right)+b\right)-1+e_{i}\right)

Setting the partial derivatives of L with respect to \boldsymbol{w}, b, and e_{i} to zero gives:

\begin{array}{l}{\frac{\partial L}{\partial \boldsymbol{w}}=0 \Rightarrow \boldsymbol{w}=\sum_{i=1}^{n} \alpha_{i} y_{i} \phi\left(\boldsymbol{x}_{i}\right)} \\ {\frac{\partial L}{\partial b}=0 \Rightarrow \sum_{i=1}^{n} \alpha_{i} y_{i}=0} \\ {\frac{\partial L}{\partial e_{i}}=0 \Rightarrow \alpha_{i}=C e_{i}, \quad i=1,2, \ldots, n}\end{array}

Unlike SVM, which requires solving a quadratic programming (QP) problem, LSSVM obtains its model parameters by solving a system of linear equations. The problem is convex and satisfies the Slater constraint qualification, so the optimum satisfies the Karush-Kuhn-Tucker (KKT) conditions. Substituting \boldsymbol{w}=\sum_{i} \alpha_{i} y_{i} \phi\left(\boldsymbol{x}_{i}\right) and e_{i}=\alpha_{i}/C back into the equality constraints, i.e. eliminating \boldsymbol{w} and e_{i} from the KKT conditions, yields the following linear system:

\left[\begin{array}{cc}{0} & {\boldsymbol{y'}} \\ {\boldsymbol{y}} & {\boldsymbol{K}+\frac{1}{C} \boldsymbol{I}}\end{array}\right]\left[\begin{array}{l}{b}\\ \boldsymbol{{\alpha}}\end{array}\right]=\left[\begin{array}{l}{0} \\ {\boldsymbol{1}}\end{array}\right]

where \boldsymbol{K} is the element-wise product of the kernel matrix and the label outer product, K_{ij}=y_{i} y_{j} k\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right); \boldsymbol{I} is the identity matrix; and \boldsymbol{1} is the all-ones vector. Solving this linear system gives the Lagrange multipliers \boldsymbol{\alpha} and the offset b. As with SVM, the decision function for an unseen sample \boldsymbol{x} is:

\operatorname{sgn}(f(\boldsymbol{x}))=\operatorname{sgn}\left(\sum_{i=1}^{n} \alpha_{i} y_{i} k\left(\boldsymbol{x}_{i}, \boldsymbol{x}\right)+b\right)
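
To make the linear system concrete, here is a minimal MATLAB sketch that builds and solves it for a toy problem with a linear kernel; the data and variable names are illustrative only and are not part of the implementation below:

X = [1 1; 2 2; -1 -1; -2 -2];    % 4 training samples (rows)
y = [1; 1; -1; -1];              % labels in {-1, +1}
C = 1;                           % regularization parameter

K     = X * X';                  % linear kernel matrix
Omega = (y * y') .* K;           % K_ij = y_i * y_j * k(x_i, x_j)

A   = [0, y'; y, Omega + eye(4) / C];
rhs = [0; ones(4, 1)];
sol = A \ rhs;                   % solve for [b; alpha]

b     = sol(1);
alpha = sol(2:end);

% decision value for a new sample x
x = [1.5, 1.0];
f = sum(alpha .* y .* (X * x')) + b;
predicted = sign(f);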

2. MATLAB Implementation with a Pre-computed Kernel

Since only a linear system needs to be solved, implementing LSSVM is very simple. A pre-computed kernel is simply a user-defined kernel: instead of being restricted to classic kernels such as the linear or RBF kernel, we define our own function to generate the kernel matrix.

The pre-computed kernel design is straightforward: the user-generated kernel matrix is passed into the training and prediction functions, so the kernel-matrix generation is factored out into its own function:

createKernelMatrix.m

function outputKernel = createKernelMatrix(data1, data2, param)
% input: 
% 		data1: train data matrix
% 		data2: train or test data matrix
% 		param: a structure containing all kernel parameters
%			   .kernelType: kernel function type ('lin' or 'rbf')
%			   .gap       : width of the RBF kernel
% output:
%		outputKernel: kernel matrix computed from data1 and data2
%

    % kernel parameters extraction
    type = param.kernelType;
    gap = param.gap;
    
    % kernel creation
    switch type
        case 'lin'
            outputKernel = data1 * data2';

        case 'rbf'
            gap = 2 * gap * gap;
            
            n1sq = sum(data1.^2,2);
            n1 = size(data1,1);
            n2sq = sum(data2.^2,2);
            n2 = size(data2,1);
            
            D = (ones(n2,1)*n1sq')' + ones(n1,1)*n2sq' -2*data1*data2';
            
            outputKernel = exp(-D / gap);
    end
end

Only these two classic kernels are defined here; to use another kernel, just add your own case, as in the sketch below.
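
For example, a polynomial kernel could be added as one more case in the switch; the .degree and .offset fields are illustrative additions to param, not part of the original structure:

        case 'poly'
            % polynomial kernel: k(x, z) = (x * z' + offset)^degree
            % assumes param.offset and param.degree have been set by the caller
            outputKernel = (data1 * data2' + param.offset) .^ param.degree;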

The training function:

lssvmtrain.m

function model = lssvmtrain(kernel, label, param)
% input: 
% kernel: kernel matrix generated from the training data, e.g. createKernelMatrix(trainData, trainData, param)
% label : training labels
% param : a structure containing all model parameters
% output: 
% model: trained model containing all information needed for prediction

    % model parameters extraction
    cost = param.cost;

    % TrainKernelMatrix creation
    nb_data = length(label);

    TrainKernelMatrix = (label*label') .* kernel;
    UnitMatrix = eye(nb_data,nb_data);
    TrainKernelMatrix = TrainKernelMatrix + UnitMatrix ./ cost;
    
    % construct the linear system
    rightPart = [0; ones(nb_data, 1)];
    leftPart = [0, label'; label, TrainKernelMatrix];

    % solve the linear system
    results = linsolve(leftPart, rightPart);

    bias = results(1);         % b
    coef = results(2 : end);   % alpha
    yalphas = label .* coef;   % y .* alphas

    % save to model
    model.coef = coef;
    model.bias = bias;
    model.oriKernel = kernel;
    model.trainLabel = label;
    model.yalphas = yalphas;
end

The prediction function:

lssvmpredict.m

function [predictLabel, accuracy, decisionValue] = lssvmpredict(model, kernel, testLabel)
% input:
% model: trained model
% kernel: kernel matrix generated from trainData and testData, e.g. createKernelMatrix(trainData, testData, param)
% testLabel: labels of the test data
% output:
% predictLabel: vector of predicted labels
% accuracy: scalar
% decisionValue: decision values

    nb_test = length(testLabel);

    testKernel = kernel;                            % nbTrain x nbTest
    yalphasMat = model.yalphas * ones(1, nb_test);  % replicate y .* alpha for every test column

    % f(x) = sum_i alpha_i * y_i * k(x_i, x) + b, computed column-wise
    decisionValue = sum(testKernel .* yalphasMat) + model.bias;
    predictLabel = sign(decisionValue);

    accuracy = sum(predictLabel' == testLabel) / nb_test;
end

3. Accuracy Comparison with SVM on the Classic ionosphere Dataset

Below, the LSSVM implemented above is compared against SVM (LibSVM) on the classic ionosphere dataset.

test.m

clear;clc;

% ======================================================== %
% ======== prepare train and test data ======= %
% ======================================================== %

data = load('ionosphere.mat');
label = data.y;
data = data.x;

nbData = length(label);
trainRatio = 0.7;
nbTrain = fix(nbData * trainRatio);

% ===== select the first trainRatio fraction as train data ===== %
% trainIndice = 1: nbTrain;
% testIndice = nbTrain + 1 : nbData;

% ===== select train data randomly ===== %
randIndice = randperm(nbData);
trainIndice = randIndice(1: nbTrain);
testIndice  = randIndice(nbTrain + 1 : nbData);

trainData = data(trainIndice, :);
trainLabel = label(trainIndice);
testData = data(testIndice, :);
testLabel = label(testIndice);

% ======================================================== %
% ======== compare the two models ======= %
% ======================================================== %

% == LibSVM
model = libsvmtrain(trainLabel, trainData, '-t 2 -c 1 -g 1');
[libsvm_label, libsvm_acc, libsvm_decv] = libsvmpredict(testLabel, testData, model);


% == LSSVM
kernelParam.kernelType = 'rbf';
kernelParam.gap  = 1;
modelParam.cost = 1;

trainK = createKernelMatrix(trainData, trainData, kernelParam);
testK  = createKernelMatrix(trainData, testData, kernelParam);

lssvm_model = lssvmtrain(trainK, trainLabel, modelParam);
[lssvm_label, lssvm_acc, lsscm_decv] = lssvmpredict(lssvm_model, testK, testLabel);


fprintf('\n\n\nTest accuracy of SVM is: %f,\nTest accuracy of LSSVM is: %f \n', libsvm_acc(1)/100, lssvm_acc);

Results from several runs with randomly selected train/test splits and a few kernel and parameter settings:

C:1, RBF, gap:1

Test accuracy of SVM is:   0.924528,
Test accuracy of LSSVM is: 0.896226 

Test accuracy of SVM is:   0.943396,
Test accuracy of LSSVM is: 0.952830 

Test accuracy of SVM is:   0.971698,
Test accuracy of LSSVM is: 0.971698 

Test accuracy of SVM is:   0.933962,
Test accuracy of LSSVM is: 0.952830 

C:1, RBF, gap:100

Test accuracy of SVM is:   0.688679,
Test accuracy of LSSVM is: 0.688679 

Test accuracy of SVM is:   0.613208,
Test accuracy of LSSVM is: 0.603774 

C:1, linear

Test accuracy of SVM is:   0.877358,
Test accuracy of LSSVM is: 0.839623 

Test accuracy of SVM is:   0.886792,
Test accuracy of LSSVM is: 0.877358 


Test accuracy of SVM is:   0.877358,
Test accuracy of LSSVM is: 0.905660 

C:100, linear

Test accuracy of SVM is:   0.915094,
Test accuracy of LSSVM is: 0.905660 

Test accuracy of SVM is:   0.858491,
Test accuracy of LSSVM is: 0.820755 

Test accuracy of SVM is:   0.905660,
Test accuracy of LSSVM is: 0.849057 

References

Suykens J A K. Least squares support vector machines[J]. International Journal of Circuit Theory & Applications, 2002, 27(6): 605-615.
