Unlike the Support Vector Machine (SVM), the Least Squares Support Vector Machine (LSSVM) proposed by Suykens et al. makes three changes on top of the SVM: the hinge loss is replaced by a squared error term, the inequality constraints become equality constraints, and, as a consequence, training reduces from a quadratic program to a system of linear equations.
This yields the LSSVM objective, Eq. (1):

$$\min_{w,b,e}\; J(w,b,e)=\frac{1}{2}\lVert w\rVert^2+\frac{\gamma}{2}\sum_{i=1}^{N}e_i^2 \quad\text{s.t.}\quad y_i\left(w^{\top}\varphi(x_i)+b\right)=1-e_i,\; i=1,\dots,N \tag{1}$$
where $\varphi(x_i)$ is the sample mapped into a Hilbert space; $w$ and $b$ are the classifier parameters, namely the slope and offset of the separating hyperplane; $e_i$ is the error term; and $\gamma$ is the regularization parameter that controls how strongly the errors are penalized.
As with the SVM, the two classifier parameters can be found with the method of Lagrange multipliers. Introducing multipliers $\alpha_i$ gives the Lagrangian, Eq. (2):

$$L(w,b,e;\alpha)=J(w,b,e)-\sum_{i=1}^{N}\alpha_i\left[y_i\left(w^{\top}\varphi(x_i)+b\right)-1+e_i\right] \tag{2}$$
Setting the partial derivatives of Eq. (2) with respect to $w$, $b$, and $e_i$ to zero gives:

$$w=\sum_{i=1}^{N}\alpha_i y_i\varphi(x_i),\qquad \sum_{i=1}^{N}\alpha_i y_i=0,\qquad \alpha_i=\gamma e_i,$$

together with the equality constraints $y_i\left(w^{\top}\varphi(x_i)+b\right)=1-e_i$.
Unlike the SVM, which must solve a quadratic programming (QP) problem, the LSSVM model parameters are obtained by solving a system of linear equations. This convex problem satisfies Slater's constraint qualification, so the optimum satisfies the Karush-Kuhn-Tucker (KKT) conditions; eliminating the variables $w$ and $e$ from the KKT conditions gives the following system:

$$\begin{bmatrix}0 & y^{\top}\\ y & \Omega+I/\gamma\end{bmatrix}\begin{bmatrix}b\\ \alpha\end{bmatrix}=\begin{bmatrix}0\\ \vec{1}\end{bmatrix}$$
where $\Omega=(yy^{\top})\odot K$ is the elementwise product of the kernel matrix and the label outer product, $I$ is the identity matrix, and $\vec{1}$ is the all-ones vector. Solving this linear system yields the Lagrange multipliers $\alpha$ and the offset $b$. As with the SVM, the decision function for an unseen sample is:

$$f(x)=\operatorname{sign}\left(\sum_{i=1}^{N}\alpha_i y_i K(x_i,x)+b\right)$$
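To make the system concrete, here is a minimal sketch (toy data; all values are illustrative, not from the original post) that assembles and solves it directly in MATLAB for four points with a linear kernel:

% toy example: solve the LSSVM linear system by hand
X = [0 0; 0 1; 2 2; 3 2];               % 4 samples, 2 features
y = [-1; -1; 1; 1];                     % labels
gamma = 1;                              % regularization parameter
K = X * X';                             % linear kernel matrix
Omega = (y * y') .* K;                  % Omega_ij = y_i * y_j * K(x_i, x_j)
A = [0, y'; y, Omega + eye(4) / gamma]; % left-hand side of the KKT system
rhs = [0; ones(4, 1)];                  % right-hand side [0; 1]
sol = linsolve(A, rhs);
b = sol(1);                             % offset b
alpha = sol(2:end);                     % Lagrange multipliers
disp(sum(alpha .* y));                  % ~0, matching the condition sum_i alpha_i y_i = 0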
Because only a linear system needs to be solved, implementing LSSVM is very simple. A "pre-computed kernel" is just a user-defined kernel: rather than relying on the classic built-in kernels such as the linear or RBF kernel, we define our own kernel function and generate the kernel matrix ourselves. Training uses the n_train x n_train matrix K(trainData, trainData), while prediction uses the n_train x n_test matrix K(trainData, testData).
The pre-computed-kernel LSSVM implementation is straightforward: the custom kernel matrix is passed directly into the training and prediction functions, so the kernel-matrix construction is factored out into its own function:
createKernelMatrix.m
function outputKernel = createKernelMatrix(data1, data2, param)
% input:
%   data1: train data matrix
%   data2: train or test data matrix
%   param: a structure containing all the kernel parameters
%       .kernelType: kernel function type
%       .gap : width of the RBF kernel
% output:
%   outputKernel: kernel matrix computed from data1 and data2
%
% kernel parameters extraction
type = param.kernelType;
gap = param.gap;
% kernel creation
switch type
    case 'lin'   % linear kernel: K = X1 * X2'
        outputKernel = data1 * data2';
    case 'rbf'   % RBF kernel: K(i,j) = exp(-||x1_i - x2_j||^2 / (2 * gap^2))
        gap = 2 * gap * gap;
        n1sq = sum(data1.^2, 2);
        n1 = size(data1, 1);
        n2sq = sum(data2.^2, 2);
        n2 = size(data2, 1);
        % pairwise squared Euclidean distances between rows of data1 and data2
        D = (ones(n2,1)*n1sq')' + ones(n1,1)*n2sq' - 2*data1*data2';
        outputKernel = exp(-D / gap);
end
end
Only two classic kernel functions are defined here; to use a different kernel, simply write it yourself, as sketched below.
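For example, a polynomial kernel could be wired in as one more case of the switch. The helper below is a hedged sketch (the name polyKernel and the degree argument are my own, not part of the original code):

function outputKernel = polyKernel(data1, data2, degree)
% hypothetical polynomial kernel: K(i,j) = (x1_i' * x2_j + 1)^degree;
% to use it, add a 'poly' case to createKernelMatrix that calls this helper
outputKernel = (data1 * data2' + 1) .^ degree;
end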
Training function implementation:
lssvmtrain.m
function model = lssvmtrain(kernel, label, param)
% input:
%   kernel: kernel matrix generated from the train data, eg: createKernelMatrix(trainData, trainData, param)
%   label : train label
%   param : a structure containing all the model parameters
% output:
%   model: trained model containing all information
% model parameters extraction
cost = param.cost;
% TrainKernelMatrix creation: Omega + I / gamma
nb_data = length(label);
TrainKernelMatrix = (label*label') .* kernel;
UnitMatrix = eye(nb_data, nb_data);
TrainKernelMatrix = TrainKernelMatrix + UnitMatrix ./ cost;
% construct the system [0, y'; y, Omega + I/gamma] * [b; alpha] = [0; 1]
rightPart = [0; ones(nb_data, 1)];
leftPart = [0, label'; label, TrainKernelMatrix];
% solve the linear system
results = linsolve(leftPart, rightPart);
bias = results(1);          % b
coef = results(2 : end);    % alpha
yalphas = label .* coef;    % y .* alphas
% save to model
model.coef = coef;
model.bias = bias;
model.oriKernel = kernel;
model.trainLabel = label;
model.yalphas = yalphas;
end
Prediction function implementation:
lssvmpredict.m
function [predictLabel, accuracy, decisionValue] = lssvmpredict(model, kernel, testLabel)
% input:
%   model: trained model
%   kernel: kernel matrix generated from trainData and testData, eg: createKernelMatrix(trainData, testData, param)
%   testLabel: label of test data
% output:
%   predictLabel: vector of predicted labels
%   accuracy: scalar test accuracy
%   decisionValue: decision values
nb_test = length(testLabel);
testKernel = kernel;                            % size: nb_train x nb_test
yalphasMat = model.yalphas * ones(1, nb_test);  % replicate y .* alpha for each test column
% f(x) = sum_i alpha_i * y_i * K(x_i, x) + b, evaluated column-wise
decisionValue = sum(testKernel .* yalphasMat) + model.bias;
predictLabel = sign(decisionValue);
accuracy = sum(predictLabel' == testLabel) / nb_test;
end
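As a quick sanity check of the two functions, here is a minimal end-to-end sketch (toy data; all parameter values are illustrative): training and then predicting back on the training kernel itself should give high accuracy.

% toy end-to-end check of createKernelMatrix / lssvmtrain / lssvmpredict
X = randn(30, 2);
y = sign(X(:, 1) + 0.1 * randn(30, 1));    % nearly linearly separable labels
kernelParam.kernelType = 'rbf';
kernelParam.gap = 1;
modelParam.cost = 1;
trainK = createKernelMatrix(X, X, kernelParam);
model = lssvmtrain(trainK, y, modelParam);
[pred, acc] = lssvmpredict(model, trainK, y);   % reuse the train kernel as the "test" kernel
fprintf('training accuracy: %f\n', acc);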
Accuracy comparison with SVM on the ionosphere dataset
Below, the LSSVM just written is compared with SVM for accuracy on the classic ionosphere dataset.
test.m
clear;clc;
% ======================================================== %
% ========      prepare train and test data      ======= %
% ======================================================== %
data = load('ionosphere.mat');
label = data.y;
data = data.x;
nbData = length(label);
trainRatio = 0.7;
nbTrain = fix(nbData * trainRatio);
% ===== take the first trainRatio fraction as train data ===== %
% trainIndice = 1: nbTrain;
% testIndice = nbTrain + 1 : nbData;
% ===== select train data randomly ===== %
randIndice = randperm(nbData);
trainIndice = randIndice(1: nbTrain);
testIndice = randIndice(nbTrain + 1 : nbData);
trainData = data(trainIndice, :);
trainLabel = label(trainIndice);
testData = data(testIndice, :);
testLabel = label(testIndice);
% ======================================================== %
% ========         accuracy comparison           ======= %
% ======================================================== %
% == LibSVM
model = libsvmtrain(trainLabel, trainData, '-t 2 -c 1 -g 1');
[libsvm_label, libsvm_acc, libsvm_decv] = libsvmpredict(testLabel, testData, model);
% == LSSVM
kernelParam.kernelType = 'rbf';
kernelParam.gap = 1;
modelParam.cost = 1;
trainK = createKernelMatrix(trainData, trainData, kernelParam);
testK = createKernelMatrix(trainData, testData, kernelParam);
lssvm_model = lssvmtrain(trainK, trainLabel, modelParam);
[lssvm_label, lssvm_acc, lssvm_decv] = lssvmpredict(lssvm_model, testK, testLabel);
fprintf('\n\n\nTest accuracy of SVM is: %f,\nTest accuracy of LSSVM is: %f \n', libsvm_acc(1)/100, lssvm_acc);
Results obtained with randomly selected train/test splits and a few kernel/parameter combinations:
C:1, RBF, gap:1
Test accuracy of SVM is: 0.924528,
Test accuracy of LSSVM is: 0.896226
Test accuracy of SVM is: 0.943396,
Test accuracy of LSSVM is: 0.952830
Test accuracy of SVM is: 0.971698,
Test accuracy of LSSVM is: 0.971698
Test accuracy of SVM is: 0.933962,
Test accuracy of LSSVM is: 0.952830
C:1, RBF, gap:100
Test accuracy of SVM is: 0.688679,
Test accuracy of LSSVM is: 0.688679
Test accuracy of SVM is: 0.613208,
Test accuracy of LSSVM is: 0.603774
C:1, linear
Test accuracy of SVM is: 0.877358,
Test accuracy of LSSVM is: 0.839623
Test accuracy of SVM is: 0.886792,
Test accuracy of LSSVM is: 0.877358
Test accuracy of SVM is: 0.877358,
Test accuracy of LSSVM is: 0.905660
C:100, linear
Test accuracy of SVM is: 0.915094,
Test accuracy of LSSVM is: 0.905660
Test accuracy of SVM is: 0.858491,
Test accuracy of LSSVM is: 0.820755
Test accuracy of SVM is: 0.905660,
Test accuracy of LSSVM is: 0.849057