Now that I have some free time, I want to write up how to add dropout to a deep network so that it does not overfit the training data and generalize poorly at test time.
First, a quick look at how dropout works in principle:
There are plenty of theoretical explanations of it on Baidu....
Here I focus on how to actually implement the technique, as sketched just below.
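The core idea is compact: during training, each hidden unit's activation is zeroed out with some probability p, so the network cannot rely on any single unit; at test time nothing is dropped, and activations are instead scaled by (1 - p) so their expected value matches training. Here is a minimal Matlab sketch of that idea (x, W, p, and the training flag are placeholder names for this illustration only, not the toolbox's actual variables):

x = rand(1, 784);              % one input example (placeholder data)
W = randn(100, 784);           % hidden-layer weights (placeholder)
training = true;               % true during training, false at test time
p = 0.5;                       % dropout fraction
a = 1 ./ (1 + exp(-x * W'));   % sigmoid hidden activations
if training
    mask = rand(size(a)) > p;  % binary mask: 1 = keep the unit, 0 = drop it
    a = a .* mask;             % a fresh mask is drawn for every mini-batch
else
    a = a .* (1 - p);          % test time: rescale instead of dropping
end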
I referred to other people's blog posts, mainly http://www.cnblogs.com/dupuleng/articles/4340293.html
This post explains how to implement dropout with the deep learning toolbox in Matlab.
First, the toolbox has to be installed. DeepLearn Toolbox is a very useful Matlab deep learning package; it can be downloaded from https://github.com/rasmusbergpalm/DeepLearnToolbox
To use it, first add the package to Matlab's search path:
1. Copy the package into Matlab's toolbox directory; the author's path is D:\program Files\matlab\toolbox\
2. In the Matlab command window, enter:
cd 'D:\program Files\matlab\toolbox\deepLearnToolbox-master\'
addpath(genpath('D:\program Files\matlab\toolbox\deepLearnToolbox-master\'))
savepath   % save the path so it does not have to be re-added every session
3. Verify that the path was added successfully by entering at the command line:
which saesetup
If it worked, the path of saesetup.m is printed: D:\program Files\matlab\toolbox\deepLearnToolbox-master\SAE\saesetup.m
4. Use the deepLearnToolbox package for a simple demo that compares the results of an autoencoder model before and after adding dropout.
load mnist_uint8;
train_x = double(train_x(1:2000,:)) / 255;
test_x  = double(test_x(1:1000,:)) / 255;
train_y = double(train_y(1:2000,:));
test_y  = double(test_y(1:1000,:));

%% //Experiment 1: without dropout
rand('state',0)
sae = saesetup([784 100]);
sae.ae{1}.activation_function = 'sigm';
sae.ae{1}.learningRate = 1;
opts.numepochs = 10;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
visualize(sae.ae{1}.W{1}(:,2:end)');

nn = nnsetup([784 100 10]); % //constructs an input-hidden-output network, including
                            % //weight initialization, learning rate, momentum, activation
                            % //function type, weight penalty, dropout, etc.
nn.W{1} = sae.ae{1}.W{1};   % //initialize the hidden layer with the pretrained weights
opts.numepochs = 10;        % //Number of full sweeps through data
opts.batchsize = 100;       % //Take a mean gradient step over this many samples
[nn, ~] = nntrain(nn, train_x, train_y, opts);
[er, ~] = nntest(nn, test_x, test_y);
str = sprintf('testing error rate is: %f', er);
fprintf(str);

%% //Experiment 2: with dropout
rand('state',0)
sae = saesetup([784 100]);
sae.ae{1}.activation_function = 'sigm';
sae.ae{1}.learningRate = 1;
opts.numepochs = 10;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
figure;
visualize(sae.ae{1}.W{1}(:,2:end)');

nn = nnsetup([784 100 10]); % //same network as in experiment 1
nn.dropoutFraction = 0.5;   % //drop each hidden unit with probability 0.5 during training
nn.W{1} = sae.ae{1}.W{1};
opts.numepochs = 10;        % //Number of full sweeps through data
opts.batchsize = 100;       % //Take a mean gradient step over this many samples
[nn, L] = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
str = sprintf('testing error rate is: %f', er);
fprintf(str);
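For reference, the nn.dropoutFraction switch takes effect inside the toolbox's forward pass (nnff.m). Its logic is roughly the following (a paraphrase of the toolbox's approach rather than a verbatim copy, so check the nnff.m in your download):

if nn.dropoutFraction > 0
    if nn.testing
        % test time: no units are dropped; activations are scaled down instead
        nn.a{i} = nn.a{i} .* (1 - nn.dropoutFraction);
    else
        % training: draw a fresh binary mask and zero out the dropped units
        nn.dropOutMask{i} = rand(size(nn.a{i})) > nn.dropoutFraction;
        nn.a{i} = nn.a{i} .* nn.dropOutMask{i};
    end
end

With nn.dropoutFraction = 0.5, each hidden unit is dropped with probability 0.5 on every mini-batch, the standard choice from the original dropout paper, so the error rates printed by the two experiments can be compared directly.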