Dropout is commonly used to suppress overfitting, and PyTorch provides a convenient module for it, `torch.nn.Dropout`. But the meaning of its parameter `p` is easy to get wrong. In TensorFlow the corresponding argument is `keep_prob`, the probability of *keeping* a unit, so it is natural to assume that PyTorch's `p` is likewise the fraction of units kept. Experiment shows the opposite: `p` is the probability that an element is zeroed, i.e., the fraction of units *dropped*. See the example below:
```python
>>> import torch
>>> a = torch.randn(10, 1)
>>> a
tensor([[ 0.0684], [-0.2395], [ 0.0785], [-0.3815], [-0.6080], [-0.1690], [ 1.0285], [ 1.1213], [ 0.5261], [ 1.1664]])
>>> torch.nn.Dropout(0.5)(a)   # about half the elements are zeroed; survivors are scaled by 1/(1-p) = 2
tensor([[ 0.0000], [-0.0000], [ 0.0000], [-0.7631], [-0.0000], [-0.0000], [ 0.0000], [ 0.0000], [ 1.0521], [ 2.3328]])
>>> torch.nn.Dropout(0)(a)     # p = 0: nothing is dropped
tensor([[ 0.0684], [-0.2395], [ 0.0785], [-0.3815], [-0.6080], [-0.1690], [ 1.0285], [ 1.1213], [ 0.5261], [ 1.1664]])
>>> torch.nn.Dropout(1)(a)     # p = 1: everything is dropped
tensor([[0.], [-0.], [0.], [-0.], [-0.], [-0.], [0.], [0.], [0.], [0.]])
```
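Note the scaling in the `p = 0.5` case: the surviving entries are multiplied by 1/(1-p) = 2 (e.g., -0.3815 becomes -0.7631), which keeps the expected value of each activation unchanged. Below is a minimal sketch (my own check, not part of the original example; the variable names are arbitrary) that verifies both behaviors at once, the drop fraction and the scaling:

```python
import torch

torch.manual_seed(0)
p = 0.5
x = torch.ones(100_000)          # constant input makes the scaling easy to read off
y = torch.nn.Dropout(p)(x)       # a freshly constructed module is in training mode

drop_frac = (y == 0).float().mean().item()
print(f"fraction zeroed ≈ {drop_frac:.3f} (expected ~{p})")
print(f"surviving value = {y[y != 0][0].item():.1f} (expected 1/(1-p) = {1 / (1 - p):.1f})")
```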
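One more caveat, following standard `nn.Module` semantics: dropout only fires in training mode. After calling `.eval()` the layer becomes an identity, which is why the examples above drop elements even though we never called `.train()` explicitly. A short sketch illustrating this:

```python
import torch

drop = torch.nn.Dropout(0.5)
a = torch.randn(10, 1)

drop.train()                    # the default state for a new module
print(torch.equal(drop(a), a))  # False (with overwhelming probability): zeroed and rescaled

drop.eval()                     # inference mode: dropout is a no-op
print(torch.equal(drop(a), a))  # True: input passes through unchanged
```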