Suppose we take all the weights and biases in a network of perceptrons, and multiply them by a positive constant, c>0. Show that the behaviour of the network doesn't change.

For the perceptron rule: a perceptron outputs 1 when w⋅x+b > 0 and 0 otherwise, and multiplying w and b by c > 0 multiplies w⋅x+b by c without changing its sign, so every perceptron's output, and hence the behaviour of the whole network, is unchanged.
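A minimal numerical check of this (the two-input perceptron and the test inputs below are made-up examples, not from the book):

import numpy as np

def perceptron(w, b, x):
    # Perceptron rule: output 1 if w.x + b > 0, else 0.
    return 1 if np.dot(w, x) + b > 0 else 0

w, b, c = np.array([0.7, -1.2]), 0.3, 5.0
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]:
    # Scaling all weights and the bias by c > 0 leaves the output unchanged.
    assert perceptron(w, b, x) == perceptron(c * w, c * b, x)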
Suppose we have the same setup as the last problem - a network of perceptrons. Suppose also that the overall input to the network of perceptrons has been chosen. We won't need the actual input value, we just need the input to have been fixed. Suppose the weights and biases are such that w⋅x+b≠0 for the input x to any particular perceptron in the network. Now replace all the perceptrons in the network by sigmoid neurons, and multiply the weights and biases by a positive constant c>0. Show that in the limit as c→∞ the behaviour of this network of sigmoid neurons is exactly the same as the network of perceptrons. How can this fail when w⋅x+b=0 for one of the perceptrons?
For the sigmoid neuron, multiplying (w⋅x + b) by c > 0 does not change whether w⋅x + b > 0 or w⋅x + b < 0. If w⋅x + b > 0, then c(w⋅x + b) → +∞ as c → ∞ and σ(c(w⋅x + b)) → 1; if w⋅x + b < 0, then σ(c(w⋅x + b)) → 0. In either case the sigmoid neuron's output tends to the corresponding perceptron's output, so in the limit the network of sigmoid neurons behaves exactly like the network of perceptrons. But when w⋅x + b = 0, σ(c(w⋅x + b)) = σ(0) = 0.5 for every c, so the output never approaches 0 or 1 and the binary outcome cannot be decided - this is where the argument fails.
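A quick numerical illustration of the limit (the values of w⋅x + b below are assumed for the example):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# z stands for w.x + b: one positive case, one negative case, and the degenerate z = 0 case.
for z in [0.3, -0.3, 0.0]:
    for c in [1, 10, 100, 1000]:
        print(f"z = {z:+.1f}, c = {c:4d}, sigmoid(c*z) = {sigmoid(c * z):.4f}")
# For z > 0 the output tends to 1 and for z < 0 to 0, matching a perceptron;
# for z = 0 it stays at 0.5 however large c becomes.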
There is a way of determining the bitwise representation of a digit by adding an extra layer to the three-layer network above. The extra layer converts the output from the previous layer into a binary representation, as illustrated in the figure below. Find a set of weights and biases for the new output layer. Assume that the first 3 layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least 0.99, and incorrect outputs have activation less than 0.01.
First, list the binary representations of the digits 0 through 9:
0 :0000
1 :0001
2 :0010
3 :0011
4 :0100
5 :0101
6 :0110
7 :0111
8 :1000
9 :1001
The output-layer neurons, from top to bottom, represent the bits 2^0, 2^1, 2^2, 2^3. So the digits 1, 3, 5, 7, 9 should make the 2^0 output neuron output 1, the digits 2, 3, 6, 7 should make the 2^1 output neuron output 1, and so on. For the first output neuron we can take the weight vector w = (-1, 1, -1, 1, -1, 1, -1, 1, -1, 1) and b = 0. Then w⋅x + b = -x0 + x1 - x2 + x3 - x4 + x5 - x6 + x7 - x8 + x9 > 0 if and only if the sum of x1, x3, x5, x7, x9 exceeds the sum of the remaining activations (normally only one activation is at least 0.99 and the rest are below 0.01). For example, if x5 = 0.99 and the others are 0.01, then w⋅x + b = 0.98 > 0, so σ(w⋅x + b) > 0.5 and the 2^0 bit outputs 1. In the same way we obtain the weight vectors of the four output neurons:
-1, 1,-1, 1,-1, 1,-1, 1,-1, 1
-1,-1, 1, 1,-1,-1, 1, 1,-1,-1
-1,-1,-1,-1, 1, 1, 1, 1,-1,-1
-1,-1,-1,-1,-1,-1,-1,-1, 1, 1
Note that the stated condition is that the correct output has activation at least 0.99 while every incorrect output has activation below 0.01. This rules out cases such as: digit 0 has the largest activation, 0.8, and every other activation is 0.7 - each individually smaller than 0.8, yet summing to more than 0.8 - which would make some bits come out wrong.
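A small sanity check of these weights (a sketch: the biases are taken to be 0 as above, and the 0.99 / 0.01 activation pattern of the old output layer is simulated):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weight rows for the four new output neurons, in bit order 2^0, 2^1, 2^2, 2^3.
W = np.array([
    [-1,  1, -1,  1, -1,  1, -1,  1, -1,  1],
    [-1, -1,  1,  1, -1, -1,  1,  1, -1, -1],
    [-1, -1, -1, -1,  1,  1,  1,  1, -1, -1],
    [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
])
b = np.zeros(4)

for digit in range(10):
    x = np.full(10, 0.01)        # incorrect outputs: activation 0.01
    x[digit] = 0.99              # correct output: activation 0.99
    bits = (sigmoid(W @ x + b) > 0.5).astype(int)
    assert bits.tolist() == [(digit >> k) & 1 for k in range(4)]   # matches the binary encoding
    print(digit, bits)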
Prove the assertion of the last paragraph. Hint: If you're not already familiar with the Cauchy-Schwarz inequality, you may find it helpful to familiarize yourself with it.
I looked up an answer online:
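A sketch of the argument, reconstructed from the hint: the assertion is that, for a move of fixed size ‖Δv‖ = ε, the choice Δv = −η∇C with η = ε/‖∇C‖ decreases C by the largest amount (to first order). By the Cauchy-Schwarz inequality,

ΔC ≈ ∇C ⋅ Δv ≥ −‖∇C‖ ‖Δv‖ = −ε ‖∇C‖,

with equality exactly when Δv points in the direction of −∇C, i.e. Δv = −(ε/‖∇C‖)∇C = −η∇C. So no other step of the same size can reduce C by more.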
I explained gradient descent when C is a function of two variables, and when it's a function of more than two variables. What happens when C is a function of just one variable? Can you provide a geometric interpretation of what gradient descent is doing in the one-dimensional case?
As shown in the figure, Δy ≈ y′ Δx, so moving x against the sign of the derivative (x → x − η y′) is what decreases y fastest; geometrically, the point simply slides downhill along the curve toward a minimum. For example, for y = x² we have y′ = 2x. The iteration for this example proceeds as follows.
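A minimal sketch of that 1-D iteration in code (the learning rate and starting point are assumed values chosen for illustration):

# 1-D gradient descent on y = x^2, whose derivative is y' = 2x.
eta = 0.1      # learning rate (assumed)
x = 2.0        # starting point (assumed)
for step in range(10):
    x = x - eta * 2 * x          # x -> x - eta * y'(x)
    print(step, x)
# x shrinks geometrically toward the minimum at x = 0.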
An extreme version of gradient descent is to use a mini-batch size of just 1. That is, given a training input, x, we update our weights and biases according to the rules w_k → w′_k = w_k − η ∂C_x/∂w_k and b_l → b′_l = b_l − η ∂C_x/∂b_l. Then we choose another training input, and update the weights and biases again. And so on, repeatedly. This procedure is known as online, on-line, or incremental learning. In online learning, a neural network learns from just one training input at a time (just as human beings do). Name one advantage and one disadvantage of online learning, compared to stochastic gradient descent with a mini-batch size of, say, 20.
Andrew Ng's machine learning course covers this in its second lecture, though the naming left me a bit confused: in that course the online method is called stochastic gradient descent (also incremental gradient descent), while the method that sweeps over the whole training set for each update is called batch gradient descent, whereas in this book "stochastic" and "online/incremental" refer to two different methods... Names aside, the online method, unlike the batch-style method, does not need to traverse the entire training set before updating - it performs one update per training example - so it typically heads toward the minimum faster. The disadvantage is that the online method may never reach the exact minimum; because every update is driven by a single noisy example, it keeps wandering around in the neighbourhood of the minimum.
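With the Network class used later in this post, online learning is just SGD with a mini-batch size of 1; a sketch of the comparison (the 30-neuron hidden layer and the hyperparameters are illustrative values, and mnist_loader is the loader from the book's code repository):

import mnist_loader
import Network   # the book's network.py, under the module name used elsewhere in this post

training_data, validation_data, test_data = mnist_loader.load_data_wrapper()

# Mini-batch stochastic gradient descent with mini-batch size 20.
net = Network.Network([784, 30, 10])
net.SGD(training_data, 10, 20, 3.0, test_data=test_data)

# Online (incremental) learning: the same call with mini-batch size 1.
net_online = Network.Network([784, 30, 10])
net_online.SGD(training_data, 10, 1, 3.0, test_data=test_data)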
Write out Equation a′=σ(wa+b) in component form, and verify that it gives the same result as the rule 1/(1+exp(−∑_j w_j x_j − b)) for computing the output of a sigmoid neuron.
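A sketch of the component-form calculation (with j indexing the neurons in the new layer and k indexing the inputs):

a′_j = σ( ∑_k w_jk a_k + b_j ) = 1 / (1 + exp(−∑_k w_jk a_k − b_j)),

which, for one neuron taking inputs x with weights w and bias b, is exactly the rule 1/(1+exp(−∑_j w_j x_j − b)) for a sigmoid neuron's output.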
Try creating a network with just two layers - an input and an output layer, no hidden layer - with 784 and 10 neurons, respectively. Train the network using stochastic gradient descent. What classification accuracy can you achieve?
In [8]: net = Network.Network([784, 10])
In [9]: net.SGD(training_data, 10, 10, 3, test_data=test_data)
Epoch 0: 1903 / 10000
Epoch 1: 1903 / 10000
Epoch 2: 1903 / 10000
Epoch 3: 1903 / 10000
Epoch 4: 1903 / 10000
Epoch 5: 1903 / 10000
Epoch 6: 1903 / 10000
Epoch 7: 1903 / 10000
Epoch 8: 1903 / 10000
Epoch 9: 1903 / 10000
The accuracy really is quite low (around 19%).