BP Neural Networks Explained, with a Python Implementation

Overview

A neural network consists of an input layer, one or more hidden layers, and an output layer. Every node in a layer is connected to every node in the previous layer, which is why such a network is called fully connected.

In such a network, the raw input data is transformed by the first hidden layer, whose output is passed on to the second hidden layer. The output of the second hidden layer in turn serves as the input to the output layer.

Forward Propagation

Propagating the input data to the hidden layer:

From the network structure we can compute the value of the first hidden neuron Z_1:

Z_1 = X_1 * W_{11} + X_2 * W_{12} + X_3 * W_{13} + b_{1}

\alpha_{1} = f(Z_1)

where f(\cdot) is the activation function.
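As a concrete example, a common choice for f is the logistic (sigmoid) function, which is also the activation used by the implementation later in this post. A minimal sketch, with input, weight, and bias values made up purely for illustration:

import numpy as np

def sigmoid(z):
    # Logistic activation: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.05, 0.10, 0.20])  # hypothetical inputs X_1, X_2, X_3
w = np.array([0.15, 0.25, 0.30])  # hypothetical weights W_11, W_12, W_13
b = 0.35                          # hypothetical bias b_1
z1 = np.dot(w, x) + b             # Z_1 = X_1*W_11 + X_2*W_12 + X_3*W_13 + b_1
alpha1 = sigmoid(z1)              # alpha_1 = f(Z_1)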

Similarly, the second hidden neuron Z_2 is:

Z_2 = X_1 * W_{21} + X_2 * W_{22} + X_3 * W_{23} + b_{1}

\alpha_{2} = f(Z_2)

And the third hidden neuron Z_3:

Z_3 = X_1 * W_{31} + X_2 * W_{32} + X_3 * W_{33} + b_{1}

\alpha_{3} = f(Z_3)

At this point we have computed all the values flowing from the input layer into the hidden layer.

Propagating from the hidden layer to the output layer:

From the hidden activations we can compute the value of the first output neuron Z_{4}:

Z_{4} = \alpha_{1} * W_{41} + \alpha_{2} * W_{42} + \alpha_{3} * W_{43} + b_{2}

\alpha_{4} = f(Z_{4})

The values of Z_{5} and Z_{6} follow in the same way:

Z_{5} = \alpha_{1} * W_{51} + \alpha_{2} * W_{52} + \alpha_{3} * W_{53} + b_{2}

\alpha_{5} = f(Z_{5})

Z_{6} = \alpha_{1} * W_{61} + \alpha_{2} * W_{62} + \alpha_{3} * W_{63} + b_{2}

\alpha_{6} = f(Z_{6})

To simplify the derivations that follow, let \alpha^{l} denote the activation vector of layer l, W^{l} its weight matrix, and b^{l} its bias. From the computations above, the values at layer l+1 are:

z^{l+1} = W^{l} \alpha^{l} + b^{l} \cdots(1)

\alpha^{l+1} = f(z^{l+1}) \cdots(2)
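Equations (1) and (2) translate directly into a few lines of NumPy. A minimal sketch, assuming each W^{l} is stored as a matrix of shape (size of layer l+1, size of layer l), the same layout the implementation below uses; the network sizes and weights are illustrative:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_layer(W, alpha, b):
    # Equation (1): z = W * alpha + b; equation (2): alpha' = f(z)
    z = W @ alpha + b
    return sigmoid(z)

# Hypothetical 3-3-3 network like the one described above
alpha0 = np.array([0.05, 0.10, 0.20])   # input vector X
W1, b1 = np.random.randn(3, 3), 0.0     # input -> hidden
W2, b2 = np.random.randn(3, 3), 0.0     # hidden -> output
alpha1 = forward_layer(W1, alpha0, b1)  # hidden activations alpha_1..alpha_3
alpha2 = forward_layer(W2, alpha1, b2)  # output activations alpha_4..alpha_6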

This completes the forward-propagation pass of the network.

Backpropagation

The idea behind backpropagation is this: after a forward pass we know the network's output, so we can compute the residual (error) of the output layer, and then propagate that residual backwards into the neurons of each preceding layer.

Suppose we have a fixed training set \{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \} containing m examples. We can train the neural network with batch gradient descent. Concretely, for a single example (x, y) the cost function is:

J(W,b;x,y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^{2}

Based on this, we define the overall cost function as:

\begin{align}
J(W,b) &= \left[ \frac{1}{m} \sum_{i=1}^{m} J(W,b;x^{(i)},y^{(i)}) \right] + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^{2} \\
   &= \left[ \frac{1}{m} \sum_{i=1}^{m} \frac{1}{2} \left\| h_{W,b}(x^{(i)}) - y^{(i)} \right\|^{2} \right] + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^{2}
\end{align}

The first term in this definition of J(W,b) is a mean squared error term. The second is a regularization term (also called the weight-decay term), which shrinks the magnitude of the weights and helps prevent overfitting.
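As a sketch, the overall cost could be computed like this (all names are illustrative; predictions and targets are lists of per-example vectors, weights is the list of weight matrices, and lam is the weight-decay coefficient \lambda):

import numpy as np

def total_cost(predictions, targets, weights, lam):
    # Mean of the per-example squared-error terms
    m = len(targets)
    mse = sum(0.5 * np.sum((h - y) ** 2)
              for h, y in zip(predictions, targets)) / m
    # Weight-decay penalty summed over every weight matrix
    decay = (lam / 2.0) * sum(np.sum(W ** 2) for W in weights)
    return mse + decay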

With the overall cost function in hand, our goal becomes minimizing it. Using gradient descent to minimize the cost function gives the following update rules:

W^{(l)}_{ij} = W^{(l)}_{ij} - \eta \frac{\partial J(W, b)}{\partial W^{(l)}_{ij}}

b^{(l)}_{i} = b^{(l)}_{i} - \eta \frac{\partial J(W, b)}{\partial b^{(l)}_{i}}
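In code, each update is a single line per parameter. A minimal sketch with illustrative names (eta is the learning rate \eta; the gradients are assumed to come from backpropagation, described next):

def gradient_step(W, b, dJ_dW, dJ_db, eta):
    # One gradient-descent update for the parameters of a single layer
    return W - eta * dJ_dW, b - eta * dJ_db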

Taking partial derivatives of the overall cost function:

\begin{align}
\frac{\partial J(W, b)}{\partial W^{(l)}_{ij}} &= \left[ \frac{1}{m} \frac{\partial}{\partial W^{(l)}_{ij}} \sum_{i=1}^{m} J(W,b;x^{(i)},y^{(i)}) \right] + \lambda W^{(l)}_{ij} \\
 &= \left[ \frac{1}{m} \sum_{i=1}^{m} \frac{\partial}{\partial W^{(l)}_{ij}} J(W,b;x^{(i)},y^{(i)}) \right] + \lambda W^{(l)}_{ij}
\end{align}

\begin{align}
\frac{\partial J(W, b)}{\partial b^{(l)}_{i}} &= \left[ \frac{1}{m} \frac{\partial}{\partial b^{(l)}_{i}} \sum_{i=1}^{m} J(W,b;x^{(i)},y^{(i)}) \right] \\
 &= \left[ \frac{1}{m} \sum_{i=1}^{m} \frac{\partial}{\partial b^{(l)}_{i}} J(W,b;x^{(i)},y^{(i)}) \right]
\end{align}

The two equations above show that the problem reduces to computing \frac{\partial}{\partial W^{(l)}_{ij}} J(W,b;x^{(i)},y^{(i)}) and \frac{\partial}{\partial b^{(l)}_{i}} J(W,b;x^{(i)},y^{(i)}).

For each output unit i in layer n_l (the output layer), the residual is computed as:

\delta^{(n_l)}_{i} = -(y_{i} - \alpha^{(n_l)}_{i}) \cdot f'(z^{(n_l)}_{i})

For each of the layers l = n_l-1, n_l-2, n_l-3, \ldots, 2, the residual of node i in layer l is computed as:

\delta^{(l)}_{i} = \left( \sum_{j=1}^{s_{l+1}} W^{(l)}_{ji} \delta^{(l+1)}_{j} \right) f'(z^{(l)}_{i})

This step-by-step differentiation from the back of the network towards the front is exactly what gives "backpropagation" its name.

Finally, the partial derivatives we need are computed as:

\frac{\partial}{\partial W^{(l)}_{ij}} J(W,b;x,y) = \alpha^{(l)}_{j} \delta^{(l+1)}_{i}

\frac{\partial}{\partial b^{(l)}_{i}} J(W,b;x,y) = \delta^{(l+1)}_{i}
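Putting the residual formulas and the two partial derivatives together, a single backward pass through a network with one hidden layer could look like the sketch below. All names are illustrative; sigmoid_der computes f'(z) = f(z)(1 - f(z)) for the logistic activation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_der(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def backward(y, z2, a2, W2, z1, a1, a0):
    # Output-layer residual: delta = -(y - alpha) .* f'(z)
    delta2 = -(y - a2) * sigmoid_der(z2)
    # Hidden-layer residual: delta_l = (W^T delta_{l+1}) .* f'(z_l)
    delta1 = (W2.T @ delta2) * sigmoid_der(z1)
    # Gradients: dJ/dW^l = delta_{l+1} alpha_l^T and dJ/db^l = delta_{l+1}
    dW2, db2 = np.outer(delta2, a1), delta2
    dW1, db1 = np.outer(delta1, a0), delta1
    return dW1, db1, dW2, db2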

Python Implementation of a BP Neural Network

# -*- coding: utf-8 -*-
''' Created on @author: Belle '''
from numpy.random import randint
import numpy as np


'''Activation function: despite its name, this is the logistic (sigmoid) function, 1 / (1 + e^-x), not tanh'''
def tanh(value):
    return 1 / (1 + np.exp(-value))

'''Derivative of the activation function: f'(value) = f(value) * (1 - f(value))'''
def tanhDer(value):
    tanhValue = tanh(value)
    return tanhValue * (1 - tanhValue)

''' BP neural network model '''
class BpNeuralNetWorkModel:
    def __init__(self, trainningSet, label, layerOfNumber, studyRate):
        '''Learning rate'''
        self.studyRate = studyRate
        '''Number of hidden neurons (heuristic: sqrt of input size + output size, plus a random offset)'''
        self.hiddenNeuronNum = int(np.sqrt(trainningSet.shape[1] + label.shape[1]) + randint(1, 10))
        '''The layers of the network'''
        self.layers = []
        '''Create the first layer (the input layer), with weights into the hidden layer'''
        currentLayer = Layer()
        currentLayer.initW(trainningSet.shape[1], self.hiddenNeuronNum)
        self.layers.append(currentLayer)
        
        '''Create the remaining layers (hidden layers and the output layer)'''
        for index in range(layerOfNumber - 1):
            currentLayer = Layer()
            self.layers.append(currentLayer)
            '''The output layer has no outgoing weights'''
            if index == layerOfNumber - 2:
                break
            nextLayNum = 0
            
            '''Initialize this layer's weight matrix'''
            if index == layerOfNumber - 3:
                '''Hidden layer to output layer'''
                nextLayNum = label.shape[1]
            else:
                '''Hidden layer to hidden layer'''
                nextLayNum = self.hiddenNeuronNum
            currentLayer.initW(self.hiddenNeuronNum, nextLayNum)
        '''Target labels for the output layer'''
        currentLayer = self.layers[len(self.layers) - 1]
        currentLayer.label = label
    
    '''Forward propagation'''
    def forward(self, trainningSet):
        '''Compute the output values of the input layer'''
        currentLayer = self.layers[0]
        currentLayer.alphas = trainningSet
        currentLayer.caculateOutPutValues()
        
        preLayer = currentLayer
        for index in range(1, len(self.layers)):
            currentLayer = self.layers[index]
            '''The previous layer's output values are this layer's z values'''
            currentLayer.zValues = preLayer.outPutValues
            '''Compute the activations (alphas)'''
            currentLayer.caculateAlphas()
            '''The last layer only needs its activations, not another set of output values'''
            if index == len(self.layers) - 1:
                break
            '''Compute this layer's output values'''
            currentLayer.caculateOutPutValues()
            '''Remember this layer as the previous layer'''
            preLayer = currentLayer
    
    '''Backpropagation'''
    def backPropogation(self):
        layerCount = len(self.layers)
        
        '''Residuals of the output layer'''
        currentLayer = self.layers[layerCount - 1]
        currentLayer.caculateOutPutLayerError()
        
        '''Walk back from the output layer towards the input layer'''
        preLayer = currentLayer
        layerCount = layerCount - 1
        while layerCount >= 1:
            '''Current layer'''
            currentLayer = self.layers[layerCount - 1]
            '''Update the weights'''
            currentLayer.updateWeight(preLayer.errors, self.studyRate)
            if layerCount != 1:
                currentLayer.culateLayerError(preLayer.errors)
            layerCount = layerCount - 1
            preLayer = currentLayer
            
''' A single layer of the network '''
class Layer:
    def __init__(self):
        self.b = 0
    
    '''Initialize w with normally distributed random values'''
    def initW(self, numOfAlpha, nextLayNumOfAlpha):
        self.w = np.mat(np.random.randn(nextLayNumOfAlpha, numOfAlpha))
    
    '''Compute this layer's activations'''
    def caculateAlphas(self):
        '''alpha = f(z)'''
        self.alphas = np.mat([tanh(self.zValues[row1,0]) for row1 in range(len(self.zValues))])
        '''f'(z), the derivative of the activation at z'''
        self.zDerValues = np.mat([tanhDer(self.zValues[row1,0]) for row1 in range(len(self.zValues))])
    
    '''Compute the output values'''
    def caculateOutPutValues(self):
        '''z = w * alpha + b becomes the input of the next layer'''
        self.outPutValues = self.w * self.alphas.T + self.b
    
    '''Residual of the output layer: delta = -(label - alpha) .* f'(z)'''
    def caculateOutPutLayerError(self):
        self.errors = np.multiply(-(self.label - self.alphas), self.zDerValues)
        print("out put layer alphas ..." + str(self.alphas))
    
    '''Residuals of the other layers: delta_l = (w^T * delta_{l+1}) .* f'(z_l)'''
    def culateLayerError(self, preErrors):
        self.errors = np.mat([(self.w[:,column].T * preErrors.T * self.zDerValues[:,column])[0,0] for column in range(self.w.shape[1])])
    
    '''Update the weights: w = w - eta * (delta_{l+1} * alpha_l); note this implementation never updates the bias b'''
    def updateWeight(self, preErrors, studyRate):
        data = np.zeros((preErrors.shape[1], self.alphas.shape[1]))
        for index in range(preErrors.shape[1]):
            data[index,:] = self.alphas * (preErrors[:,index][0,0])
        self.w = self.w - studyRate * data

''' Train the neural network model
@param train_set: training samples
@param label: training labels
@param layerOfNumber: total number of layers, including the input, hidden, and output layers
                      (default: one input layer, one hidden layer, and one output layer) '''
def train(train_set, label, layerOfNumber = 3, sampleTrainningTime = 5000, studyRate = 0.6):
    neuralNetWork = BpNeuralNetWorkModel(train_set, label, layerOfNumber, studyRate)
    '''Train on the data'''
    for row in range(train_set.shape[0]):
        '''Train on a single sample with gradient descent, sampleTrainningTime times'''
        for time in range(sampleTrainningTime):
            '''Forward pass'''
            neuralNetWork.forward(train_set[row,:])
            '''Backward pass'''
            neuralNetWork.backPropogation()
            



Test Code

# -*- coding: utf-8 -*-
''' Created on 2018-05-27 @author: Belle '''

import BpNeuralNetWork
import numpy as np

train_set = np.mat([[0.05, 0.1], [0.3, 0.2]])
labelOfNumbers = np.mat([0.1, 0.99, 0.3])
layerOfNumber = 4

bpNeuralNetWork = BpNeuralNetWork.train(train_set, labelOfNumbers, layerOfNumber)


While training, the test code prints the output-layer activations after each iteration.
