
Key Points of the Backpropagation Algorithm



Introduction

Backpropagation is a simple algorithm: anyone who has studied calculus can understand it without much effort. This article tries to avoid the tedious clutter that makes readers lose interest, and presents the derivation and the computation steps of the backpropagation algorithm concisely and clearly.

Intended audience:

Readers who have a rough idea of backpropagation but have not yet worked through the derivation of its formulas.

The key points of backpropagation amount to just three formulas, summarized first as follows:

  1. Notation:

| Symbol | Meaning |
| --- | --- |
| $w^l_{ij}$ | weight applied to the output of neuron $j$ in layer $l-1$ when it feeds into neuron $i$ in layer $l$ |
| $b^l_i$ | bias of neuron $i$ in layer $l$ |
| $z^l_i$ | input of neuron $i$ in layer $l$: $z^l_i=\sum_j w^l_{ij}\,a^{l-1}_j+b^l_i$ |
| $a^l_i$ | output of neuron $i$ in layer $l$: $a^l_i=\mathrm{activation}(z^l_i)$ |
| $C$ | cost function |
| $\delta^l_i$ | $\delta^l_i=\frac{\partial C}{\partial z^l_i}$ |

Tip: when a symbol carries no subscript, it denotes the corresponding column vector or matrix.
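
To make the notation concrete, here is a minimal NumPy sketch of a single layer; the layer sizes (3 neurons fed by 4 neurons) and the sigmoid activation are assumptions made only for this illustration, not part of the original text:

```python
# One layer in the notation above: z^l = w^l a^{l-1} + b^l, a^l = activation(z^l).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a_prev = np.random.randn(4, 1)   # a^{l-1}: column vector of layer l-1 outputs
W = np.random.randn(3, 4)        # w^l: W[i, j] = w^l_{ij}
b = np.random.randn(3, 1)        # b^l: column vector of biases

z = W @ a_prev + b               # z^l_i = sum_j w^l_{ij} a^{l-1}_j + b^l_i
a = sigmoid(z)                   # a^l = activation(z^l)
```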

  2. The three basic formulas

$$\frac{\partial C}{\partial w^l}=\delta^l \cdot (a^{l-1})^T$$

$$\frac{\partial C}{\partial b^l}=\delta^l$$

$$\delta^{l}=a'(z^l)\odot \left((w^{l+1})^T\delta^{l+1}\right)$$

Derivation of the Formulas

  1. Gradients of the parameters: given $\delta^l$, compute the derivatives with respect to $w^l$ and $b^l$

Given:

$$z_i^l=\sum_j w^l_{ij}\, a^{l-1}_j + b^l_i$$

Derivation:

$$\Rightarrow_{(\text{elementwise form})}\quad \frac{\partial C}{\partial w^l_{ij}}=\frac{\partial C}{\partial z^l_i}\frac{\partial z^l_i}{\partial w^l_{ij}}=\delta^l_i\, a^{l-1}_j,\qquad \frac{\partial C}{\partial b^l_{i}}=\frac{\partial C}{\partial z^l_i}\frac{\partial z^l_i}{\partial b^l_i}=\delta^l_i$$

$$\Leftrightarrow_{(\text{vector form})}\quad \frac{\partial C}{\partial w^l}=\delta^l \cdot (a^{l-1})^T,\qquad \frac{\partial C}{\partial b^l}=\delta^l$$
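
As a concrete illustration (the shapes and random values below are assumptions for the example), the vector form says the weight gradient is simply the outer product of $\delta^l$ with $a^{l-1}$:

```python
# dC/dw^l = delta^l (a^{l-1})^T is the outer product of delta^l and a^{l-1}.
import numpy as np

delta = np.random.randn(3, 1)    # delta^l
a_prev = np.random.randn(4, 1)   # a^{l-1}

grad_W = delta @ a_prev.T        # dC/dw^l, shape (3, 4): [i, j] = delta^l_i * a^{l-1}_j
grad_b = delta.copy()            # dC/db^l = delta^l

assert np.allclose(grad_W[1, 2], delta[1, 0] * a_prev[2, 0])
```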

  2. Recursion: given $\delta^{l+1}$, compute $\delta^l$

Review of the total differential:

$$\Delta z=f(x+\Delta x,\,y+\Delta y)-f(x,y)\approx \frac{\partial f}{\partial x}\Delta x+\frac{\partial f}{\partial y}\Delta y$$

$$\Rightarrow\quad \frac{\partial f(a_1(x),a_2(x),\ldots,a_n(x))}{\partial x}=\sum_{i=1}^n \frac{\partial f}{\partial a_i}\frac{\partial a_i}{\partial x}$$
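
A quick symbolic check of this multivariate chain rule, using SymPy with the hypothetical functions $f(a_1,a_2)=a_1 a_2$, $a_1(x)=x^2$, $a_2(x)=\sin x$ (chosen only for illustration):

```python
# Verify d f(a1(x), a2(x)) / dx == sum_i (df/da_i)(da_i/dx) for one concrete choice.
import sympy as sp

x = sp.symbols('x')
a1, a2 = x**2, sp.sin(x)

# Direct derivative of the composed function
direct = sp.diff(a1 * a2, x)

# Chain-rule form: (df/da1)(da1/dx) + (df/da2)(da2/dx) with f(a1, a2) = a1*a2
chain = a2 * sp.diff(a1, x) + a1 * sp.diff(a2, x)

assert sp.simplify(direct - chain) == 0
```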

Derivation:

$$\delta^{l}_j=\frac{\partial C}{\partial z^l_j}=\sum_i \frac{\partial C}{\partial z^{l+1}_i}\frac{\partial z^{l+1}_i}{\partial a^l_j}\frac{\partial a^l_j}{\partial z^l_j}=\frac{\partial a^l_j}{\partial z^l_j}\sum_i \frac{\partial C}{\partial z^{l+1}_i}\frac{\partial z^{l+1}_i}{\partial a^l_j}=a'(z^l_j)\sum_i \delta^{l+1}_i\, w^{l+1}_{ij}$$

$$\Rightarrow_{(\text{vector form})}\quad \delta^{l}=a'(z^l)\odot \left((w^{l+1})^T\delta^{l+1}\right)$$

Tip: "$\odot$" denotes the Hadamard product; the Hadamard product of two vectors multiplies their corresponding elements, e.g.:

$$\begin{pmatrix} 3 \\ 4\end{pmatrix} \odot \begin{pmatrix} 2 \\ 8\end{pmatrix} = \begin{pmatrix} 3\cdot 2 \\ 4\cdot 8\end{pmatrix}=\begin{pmatrix} 6 \\ 32\end{pmatrix}$$
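
In NumPy the Hadamard product is plain elementwise multiplication with `*`. The sketch below reproduces the example above and one step of the $\delta$ recursion, assuming (for illustration only) a sigmoid activation so that $a'(z)=\sigma(z)(1-\sigma(z))$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(np.array([3, 4]) * np.array([2, 8]))   # [ 6 32], the Hadamard example above

z_l = np.random.randn(3, 1)          # z^l
W_next = np.random.randn(2, 3)       # w^{l+1}
delta_next = np.random.randn(2, 1)   # delta^{l+1}

a_prime = sigmoid(z_l) * (1 - sigmoid(z_l))    # a'(z^l) for a sigmoid activation
delta_l = a_prime * (W_next.T @ delta_next)    # delta^l = a'(z^l) ⊙ (w^{l+1})^T delta^{l+1}
```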

Backpropagation Algorithm

backpropagation(x):

  1. Input: set $a^1=x$.
  2. Feedforward: for $l=2,3,\ldots,L$, iteratively compute and store the $a^l$, using $z^{l}=w^l a^{l-1}+b^{l}$ and $a^{l}=\mathrm{activation}(z^{l})$.
  3. Output-layer error: according to the chosen cost function, compute $\delta^L$ for the output layer.
  4. Backward pass: for $l=L,L-1,\ldots,2$, use the current $\delta^l$ to compute and store $\frac{\partial C}{\partial w^l}$ and $\frac{\partial C}{\partial b^l}$, then recurse with $\delta^{l-1}=a'(z^{l-1})\odot \left((w^{l})^T\delta^{l}\right)$.
  5. Output all the derivatives $\frac{\partial C}{\partial w^l}$ and $\frac{\partial C}{\partial b^l}$ for $l=2,3,\ldots,L$.
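
Below is a minimal sketch of this procedure for a fully connected network, assuming sigmoid activations and the quadratic cost $C=\frac{1}{2}\lVert a^L-y\rVert^2$ (so that $\delta^L=(a^L-y)\odot a'(z^L)$); the activation, cost, and data layout are assumptions made for the example, not prescriptions of the article:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1 - s)

def backpropagation(x, y, weights, biases):
    """weights[k], biases[k] correspond to w^{k+2}, b^{k+2} in the article's notation."""
    # 1-2. Feedforward: compute and store z^l and a^l for every layer.
    a = x
    activations = [x]                 # a^1, a^2, ..., a^L
    zs = []                           # z^2, ..., z^L
    for W, b in zip(weights, biases):
        z = W @ a + b                 # z^l = w^l a^{l-1} + b^l
        zs.append(z)
        a = sigmoid(z)                # a^l = activation(z^l)
        activations.append(a)

    # 3. Output-layer error for the quadratic cost.
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])    # delta^L

    # 4. Backward pass: store dC/dw^l, dC/db^l and recurse on delta.
    grad_w = [None] * len(weights)
    grad_b = [None] * len(biases)
    grad_w[-1] = delta @ activations[-2].T
    grad_b[-1] = delta
    for l in range(2, len(weights) + 1):
        delta = sigmoid_prime(zs[-l]) * (weights[-l + 1].T @ delta)
        grad_w[-l] = delta @ activations[-l - 1].T
        grad_b[-l] = delta

    # 5. Return all the derivatives.
    return grad_w, grad_b
```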

Behind the Backpropagation

At its core, backpropagation is the chain rule plus dynamic programming.

In the computation graph, suppose each edge represents the derivative of an upper layer with respect to the layer below it. Then, computing the derivative of the cost function with respect to a particular parameter in the naive way requires, by the chain rule, evaluating every derivative along each path from the last layer down to that parameter, multiplying them along the path, and summing over all such paths. As the network grows deeper, this computation becomes prohibitively expensive.

The backpropagation algorithm instead first runs a forward pass that computes and stores the output of every layer, and then uses the chain rule to derive a back-to-front recursion, so that each edge in the computation graph is evaluated only once, yet the derivative with respect to any parameter can be obtained.
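
To see that the single backward pass really does yield the derivative with respect to any parameter, one can compare it against a finite-difference estimate for an arbitrarily chosen weight; the check below assumes the `backpropagation()` sketch from the previous section, and the layer sizes and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 5, 3]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((m, 1)) for m in sizes[1:]]
x = rng.standard_normal((sizes[0], 1))
y = rng.standard_normal((sizes[-1], 1))

def cost(ws):
    # Forward pass with sigmoid activations and quadratic cost, as in the sketch above.
    a = x
    for W, b in zip(ws, biases):
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))
    return 0.5 * np.sum((a - y) ** 2)

grad_w, _ = backpropagation(x, y, weights, biases)

# Perturb one arbitrarily chosen weight and compare the finite difference
# with the corresponding entry from the single backward pass.
eps = 1e-6
W_pert = [W.copy() for W in weights]
W_pert[0][1, 2] += eps
numeric = (cost(W_pert) - cost(weights)) / eps
print(grad_w[0][1, 2], numeric)   # the two values should agree closely
```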
