
Course 2 Improving Deep Neural Networks, Week 1: Gradient Checking for a One-Dimensional Function and a Multi-Layer Neural Network


Gradient Checking for a One-Dimensional Function

Package imports

import numpy as np
import matplotlib.pyplot as plt

Backpropagation computes the gradient $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model, and $J$ is computed with forward propagation and the cost function. Because forward propagation is relatively easy to implement, you can be confident that it is correct, so you are almost 100% sure that $J$ is computed correctly. You can therefore use your code for computing $J$ to verify the gradient $\frac{\partial J}{\partial \theta}$ produced by backpropagation.
Definition of the derivative (or gradient):
$$\frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$$
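As a quick illustration of this formula (not part of the original post), the two-sided difference can be evaluated for a simple function such as $J(\theta) = \theta^2$, whose exact derivative at $\theta = 3$ is 6:

# Two-sided (centered) difference approximation of dJ/dtheta for J(theta) = theta ** 2.
# The exact derivative at theta = 3 is 2 * theta = 6.
J = lambda t: t ** 2
theta, epsilon = 3.0, 1e-7
gradapprox = (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)
print(gradapprox)  # prints a value very close to 6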

Propagation diagram for the one-dimensional function
[Figure: the input x and parameter θ feed forward propagation, which computes J(θ) = θx; backpropagation then computes dθ]
First use the input $x$ in forward propagation to compute $J(\theta) = \theta x$, then use backpropagation to compute $\frac{\partial J}{\partial \theta}$.

def forward_propagation(x, theta):
    """
    Linear forward propagation for the one-dimensional function (computes J).
    J(theta) = theta * x

    Arguments:
    x -- the input
    theta -- theta, the multiplier

    Returns:
    J -- the value of the function J(theta) = theta * x
    """
    J = np.dot(theta, x)
    return J
def backward_propagation(x, theta):
    """
    Compute the derivative of J(theta) = theta * x with respect to theta.

    Arguments:
    x -- the input
    theta -- theta, a real number

    Returns:
    dtheta -- the gradient of the cost with respect to theta
    """
    dtheta = x
    return dtheta
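A minimal usage example for the two functions above (the values here are chosen arbitrarily for illustration):

# Forward pass: J = theta * x = 4 * 2 = 8
x, theta = 2, 4
J = forward_propagation(x, theta)
print("J = " + str(J))            # J = 8

# Backward pass: dJ/dtheta = x = 2
dtheta = backward_propagation(x, theta)
print("dtheta = " + str(dtheta))  # dtheta = 2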

Gradient check for the one-dimensional function

The gradient check proceeds as follows. First compute the numerical approximation "gradapprox", using a small perturbation $\varepsilon$:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$

  • Then compute the gradient "grad" with backpropagation and compare it with "gradapprox" using the relative difference (Euclidean norms):
    $$difference = \frac{\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2} \tag{2}$$
    To evaluate this formula:
    • 1'. Compute the numerator with np.linalg.norm(...)
    • 2'. Compute the denominator with two calls to np.linalg.norm(...)
    • 3'. Divide them
      When the difference is smaller than $10^{-7}$, the gradient computation is usually considered correct.
def gradient_check(x, theta, epsilon=1e-7):
    """
    Compare the numerical and analytic derivatives to perform a gradient check.

    Arguments:
    x -- a real-valued input
    theta -- the parameter, also a real number
    epsilon -- tiny shift used to compute the approximate gradient

    Returns:
    difference -- difference between the approximate gradient gradapprox
                  and the backward-propagation gradient grad
    """
    # Compute gradapprox
    thetaplus = theta + epsilon                                   # Step 1
    thetaminus = theta - epsilon                                  # Step 2
    J_plus = forward_propagation(x, thetaplus)                    # Step 3
    J_minus = forward_propagation(x, thetaminus)                  # Step 4
    gradapprox = (J_plus - J_minus) / (2 * epsilon)               # Step 5

    # Check whether gradapprox is close to the grad returned by backward_propagation()
    grad = backward_propagation(x, theta)

    numerator = np.linalg.norm(grad - gradapprox)                        # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)      # Step 2'
    difference = numerator / denominator                                 # Step 3'

    if difference < 1e-7:
        print("Gradient check: the gradient is correct!")
    else:
        print("Gradient check: the difference exceeds the threshold!")

    return difference

Run a quick test

# Test gradient_check
print("-----------------Testing gradient_check-----------------")
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))

Test output

-----------------Testing gradient_check-----------------
Gradient check: the gradient is correct!
difference = 2.91933588329e-10

Gradient Checking for the Parameters of a Multi-Layer Neural Network

[Figure: the three-layer network LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID used for gradient checking]

Forward propagation

def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(x)
    """
    s = 1 / (1 + np.exp(-x))
    return s

def relu(x):
    """
    Compute the relu of x

    Arguments:
    x -- A scalar or numpy array of any size.

    Return:
    s -- relu(x)
    """
    s = np.maximum(0, x)
    return s

def forward_propagation_n(X, Y, parameters):
    """
    Implement the forward propagation shown in the figure (and compute the cost).

    Arguments:
    X -- training set of m examples
    Y -- labels of the m examples
    parameters -- python dictionary containing the parameters "W1", "b1", "W2", "b2", "W3", "b3":
                  W1 -- weight matrix of shape (5, 4)
                  b1 -- bias vector of shape (5, 1)
                  W2 -- weight matrix of shape (3, 5)
                  b2 -- bias vector of shape (3, 1)
                  W3 -- weight matrix of shape (1, 3)
                  b3 -- bias vector of shape (1, 1)

    Returns:
    cost -- the cost function (logistic cost)
    """
    m = X.shape[1]
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    # Compute the cost
    logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
    cost = (1 / m) * np.sum(logprobs)

    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)

    return cost, cache
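forward_propagation_n can be tried on its own with a small, randomly generated test case matching the shapes in the docstring (the seed and values below are illustrative assumptions, not data from the original post):

# Illustrative random test case with the parameter shapes from the docstring:
# W1: (5, 4), b1: (5, 1), W2: (3, 5), b2: (3, 1), W3: (1, 3), b3: (1, 1)
np.random.seed(1)
X = np.random.randn(4, 3)          # 4 input features, 3 examples
Y = np.array([[1, 1, 0]])          # labels for the 3 examples
parameters = {
    "W1": np.random.randn(5, 4), "b1": np.zeros((5, 1)),
    "W2": np.random.randn(3, 5), "b2": np.zeros((3, 1)),
    "W3": np.random.randn(1, 3), "b3": np.zeros((1, 1)),
}
cost, cache = forward_propagation_n(X, Y, parameters)
print("cost = " + str(cost))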

Backward propagation

Because the network here is fairly shallow, relu_backward() and sigmoid_backward() are not used directly (a sketch of what such helpers typically look like is given after the code below).

def backward_propagation_n(X, Y, cache):
    """
    Implement the backward propagation shown in the figure.

    Arguments:
    X -- input data point, of shape (number of input nodes, 1)
    Y -- the label
    cache -- cache output from forward_propagation_n()

    Returns:
    gradients -- a dictionary with the gradients of the cost with respect to
                 each parameter, activation and pre-activation variable.
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y  # derivative of the logistic cost combined with the sigmoid output
    dW3 = 1. / m * np.dot(dZ3, A2.T)
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    # dW2 = 1. / m * np.dot(dZ2, A1.T) * 2    # Should not multiply by 2
    dW2 = 1. / m * np.dot(dZ2, A1.T)
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1. / m * np.dot(dZ1, X.T)
    # db1 = 4. / m * np.sum(dZ1, axis=1, keepdims=True)    # Should not multiply by 4
    db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients

Converting the parameter format

To compare "gradapprox" with the gradient computed by backpropagation, the formula is still:
$$\frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$$
However, $\theta$ is no longer a scalar; it is a dictionary called "parameters". The function "dictionary_to_vector()" is implemented for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into column vectors and concatenating them. The inverse function is "vector_to_dictionary", which returns the "parameters" dictionary.
[Figure: dictionary_to_vector flattens W1, b1, ..., b3 into a single column vector; vector_to_dictionary reverses the operation]

def dictionary_to_vector(parameters):
    """
    Roll all our parameters dictionary into a single vector satisfying our specific required shape.
    """
    keys = []
    count = 0
    for key in ["W1", "b1", "W2", "b2", "W3", "b3"]:
        # flatten the parameter into a column vector
        new_vector = np.reshape(parameters[key], (-1, 1))
        keys = keys + [key] * new_vector.shape[0]

        if count == 0:
            theta = new_vector
        else:
            theta = np.concatenate((theta, new_vector), axis=0)
        count = count + 1

    return theta, keys

def vector_to_dictionary(theta):
    """
    Unroll all our parameters dictionary from a single vector satisfying our specific required shape.
    """
    parameters = {}
    parameters["W1"] = theta[:20].reshape((5, 4))
    parameters["b1"] = theta[20:25].reshape((5, 1))
    parameters["W2"] = theta[25:40].reshape((3, 5))
    parameters["b2"] = theta[40:43].reshape((3, 1))
    parameters["W3"] = theta[43:46].reshape((1, 3))
    parameters["b3"] = theta[46:47].reshape((1, 1))

    return parameters
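The function gradient_check_n below also calls gradients_to_vector, which this post does not define. A minimal sketch, assuming it simply stacks dW1, db1, ..., dW3, db3 in the same order that dictionary_to_vector uses for the parameters:

def gradients_to_vector(gradients):
    """
    Roll the gradients dW1, db1, ..., dW3, db3 into a single column vector,
    in the same order used by dictionary_to_vector for the parameters.
    """
    count = 0
    for key in ["dW1", "db1", "dW2", "db2", "dW3", "db3"]:
        # flatten the gradient into a column vector
        new_vector = np.reshape(gradients[key], (-1, 1))

        if count == 0:
            theta = new_vector
        else:
            theta = np.concatenate((theta, new_vector), axis=0)
        count = count + 1

    return theta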

Implementing the gradient check for the L-layer network

Here is pseudocode that can help you implement the gradient check:
For each i in num_parameters:

  • To compute J_plus[i]:
    1. Set $\theta^{+}$ to np.copy(parameters_values)
    2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
    3. Calculate $J^{+}_i$ using forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$)).
  • To compute J_minus[i]: do the same thing with $\theta^{-}$
  • Compute the approximate gradient $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$; gradapprox is a vector, and gradapprox[i] is the approximate gradient for the corresponding parameter.
  • Compute $grads$ with backpropagation
  • Compute the difference
    $$difference = \frac{\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2}$$
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    Check whether backward_propagation_n correctly computes the gradient of the cost
    output by forward_propagation_n.

    Arguments:
    parameters -- python dictionary containing the parameters "W1", "b1", "W2", "b2", "W3", "b3"
    gradients -- output of the backward propagation, containing the derivatives of the cost
                 with respect to each parameter
    X -- input data points, of shape (number of input nodes, 1)
    Y -- the labels
    epsilon -- tiny shift to the input used to compute the approximate gradient

    Returns:
    difference -- difference between the approximate gradient and the backward-propagation gradient
    """
    # Set up the variables
    parameters_values, keys = dictionary_to_vector(parameters)  # keys is not used
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):
        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]"
        thetaplus = np.copy(parameters_values)                                              # Step 1
        thetaplus[i][0] = thetaplus[i][0] + epsilon                                         # Step 2
        J_plus[i], cache = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))     # Step 3, cache is not used

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]"
        thetaminus = np.copy(parameters_values)                                             # Step 1
        thetaminus[i][0] = thetaminus[i][0] - epsilon                                       # Step 2
        J_minus[i], cache = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))   # Step 3, cache is not used

        # Compute gradapprox[i]
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)

    # Compare gradapprox with the backward-propagation gradient by computing the difference.
    numerator = np.linalg.norm(grad - gradapprox)                        # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)      # Step 2'
    difference = numerator / denominator                                 # Step 3'

    if difference < 1e-7:
        print("Gradient check: the gradient is correct!")
    else:
        print("Gradient check: the difference exceeds the threshold!")

    return difference
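To exercise gradient_check_n end to end, a small random test case can be built by hand; the seed, data, and shapes below are illustrative assumptions consistent with the docstrings above, not the test case shipped with the original assignment. If backward_propagation_n is correct, the printed difference should typically fall well below the 1e-7 threshold:

# Illustrative end-to-end check; the random data below are an assumption,
# not the test case from the original assignment.
np.random.seed(2)
X = np.random.randn(4, 3)
Y = np.array([[1, 1, 0]])
parameters = {
    "W1": np.random.randn(5, 4), "b1": np.zeros((5, 1)),
    "W2": np.random.randn(3, 5), "b2": np.zeros((3, 1)),
    "W3": np.random.randn(1, 3), "b3": np.zeros((1, 1)),
}
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
print("difference = " + str(difference))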