
Andrew Ng Deep Learning Course, Class 2 Week 1 assignment123: study notes


This assignment is fairly simple; the slightly trickier part is grad_check. The code is below:

import numpy as np
# dictionary_to_vector, gradients_to_vector, vector_to_dictionary and
# forward_propagation_n are helper functions provided with the assignment.

def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    Checks if backward_propagation_n computes correctly the gradient of the cost
    output by forward_propagation_n

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
    grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters
    x -- input datapoint, of shape (input size, 1)
    y -- true "label"
    epsilon -- tiny shift to the input to compute approximated gradient with formula (1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """

    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):
        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because the function you have to call outputs two values but we only care about the first one
        ### START CODE HERE ### (approx. 3 lines)
        new_theta_plus = np.copy(parameters_values)
        new_theta_plus[i, 0] = new_theta_plus[i, 0] + epsilon
        J_plus[i, 0], _ = forward_propagation_n(X, Y, vector_to_dictionary(new_theta_plus))
        ### END CODE HERE ###

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        ### START CODE HERE ### (approx. 3 lines)
        new_theta_minus = np.copy(parameters_values)
        new_theta_minus[i, 0] = new_theta_minus[i, 0] - epsilon
        J_minus[i, 0], _ = forward_propagation_n(X, Y, vector_to_dictionary(new_theta_minus))
        ### END CODE HERE ###

        # Compute gradapprox[i]
        ### START CODE HERE ### (approx. 1 line)
        gradapprox[i, 0] = (J_plus[i, 0] - J_minus[i, 0]) / (2 * epsilon)
        ### END CODE HERE ###

    # Compare gradapprox to backward propagation gradients by computing difference.
    ### START CODE HERE ### (approx. 1 line)
    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    # Step 2'
    numerator = np.linalg.norm(grad - gradapprox)
    # Step 3'
    difference = numerator / denominator
    ### END CODE HERE ###

    if difference > 1.2e-7:
        print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
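If you want to try the same idea outside the notebook, here is a small self-contained sketch of the check on a toy cost function. Note that forward, backward and gradient_check are names I made up for this illustration; they are not the assignment's helpers.

import numpy as np

def forward(theta):
    # toy cost J(theta) = theta1 * theta2 + theta2 ** 2
    return theta[0, 0] * theta[1, 0] + theta[1, 0] ** 2

def backward(theta):
    # analytic gradient of J with respect to theta
    return np.array([[theta[1, 0]],
                     [theta[0, 0] + 2 * theta[1, 0]]])

def gradient_check(theta, epsilon=1e-7):
    grad = backward(theta)
    gradapprox = np.zeros_like(theta)
    for i in range(theta.shape[0]):
        theta_plus = np.copy(theta)
        theta_plus[i, 0] += epsilon
        theta_minus = np.copy(theta)
        theta_minus[i, 0] -= epsilon
        # same centered-difference formula as in gradient_check_n
        gradapprox[i, 0] = (forward(theta_plus) - forward(theta_minus)) / (2 * epsilon)
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator

theta = np.array([[2.0], [3.0]])
print(gradient_check(theta))  # tiny value if forward and backward agree

Because the toy cost is a simple polynomial, the printed difference should land well below the 1.2e-7 threshold used above.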

Running it produces a difference of about 0.28. After fixing the two errors deliberately planted in back_propagation, the difference becomes:

1.1890417878779317e-07

At first, even after fixing the two errors in back_propagation, the check still reported that the difference was too large. After searching online, I found that this difference is actually correct; people simply adjusted the threshold. So I also changed it to 1.2e-7 (the threshold in the code provided with the assignment is 1e-7). That said, this difference is on the same order of magnitude as epsilon, so the deviation is not exactly small, but it should still be within an acceptable range.
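For reference, the two formulas the docstring refers to are

    gradapprox[i] = (J(theta + epsilon * e_i) - J(theta - epsilon * e_i)) / (2 * epsilon)    (1)
    difference = ||grad - gradapprox||_2 / (||grad||_2 + ||gradapprox||_2)                   (2)

With epsilon = 1e-7, the accuracy of the centered difference in (1) is ultimately limited by floating-point round-off in evaluating the cost, so as I understand it a correct implementation can still end up around 1e-7 rather than far below it.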

Another puzzling thing: the same code gives a difference of 1.189... on my machine, while other people get 1.188.... I checked in both PyCharm and Jupyter Notebook, so it does not seem to be related to the editor or interpreter. Could it actually depend on the CPU?
