
Course 1 Neural Networks and Deep Learning, Week 4: Building a Two-Layer Neural Network to Recognize Cat Images


Notation conventions

  • A superscript $[l]$ denotes a quantity of the $l^{th}$ layer of the network; for example, $a^{[L]}$ is the activation of layer $[L]$, $W^{[L]}$ is the weight matrix of layer $[L]$, and $b^{[L]}$ is the bias of layer $[L]$.
  • A superscript $(i)$ denotes the $i^{th}$ example; for example, $x^{(i)}$ is the $i^{th}$ training example.
  • A subscript $i$ denotes the $i^{th}$ entry of layer $[l]$; for example, $a^{[l]}_i$ is the $i^{th}$ activation of layer $l$.
| Layer | Shape of W | Shape of b | Activation | Shape of activation |
| --- | --- | --- | --- | --- |
| Layer 1 | $(n^{[1]}, 12288)$ | $(n^{[1]}, 1)$ | $Z^{[1]} = W^{[1]} X + b^{[1]}$ | $(n^{[1]}, 209)$ |
| Layer 2 | $(n^{[2]}, n^{[1]})$ | $(n^{[2]}, 1)$ | $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ | $(n^{[2]}, 209)$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| Layer L-1 | $(n^{[L-1]}, n^{[L-2]})$ | $(n^{[L-1]}, 1)$ | $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ | $(n^{[L-1]}, 209)$ |
| Layer L | $(n^{[L]}, n^{[L-1]})$ | $(n^{[L]}, 1)$ | $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ | $(n^{[L]}, 209)$ |
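As a quick check of these dimensions (this sketch is not part of the original post's code), the snippet below pushes random data through a few layers with NumPy and prints the shapes. The 12288 input features and 209 examples come from the cat dataset used later; the hidden sizes 20 and 7 are made-up values chosen only for illustration.

import numpy as np

layer_sizes = [12288, 20, 7, 1]          # n[0], n[1], n[2], n[3] (hidden sizes are arbitrary)
m = 209                                  # number of training examples

A = np.random.randn(layer_sizes[0], m)   # A[0] = X, shape (12288, 209)
for l in range(1, len(layer_sizes)):
    W = np.random.randn(layer_sizes[l], layer_sizes[l - 1])  # (n[l], n[l-1])
    b = np.zeros((layer_sizes[l], 1))                        # (n[l], 1)
    Z = np.dot(W, A) + b                                     # (n[l], m)
    print("layer", l, "W:", W.shape, "b:", b.shape, "Z:", Z.shape)
    A = np.maximum(0, Z)                 # ReLU keeps the shape (n[l], m)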

1. Principles

1.1 Initializing the parameters

For each layer, initialize the weight matrix $W^{[l]}$ and the bias vector $b^{[l]}$.

1.2 Forward propagation

1.2.1 The linear part of forward propagation: $WX + b$

$$W = \begin{bmatrix} j & k & l\\ m & n & o \\ p & q & r \end{bmatrix}\qquad X = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \end{bmatrix}\qquad b = \begin{bmatrix} s \\ t \\ u \end{bmatrix}$$
$$WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li) + s\\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri) + u \end{bmatrix}$$
In general, $Z^{[l]} = W^{[l]}A^{[l-1]} + b^{[l]}$, where $A^{[0]} = X$.
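As a small illustration of this step (the shapes below are arbitrary example values, not from the assignment), the following sketch computes $Z = WX + b$ with NumPy. Because b is a column vector, broadcasting adds it to every column of $WX$, exactly as in the matrix written out above.

import numpy as np

np.random.seed(0)
W = np.random.randn(3, 3)   # (size of current layer, size of previous layer)
X = np.random.randn(3, 4)   # (size of previous layer, number of examples)
b = np.random.randn(3, 1)   # (size of current layer, 1)

Z = np.dot(W, X) + b        # broadcasting adds b to every column of WX
print(Z.shape)              # (3, 4): one pre-activation column per example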

1.2.2 The activation part of forward propagation

Sigmoid activation: $\sigma(Z) = \sigma(WA + b) = \frac{1}{1 + e^{-(WA + b)}}$
ReLU activation: $A = ReLU(Z) = \max(0, Z)$
In general, $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} + b^{[l]})$, where $g(\cdot)$ can be either sigmoid() or relu().

1.3 Computing the cost

1.3.1 The cost function

$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right)$$
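A minimal sketch of this cost in NumPy, assuming AL is the row vector of sigmoid outputs and Y the row vector of 0/1 labels (the values below are toy numbers for illustration only):

import numpy as np

AL = np.array([[0.8, 0.9, 0.4]])   # predicted probabilities, shape (1, m)
Y = np.array([[1, 1, 0]])          # true labels, shape (1, m)
m = Y.shape[1]

# cross-entropy cost, averaged over the m examples
cost = -(1 / m) * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))
print(cost)                        # a single scalar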

1.4 Backward propagation

$A^{[L]}$ belongs to the output layer and is obtained from $A^{[L]} = \sigma(Z^{[L]})$. The derivative of the cost with respect to $A^{[L]}$ is $dA^{[L]} = \frac{\partial \mathcal{L}}{\partial A^{[L]}}$, computed as:

dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL

1.4.1 The activation part of backward propagation

$$dZ^{[L]} = \frac{\partial \mathcal{L}}{\partial Z^{[L]}} = dA^{[L]} * g'(Z^{[L]})$$
where $g'(\cdot)$ is the derivative of the activation function.

1.4.2 The linear part of backward propagation

Assume $dZ^{[l]}$ has already been obtained; then, working backwards from layer L:
$$dW^{[L]} = \frac{\partial \mathcal{L}}{\partial W^{[L]}} = \frac{1}{m} dZ^{[L]} A^{[L-1]T} \qquad db^{[L]} = \frac{\partial \mathcal{L}}{\partial b^{[L]}} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[L](i)}$$
$$dZ^{[L-1]} = W^{[L]T} dZ^{[L]} * g'(Z^{[L-1]}) \qquad dW^{[L-1]} = \frac{1}{m} dZ^{[L-1]} A^{[L-2]T} \qquad db^{[L-1]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[L-1](i)}$$
$$\vdots$$
$$dZ^{[1]} = W^{[2]T} dZ^{[2]} * g'(Z^{[1]}) \qquad dW^{[1]} = \frac{1}{m} dZ^{[1]} A^{[0]T} \qquad db^{[1]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[1](i)}$$
where $dA^{[L-1]} = \frac{\partial \mathcal{L}}{\partial A^{[L-1]}} = W^{[L]T} dZ^{[L]}$.
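These formulas are not part of the original assignment code, but they are easy to sanity-check with a finite-difference gradient check on a tiny single-layer example (sigmoid output, cross-entropy cost, random toy data). The sketch below compares the analytic $dW$ from the formulas above with a numerical estimate.

import numpy as np

np.random.seed(1)
m = 5
X = np.random.randn(3, m)                       # toy inputs, A[0] = X
Y = (np.random.rand(1, m) > 0.5).astype(float)  # toy 0/1 labels
W = np.random.randn(1, 3) * 0.01
b = np.zeros((1, 1))

def cost_fn(W):
    A = 1 / (1 + np.exp(-(np.dot(W, X) + b)))   # sigmoid(WX + b)
    return -(1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))

# Analytic gradient: dZ = dA * sigmoid'(Z) simplifies to A - Y, and dW = (1/m) dZ A[0]^T
A = 1 / (1 + np.exp(-(np.dot(W, X) + b)))
dW = (1 / m) * np.dot(A - Y, X.T)

# Numerical estimate of dW[0, 0] by central differences
eps = 1e-7
W_plus, W_minus = W.copy(), W.copy()
W_plus[0, 0] += eps
W_minus[0, 0] -= eps
approx = (cost_fn(W_plus) - cost_fn(W_minus)) / (2 * eps)
print(dW[0, 0], approx)   # the two values should agree to several decimal places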

1.5 Updating the parameters

$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]} \qquad b^{[l]} = b^{[l]} - \alpha \, db^{[l]}$$
where $\alpha$ is the learning rate.


2. Building the two-layer neural network

2.1 Preparing the packages

  • numpy is the main package for scientific computing with Python.
  • matplotlib is a library to plot graphs in Python.
  • np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don’t change the seed.
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases import *  # test utilities
from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward  # activation functions
import lr_utils
np.random.seed(1)  # fix the random seed

2.2 Initializing the parameters

  • The structure of the two-layer network is Linear -> ReLU -> Linear -> Sigmoid.
  • Initialize the weights W randomly: np.random.randn(shape) * 0.01.
  • Initialize the biases b with zeros: np.zeros(shape).
def initialize_parameters(n_x, n_h, n_y):
    '''
    Initialize the parameters of the two-layer network.
    :param n_x: number of nodes in the input layer
    :param n_h: number of nodes in the hidden layer
    :param n_y: number of nodes in the output layer
    :return:
        parameters: a dictionary containing
            W1: weight matrix of shape (n_h, n_x)
            W2: weight matrix of shape (n_y, n_h)
            b1: bias vector of shape (n_h, 1)
            b2: bias vector of shape (n_y, 1)
    '''
    W1 = np.random.randn(n_h, n_x) * 0.01
    W2 = np.random.randn(n_y, n_h) * 0.01
    b1 = np.zeros(shape=(n_h, 1))  # note that the shape passed to np.zeros must be a tuple
    b2 = np.zeros((n_y, 1))

    # Use assertions to make sure the shapes are correct
    assert (W1.shape == (n_h, n_x))
    assert (W2.shape == (n_y, n_h))
    assert (b1.shape == (n_h, 1))
    assert (b2.shape == (n_y, 1))

    parameters = {"W1": W1, "W2": W2, "b1": b1, "b2": b2}
    return parameters

2.3 The forward propagation functions

Forward propagation consists of three steps:

  • Compute the linear part.
  • Linear -> activation, where the activation function is ReLU or sigmoid.
  • For the whole model, apply [Linear -> ReLU] L-1 times, followed by one [Linear -> Sigmoid].

2.3.1 The linear part [Linear]

$$Z^{[l]} = W^{[l]}A^{[l-1]} + b^{[l]}$$

where $A^{[0]} = X$.

  • Multiply the two matrices with np.dot(W, A).
  • Check the matrix dimensions with W.shape.
def linear_forward(A, W, b):
    '''
    Implement the linear part of a layer's forward propagation.
    :param A: activations from the previous layer (or input data), of shape (size of previous layer, number of examples)
    :param W: weight matrix, of shape (size of current layer, size of previous layer)
    :param b: bias vector, of shape (size of current layer, 1)
    :return:
        Z: the input of the activation function, also called the pre-activation parameter
        cache: a tuple containing A, W and b, stored for computing the backward pass
    '''
    Z = np.dot(W, A) + b
    assert (Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)  # cache is a tuple
    return Z, cache

2.3.2 The linear-activation part [Linear -> Activation]

The following two activation functions are used:

  • Sigmoid: $\sigma(Z) = \sigma(WA + b) = \frac{1}{1 + e^{-(WA + b)}}$.

  • The derivative of the sigmoid function:
    $$\begin{aligned} \sigma'(z) &= \left(\frac{1}{1+e^{-z}}\right)' = \frac{e^{-z}}{(1+e^{-z})^{2}} = \frac{1+e^{-z}-1}{(1+e^{-z})^{2}} \\ &= \frac{1}{1+e^{-z}}\left(1-\frac{1}{1+e^{-z}}\right) \\ &= \sigma(z)(1-\sigma(z)) \end{aligned}$$

A, activation_cache = sigmoid(Z)
def sigmoid(Z):
    """
    Implements the sigmoid activation in numpy

    Arguments:
    Z -- numpy array of any shape

    Returns:
    A -- output of sigmoid(z), same shape as Z
    cache -- returns Z as well, useful during backpropagation
    """
    A = 1 / (1 + np.exp(-Z))
    cache = Z
    return A, cache
  • ReLU: $A = ReLU(Z) = \max(0, Z)$
  • The derivative of ReLU:

$$ReLU'(Z)=\begin{cases} 0 & Z \leq 0 \\ 1 & Z > 0 \end{cases}$$

A, activation_cache = relu(Z)
def relu(Z):
    """
    Implement the RELU function.

    Arguments:
    Z -- Output of the linear layer, of any shape

    Returns:
    A -- Post-activation parameter, of the same shape as Z
    cache -- returns Z as well, stored for computing the backward pass efficiently
    """
    A = np.maximum(0, Z)
    assert (A.shape == Z.shape)
    cache = Z
    return A, cache

In both cases, activation_cache is simply $Z$.

The Linear -> Activation step is implemented with the formula $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} + b^{[l]})$, where the activation function g can be either sigmoid() or relu().

def linear_activation_forward(A_prev, W, b, activation):
    '''
    Implement the forward propagation for the Linear -> Activation layer.
    :param A_prev: activations from the previous layer (or input data), of shape (size of previous layer, number of examples)
    :param W: weight matrix, a numpy array of shape (size of current layer, size of previous layer)
    :param b: bias vector, a numpy array of shape (size of current layer, 1)
    :param activation: the activation function used in this layer, a string: "sigmoid" or "relu"
    :return:
        A: the output of the activation function, also called the post-activation value
        cache: a tuple containing 'linear_cache' and 'activation_cache', stored to compute the backward pass efficiently
    '''
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)  # linear_cache = (A_prev, W, b)
        A, activation_cache = sigmoid(Z)  # activation_cache = Z
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)  # linear_cache = (A_prev, W, b)
        A, activation_cache = relu(Z)  # activation_cache = Z

    assert (A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)  # ((A_prev, W, b), Z)
    return A, cache

2.4 Computing the cost

With the forward propagation of the two-layer model complete, we need to compute the cost (the error) in order to check whether the model is actually learning. The cost function is:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right)$$

def compute_cost(AL, Y):
    '''
    Compute the cost function.
    :param AL: probability vector corresponding to the label predictions, of shape (1, number of examples)
    :param Y: label vector (e.g. 1 if cat, 0 if non-cat), of shape (1, number of examples)
    :return:
        cost: the cross-entropy cost
    '''
    m = Y.shape[1]  # number of examples
    cost = (-1 / m) * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1 - Y, np.log(1 - AL)))
    cost = np.squeeze(cost)  # make sure cost has the expected shape, e.g. turn [[17]] into 17
    assert (cost.shape == ())  # a scalar
    return cost

2.5 Backward propagation

Backward propagation computes the gradient of the loss function with respect to the parameters. The flow of forward and backward propagation is shown below.
[Figure: forward and backward propagation flow]
Backward propagation again consists of three steps:

  • the linear part of the backward computation;
  • the Linear -> Activation backward computation, where the activation part uses the derivative of ReLU or sigmoid;
  • the whole model: [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID.

2.5.1 The linear part of backward propagation

For layer $l$, the linear part is $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$, followed by the activation function. Suppose we already have the derivative $dZ^{[l]} = \frac{\partial \mathcal{L}}{\partial Z^{[l]}}$ and want to obtain $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$; they can be computed with the following three formulas:
$$dW^{[l]} = \frac{\partial \mathcal{L}}{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1]T} \qquad db^{[l]} = \frac{\partial \mathcal{L}}{\partial b^{[l]}} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[l](i)} \qquad dA^{[l-1]} = \frac{\partial \mathcal{L}}{\partial A^{[l-1]}} = W^{[l]T} dZ^{[l]}$$

def linear_backward(dZ, cache):
    '''
    Implement the linear portion of backward propagation for a single layer (layer l).
    :param dZ: gradient of the cost with respect to the linear output of the current layer l
    :param cache: tuple of values (A_prev, W, b) coming from the forward propagation of the current layer
    :return:
        dA_prev: gradient of the cost with respect to the activation of the previous layer (l-1), same shape as A_prev
        dW: gradient of the cost with respect to W of the current layer l, same shape as W
        db: gradient of the cost with respect to b of the current layer l, same shape as b
    '''
    A_prev, W, b = cache
    m = A_prev.shape[1]  # number of examples
    dW = (1 / m) * np.dot(dZ, A_prev.T)
    db = (1 / m) * np.sum(dZ, axis=1, keepdims=True)  # sum across the rows, producing a column vector
    dA_prev = np.dot(W.T, dZ)

    assert (dA_prev.shape == A_prev.shape)
    assert (dW.shape == W.shape)
    assert (db.shape == b.shape)
    return dA_prev, dW, db

2.5.2 The linear-activation part of backward propagation [Linear -> Activation backward]

Two backward functions are provided to implement the Linear -> Activation backward pass:

  • sigmoid_backward implements the backward propagation for sigmoid: dZ = sigmoid_backward(dA, activation_cache)
def sigmoid_backward(dA, cache):
    """
    Implement the backward propagation for a single SIGMOID unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z' where we store for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    dZ = dA * s * (1 - s)
    assert (dZ.shape == Z.shape)
    return dZ
  • relu_backward implements the backward propagation for relu(): dZ = relu_backward(dA, activation_cache)
def relu_backward(dA, cache):
    """
    Implement the backward propagation for a single RELU unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z' where we store for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    dZ = np.array(dA, copy=True)  # just converting dz to a correct object.
    # When z <= 0, you should set dz to 0 as well.
    dZ[Z <= 0] = 0
    assert (dZ.shape == Z.shape)
    return dZ

If g(.) is the activation function, then sigmoid_backward and relu_backward compute
$$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]})$$

def linear_activation_backward(dA, cache, activation):
    '''
    Implement the backward propagation for the Linear -> Activation layer.
    :param dA: post-activation gradient for the current layer
    :param cache: tuple of values (linear_cache, activation_cache) stored for computing the backward pass efficiently,
                  where linear_cache = (A_prev, W, b) and activation_cache = Z
    :param activation: the activation function used in this layer, a string: "relu" or "sigmoid"
    :return:
        dA_prev: gradient of the cost with respect to the activation of the previous layer (l-1), same shape as A_prev
        dW: gradient of the cost with respect to W of the current layer l, same shape as W
        db: gradient of the cost with respect to b of the current layer l, same shape as b
    '''
    linear_cache, activation_cache = cache
    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)  # activation_cache = Z
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
    dA_prev, dW, db = linear_backward(dZ, linear_cache)
    return dA_prev, dW, db

2.6 Updating the parameters

Once both the forward and backward passes are complete, update the parameters $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$:
$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]} \qquad b^{[l]} = b^{[l]} - \alpha \, db^{[l]}$$

where $\alpha$ is the learning rate.

def update_parameters(parameters, grads, learning_rate):
    '''
    Update the parameters using gradient descent.
    :param parameters: dictionary containing the parameters "W1", "b1", ..., "WL", "bL"
    :param grads: dictionary containing the gradients "dW1", "db1", ..., "dWL", "dbL"
    :param learning_rate: the learning rate
    :return:
        parameters: dictionary containing the updated parameters
            parameters["W" + str(l)] = ...
            parameters["b" + str(l)] = ...
    '''
    L = len(parameters) // 2  # integer division: number of layers
    for l in range(L):  # l runs from 0 to L-1, so add 1 below
        parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
        parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
    return parameters

3. Applying the two-layer neural network

We will build a two-layer neural network that recognizes whether an image contains a cat.
The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT

  • An image of shape (64, 64, 3) is flattened into a vector of size (12288, 1).
  • The input vector $[x_0, x_1, ..., x_{12287}]^T$ is multiplied by the weight matrix $W^{[1]}$ of shape $(n^{[1]}, 12288)$.
  • After adding the bias $b^{[1]}$ and applying the ReLU activation, we obtain $A^{[1]} = [a_0^{[1]}, a_1^{[1]}, ..., a_{n^{[1]}-1}^{[1]}]^T$.
  • $A^{[1]}$ is then multiplied by the weight matrix $W^{[2]}$ of shape $(1, n^{[1]})$, and the bias $b^{[2]}$ of shape (1, 1) is added.
  • Finally, the sigmoid activation is applied: if the result is greater than 0.5, the image is classified as cat, otherwise as non-cat.

3.1 Preparing the data

We have a dataset ("data.h5") consisting of a training set "train_catvnoncat.h5" and a test set "test_catvnoncat.h5":

  • a training set of m_train examples labelled 0 (non-cat) or 1 (cat)
  • a test set of m_test examples labelled 0 (non-cat) or 1 (cat)
  • each image of shape (num_px, num_px, 3), where 3 is the number of RGB channels

3.1.1 Loading the data

  • number of training examples: 209
  • number of test examples: 50
  • size of each image: (64, 64, 3)
  • train_x_orig shape: (209, 64, 64, 3)
  • train_y shape: (1, 209)
  • test_x_orig shape: (50, 64, 64, 3)
  • test_y shape: (1, 50)
def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")  # read the training set
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # training set features, shape (m_train(209), num_px, num_px, 3)
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # training set labels

    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")  # read the test set
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # test set features, shape (m_test(50), num_px, num_px, 3)
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # test set labels

    classes = np.array(test_dataset["list_classes"][:])  # numpy array of strings containing 'cat' and 'non-cat'

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))  # reshape to (1, m_train(209))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))  # reshape to (1, m_test(50))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

3.1.2 Standardizing the data

Before feeding the data to the network, we usually reshape the images and standardize the values.

  • train_x's shape: (12288, 209)
  • test_x's shape: (12288, 50)
  • 12288 = 64 * 64 * 3, which is exactly the size of one image flattened into a vector
# Load the data
train_x_orig, train_y, test_x_orig, test_y, classes = load_dataset()
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T  # -1 lets numpy infer the dimension; note the transpose
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize the data so that the values lie between 0 and 1
train_x = train_x_flatten / 255
test_x = test_x_flatten / 255

3.2 Implementing the two-layer neural network model

We now assemble the two-layer neural network model from the functions written above; their inputs and return values are as follows:

def initialize_parameters(n_x, n_h, n_y):
    ...  # initialize the parameters of the two-layer network
    return parameters

def linear_activation_forward(A_prev, W, b, activation):
    ...  # forward propagation for the Linear -> Activation layer
    return A, cache

def compute_cost(AL, Y):
    ...  # compute the cost function
    return cost

def linear_activation_backward(dA, cache, activation):
    ...  # backward propagation for the Linear -> Activation layer
    return dA_prev, dW, db

def update_parameters(parameters, grads, learning_rate):
    ...  # update the parameters
    return parameters

3.2.1 The two-layer neural network model

# Build the two-layer neural network
def two_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False, isPlot=True):
    '''
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
    :param X: input data, of shape (n_x, number of examples)
    :param Y: true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    :param layers_dims: dimensions of the layers (n_x, n_h, n_y)
    :param learning_rate: learning rate of the gradient descent update rule
    :param num_iterations: number of iterations of the optimization loop
    :param print_cost: if set to True, this will print the cost every 100 iterations
    :param isPlot: if set to True, plot the cost curve after training
    :return:
        parameters: a dictionary containing W1, W2, b1, and b2
    '''
    np.random.seed(1)
    grads = {}
    costs = []  # to keep track of the cost
    m = X.shape[1]  # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize the parameters of the two-layer network
    parameters = initialize_parameters(n_x, n_h, n_y)

    # Get W1, b1, W2 and b2 from the dictionary parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID
        A1, cache1 = linear_activation_forward(X, W1, b1, "relu")
        A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid")

        # Compute the cost
        cost = compute_cost(A2, Y)

        # Initialize backward propagation: compute dA2
        dA2 = -(np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation: dA1, dW2, db2, then dA0 (not used), dW1, db1
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")

        # Store the gradients
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update the parameters
        parameters = update_parameters(parameters, grads, learning_rate)

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Record and print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
            costs.append(cost)

    # After the loop, plot the cost curve if requested
    if isPlot:
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('iterations (per hundreds)')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

    return parameters

3.3 Training the model

# Data loading is complete; start training the two-layer network
n_x = 12288
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
parameters = two_layer_model(train_x, train_y, layers_dims=(n_x, n_h, n_y), num_iterations=2500, print_cost=True, isPlot=True)

3.3.1 Training output

Cost after iteration 0: 0.693049735659989
Cost after iteration 100: 0.6464320953428849
Cost after iteration 200: 0.6325140647912677
Cost after iteration 300: 0.6015024920354665
Cost after iteration 400: 0.5601966311605747
Cost after iteration 500: 0.515830477276473
Cost after iteration 600: 0.47549013139433266
...
Cost after iteration 2000: 0.07439078704319078
Cost after iteration 2100: 0.06630748132267926
Cost after iteration 2200: 0.059193295010381654
Cost after iteration 2300: 0.05336140348560552
Cost after iteration 2400: 0.04855478562877014

[Figure: the cost curve plotted against the number of iterations]

3.3.2 Analyzing the results

Once the code above has finished running, the model is trained, but we still need to check how well it actually performs.

  • First, predict on the training set to see how well the model fits the training data.
  • Then, predict on the test set to check the accuracy.

The prediction function is as follows:

def predict(X, y, parameters):
    """
    Predict the results of the two-layer neural network.

    Arguments:
    X - the data set to predict on
    y - the labels
    parameters - the parameters of the trained model

    Returns:
    p - the predictions for the given data set X
    """
    m = X.shape[1]
    n = len(parameters) // 2  # number of layers in the neural network
    p = np.zeros((1, m))

    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']

    # Forward propagation using the trained parameters
    A1, cache1 = linear_activation_forward(X, W1, b1, "relu")
    A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid")
    probas = A2

    for i in range(0, probas.shape[1]):  # range(0, m)
        if probas[0, i] > 0.5:
            p[0, i] = 1
        else:
            p[0, i] = 0

    print("Accuracy: " + str(float(np.sum((p == y)) / m)))
    return p

3.4 Making predictions

pred_train = predict(train_x, train_y, parameters)  # training set
pred_test = predict(test_x, test_y, parameters)  # test set

3.4.1 Prediction results

Accuracy: 1.0
Accuracy: 0.72

The model fits the training set perfectly (100%) but reaches only 72% accuracy on the test set, which indicates overfitting.

4. Complete code

import numpy as np
import h5py
import matplotlib.pyplot as plt

np.random.seed(1)  # fix the random seed


def initialize_parameters(n_x, n_h, n_y):
    '''
    Initialize the parameters of the two-layer network.
    :param n_x: number of nodes in the input layer
    :param n_h: number of nodes in the hidden layer
    :param n_y: number of nodes in the output layer
    :return:
        parameters: a dictionary containing
            W1: weight matrix of shape (n_h, n_x)
            W2: weight matrix of shape (n_y, n_h)
            b1: bias vector of shape (n_h, 1)
            b2: bias vector of shape (n_y, 1)
    '''
    W1 = np.random.randn(n_h, n_x) * 0.01
    W2 = np.random.randn(n_y, n_h) * 0.01
    b1 = np.zeros(shape=(n_h, 1))  # note that the shape passed to np.zeros must be a tuple
    b2 = np.zeros((n_y, 1))
    # Use assertions to make sure the shapes are correct
    assert (W1.shape == (n_h, n_x))
    assert (W2.shape == (n_y, n_h))
    assert (b1.shape == (n_h, 1))
    assert (b2.shape == (n_y, 1))
    parameters = {"W1": W1, "W2": W2, "b1": b1, "b2": b2}
    return parameters


def linear_forward(A, W, b):
    '''
    Implement the linear part of a layer's forward propagation.
    :param A: activations from the previous layer (or input data), of shape (size of previous layer, number of examples)
    :param W: weight matrix, of shape (size of current layer, size of previous layer)
    :param b: bias vector, of shape (size of current layer, 1)
    :return:
        Z: the input of the activation function, also called the pre-activation parameter
        cache: a tuple containing A, W and b, stored for computing the backward pass
    '''
    Z = np.dot(W, A) + b
    assert (Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)  # cache is a tuple
    return Z, cache


def sigmoid(Z):
    """
    Implements the sigmoid activation in numpy

    Arguments:
    Z -- numpy array of any shape

    Returns:
    A -- output of sigmoid(z), same shape as Z
    cache -- returns Z as well, useful during backpropagation
    """
    A = 1 / (1 + np.exp(-Z))
    cache = Z
    return A, cache


def relu(Z):
    """
    Implement the RELU function.

    Arguments:
    Z -- Output of the linear layer, of any shape

    Returns:
    A -- Post-activation parameter, of the same shape as Z
    cache -- returns Z as well, stored for computing the backward pass efficiently
    """
    A = np.maximum(0, Z)
    assert (A.shape == Z.shape)
    cache = Z
    return A, cache


def linear_activation_forward(A_prev, W, b, activation):
    '''
    Implement the forward propagation for the Linear -> Activation layer.
    :param A_prev: activations from the previous layer (or input data), of shape (size of previous layer, number of examples)
    :param W: weight matrix, a numpy array of shape (size of current layer, size of previous layer)
    :param b: bias vector, a numpy array of shape (size of current layer, 1)
    :param activation: the activation function used in this layer, a string: "sigmoid" or "relu"
    :return:
        A: the output of the activation function, also called the post-activation value
        cache: a tuple containing 'linear_cache' and 'activation_cache', stored to compute the backward pass efficiently
    '''
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)  # linear_cache = (A_prev, W, b)
        A, activation_cache = sigmoid(Z)  # activation_cache = Z
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)  # linear_cache = (A_prev, W, b)
        A, activation_cache = relu(Z)  # activation_cache = Z
    assert (A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)  # ((A_prev, W, b), Z)
    return A, cache


def compute_cost(AL, Y):
    '''
    Compute the cost function.
    :param AL: probability vector corresponding to the label predictions, of shape (1, number of examples)
    :param Y: label vector (e.g. 1 if cat, 0 if non-cat), of shape (1, number of examples)
    :return:
        cost: the cross-entropy cost
    '''
    m = Y.shape[1]  # number of examples
    cost = (-1 / m) * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1 - Y, np.log(1 - AL)))
    cost = np.squeeze(cost)  # make sure cost has the expected shape, e.g. turn [[17]] into 17
    assert (cost.shape == ())  # a scalar
    return cost


def linear_backward(dZ, cache):
    '''
    Implement the linear portion of backward propagation for a single layer (layer l).
    :param dZ: gradient of the cost with respect to the linear output of the current layer l
    :param cache: tuple of values (A_prev, W, b) coming from the forward propagation of the current layer
    :return:
        dA_prev: gradient of the cost with respect to the activation of the previous layer (l-1), same shape as A_prev
        dW: gradient of the cost with respect to W of the current layer l, same shape as W
        db: gradient of the cost with respect to b of the current layer l, same shape as b
    '''
    A_prev, W, b = cache
    m = A_prev.shape[1]  # number of examples
    dW = (1 / m) * np.dot(dZ, A_prev.T)
    db = (1 / m) * np.sum(dZ, axis=1, keepdims=True)  # sum across the rows, producing a column vector
    dA_prev = np.dot(W.T, dZ)
    assert (dA_prev.shape == A_prev.shape)
    assert (dW.shape == W.shape)
    assert (db.shape == b.shape)
    return dA_prev, dW, db


def sigmoid_backward(dA, cache):
    """
    Implement the backward propagation for a single SIGMOID unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z' where we store for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    dZ = dA * s * (1 - s)
    assert (dZ.shape == Z.shape)
    return dZ


def relu_backward(dA, cache):
    """
    Implement the backward propagation for a single RELU unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z' where we store for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    dZ = np.array(dA, copy=True)  # just converting dz to a correct object.
    # When z <= 0, you should set dz to 0 as well.
    dZ[Z <= 0] = 0
    assert (dZ.shape == Z.shape)
    return dZ


def linear_activation_backward(dA, cache, activation):
    '''
    Implement the backward propagation for the Linear -> Activation layer.
    :param dA: post-activation gradient for the current layer
    :param cache: tuple of values (linear_cache, activation_cache) stored for computing the backward pass efficiently,
                  where linear_cache = (A_prev, W, b) and activation_cache = Z
    :param activation: the activation function used in this layer, a string: "relu" or "sigmoid"
    :return:
        dA_prev: gradient of the cost with respect to the activation of the previous layer (l-1), same shape as A_prev
        dW: gradient of the cost with respect to W of the current layer l, same shape as W
        db: gradient of the cost with respect to b of the current layer l, same shape as b
    '''
    linear_cache, activation_cache = cache
    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)  # activation_cache = Z
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
    dA_prev, dW, db = linear_backward(dZ, linear_cache)
    return dA_prev, dW, db


def update_parameters(parameters, grads, learning_rate):
    '''
    Update the parameters using gradient descent.
    :param parameters: dictionary containing the parameters "W1", "b1", ..., "WL", "bL"
    :param grads: dictionary containing the gradients "dW1", "db1", ..., "dWL", "dbL"
    :param learning_rate: the learning rate
    :return:
        parameters: dictionary containing the updated parameters
            parameters["W" + str(l)] = ...
            parameters["b" + str(l)] = ...
    '''
    L = len(parameters) // 2  # integer division: number of layers
    for l in range(L):  # l runs from 0 to L-1, so add 1 below
        parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
        parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
    return parameters


def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")  # read the training set
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # training set features, shape (m_train(209), num_px, num_px, 3)
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # training set labels
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")  # read the test set
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # test set features, shape (m_test(50), num_px, num_px, 3)
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # test set labels
    classes = np.array(test_dataset["list_classes"][:])  # numpy array of strings containing 'cat' and 'non-cat'
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))  # reshape to (1, m_train(209))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))  # reshape to (1, m_test(50))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes


# Build the two-layer neural network
def two_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=True, isPlot=True):
    '''
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
    :param X: input data, of shape (n_x, number of examples)
    :param Y: true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    :param layers_dims: dimensions of the layers (n_x, n_h, n_y)
    :param learning_rate: learning rate of the gradient descent update rule
    :param num_iterations: number of iterations of the optimization loop
    :param print_cost: if set to True, this will print the cost every 100 iterations
    :param isPlot: if set to True, plot the cost curve after training
    :return:
        parameters: a dictionary containing W1, W2, b1, and b2
    '''
    np.random.seed(1)
    grads = {}
    costs = []  # to keep track of the cost
    m = X.shape[1]  # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize the parameters of the two-layer network
    parameters = initialize_parameters(n_x, n_h, n_y)

    # Get W1, b1, W2 and b2 from the dictionary parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID
        A1, cache1 = linear_activation_forward(X, W1, b1, "relu")
        A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid")

        # Compute the cost
        cost = compute_cost(A2, Y)

        # Initialize backward propagation: compute dA2
        dA2 = -(np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation: dA1, dW2, db2, then dA0 (not used), dW1, db1
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")

        # Store the gradients
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update the parameters
        parameters = update_parameters(parameters, grads, learning_rate)

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Record and print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
            costs.append(cost)

    # After the loop, plot the cost curve if requested
    if isPlot:
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('iterations (per hundreds)')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

    return parameters


def predict(X, y, parameters):
    """
    Predict the results of the two-layer neural network.

    Arguments:
    X - the data set to predict on
    y - the labels
    parameters - the parameters of the trained model

    Returns:
    p - the predictions for the given data set X
    """
    m = X.shape[1]
    n = len(parameters) // 2  # number of layers in the neural network
    p = np.zeros((1, m))

    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']

    # Forward propagation using the trained parameters
    A1, cache1 = linear_activation_forward(X, W1, b1, "relu")
    A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid")
    probas = A2

    for i in range(0, probas.shape[1]):  # range(0, m)
        if probas[0, i] > 0.5:
            p[0, i] = 1
        else:
            p[0, i] = 0

    print("Accuracy: " + str(float(np.sum((p == y)) / m)))
    return p


# Load the data
train_x_orig, train_y, test_x_orig, test_y, classes = load_dataset()
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T  # -1 lets numpy infer the dimension; note the transpose
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize the data so that the values lie between 0 and 1
train_x = train_x_flatten / 255
test_x = test_x_flatten / 255

# Data loading is complete; start training the two-layer network
n_x = 12288
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
parameters = two_layer_model(train_x, train_y, layers_dims=(n_x, n_h, n_y), num_iterations=3000, print_cost=True, isPlot=True)

# Make predictions
pred_train = predict(train_x, train_y, parameters)  # training set
pred_test = predict(test_x, test_y, parameters)  # test set