
PaddlePaddle (3): Building Network Models


Contents

1. The paddle.nn module

2. Building a network with Sequential

3. Built-in models


1. The paddle.nn module

import paddle
from paddle import nn

print(nn.__all__)
Output:
['BatchNorm', 'GroupNorm', 'LayerNorm', 'SpectralNorm', 'BatchNorm1D', 'BatchNorm2D',
'BatchNorm3D', 'InstanceNorm1D', 'InstanceNorm2D', 'InstanceNorm3D', 'SyncBatchNorm',
'LocalResponseNorm', 'Embedding', 'Linear', 'Upsample', 'UpsamplingNearest2D',
'UpsamplingBilinear2D', 'Pad1D', 'Pad2D', 'Pad3D', 'CosineSimilarity', 'Dropout',
'Dropout2D', 'Dropout3D', 'Bilinear', 'AlphaDropout', 'Unfold', 'RNNCellBase',
'SimpleRNNCell', 'LSTMCell', 'GRUCell', 'RNN', 'BiRNN', 'SimpleRNN', 'LSTM', 'GRU',
'dynamic_decode', 'MultiHeadAttention', 'Maxout', 'Softsign', 'Transformer', 'MSELoss',
'LogSigmoid', 'BeamSearchDecoder', 'ClipGradByNorm', 'ReLU', 'PairwiseDistance',
'BCEWithLogitsLoss', 'SmoothL1Loss', 'MaxPool3D', 'AdaptiveMaxPool2D', 'Hardshrink',
'Softplus', 'KLDivLoss', 'AvgPool2D', 'L1Loss', 'LeakyReLU', 'AvgPool1D',
'AdaptiveAvgPool3D', 'AdaptiveMaxPool3D', 'NLLLoss', 'Conv1D', 'Sequential', 'Hardswish',
'Conv1DTranspose', 'AdaptiveMaxPool1D', 'TransformerEncoder', 'Softmax', 'ParameterList',
'Conv2D', 'Softshrink', 'Hardtanh', 'TransformerDecoderLayer', 'CrossEntropyLoss', 'GELU',
'SELU', 'Silu', 'Conv2DTranspose', 'CTCLoss', 'ThresholdedReLU', 'AdaptiveAvgPool2D',
'MaxPool1D', 'Layer', 'TransformerDecoder', 'Conv3D', 'Tanh', 'Conv3DTranspose',
'Flatten', 'AdaptiveAvgPool1D', 'Tanhshrink', 'HSigmoidLoss', 'PReLU',
'TransformerEncoderLayer', 'AvgPool3D', 'MaxPool2D', 'MarginRankingLoss', 'LayerList',
'ClipGradByValue', 'BCELoss', 'Hardsigmoid', 'ClipGradByGlobalNorm', 'LogSoftmax',
'Sigmoid', 'Swish', 'Mish', 'PixelShuffle', 'ELU', 'ReLU6', 'LayerDict']

These layers fall into several categories: Conv, Pool, Padding, Activation, Normalization, Recurrent NN, Transformer, Dropout, and Loss.
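Layers from all of these categories are constructed and called the same way. Below is a minimal sketch (the channel counts, kernel size, and dropout rate are illustrative choices, not values from this article):

import paddle
from paddle import nn

conv = nn.Conv2D(in_channels=3, out_channels=16, kernel_size=3, padding=1)  # Conv
pool = nn.MaxPool2D(kernel_size=2, stride=2)                                # Pool
norm = nn.BatchNorm2D(num_features=16)                                      # Normalization
act  = nn.ReLU()                                                            # Activation
drop = nn.Dropout(p=0.5)                                                    # Dropout

x = paddle.rand([4, 3, 32, 32])      # NCHW batch: 4 RGB images of 32x32
y = drop(act(norm(pool(conv(x)))))   # chain the layers by hand
print(y.shape)                       # [4, 16, 16, 16]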

2. Building a network with Sequential

import paddle

# Build the network in Sequential style
mnist = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(784, 512),
    paddle.nn.ReLU(),
    paddle.nn.Dropout(0.2),
    paddle.nn.Linear(512, 10)
)
print(mnist)
Output:
Sequential(
  (0): Flatten()
  (1): Linear(in_features=784, out_features=512, dtype=float32)
  (2): ReLU()
  (3): Dropout(p=0.2, axis=None, mode=upscale_in_train)
  (4): Linear(in_features=512, out_features=10, dtype=float32)
)
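As a quick sanity check, a random batch shaped like MNIST images can be passed through the model (a sketch; the batch size of 8 is an arbitrary choice):

x = paddle.rand([8, 1, 28, 28])   # fake batch: 8 single-channel 28x28 images
logits = mnist(x)                 # Flatten -> [8, 784] -> ... -> [8, 10]
print(logits.shape)               # [8, 10]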

3. Built-in models

paddle.vision.models ships with many classic models:

print('PaddlePaddle built-in models:', paddle.vision.models.__all__)

Output:
PaddlePaddle built-in models: ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'VGG', 'vgg11', 'vgg13', 'vgg16', 'vgg19', 'MobileNetV1', 'mobilenet_v1', 'MobileNetV2', 'mobilenet_v2', 'LeNet']
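The lowercase entries are helper functions that return a configured model instance. For example (a minimal sketch; num_classes=10 is an illustrative choice, and pretrained=True would instead download pretrained weights):

model = paddle.vision.models.resnet18(pretrained=False, num_classes=10)

x = paddle.rand([1, 3, 224, 224])  # ResNet expects 3-channel images
print(model(x).shape)              # [1, 10]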

The paddle.summary() method shows the model's structure together with each layer's input/output shapes and parameter count:

lenet = paddle.vision.models.LeNet()
paddle.summary(lenet, (64, 1, 28, 28))
Output:
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Conv2D-3      [[64, 1, 28, 28]]     [64, 6, 28, 28]          60
    ReLU-4       [[64, 6, 28, 28]]     [64, 6, 28, 28]           0
  MaxPool2D-3    [[64, 6, 28, 28]]     [64, 6, 14, 14]           0
   Conv2D-4      [[64, 6, 14, 14]]     [64, 16, 10, 10]        2,416
    ReLU-5       [[64, 16, 10, 10]]    [64, 16, 10, 10]          0
  MaxPool2D-4    [[64, 16, 10, 10]]     [64, 16, 5, 5]           0
   Linear-6         [[64, 400]]           [64, 120]           48,120
   Linear-7         [[64, 120]]            [64, 84]           10,164
   Linear-8          [[64, 84]]            [64, 10]             850
===========================================================================
Total params: 61,610
Trainable params: 61,610
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.19
Forward/backward pass size (MB): 7.03
Params size (MB): 0.24
Estimated Total Size (MB): 7.46
---------------------------------------------------------------------------
{'total_params': 61610, 'trainable_params': 61610}
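Note that the dict on the last line is the return value of paddle.summary(), so the parameter counts can also be used programmatically:

info = paddle.summary(lenet, (64, 1, 28, 28))  # prints the table again
print(info['total_params'])      # 61610
print(info['trainable_params'])  # 61610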