[Image Recognition] Building a Pretrained VGG16 in PyTorch for 10 Monkey Species Classification

Published: 2024-01-20 01:52:46

Contents

  • [Image Recognition] 10 Monkey Species Classification in PyTorch
    • 1. Choosing a dataset
      • 1.1 Download link
      • 1.2 Dataset description
    • 2. A custom torch Dataset class
      • 2.1 What a custom dataset must implement
      • 2.2 Libraries a custom dataset may need
      • 2.3 Data preprocessing in the custom class
      • 2.4 Implementing the methods of the custom class
    • 3. Building the pretrained network
      • 3.1 Libraries the pretrained network needs
      • 3.2 Loading the pretrained model
      • 3.3 Building the fully connected layers
    • 4. Training the network
      • 4.1 Libraries needed for training
      • 4.2 Hyperparameters
      • 4.3 Loading the dataset and network
      • 4.4 Visualizing one batch of images
      • 4.5 Training
    • 5. Predicting on the validation set
      • 5.1 Visualizing predictions
    • 6. Visualization
      • 6.1 Confusion matrix
      • 6.2 Class activation map (CAM) heatmaps for the predictions
    • 7. Complete code (excluding visualization)
      • 7.1 mydataset.py
      • 7.2 mynet.py
      • 7.3 train.py
      • 7.4 test.py
1. Choosing a dataset

The dataset for this task comes from the 10-monkey-species dataset on Kaggle.

1.1 Download link

https://www.kaggle.com/slothkong/10-monkey-species

1.2 Dataset description

The dataset is split into a training set and a validation set, and covers 10 different monkey species (10 labels) as JPEG images. The training set has roughly 140 RGB images per class. Both splits contain 10 folders, one per class, with all images of the same label stored in the same directory. (The dataset authors note that transfer learning is likely to give good results.)
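As a quick sanity check on this layout, a small helper can walk a split directory and count the images in each class folder. This is only a sketch: it assumes the split root (e.g. `./training/training`, the path used later in this post) contains one subfolder per class.

```python
import os

def count_images_per_class(root):
    """Count the image files inside each class subfolder of a split root."""
    counts = {}
    for cls in sorted(os.listdir(root)):
        cls_dir = os.path.join(root, cls)
        if os.path.isdir(cls_dir):
            counts[cls] = len(os.listdir(cls_dir))
    return counts

# e.g. count_images_per_class("./training/training") should report
# roughly 140 images for each of the 10 class folders n0..n9
```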


Some samples from the dataset (the original images vary in size):


2. A custom torch Dataset class

Since the raw dataset already groups each class into its own folder, we could simply read it with the ImageFolder module in torchvision.datasets. In practice, however, we often face raw datasets where images of different labels are mixed together, so to handle arbitrary layouts flexibly we define our own dataset class:

2.1 What a custom dataset must implement

'''
class FirstDataset(data.Dataset):   # must inherit from data.Dataset
    def __init__(self):
        # TODO
        # 1. Initialize file paths or a list of file names.
        # In other words, this is where we set up the basic parameters of the class.
        pass

    def __getitem__(self, index):
        # TODO
        # 1. Read ONE item from file (e.g. with numpy.fromfile or PIL.Image.open).
        # 2. Preprocess it (e.g. with torchvision.transforms).
        # 3. Return a data pair (e.g. image and label).
        # Note: read only one item here; index is the item's index (subscript).
        pass

    def __len__(self):
        # Return the total size of the dataset (0 by default).
        pass
'''

[Reference blog] https://blog.csdn.net/sinat_42239797/article/details/90641659

2.2 Libraries a custom dataset may need

import os                                    # for reading directory contents
import torch.utils.data as data              # every custom dataset class inherits from this parent class
from PIL import Image                        # for reading the images
import numpy as np
import torch
import torchvision.transforms as tf

2.3 Data preprocessing in the custom class

Since the training set has only about 140 images per class, we use the torchvision.transforms module for preprocessing and data augmentation:

# Training-set preprocessing
train_data_transforms = tf.Compose([
    tf.ToTensor(),              # convert to a tensor and scale to (0, 1) (concentrates the data distribution)
    tf.RandomResizedCrop(224),  # random crop to 224x224
    tf.RandomHorizontalFlip(),  # horizontal flip with probability p = 0.5
    tf.Normalize([0.485, 0.456, 0.406],   # per-channel mean
                 [0.229, 0.224, 0.225]),  # per-channel std; standardization balances the distribution
])

# Validation-set preprocessing
val_data_transforms = tf.Compose([
    tf.ToTensor(),
    tf.Resize(256),
    tf.CenterCrop(224),         # center crop
    tf.Normalize([0.485, 0.456, 0.406],
                 [0.229, 0.224, 0.225]),
])

Preprocessing methods:

Normalization: pixel-wise normalization maps pixel values into (0, 1). The feature distribution of a raw image may be unbalanced; mapped into feature space it can look like an elongated ellipse, along which gradient descent iterates very slowly. After normalization, every feature contributes comparably to the final prediction, so the mapped distribution is closer to a circle and gradient descent converges faster. Normalization also helps prevent exploding gradients during backpropagation.

Standardization: using the data mean and standard deviation, the data is transformed to approximately follow a standard normal distribution (mean 0, standard deviation 1), which weakens the influence of outliers.
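A tiny numeric example of the two steps, using the same ImageNet channel statistics as the transforms above (the pixel values are illustrative):

```python
import numpy as np

# A toy single-channel "image" with 8-bit pixel values
img = np.array([[0.0, 51.0],
                [102.0, 255.0]])

scaled = img / 255.0                   # ToTensor-style normalization into (0, 1)
mean, std = 0.485, 0.229               # the R-channel statistics from the transforms above
standardized = (scaled - mean) / std   # Normalize: (x - mean) / std

# A pixel equal to the mean (0.485 after scaling) maps to 0;
# darker pixels become negative, brighter ones positive.
```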

2.4 Implementing the methods of the custom class:

# Define a dataset class; it must inherit from PyTorch's data.Dataset parent class
class MyDataset(data.Dataset):
    # Constructor: dir is the dataset path, mode selects the split ('train' or validation/test)
    def __init__(self, dir, mode):
        self.imgPathList = []                  # image paths (note: paths, not the images themselves)
        self.labelList = []                    # label of each image (derived from its folder)
        self.dataSize = 0                      # dataset size
        self.labelsNum = 0                     # number of class folders seen so far
        self.transform = train_data_transforms # preprocessing pipeline
        self.mode = mode
        for label in os.listdir(dir):                                    # iterate over the class folders in dir
            for file in os.listdir(dir + '/' + label):
                self.imgPathList.append(dir + '/' + label + '/' + file)  # record the image's full path
                self.dataSize += 1                                       # one more sample
                self.labelList.append(self.labelsNum)                    # note: imgPathList and labelList are paired one-to-one
            self.labelsNum += 1                                          # next folder gets the next label index

    # Override data.Dataset's method for fetching one sample
    def __getitem__(self, item):
        img = Image.open(self.imgPathList[item])   # open the image
        img = np.array(img)                        # convert to a numpy array
        label = self.labelList[item]               # the image's label
        shape = img.shape                          # image size
        if self.mode == 'train':
            # preprocess, convert image and label to tensors, and return
            return self.transform(img), torch.LongTensor([label]), shape
        else:
            self.transform = val_data_transforms
            return self.transform(img), torch.LongTensor([label]), shape

    # Return the dataset size
    def __len__(self):
        return self.dataSize

3. Building the pretrained network

For this task we use the vgg16 pretrained network (its feature-extraction layers) from torchvision.models, plus custom fully connected layers, as the backbone of our network.

Why a pretrained network can be used here (transfer learning):

Transfer learning, also called inductive transfer, uses a model fitted to a different but related task to narrow the search space of candidate models in a favorable way. Our recognition task is similar to the ImageNet task (both are object recognition, and our classes are effectively a subset of ImageNet's), so we choose vgg16 as the pretrained model.

3.1 Libraries the pretrained network needs

import torch
import torch.nn as nn
import torchvision.models as models
from torchvision import utils
import torch.utils.data as Data
import numpy as np
import torch.optim as optim

3.2 Loading the pretrained model

VGG was introduced in 2014 and achieved excellent results in the ImageNet classification and localization challenge. The original network predicts 1000 classes, but we only have 10, so we rebuild the fully connected part to output 10 classes and keep only vgg16's feature-extraction layers as the pretrained portion (the red box below):


# Load the pretrained model
vgg16 = models.vgg16(pretrained = True)
vgg = vgg16.features  # vgg16's feature-extraction layers (the conv blocks, without the FC layers)

# requires_grad_(False) freezes all of vgg16's layers so their parameters are not updated
for param in vgg.parameters():
    param.requires_grad_(False)
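The effect of the freezing loop can be verified on any module. Here is a minimal sketch using a small stand-in network (so it does not require downloading the VGG16 weights):

```python
import torch.nn as nn

# A small stand-in for vgg16.features
features = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

# Freeze every parameter, exactly as done for vgg above
for param in features.parameters():
    param.requires_grad_(False)

frozen = all(not p.requires_grad for p in features.parameters())
print("all frozen:", frozen)   # all frozen: True
```

Frozen parameters receive no gradient, so only the new classifier layers are updated by the optimizer.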

3.3 Building the fully connected layers

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        # the pretrained VGG16 feature layers:
        self.vgg = vgg
        # custom fully connected layers:
        self.classifier = nn.Sequential(
            nn.Linear(25088, 2048),
            nn.ReLU(inplace = True),
            nn.Dropout(p = 0.5),
            nn.Linear(2048, 512),
            nn.ReLU(inplace = True),
            nn.Dropout(p = 0.5),
            nn.Linear(512, 10),
        )

    # forward pass
    def forward(self, x):
        x = self.vgg(x)
        x = x.view(x.size(0), -1)   # flatten the 512x7x7 feature maps to 25088 features
        output = self.classifier(x)
        return output
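Where does the 25088 in the first Linear layer come from? VGG16's feature extractor contains five 2×2 max-pooling layers, each halving the spatial resolution, and its last convolutional block outputs 512 channels, so a 224×224 input yields 512 feature maps of size 7×7. A quick check of the arithmetic:

```python
# 224 is halved by each of VGG16's five max-pool layers:
# 224 -> 112 -> 56 -> 28 -> 14 -> 7
size = 224
for _ in range(5):
    size //= 2

channels = 512                      # channels out of VGG16's last conv block
flat_features = channels * size * size
print(size, flat_features)          # 7 25088
```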

4. Training the network

4.1 Libraries needed for training

import torch
import numpy as np
import torch.nn as nn
from torch.optim import Adam
from torchvision import models
import matplotlib.pyplot as plt
import torch.utils.data as Data
from torchvision import transforms as tf

# The dataset and the network are each wrapped in a class, so we import our own modules:
from mynet import MyNet          # the network
from mydataset import MyDataset  # the dataset

4.2 Hyperparameters

# Hyperparameters:
BATCHSIZE = 36 
EPOCH = 2       
LR = 5e-4

4.3 Loading the dataset and network

# Load the training set
train_data_dir = "./training/training"
train_data = MyDataset(train_data_dir, 'train')
train_data_loader = Data.DataLoader(train_data, batch_size = BATCHSIZE, shuffle = True)

# Load the validation set:
val_data_dir = "./validation/validation/"
val_data = MyDataset(val_data_dir, 'validation')
val_data_loader = Data.DataLoader(val_data, batch_size = BATCHSIZE, shuffle = True)

print("train data num:", train_data.__len__())
print("validation data num:", val_data.__len__())

# Build the network
MyNet = MyNet()
# Loading previously saved parameters lets training continue from an earlier run
#MyNet.load_state_dict(torch.load('MyNet.pkl'))
print(MyNet)

optimizer = torch.optim.Adam(MyNet.parameters(), lr = LR)  # optimizer
loss_func = nn.CrossEntropyLoss()                          # loss function
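Note that nn.CrossEntropyLoss combines LogSoftmax and NLLLoss internally: the network's forward pass returns raw logits (no softmax layer), and the targets are plain class indices, which is why MyDataset stores labels as LongTensor values. A minimal sketch:

```python
import torch
import torch.nn as nn

loss_func = nn.CrossEntropyLoss()

logits = torch.tensor([[2.0, 0.5, 0.1],    # raw scores for a 3-class toy problem
                       [0.2, 0.1, 3.0]])
targets = torch.tensor([0, 2])             # class indices, not one-hot vectors

loss = loss_func(logits, targets)
# equivalent by definition: mean of -log_softmax(logits) at the target indices
manual = -torch.log_softmax(logits, dim=1)[torch.arange(2), targets].mean()
```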

[Out]:
train data num: 1097
validation data num: 272
MyNet(
(vgg): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=2048, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=2048, out_features=512, bias=True)
(4): ReLU(inplace=True)
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=512, out_features=10, bias=True)
)
)

4.4 Visualizing one batch of images

for step, batch in enumerate(train_data_loader):
    b_x = batch[0]             # [batch_size, 3, 224, 224]
    b_y = batch[1].squeeze()   # [batch_size, 1]
    # visualize only one batch of images:
    if step > 0:
        break

mean = np.array([0.485, 0.456, 0.406])   # channel means
std = np.array([0.229, 0.224, 0.225])    # channel standard deviations
plt.figure(figsize = (12, 6))
for img in np.arange(len(b_y)):
    plt.subplot(4, 9, img + 1)   # subplot
    # .transpose converts the layout from (channels, size, size) to
    # (size, size, channels), which is the format needed for display.
    image = b_x[img, :, :, :].numpy().transpose((1, 2, 0))
    image = std * image + mean   # undo the standardization to recover the original values
    image = np.clip(image, 0, 1) # clamp pixels to (0, 1)
    plt.imshow(image)
    plt.title(b_y[img].data.numpy())
    plt.axis("off")
plt.subplots_adjust(hspace = 0.3)
plt.show()

4.5 Training

for epoch in range(EPOCH):
    train_loss_epoch = 0
    train_correct = 0
    val_loss_epoch = 0
    val_correct = 0

4.5.1 Training part:

    # training
    MyNet.train()
    for step, batch in enumerate(train_data_loader):
        b_x = batch[0]             # [batch_size, 3, 224, 224]
        b_y = batch[1].squeeze()   # [batch_size, 1]
        output = MyNet(b_x)
        loss = loss_func(output, b_y)
        pre_lab = torch.argmax(output, 1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss_epoch += loss.item() * b_x.size(0)
        train_correct_epoch = torch.sum(pre_lab == b_y.data)
        train_correct += train_correct_epoch
        print("Epoch:%d" % epoch, " | step:%d" % step,
              " | train loss:%.6f" % loss.item(),
              " | train accuracy:%d/36" % train_correct_epoch)
    # loss and accuracy over the whole epoch:
    train_loss = train_loss_epoch / train_data.__len__()
    train_acc = train_correct.double() / train_data.__len__()
    print(" | train loss:%.6f" % train_loss, " | train accuracy:%.5f" % train_acc)

4.5.2 Validation part:

    # validation:
    MyNet.eval()
    for step, batch in enumerate(val_data_loader):
        val_x = batch[0]             # [batch_size, 3, 224, 224]
        val_y = batch[1].squeeze()   # [batch_size, 1]
        output = MyNet(val_x)
        loss = loss_func(output, val_y)
        pre_lab = torch.argmax(output, 1)
        val_loss_epoch += loss.item() * val_x.size(0)
        val_correct_epoch = torch.sum(pre_lab == val_y.data)
        val_correct += val_correct_epoch
        print("Epoch:%d" % epoch, " | step:%d" % step,
              " | validation loss:%.6f" % loss.item(),
              " | validation accuracy:%d/36" % val_correct_epoch)
    # loss and accuracy over the whole epoch:
    val_loss = val_loss_epoch / val_data.__len__()
    val_acc = val_correct.double() / val_data.__len__()
    print(" | validation loss:%.6f" % val_loss, " | validation accuracy:%.5f" % val_acc)

4.5.3 Saving the trained network parameters:

torch.save(MyNet.state_dict(), 'MyNet.pkl')

[Out] :

… …
Epoch:1 | step:27 | train loss:0.138348 | train accuracy:34/36
Epoch:1 | step:28 | train loss:0.457851 | train accuracy:31/36
Epoch:1 | step:29 | train loss:0.046097 | train accuracy:36/36
… …
| train loss:0.163423 | train accuracy:0.94622
… …
Epoch:1 | step:4 | validation loss:0.117674 | validation accuracy:35/36
Epoch:1 | step:5 | validation loss:0.044571 | validation accuracy:35/36
Epoch:1 | step:6 | validation loss:0.029356 | validation accuracy:35/36
… …
| validation loss:0.043614 | validation accuracy:0.98529
[Finished in 827.3s]

In the end the network reaches about 94% accuracy on the training set and about 98% on the validation set.

Notice that the validation accuracy is higher than the training accuracy, for a few reasons:

  1. The training data is augmented (random crops, horizontal flips), which shifts the training distribution; by comparison the validation set is closer to the true data distribution, on which the network generalizes better.
  2. Dropout turns training into an ensemble of weak classifiers, while at validation time dropout is disabled, effectively combining all the weak classifiers, so prediction accuracy is higher than on the training set.
  3. Training metrics lag behind: they are accumulated over the whole epoch, while validation runs after the epoch's last batch, when accuracy has already improved from the accumulated updates.
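Point 2 can be seen directly: nn.Dropout only zeroes activations when the module is in training mode (MyNet.train()) and acts as the identity in evaluation mode (MyNet.eval()). A small demonstration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()                # training mode: units are zeroed at random,
train_out = drop(x)         # survivors are scaled by 1 / (1 - p) = 2

drop.eval()                 # evaluation mode: dropout is a no-op
eval_out = drop(x)          # identical to x
```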

5. Predicting on the validation set

import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torch.optim import SGD, Adam
import torch.utils.data as Data
from torchvision import models

from mynet import MyNet
from mydataset import MyDataset

#plt.rcParams['font.family'] = ['sans-serif']
plt.rcParams['font.sans-serif'] = ['SimHei']   # a font that can render CJK characters

# Hyperparameters:
BATCHSIZE = 1
EPOCH = 1

# Load the network:
MyNet = MyNet()
MyNet.load_state_dict(torch.load('MyNet.pkl'))

# Read the dataset with our custom dataset class:
# ./validation/validation  ./training/training
# Load the validation set:
val_data_dir = "./validation/validation"
val_data = MyDataset(val_data_dir, 'test')
val_data_loader = Data.DataLoader(val_data, batch_size = BATCHSIZE, shuffle = True)
print("validation data num:", val_data.__len__())

# Labels
label = ['0: mantled howler', '1: patas monkey', '2: bald uakari', '3: Japanese macaque',
         '4: pygmy marmoset', '5: white-headed capuchin', '6: silvery marmoset',
         '7: common squirrel monkey', '8: black-headed night monkey', '9: Nilgiri langur']

val_y_list = []
pre_lab_list = []

for epoch in range(EPOCH):
    MyNet.eval()
    for step, batch in enumerate(val_data_loader):
        val_x = batch[0]             # [batch_size, 3, 224, 224]
        val_y = batch[1].squeeze()   # [batch_size, 1]
        output = MyNet(val_x)
        pre_lab = torch.argmax(output, 1)
        # turn the outputs into probabilities with Softmax:
        probability = nn.Softmax(dim = 1)
        accuracy = probability(output)
        val_y_list.append(val_y)
        pre_lab_list.append(pre_lab)
        mean = np.array([0.485, 0.456, 0.406])   # channel means
        std = np.array([0.229, 0.224, 0.225])    # channel standard deviations
        plt.figure()
        image = val_x[0, :, :, :].numpy().transpose((1, 2, 0))
        image = std * image + mean    # undo the standardization
        image = np.clip(image, 0, 1)  # clamp pixels to (0, 1)
        plt.imshow(image)
        plt.title('Predicted: ' + label[pre_lab] + ', probability %.4f ' % accuracy[0].data.numpy()[val_y]
                  + '| true label: %d' % val_y)
        plt.axis("off")
        plt.show()

5.1 Visualizing predictions


6. Visualization

6.1 Confusion matrix

from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

# Confusion matrix: true labels vs. predicted labels
conf_mat = confusion_matrix(val_y_list, pre_lab_list)
df_cm = pd.DataFrame(conf_mat, index = label, columns = label)
heatmap = sns.heatmap(df_cm, annot = True, fmt = 'd', cmap = "hot")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation = 0, ha = 'right')
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation = 50, ha = 'right')
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
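The classification_report import above goes unused in the snippet; it produces per-class precision, recall and F1 from the same two label lists. A sketch with toy labels standing in for val_y_list and pre_lab_list:

```python
from sklearn.metrics import accuracy_score, classification_report

# toy stand-ins for val_y_list (true) and pre_lab_list (predicted)
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

print(classification_report(y_true, y_pred, digits=3))
acc = accuracy_score(y_true, y_pred)
print("accuracy: %.3f" % acc)
```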

x-axis: predicted label; y-axis: true label

(The validation set is small, so the result may not be very illustrative.)


6.2 Class activation map (CAM) heatmaps for the predictions

To make it easier to see which parts of an image matter most for the classification, we can render a heatmap over the image. (The rough idea is to read the gradient information flowing into the last convolutional layer of the feature extractor.)

'''CAM.py'''
import cv2
import torch
import numpy as np
from torch import nn
from PIL import Image
from torchvision import models
import matplotlib.pyplot as plt
from torchvision import transforms as tf
from mydataset import MyDataset

# display CJK characters in matplotlib
#plt.rcParams['font.family'] = ['sans-serif']
plt.rcParams['font.sans-serif'] = ['SimHei']

# Hyperparameters:
BATCHSIZE = 1
EPOCH = 1

my_transforms = tf.Compose([
    tf.ToTensor(),
    tf.Resize(256),
    tf.CenterCrop(224),
    tf.Normalize([0.485, 0.456, 0.406],
                 [0.229, 0.224, 0.225]),
])

# Labels
label = ['0: mantled howler', '1: patas monkey', '2: bald uakari', '3: Japanese macaque',
         '4: pygmy marmoset', '5: white-headed capuchin', '6: silvery marmoset',
         '7: common squirrel monkey', '8: black-headed night monkey', '9: Nilgiri langur']

# Load the pretrained model
vgg16 = models.vgg16(pretrained = True)
vgg = vgg16.features  # vgg16's feature-extraction layers (the conv blocks, without the FC layers)

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        # pretrained feature layers
        self.vgg = vgg
        self.classifier = nn.Sequential(
            nn.Linear(25088, 2048),
            nn.ReLU(inplace = True),
            nn.Dropout(p = 0.5),
            nn.Linear(2048, 512),
            nn.ReLU(inplace = True),
            nn.Dropout(p = 0.5),
            nn.Linear(512, 10),
        )
        self.gradients = None

    # hook that captures the gradient of the feature maps
    def activations_hook(self, grad):
        self.gradients = grad

    def forward(self, x):
        x = self.vgg(x)
        # register the hook
        h = x.register_hook(self.activations_hook)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x

    def get_activations_gradient(self):
        return self.gradients

    def get_activations(self, x):
        return self.vgg(x)

vggcam = MyNet()
vggcam.load_state_dict(torch.load('MyNet.pkl'))
# evaluation mode
vggcam.eval()

# Read a single image:
img_path = './validation/validation/n5/n518.jpg'
img = Image.open(img_path)
val_x = my_transforms(img)
val_x = torch.unsqueeze(val_x, dim = 0)

# Run the network on the image:
output = vggcam(val_x)
pre_lab = torch.argmax(output, 1)

# Backpropagate the gradient of the highest-scoring class
output[:, pre_lab.data.numpy()[0]].backward()
# gradients of the feature maps
gradients = vggcam.get_activations_gradient()
# channel-wise mean of the gradients
mean_gradients = torch.mean(gradients, dim = [0, 2, 3])
# feature maps produced by the last convolutional layer
activations = vggcam.get_activations(val_x).detach()
# weight each channel by its mean gradient
for i in range(len(mean_gradients)):
    activations[:, i, :, :] *= mean_gradients[i]
# average over the channels to get the heatmap
heatmap = torch.mean(activations, dim = 1).squeeze()
# apply ReLU to the heatmap, then normalize it
heatmap = torch.relu(heatmap)
heatmap /= torch.max(heatmap)
heatmap = heatmap.numpy()

# visualize the raw heatmap
# plt.matshow(heatmap)
# plt.show()

# show the original image:
mean = np.array([0.485, 0.456, 0.406])   # channel means
std = np.array([0.229, 0.224, 0.225])    # channel standard deviations
plt.figure()
img = val_x[0, :, :, :].numpy().transpose((1, 2, 0))
img = std * img + mean    # undo the standardization
img = np.clip(img, 0, 1)  # clamp pixels to (0, 1)
plt.imshow(img)
plt.axis("off")
plt.show()

# overlay the CAM on the original image:
# resize the heatmap to the image size
heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))
# heatmap values lie in (0, 1); multiply by 255 to get an 8-bit image
heatmap = np.clip(heatmap, 0, 1)
heatmap = np.uint8(heatmap * 255)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
Grad_cam_img = heatmap * 0.6 + img * 255
Grad_cam_img = Grad_cam_img / Grad_cam_img.max()

# visualize the blended result:
b, g, r = cv2.split(Grad_cam_img)
Grad_cam_img = cv2.merge([r, g, b])  # convert OpenCV's BGR to matplotlib's RGB
plt.figure()
plt.imshow(Grad_cam_img)

# prediction probabilities via Softmax:
probability = nn.Softmax(dim = 1)
accuracy = probability(output)
label_num = int(img_path.split('/')[3][1])
plt.title('Predicted: ' + label[pre_lab] + ', probability %.4f ' % accuracy[0].data.numpy()[label_num]
          + '| true label: %d' % label_num)
plt.axis("off")
plt.show()

Heatmap of the recognition result:


7. Complete code (excluding visualization)

7.1 mydataset.py

import os                                    # for reading directory contents
import torch.utils.data as data              # every custom dataset class inherits from this parent class
from PIL import Image                        # for reading the images
import numpy as np
import torch
import torchvision.transforms as tf

'''Data preprocessing'''

# Training-set preprocessing
train_data_transforms = tf.Compose([
    tf.ToTensor(),              # convert to a tensor and scale to (0, 1)
    tf.RandomResizedCrop(224),  # random crop to 224x224
    tf.RandomHorizontalFlip(),  # horizontal flip with probability p = 0.5
    tf.Normalize([0.485, 0.456, 0.406],   # per-channel mean
                 [0.229, 0.224, 0.225]),  # per-channel std
])

# Validation-set preprocessing
val_data_transforms = tf.Compose([
    tf.ToTensor(),
    tf.Resize(256),
    tf.CenterCrop(224),
    tf.Normalize([0.485, 0.456, 0.406],
                 [0.229, 0.224, 0.225]),
])

'''Custom dataset class'''

# Define a dataset class; it must inherit from PyTorch's data.Dataset parent class
class MyDataset(data.Dataset):
    # Constructor: dir is the dataset path, mode selects the split ('train' or validation/test)
    def __init__(self, dir, mode):
        self.imgPathList = []                  # image paths (note: paths, not the images themselves)
        self.labelList = []                    # label of each image (derived from its folder)
        self.dataSize = 0                      # dataset size
        self.labelsNum = 0                     # number of class folders seen so far
        self.transform = train_data_transforms # preprocessing pipeline
        self.mode = mode
        for label in os.listdir(dir):                                    # iterate over the class folders in dir
            for file in os.listdir(dir + '/' + label):
                self.imgPathList.append(dir + '/' + label + '/' + file)  # record the image's full path
                self.dataSize += 1                                       # one more sample
                self.labelList.append(self.labelsNum)                    # note: imgPathList and labelList are paired one-to-one
            self.labelsNum += 1                                          # next folder gets the next label index

    # Override data.Dataset's method for fetching one sample
    def __getitem__(self, item):
        img = Image.open(self.imgPathList[item])   # open the image
        img = np.array(img)                        # convert to a numpy array
        label = self.labelList[item]               # the image's label
        shape = img.shape                          # image size
        if self.mode == 'train':
            # preprocess, convert image and label to tensors, and return
            return self.transform(img), torch.LongTensor([label]), shape
        else:
            self.transform = val_data_transforms
            return self.transform(img), torch.LongTensor([label]), shape

    # Return the dataset size
    def __len__(self):
        return self.dataSize

7.2 mynet.py

import torch
import torch.nn as nn
import torchvision.models as models
from torchvision import utils
import torch.utils.data as Data
import numpy as np
import torch.optim as optim

# Load the pretrained model
vgg16 = models.vgg16(pretrained = True)
vgg = vgg16.features  # vgg16's feature-extraction layers (the conv blocks, without the FC layers)

# requires_grad_(False) freezes all of vgg16's layers so their parameters are not updated
for param in vgg.parameters():
    param.requires_grad_(False)

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        # the pretrained VGG16 feature layers:
        self.vgg = vgg
        # custom fully connected layers:
        self.classifier = nn.Sequential(
            nn.Linear(25088, 2048),
            nn.ReLU(inplace = True),
            nn.Dropout(p = 0.5),
            nn.Linear(2048, 512),
            nn.ReLU(inplace = True),
            nn.Dropout(p = 0.5),
            nn.Linear(512, 10),
        )

    # forward pass
    def forward(self, x):
        x = self.vgg(x)
        x = x.view(x.size(0), -1)
        output = self.classifier(x)
        return output

7.3 train.py

import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torch.optim import Adam
import torch.utils.data as Data
from torchvision import models
from torchvision import transforms as tf

from mynet import MyNet
from mydataset import MyDataset

# Hyperparameters:
BATCHSIZE = 36
EPOCH = 2
LR = 5e-4

# Load the training set
train_data_dir = "./training/training"
train_data = MyDataset(train_data_dir, 'train')
train_data_loader = Data.DataLoader(train_data, batch_size = BATCHSIZE, shuffle = True)

# Load the validation set:
val_data_dir = "./validation/validation/"
val_data = MyDataset(val_data_dir, 'validation')
val_data_loader = Data.DataLoader(val_data, batch_size = BATCHSIZE, shuffle = True)

print("train data num:", train_data.__len__())
print("validation data num:", val_data.__len__())

MyNet = MyNet()
MyNet.load_state_dict(torch.load('MyNet.pkl'))
print(MyNet)

# Visualize one batch of images:
for step, batch in enumerate(train_data_loader):
    b_x = batch[0]             # [batch_size, 3, 224, 224]
    b_y = batch[1].squeeze()   # [batch_size, 1]
    if step > 0:
        break

mean = np.array([0.485, 0.456, 0.406])   # channel means
std = np.array([0.229, 0.224, 0.225])    # channel standard deviations
plt.figure(figsize = (12, 6))
for img in np.arange(len(b_y)):
    plt.subplot(4, 9, img + 1)
    image = b_x[img, :, :, :].numpy().transpose((1, 2, 0))
    image = std * image + mean
    image = np.clip(image, 0, 1)
    plt.imshow(image)
    plt.title(b_y[img].data.numpy())
    plt.axis("off")
plt.subplots_adjust(hspace = 0.3)
plt.show()

# train_loss_list = []
# train_correct_list = []
# val_loss_list = []

optimizer = torch.optim.Adam(MyNet.parameters(), lr = LR)  # optimizer
loss_func = nn.CrossEntropyLoss()                          # loss function

for epoch in range(EPOCH):
    train_loss_epoch = 0
    train_correct = 0
    val_loss_epoch = 0
    val_correct = 0

    # training
    MyNet.train()
    for step, batch in enumerate(train_data_loader):
        b_x = batch[0]             # [batch_size, 3, 224, 224]
        b_y = batch[1].squeeze()   # [batch_size, 1]
        output = MyNet(b_x)
        loss = loss_func(output, b_y)
        pre_lab = torch.argmax(output, 1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss_epoch += loss.item() * b_x.size(0)
        train_correct_epoch = torch.sum(pre_lab == b_y.data)
        train_correct += train_correct_epoch
        # train_loss_list.append(loss.item() * b_x.size(0))
        # train_correct_list.append(train_correct_epoch)
        print("Epoch:%d" % epoch, " | step:%d" % step,
              " | train loss:%.6f" % loss.item(),
              " | train accuracy:%d/36" % train_correct_epoch)
    # loss and accuracy over the whole epoch:
    train_loss = train_loss_epoch / train_data.__len__()
    train_acc = train_correct.double() / train_data.__len__()
    print(" | train loss:%.6f" % train_loss, " | train accuracy:%.5f" % train_acc)

    # validation:
    MyNet.eval()
    for step, batch in enumerate(val_data_loader):
        val_x = batch[0]             # [batch_size, 3, 224, 224]
        val_y = batch[1].squeeze()   # [batch_size, 1]
        output = MyNet(val_x)
        loss = loss_func(output, val_y)
        pre_lab = torch.argmax(output, 1)
        val_loss_epoch += loss.item() * val_x.size(0)
        val_correct_epoch = torch.sum(pre_lab == val_y.data)
        val_correct += val_correct_epoch
        # val_loss_list.append(loss.item() * val_x.size(0))
        print("Epoch:%d" % epoch, " | step:%d" % step,
              " | validation loss:%.6f" % loss.item(),
              " | validation accuracy:%d/36" % val_correct_epoch)
    # loss and accuracy over the whole epoch:
    val_loss = val_loss_epoch / val_data.__len__()
    val_acc = val_correct.double() / val_data.__len__()
    print(" | validation loss:%.6f" % val_loss, " | validation accuracy:%.5f" % val_acc)

torch.save(MyNet.state_dict(), 'MyNet.pkl')

# # Visualize the training curves:
# plt.subplot(211)
# plt.plot(train_loss_list)
# plt.legend(["train loss", "validation loss"])
# plt.subplot(212)
# plt.plot(train_correct_list)
# plt.legend(["train correct"])
# plt.show()

7.4 test.py

import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torch.optim import Adam
import torch.utils.data as Data
from torchvision import models

from mynet import MyNet
from mydataset import MyDataset

#plt.rcParams['font.family'] = ['sans-serif']
plt.rcParams['font.sans-serif'] = ['SimHei']   # a font that can render CJK characters

# Hyperparameters:
BATCHSIZE = 1
EPOCH = 1

# Load the network:
MyNet = MyNet()
MyNet.load_state_dict(torch.load('MyNet.pkl'))

# Read the dataset with our custom dataset class:
# ./validation/validation  ./training/training
# Load the validation set:
val_data_dir = "./validation/validation"
val_data = MyDataset(val_data_dir, 'test')
val_data_loader = Data.DataLoader(val_data, batch_size = BATCHSIZE, shuffle = True)
print("validation data num:", val_data.__len__())

# Labels
label = ['0: mantled howler', '1: patas monkey', '2: bald uakari', '3: Japanese macaque',
         '4: pygmy marmoset', '5: white-headed capuchin', '6: silvery marmoset',
         '7: common squirrel monkey', '8: black-headed night monkey', '9: Nilgiri langur']

for epoch in range(EPOCH):
    MyNet.eval()
    for step, batch in enumerate(val_data_loader):
        val_x = batch[0]             # [batch_size, 3, 224, 224]
        val_y = batch[1].squeeze()   # [batch_size, 1]
        output = MyNet(val_x)
        pre_lab = torch.argmax(output, 1)
        # prediction probabilities via Softmax:
        probability = nn.Softmax(dim = 1)
        accuracy = probability(output)
        mean = np.array([0.485, 0.456, 0.406])   # channel means
        std = np.array([0.229, 0.224, 0.225])    # channel standard deviations
        plt.figure()
        image = val_x[0, :, :, :].numpy().transpose((1, 2, 0))
        image = std * image + mean    # undo the standardization
        image = np.clip(image, 0, 1)  # clamp pixels to (0, 1)
        plt.imshow(image)
        plt.title('Predicted: ' + label[pre_lab] + ', probability %.4f ' % accuracy[0].data.numpy()[val_y]
                  + '| true label: %d' % val_y)
        plt.axis("off")
        plt.show()