
2.6 TensorFlow 2 - Images - Convolutional Neural Network - CIFAR-10


Contents

    • 1. Basic version

1. Basic version

# 1. Import TensorFlow
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# 2. Download and prepare the CIFAR-10 dataset
# The CIFAR-10 dataset contains 60,000 color images in 10 classes, with 6,000 images per class.
# 50,000 images are used as the training set and the remaining 10,000 as the test set.
# The classes are mutually exclusive, with no overlap between them.
print("1. Loading images")
# Tip: if the download is slow, you can edit load_data() in the Keras source and point its
# origin to a local copy of the dataset, e.g. origin = 'file:///Users/Administrator/.keras/datasets/cifar-10-python.tar.gz'
# (place the .tar.gz file at that path, following this path format).
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
print("2. Images loaded, starting data preprocessing")
# Normalize pixel values to the 0-1 range
train_images, test_images = train_images / 255.0, test_images / 255.0
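# (Optional sanity check, not part of the original post.) load_data() returns NumPy arrays;
# the shapes below confirm the 50,000/10,000 train/test split and the 32x32x3 image format.
print(train_images.shape, train_labels.shape)  # (50000, 32, 32, 3) (50000, 1)
print(test_images.shape, test_labels.shape)    # (10000, 32, 32, 3) (10000, 1)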
# 3. Verify the data
# To make sure the dataset was loaded correctly, plot the first 25 images of the training set
# together with their class names.
class_names = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5, 5, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    # CIFAR labels are arrays of shape (n, 1), so an extra index is needed
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()
print("图片验证完毕,开始构造CNN")
# 4. Build the convolutional neural network
# The 6 lines of code below declare a typical CNN made up of a stack of Conv2D and MaxPooling2D layers.
# The CNN takes input tensors of shape (image_height, image_width, color_channels), i.e. image height,
# width and color information; the batch size is not included. If you are new to image processing,
# RGB is the recommended color mode, in which color_channels is 3, one channel each for R, G and B.
# In this example the input images from the CIFAR-10 dataset have shape (32, 32, 3), which is passed
# to the first layer via the input_shape argument.
model = models.Sequential()
model.add(layers.Conv2D(32,(3,3),activation='relu',input_shape=(32,32,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
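# (Note, not in the original post.) With the default padding='valid', each 3x3 Conv2D shrinks
# the spatial size by 2 (out = in - 3 + 1) and each 2x2 MaxPooling2D halves it (rounding down):
# 32x32 -> 30x30 -> 15x15 -> 13x13 -> 6x6 -> 4x4,
# which matches the (4, 4, 64) output shape mentioned below and reported by model.summary().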
# Add Dense layers
# A Dense layer is the same as a fully connected layer.
# To finish the classifier, the output of the convolutional stack (shape (4, 4, 64) in this example)
# is fed into one or more Dense layers. Dense layers take 1-D vectors as input, while the previous
# output is a 3-D tensor, so the tensor is first flattened to 1-D before the Dense layers.
# CIFAR-10 has 10 classes, so the final Dense layer needs 10 outputs; here it emits raw logits,
# and the softmax is handled by the loss function via from_logits=True.
model.add(layers.Flatten())  # flatten to 1-D
model.add(layers.Dense(64,activation='relu'))
model.add(layers.Dense(10))
print("4.我们声明的CNN结构是:")
model.summary()
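# (Optional check, not in the original post.) The parameter counts printed by model.summary()
# can be derived by hand: a Conv2D layer has (kernel_h * kernel_w * in_channels + 1) * filters
# parameters and a Dense layer has (inputs + 1) * units:
#   conv2d:   (3*3*3  + 1) * 32 = 896
#   conv2d_1: (3*3*32 + 1) * 64 = 18496
#   conv2d_2: (3*3*64 + 1) * 64 = 36928
#   dense:    (4*4*64 + 1) * 64 = 65600   (Flatten turns (4, 4, 64) into 1024 values)
#   dense_1:  (64 + 1) * 10 = 650
# Total: 896 + 18496 + 36928 + 65600 + 650 = 122,570, the figure shown in the summary below.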
# 5. Compile and train the model
print("Compiling and training the model...")
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_images,train_labels,epochs=10,validation_data=(test_images,test_labels))
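# (Note, not in the original post.) model.fit() returns a History object; its .history dict
# stores per-epoch metrics under the keys 'loss', 'accuracy', 'val_loss' and 'val_accuracy',
# which is what the plotting code below reads.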
# 6. Evaluate the model
plt.plot(history.history['accuracy'],label='accuracy')
plt.plot(history.history['val_accuracy'],label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5,1])
plt.legend(loc='lower right')
plt.show()

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print("6.test_accuracy=",test_acc)
Running the script produces the following output:

1. Loading images
2. Images loaded, starting data preprocessing
Images verified, building the CNN
4. The CNN architecture is:
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param # 
=================================================================
conv2d (Conv2D)              (None, 30, 30, 32)        896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 15, 15, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 13, 13, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 4, 4, 64)          36928     
_________________________________________________________________
flatten (Flatten)            (None, 1024)              0         
_________________________________________________________________
dense (Dense)                (None, 64)                65600     
_________________________________________________________________
dense_1 (Dense)              (None, 10)                650       
=================================================================
Total params: 122,570
Trainable params: 122,570
Non-trainable params: 0
_________________________________________________________________
Compiling and training the model...
Epoch 1/10
1563/1563 [==============================] - 31s 20ms/step - loss: 1.5515 - accuracy: 0.4340 - val_loss: 1.3051 - val_accuracy: 0.5303
Epoch 2/10
1563/1563 [==============================] - 30s 19ms/step - loss: 1.1698 - accuracy: 0.5846 - val_loss: 1.1116 - val_accuracy: 0.6044
Epoch 3/10
1563/1563 [==============================] - 30s 19ms/step - loss: 1.0225 - accuracy: 0.6424 - val_loss: 1.0611 - val_accuracy: 0.6198
Epoch 4/10
1563/1563 [==============================] - 30s 19ms/step - loss: 0.9334 - accuracy: 0.6736 - val_loss: 0.9452 - val_accuracy: 0.6665
Epoch 5/10
1563/1563 [==============================] - 30s 19ms/step - loss: 0.8679 - accuracy: 0.6957 - val_loss: 0.9357 - val_accuracy: 0.6683
Epoch 6/10
1563/1563 [==============================] - 31s 20ms/step - loss: 0.8142 - accuracy: 0.7165 - val_loss: 0.9081 - val_accuracy: 0.6839
Epoch 7/10
1563/1563 [==============================] - 30s 20ms/step - loss: 0.7684 - accuracy: 0.7317 - val_loss: 0.8869 - val_accuracy: 0.6905
Epoch 8/10
1563/1563 [==============================] - 31s 20ms/step - loss: 0.7299 - accuracy: 0.7456 - val_loss: 0.8902 - val_accuracy: 0.6911
Epoch 9/10
1563/1563 [==============================] - 32s 20ms/step - loss: 0.6939 - accuracy: 0.7577 - val_loss: 0.8470 - val_accuracy: 0.7138
Epoch 10/10
1563/1563 [==============================] - 32s 21ms/step - loss: 0.6631 - accuracy: 0.7661 - val_loss: 0.8688 - val_accuracy: 0.7071
313/313 - 1s - loss: 0.8688 - accuracy: 0.7071
6.test_accuracy= 0.707099974155426

Process finished with exit code 0
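Since the final Dense layer outputs raw logits (the loss is built with from_logits=True), the trained model does not return probabilities directly. The following sketch is not part of the original post; it simply reuses the model, test_images, test_labels and class_names variables defined above to show one way of turning the logits into class probabilities and predicted labels:

# Predict logits for the test set and convert them to probabilities with a softmax.
logits = model.predict(test_images)            # shape: (10000, 10)
probabilities = tf.nn.softmax(logits).numpy()

# The predicted class is the index with the highest probability.
predicted = probabilities.argmax(axis=1)
print("predicted:", class_names[predicted[0]],
      "| true:", class_names[test_labels[0][0]])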

