100 Deep Learning Examples: Weather Recognition with a Convolutional Neural Network (CNN) | Day 5

I. Preliminary Work

This article uses a CNN to recognize four weather conditions: cloudy, rain, shine, and sunrise. Compared with the previous article, this one adds a Dropout layer and replaces the max-pooling layers with average-pooling layers in order to improve the model's ability to generalize.

My environment:

  • Language: Python 3.6.5
  • IDE: Jupyter Notebook
  • Deep learning framework: TensorFlow 2.4.1

Recommended reading:

From the column 《深度學習100例》 (100 Deep Learning Examples)

1. Set up the GPU

If you are using a CPU, you can skip this step.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]                                        # If there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

2. Import the data

import matplotlib.pyplot as plt
import os,PIL

# Set the NumPy random seed so results are as reproducible as possible
import numpy as np
np.random.seed(1)

# Set the TensorFlow random seed for the same reason
import tensorflow as tf
tf.random.set_seed(1)

from tensorflow import keras
from tensorflow.keras import layers,models

import pathlib
data_dir = "D:/jupyter notebook/DL-100-days/datasets/weather_photos/"

data_dir = pathlib.Path(data_dir)

3. Inspect the data

The dataset contains four classes: cloudy, rain, shine, and sunrise. The images are stored under the weather_photos folder in subfolders named after each class.

image_count = len(list(data_dir.glob('*/*.jpg')))

print("Total number of images:", image_count)

Total number of images: 1125
sunrise = list(data_dir.glob('sunrise/*.jpg'))
PIL.Image.open(str(sunrise[0]))

[Image: a sample sunrise photo from the dataset]

II. Data Preprocessing

1. Load the data

Use the image_dataset_from_directory method to load the data from disk into a tf.data.Dataset.

batch_size = 32
img_height = 180
img_width = 180
""" For a detailed introduction to image_dataset_from_directory(), see: https://mtyjkh.blog.csdn.net/article/details/117018789 """
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1125 files belonging to 4 classes.
Using 900 files for training.
""" For a detailed introduction to image_dataset_from_directory(), see: https://mtyjkh.blog.csdn.net/article/details/117018789 """
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1125 files belonging to 4 classes.
Using 225 files for validation.
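The two loaders above report 900 training files and 225 validation files, which follows directly from the 0.2 split. A quick sanity check in plain Python (1125 and 0.2 are the values used in the code above; this sketch assumes Keras floors the validation count, which matches the printed output):

```python
# Sanity-check the train/validation split reported by image_dataset_from_directory.
image_count = 1125        # total files found, as printed above
validation_split = 0.2    # fraction reserved for validation

num_val = int(image_count * validation_split)   # assumed flooring behavior
num_train = image_count - num_val

print(num_train, num_val)  # → 900 225
```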

We can print the dataset's labels via class_names. The labels correspond to the directory names in alphabetical order.

class_names = train_ds.class_names
print(class_names)
['cloudy', 'rain', 'shine', 'sunrise']
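Because labels follow directory names in alphabetical order, each class's integer label is simply its position in the sorted folder list. A minimal sketch (folder names taken from the dataset described above; the discovery order is made up):

```python
# Class indices are assigned by sorting subdirectory names alphabetically.
folders = ['sunrise', 'rain', 'cloudy', 'shine']   # arbitrary discovery order
class_names = sorted(folders)

print(class_names)                # → ['cloudy', 'rain', 'shine', 'sunrise']
print(class_names.index('rain'))  # → 1, the integer label the model sees
```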

2. Visualize the data

plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        
        plt.axis("off")

[Image: a grid of 20 sample training images with their class labels]

3. Check the data again

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
(32, 180, 180, 3)
(32,)
  • image_batch is a tensor of shape (32, 180, 180, 3): a batch of 32 images of shape 180x180x3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (32,); these are the labels for the 32 images.

4. Configure the dataset

A closer look at prefetch(): while the CPU is preparing data, the accelerator sits idle; conversely, while the accelerator is training the model, the CPU sits idle. Training time is therefore the sum of CPU preprocessing time and accelerator training time. prefetch() overlaps the preprocessing of a training step with the model execution: while the accelerator is executing training step N, the CPU is preparing the data for step N+1. This not only minimizes the per-step training time (rather than just the total time) but also reduces the time needed to extract and transform the data. Without prefetch(), the CPU and the GPU/TPU are idle most of the time:

[Image: timeline showing idle CPU/GPU time without prefetch()] Using prefetch() significantly reduces the idle time: [Image: timeline showing preprocessing overlapped with training under prefetch()]

  • cache(): caches the dataset in memory, speeding up subsequent epochs.
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
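The speed-up from prefetch() is easy to see with a back-of-the-envelope timing model of the pipeline. The sketch below (plain Python, with made-up per-step times) compares the total time without overlap against the total time when the CPU prepares step N+1 while the accelerator trains on step N:

```python
# Toy timing model of the prefetch() overlap (times in ms, illustrative only).
prep_time  = 8    # CPU: preprocess one batch
train_time = 12   # accelerator: one training step
steps = 100

# Without prefetch(): each step first waits for preprocessing, then trains.
sequential = steps * (prep_time + train_time)

# With prefetch(): after the first batch, preprocessing of step N+1 is
# hidden behind training of step N, so each step costs max(prep, train).
overlapped = prep_time + steps * max(prep_time, train_time)

print(sequential, overlapped)  # → 2000 1208
```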

III. Building the CNN

A convolutional neural network (CNN) takes input tensors of shape (image_height, image_width, color_channels), which encode the image height, width, and color information; the batch size is not included. color_channels = (R, G, B) corresponds to the three RGB color channels. In this example, the CNN input is the weather images, each of shape (180, 180, 3), i.e. color images. We pass this shape to the input_shape parameter when declaring the first layer.

num_classes = 4

""" If you are unsure how convolution kernel computations work, see: https://blog.csdn.net/qq_38251616/article/details/114278995
layers.Dropout(0.3) helps prevent overfitting and improves the model's ability to generalize.
In the previous article on flower recognition, the large gap between training and validation accuracy was caused by overfitting.
For more on the Dropout layer, see: https://mtyjkh.blog.csdn.net/article/details/115826689 """

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    
    layers.Conv2D(16, (3, 3), activation='relu'),  # Convolution layer 1, 3*3 kernel
    layers.AveragePooling2D((2, 2)),               # Pooling layer 1, 2*2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # Convolution layer 2, 3*3 kernel
    layers.AveragePooling2D((2, 2)),               # Pooling layer 2, 2*2 downsampling
    layers.Conv2D(64, (3, 3), activation='relu'),  # Convolution layer 3, 3*3 kernel
    layers.Dropout(0.3),
    
    layers.Flatten(),                       # Flatten layer, connects the convolutional and dense layers
    layers.Dense(128, activation='relu'),   # Fully connected layer, further feature extraction
    layers.Dense(num_classes)               # Output layer, produces the predictions
])

model.summary()  # Print the network structure
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
rescaling (Rescaling)        (None, 180, 180, 3)       0         
_________________________________________________________________
conv2d (Conv2D)              (None, 178, 178, 16)      448       
_________________________________________________________________
average_pooling2d (AveragePo (None, 89, 89, 16)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 87, 87, 32)        4640      
_________________________________________________________________
average_pooling2d_1 (Average (None, 43, 43, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 41, 41, 64)        18496     
_________________________________________________________________
dropout (Dropout)            (None, 41, 41, 64)        0         
_________________________________________________________________
flatten (Flatten)            (None, 107584)            0         
_________________________________________________________________
dense (Dense)                (None, 128)               13770880  
_________________________________________________________________
dense_1 (Dense)              (None, 4)                 516       
=================================================================
Total params: 13,794,980
Trainable params: 13,794,980
Non-trainable params: 0
_________________________________________________________________
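The output shapes and parameter counts in the summary can be reproduced by hand. With 'valid' padding, a 3x3 convolution shrinks each spatial dimension by 2, each 2x2 average pool halves it (flooring), and the counts follow conv_params = (3*3*in_ch + 1)*out_ch and dense_params = (in + 1)*out. A plain-Python check, computed from the layer definitions above with num_classes = 4:

```python
# Reproduce the Output Shape and Param # columns of model.summary() by hand.
def conv_params(k, in_ch, out_ch):
    return (k * k * in_ch + 1) * out_ch   # +1 for the bias of each filter

def dense_params(n_in, n_out):
    return (n_in + 1) * n_out             # +1 for the bias of each unit

size = 180
size = size - 2      # conv2d:   180 -> 178
size = size // 2     # pooling:  178 -> 89
size = size - 2      # conv2d_1:  89 -> 87
size = size // 2     # pooling:   87 -> 43
size = size - 2      # conv2d_2:  43 -> 41

flat = size * size * 64           # 41*41*64 = 107584 after Flatten
print(conv_params(3, 3, 16))      # → 448
print(conv_params(3, 16, 32))     # → 4640
print(conv_params(3, 32, 64))     # → 18496
print(dense_params(flat, 128))    # → 13770880
print(dense_params(128, 4))       # → 516
```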

IV. Compiling the Model

Before the model is ready for training, a few more settings are needed. These are added in the model's compile step:

  • Loss function (loss): measures the model's accuracy during training.
  • Optimizer (optimizer): determines how the model is updated based on the data it sees and its own loss function.
  • Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are correctly classified.
# Set up the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=0.001)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
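The output layer above has no softmax activation, so the model emits raw logits; from_logits=True tells the loss to apply the softmax internally. A minimal pure-Python sketch of sparse categorical cross-entropy on logits (the logit values are made up, just to show the computation):

```python
import math

def sparse_ce_from_logits(logits, label):
    """Cross-entropy of one sample: softmax over logits, then -log p[label]."""
    m = max(logits)                             # subtract the max for stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[label])

logits = [2.0, 0.5, 0.1, -1.0]   # raw scores for the 4 weather classes
print(sparse_ce_from_logits(logits, 0))  # small loss: class 0 is favored
print(sparse_ce_from_logits(logits, 3))  # larger loss for the least-favored class
```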

V. Training the Model

epochs = 10

history = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=epochs
)
Epoch 1/10
29/29 [==============================] - 6s 58ms/step - loss: 1.5865 - accuracy: 0.4463 - val_loss: 0.5837 - val_accuracy: 0.7689
Epoch 2/10
29/29 [==============================] - 0s 12ms/step - loss: 0.5289 - accuracy: 0.8295 - val_loss: 0.5405 - val_accuracy: 0.8133
Epoch 3/10
29/29 [==============================] - 0s 12ms/step - loss: 0.2930 - accuracy: 0.8967 - val_loss: 0.5364 - val_accuracy: 0.8000
Epoch 4/10
29/29 [==============================] - 0s 12ms/step - loss: 0.2742 - accuracy: 0.9074 - val_loss: 0.4034 - val_accuracy: 0.8267
Epoch 5/10
29/29 [==============================] - 0s 11ms/step - loss: 0.1952 - accuracy: 0.9383 - val_loss: 0.3874 - val_accuracy: 0.8844
Epoch 6/10
29/29 [==============================] - 0s 11ms/step - loss: 0.1592 - accuracy: 0.9468 - val_loss: 0.3680 - val_accuracy: 0.8756
Epoch 7/10
29/29 [==============================] - 0s 12ms/step - loss: 0.0836 - accuracy: 0.9755 - val_loss: 0.3429 - val_accuracy: 0.8756
Epoch 8/10
29/29 [==============================] - 0s 12ms/step - loss: 0.0943 - accuracy: 0.9692 - val_loss: 0.3836 - val_accuracy: 0.9067
Epoch 9/10
29/29 [==============================] - 0s 12ms/step - loss: 0.0344 - accuracy: 0.9909 - val_loss: 0.3578 - val_accuracy: 0.9067
Epoch 10/10
29/29 [==============================] - 0s 11ms/step - loss: 0.0950 - accuracy: 0.9708 - val_loss: 0.4710 - val_accuracy: 0.8356

VI. Model Evaluation

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Image: training and validation accuracy/loss curves]

Food for thought: 1. What is the difference between max pooling and average pooling? 2. Is a larger learning rate always better, and how should the optimizer be configured?
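On the first question, a tiny pure-Python example makes the difference concrete: max pooling keeps the strongest activation in each window (sharp, localized features such as edges), while average pooling smooths over the window (overall texture). A sketch with a made-up 4x4 feature map and 2x2 windows:

```python
# Compare 2x2 max pooling and 2x2 average pooling on a small feature map.
feature_map = [
    [1, 3, 2, 1],
    [4, 6, 5, 2],
    [7, 2, 1, 0],
    [1, 2, 3, 4],
]

def pool2x2(fm, reduce_fn):
    """Apply a 2x2, stride-2 pooling with the given reduction (max or mean)."""
    out = []
    for i in range(0, len(fm), 2):
        row = []
        for j in range(0, len(fm[0]), 2):
            window = [fm[i][j], fm[i][j+1], fm[i+1][j], fm[i+1][j+1]]
            row.append(reduce_fn(window))
        out.append(row)
    return out

print(pool2x2(feature_map, max))                   # → [[6, 5], [7, 4]]
print(pool2x2(feature_map, lambda w: sum(w) / 4))  # → [[3.5, 2.5], [3.0, 2.0]]
```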



You can obtain the dataset by replying 【DL+5】 in the WeChat official account 【K同學啊】.
