Training a CNN image classifier in TensorFlow on your own dataset

Training a convolutional neural network on image data breaks down into the following steps:

  1. Read the image files
  2. Generate batches for training
  3. Define the training model (parameter initialization, the convolution and pooling layers, and the network structure)
  4. Train

1 Reading the image files

import os
import numpy as np

def get_files(filename):
    # `filename` is the dataset root directory and must end with '/'
    class_train = []
    label_train = []
    # each sub-directory of the root is one class; its name is the label
    for train_class in os.listdir(filename):
        for pic in os.listdir(filename + train_class):
            class_train.append(filename + train_class + '/' + pic)
            label_train.append(train_class)
    temp = np.array([class_train, label_train])
    temp = temp.transpose()
    # shuffle the samples
    np.random.shuffle(temp)
    # after the transpose, image paths are in column 0 and labels in column 1
    image_list = list(temp[:, 0])
    label_list = list(temp[:, 1])
    label_list = [int(i) for i in label_list]
    return image_list, label_list

  Here the sub-directory name serves as the label, i.e. the class. Its data type has to be pinned down, because it is later converted to a tensor.

  The images and labels are then converted to Python lists, because some of the TensorFlow functions used later take lists as input.
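
  For reference, the directory layout get_files expects looks like this (a hypothetical example; the class folders are named 0 and 1 here because their names must parse as integers):

dataSet/
    0/
        cat001.jpg
        cat002.jpg
    1/
        dog001.jpg
        dog002.jpg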

2 Generating the training batches

import tensorflow as tf

def get_batches(image, label, resize_w, resize_h, batch_size, capacity):
    # convert the lists of image paths and labels to tensors
    image = tf.cast(image, tf.string)
    label = tf.cast(label, tf.int64)
    # build an input queue that yields one (path, label) pair at a time
    queue = tf.train.slice_input_producer([image, label])
    label = queue[1]
    image_c = tf.read_file(queue[0])
    image = tf.image.decode_jpeg(image_c, channels=3)
    # resize so that every image has the same spatial dimensions
    image = tf.image.resize_image_with_crop_or_pad(image, resize_w, resize_h)
    # standardize each image: (x - mean) / adjusted_stddev
    image = tf.image.per_image_standardization(image)

    image_batch, label_batch = tf.train.batch([image, label],
                                              batch_size=batch_size,
                                              num_threads=64,
                                              capacity=capacity)
    images_batch = tf.cast(image_batch, tf.float32)
    labels_batch = tf.reshape(label_batch, [batch_size])
    return images_batch, labels_batch

  First, tf.cast converts the lists into TensorFlow tensors, and tf.train.slice_input_producer builds an input queue from them.

  The labels need no further processing, but the image tensor holds file paths, which must be read and decoded into actual images; the next few steps do exactly that.

  A CNN is sensitive to input size, so resize_image_with_crop_or_pad brings every image to the same size, and per_image_standardization then standardizes each image, subtracting its own mean and dividing by its adjusted standard deviation, which makes training easier.
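
  What per_image_standardization computes can be written out in NumPy (a sketch of the documented formula, not the TensorFlow source; the function name standardize is ours):

import numpy as np

def standardize(img):
    # adjusted_stddev guards against dividing by zero on constant images
    adjusted_stddev = max(img.std(), 1.0 / np.sqrt(img.size))
    return (img - img.mean()) / adjusted_stddev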

  tf.train.batch then assembles the training batches.

  Finally, a dtype cast and a reshape turn the result into batches ready for training.
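
  For example, with the parameters used in the training script below (32×32 images, batch size 16, queue capacity 20; the data directory is hypothetical), the returned tensors have fixed shapes:

image_list, label_list = get_files('dataSet/')
images, labels = get_batches(image_list, label_list, 32, 32, 16, 20)
print(images.get_shape())   # (16, 32, 32, 3)
print(labels.get_shape())   # (16,)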

3 Defining the model

(1) Defining and initializing the training parameters

 

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

# init weights; conv shapes are [filter_h, filter_w, in_channels, out_channels]
weights = {
    "w1": init_weights([3, 3, 3, 16]),
    "w2": init_weights([3, 3, 16, 128]),
    "w3": init_weights([3, 3, 128, 256]),
    "w4": init_weights([4096, 4096]),   # fully connected layer
    "wo": init_weights([4096, 2])       # output layer, 2 classes
    }

# init biases
biases = {
    "b1": init_weights([16]),
    "b2": init_weights([128]),
    "b3": init_weights([256]),
    "b4": init_weights([4096]),
    "bo": init_weights([2])
    }

 

  Each CNN layer is a y = wx + b decision model: the convolution layers produce the feature vectors that play the role of x, so every layer needs initialized parameters, namely weights and biases. The shape of "w4" is explained later.

(2) Defining the layer operations

def conv2d(x, w, b):
    x = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def pooling(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

def norm(x, lsize=4):
    return tf.nn.lrn(x, depth_radius=lsize, bias=1, alpha=0.001 / 9.0, beta=0.75)

  Only three kinds of layers are defined here: convolution, max pooling, and local response normalization.
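
  A quick shape check shows how these operations transform a batch (a sketch, assuming a batch of 16 RGB images of size 32×32 and the weights defined above):

x = tf.zeros([16, 32, 32, 3])
l1 = conv2d(x, weights["w1"], biases["b1"])
print(l1.get_shape())            # (16, 32, 32, 16): SAME padding keeps 32x32
print(pooling(l1).get_shape())   # (16, 16, 16, 16): 2x2 pooling halves h and w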

(3) Defining the training model

def mmodel(images):
    l1 = conv2d(images, weights["w1"], biases["b1"])   # 32x32x16
    l2 = pooling(l1)                                   # 16x16x16
    l2 = norm(l2)
    l3 = conv2d(l2, weights["w2"], biases["b2"])       # 16x16x128
    l4 = pooling(l3)                                   # 8x8x128
    l4 = norm(l4)
    l5 = conv2d(l4, weights["w3"], biases["b3"])       # 8x8x256
    l6 = pooling(l5)                                   # 4x4x256
    # flatten so that the -1 dimension resolves to batch_size
    l6 = tf.reshape(l6, [-1, weights["w4"].get_shape().as_list()[0]])
    l7 = tf.nn.relu(tf.matmul(l6, weights["w4"]) + biases["b4"])
    # these are logits; the softmax is applied inside the loss function
    soft_max = tf.add(tf.matmul(l7, weights["wo"]), biases["bo"])
    return soft_max

  The model is fairly simple: three convolution layers followed by a fully connected layer. Before the fully connected layer the feature map must be reshaped: l6 becomes [-1, the first dimension of "w4"], and because each image flattens to exactly that many values, the -1 resolves to batch_size. Multiplying by "wo" then gives a final output of shape [batch_size, class_num].
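
  The arithmetic behind the "w4" shape promised earlier (a worked check, assuming 32×32 inputs as in the training script):

# three 2x2 poolings halve the spatial size three times: 32 -> 16 -> 8 -> 4
# after "w3" the feature map has 256 channels
flat_features = 4 * 4 * 256   # = 4096, the first dimension of "w4"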

(4) Defining the evaluation quantities

def loss(logits, label_batches):
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=label_batches)
    cost = tf.reduce_mean(cross_entropy)
    return cost

  First comes the loss function, the quantity that the training step minimizes.
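
  On a tiny hand-made batch the per-sample cross entropy can be checked directly (a sketch with made-up logits and labels):

logits = tf.constant([[2.0, 0.0], [0.5, 1.5]])
labels = tf.constant([0, 1], dtype=tf.int64)
ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)
with tf.Session() as s:
    print(s.run(ce))   # approximately [0.127, 0.313]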

def get_accuracy(logits, labels):
    # in_top_k yields one boolean per sample: is the true label the top prediction?
    acc = tf.nn.in_top_k(logits, labels, 1)
    acc = tf.cast(acc, tf.float32)
    acc = tf.reduce_mean(acc)
    return acc

  This measures classification accuracy. During training the loss should fall while the accuracy rises; only then is the training converging.
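
  For instance (made-up logits; the second sample is misclassified, so the accuracy comes out to 0.5):

logits = tf.constant([[2.0, 1.0], [0.2, 0.9]])
labels = tf.constant([0, 0], dtype=tf.int64)
with tf.Session() as s:
    print(s.run(get_accuracy(logits, labels)))   # 0.5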

(5) Defining the training step

def training(loss, lr):
    train_op = tf.train.RMSPropOptimizer(lr, 0.9).minimize(loss)
    return train_op

  Many optimizers are available; see the official documentation. Note, however, that a different optimizer may not match the parameter definitions above and may need extra handling, otherwise it can raise errors.
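
  For example, swapping in plain gradient descent or Adam only changes this one function (a sketch; both optimizers live under tf.train in TensorFlow 1.x):

def training_sgd(loss, lr):
    return tf.train.GradientDescentOptimizer(lr).minimize(loss)

def training_adam(loss, lr):
    return tf.train.AdamOptimizer(lr).minimize(loss)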

 4 Training

import numpy as np
import tensorflow as tf
import inputData   # module holding get_files and get_batches
import model       # module holding mmodel, loss, training and get_accuracy

def run_training():
    data_dir = 'C:/Users/wk/Desktop/bky/dataSet/'
    image, label = inputData.get_files(data_dir)
    image_batches, label_batches = inputData.get_batches(image, label, 32, 32, 16, 20)
    p = model.mmodel(image_batches)
    cost = model.loss(p, label_batches)
    train_op = model.training(cost, 0.001)
    acc = model.get_accuracy(p, label_batches)

    sess = tf.Session()
    init = tf.global_variables_initializer()
    sess.run(init)

    # start the threads that feed the input queue
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    try:
        for step in np.arange(1000):
            if coord.should_stop():
                break
            _, train_acc, train_loss = sess.run([train_op, acc, cost])
            print("step:{} loss:{} accuracy:{}".format(step, train_loss, train_acc))
    except tf.errors.OutOfRangeError:
        print("Done!!!")
    finally:
        coord.request_stop()
    coord.join(threads)
    sess.close()
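
  Assuming the code above is split into inputData.py and model.py as the imports suggest, the training script then runs directly:

if __name__ == "__main__":
    run_training()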