A convolutional neural network (Convolutional Neural Network, CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within a limited receptive field; it performs particularly well on large-scale image processing. [2] Its building blocks are convolutional layers and pooling layers.
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
trX = trX.reshape(-1, 28, 28, 1)  # 28*28*1 input images
teX = teX.reshape(-1, 28, 28, 1)

X = tf.placeholder("float", [None, 28, 28, 1])
Y = tf.placeholder("float", [None, 10])
conv_dropout = tf.placeholder("float")   # keep probability for the conv layers
dense_dropout = tf.placeholder("float")  # keep probability for the dense layer

w1 = tf.Variable(tf.random_normal([3, 3, 1, 32], stddev=0.01))
w2 = tf.Variable(tf.random_normal([3, 3, 32, 64], stddev=0.01))
w3 = tf.Variable(tf.random_normal([3, 3, 64, 128], stddev=0.01))
w4 = tf.Variable(tf.random_normal([4*4*128, 1024], stddev=0.01))  # 4*4*128 inputs after three stride-2 pools
wo = tf.Variable(tf.random_normal([1024, 10], stddev=0.01))

# convolution, pooling and dropout
def conv_and_pool(x, w, step, dropout):
    x = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME'))
    x = tf.nn.max_pool(x, ksize=[1, step, step, 1], strides=[1, step, step, 1], padding='SAME')
    x = tf.nn.dropout(x, dropout)
    return x

# build the model
def conv_model(x, w1, w2, w3, w4, wo, dropout, dense_do):
    x = conv_and_pool(x, w1, 2, dropout)  # first conv layer
    x = conv_and_pool(x, w2, 2, dropout)  # second conv layer
    x = conv_and_pool(x, w3, 2, dropout)  # third conv layer
    x = tf.reshape(x, [-1, 4*4*128])      # flatten for the dense layer
    x = tf.nn.relu(tf.matmul(x, w4))      # fully connected layer
    x = tf.nn.dropout(x, dense_do)        # dropout to prevent overfitting
    x = tf.matmul(x, wo)                  # output logits for the 10 classes
    return x

py_x = conv_model(X, w1, w2, w3, w4, wo, conv_dropout, dense_dropout)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
predict_op = tf.argmax(py_x, 1)

batch_size = 128
test_size = 256

# train and evaluate the model
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(100):
        training_batch = zip(range(0, len(trX), batch_size),
                             range(batch_size, len(trX)+1, batch_size))
        for start, end in training_batch:
            sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end],
                                          conv_dropout: 0.8, dense_dropout: 0.5})
        # evaluate on a random sample of the test set
        test_indices = np.arange(len(teX))
        np.random.shuffle(test_indices)
        test_indices = test_indices[0:test_size]
        print(i, np.mean(np.argmax(teY[test_indices], axis=1) ==
                         sess.run(predict_op, feed_dict={X: teX[test_indices],
                                                         conv_dropout: 1.0, dense_dropout: 1.0})))
0.179688
0.453125
0.671875
0.773438
0.765625
0.789062
0.804688
0.84375
0.796875
0.828125
...
0.953125
0.921875
0.945312
0.9375
0.914062
0.929688
0.953125
0.9375
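One detail worth spelling out: w4 has shape [4*4*128, 1024] because each of the three stride-2 max-pools with SAME padding halves the spatial size, rounding up, so the 28x28 input shrinks to 14, then 7, then 4, with 128 channels after the third convolution. A quick standalone check (pool_out_same is a hypothetical helper, not part of the model above):

import math

def pool_out_same(size, stride=2):
    # with SAME padding, output size = ceil(input_size / stride)
    return math.ceil(size / stride)

side = 28
for _ in range(3):               # three conv + max_pool blocks, stride 2 each
    side = pool_out_same(side)   # 28 -> 14 -> 7 -> 4
print(side, side * side * 128)   # 4 2048, and 4*4*128 == 2048, the fan-in of w4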
In a traditional neural network, the signal flows from the input layer through the hidden layers to the output layer; adjacent layers are fully connected, while nodes within the same layer are not connected to each other. Such plain networks, however, are helpless for many problems. For example, to predict the next word of a sentence you generally need the preceding words, because the words in a sentence are not independent of one another. An RNN (Recurrent Neural Network) is a neural network for modeling sequence data: the current output of a sequence also depends on the earlier outputs. Concretely, the network remembers the preceding information and applies it when computing the current output, i.e. the hidden-layer nodes are no longer unconnected but connected across time steps, and the hidden layer's input includes not only the output of the input layer but also the hidden layer's own output from the previous time step.
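In equation form, the hidden state evolves as h_t = tanh(x_t·W_xh + h_{t-1}·W_hh + b), so each step sees both the current input and a summary of everything before it. A minimal NumPy sketch with toy dimensions (the names and sizes here are illustrative, not part of the MNIST example below):

import numpy as np

np.random.seed(0)
input_dim, hidden_dim, steps = 4, 8, 5
W_xh = np.random.randn(input_dim, hidden_dim) * 0.01
W_hh = np.random.randn(hidden_dim, hidden_dim) * 0.01
b = np.zeros(hidden_dim)

xs = np.random.randn(steps, input_dim)  # a length-5 input sequence
h = np.zeros(hidden_dim)                # initial hidden state
for x_t in xs:
    # the new hidden state mixes the current input with the previous hidden state
    h = np.tanh(x_t @ W_xh + h @ W_hh + b)
print(h.shape)  # (8,)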
RNNs have been applied very successfully in several areas of natural language processing.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib import rnn

tf.set_random_seed(1)
mnist = input_data.read_data_sets('/tmp/data', one_hot=True)

learning_rate = 0.01
train_count = 100000
batch_size = 128
n_inputs = 28          # 28 pixels per row, fed in one row per time step
n_steps = 28           # 28 rows = 28 time steps
n_hidden_units = 128
n_classes = 10

x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_classes])

weights = {
    'in': tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
    'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes])),
}
biases = {
    'in': tf.Variable(tf.constant(0.1, shape=[n_hidden_units, ])),
    'out': tf.Variable(tf.constant(0.1, shape=[n_classes, ])),
}

def RNN(X, weights, biases):
    # reshape X to [batch_size*28, 28]
    X = tf.reshape(X, [-1, n_inputs])
    X_in = tf.matmul(X, weights['in']) + biases['in']
    # [batch_size*28, 128] -> [batch_size, 28, 128]
    X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
    init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
    # dynamic_rnn (static alternative: rnn.static_rnn(lstm_cell, X_in, initial_state=init_state))
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, X_in, initial_state=init_state, time_major=False)
    # final_state is a (c, h) tuple; project the last hidden state h onto the classes
    results = tf.matmul(final_state[1], weights['out']) + biases['out']
    return results

pred = RNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_op = tf.train.AdamOptimizer(learning_rate).minimize(cost)

correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    step = 0
    while step * batch_size < train_count:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        batch_xs = batch_xs.reshape([batch_size, n_steps, n_inputs])
        sess.run([train_op], feed_dict={x: batch_xs, y: batch_ys})
        if step % 20 == 0:
            print(sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys}))
        step += 1
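Each 28x28 MNIST image is fed to the LSTM as a sequence of n_steps = 28 time steps, one 28-pixel row per step. Because BasicLSTMCell with state_is_tuple=True returns the state as a (c, h) pair, final_state[1] is the hidden state h after the last row, which weights['out'] projects onto the 10 digit classes. The batch accuracies printed every 20 steps look like this: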
0.179688
0.453125
0.671875
0.773438
...
0.9375
0.914062
0.929688
0.953125
0.9375
An autoencoder is a kind of neural network and an unsupervised learning method: it is trained with backpropagation with the goal of making the output equal to the input. The autoencoder has hidden layers inside that produce a code representing the input. By reproducing the input at the output, it captures the important factors that represent the input: the intermediate hidden layer is a compressed representation of the input, achieving an effect similar to PCA's extraction of the principal components of the original data. In the example below, the reconstruction quality is measured as the mean squared difference between the output and the input.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import numpy as np

tf.set_random_seed(1)
mnist = input_data.read_data_sets('/tmp/data', one_hot=True)

learning_rate = 0.01
training_epochs = 20
batch_size = 256        # batch size for one training step
display_step = 1
examples_to_show = 10   # number of images to show in the plot
n_hidden_1 = 256        # feature count of the first hidden layer
n_hidden_2 = 128        # feature count of the second hidden layer
n_input = 784           # input dimension (28*28 pixels)

X = tf.placeholder("float", [None, n_input])  # input image data

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([n_input])),
}

def encoder(x):
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']), biases['encoder_b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']), biases['encoder_b2']))
    return layer_2

def decoder(x):
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']), biases['decoder_b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']), biases['decoder_b2']))
    return layer_2
encoder_op = encoder(X)            # encode the input images
decoder_op = decoder(encoder_op)   # decode back to image space

y_pred = decoder_op                # reconstructed image data
y_true = X
cost = tf.reduce_mean(tf.pow(y_pred - y_true, 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    total_batch = int(mnist.train.num_examples / batch_size)
    for epoch in range(training_epochs):
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c))
    print("Optimization Finished!")

    encode_decode = sess.run(y_pred, feed_dict={X: mnist.test.images[:examples_to_show]})
    # plot the original images against the reconstructions of the network
    f, a = plt.subplots(2, 10, figsize=(10, 2))
    for i in range(examples_to_show):
        a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))  # test set images
        a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))      # reconstructions
    f.show()
    plt.draw()
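In the resulting figure, the top row shows the original test digits and the bottom row the images reconstructed from the 128-dimensional code; after 20 epochs the reconstructions should already be clearly recognizable as the same digits.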