TensorFlow in Action, Made Easy (A TensorFlow Primer)

Getting Started with the MNIST Dataset

A Brief Introduction to the MNIST Dataset


MNIST is a handwritten-digit recognition dataset that is commonly used as the introductory dataset for deep learning. It contains 60,000 training samples and 10,000 test samples, and every sample image is 28 x 28 pixels. The dataset is stored in a raw binary (IDX) format, so the files cannot be opened directly as images; a sketch of parsing them by hand follows the download link below.


Dataset size: ~12 MB

Download: http://yann.lecun.com/exdb/mnist/index.html
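
Since the files are stored in the raw IDX binary format rather than as standard image files, they can also be parsed by hand. Below is a minimal sketch, assuming train-images-idx3-ubyte.gz has already been downloaded into the current directory (the file path here is illustrative):

import gzip
import struct

import numpy as np

# The gzipped IDX image file starts with a 16-byte big-endian header
# (magic number, image count, rows, cols), followed by one uint8 per pixel.
with gzip.open("train-images-idx3-ubyte.gz", "rb") as f:
    magic, num_images, rows, cols = struct.unpack(">IIII", f.read(16))
    images = np.frombuffer(f.read(), dtype=np.uint8).reshape(num_images, rows, cols)

print(num_images, rows, cols)  # 60000 28 28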

Importing TensorFlow (v1.6.0)

import tensorflow as tf
print(tf.__version__)

1.6.0

Loading the MNIST dataset with TensorFlow

# Import MNIST
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("../../data/", one_hot=True)

# Load data
X_train = mnist.train.images
Y_train = mnist.train.labels
X_test = mnist.test.images
Y_test = mnist.test.labels

train-images-idx3-ubyte.gz:  training set images (9912422 bytes)
train-labels-idx1-ubyte.gz:  training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz:   test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz:   test set labels (4542 bytes)
Extracting ../../data/train-images-idx3-ubyte.gz
Extracting ../../data/train-labels-idx1-ubyte.gz
Extracting ../../data/t10k-images-idx3-ubyte.gz
Extracting ../../data/t10k-labels-idx1-ubyte.gz
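
As a quick sanity check on what was just loaded: read_data_sets keeps 5,000 of the 60,000 training images aside as a validation split by default, so the training arrays hold 55,000 flattened 784-pixel images, and the labels are one-hot vectors because one_hot=True was passed above.

# Shapes of the loaded splits (expected values shown in the comments).
print(X_train.shape, Y_train.shape)  # (55000, 784) (55000, 10)
print(X_test.shape, Y_test.shape)    # (10000, 784) (10000, 10)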

Inspecting and visualizing the MNIST dataset

import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# Get the next batch of 64 images and their labels
batch_X, batch_Y = mnist.train.next_batch(64)
print(batch_X.dtype, batch_X.shape) # (64, 28*28)
print(batch_Y.dtype, batch_Y.shape) # (64, 10), one-hot over the classes 0-9
print(batch_Y[0]) # [ 0.  0.  0.  0.  0.  0.  1.  0.  0.  0.]
plt.imshow(np.reshape(batch_X[0], [28, 28]), cmap='gray')

float32 (64, 784)
float64 (64, 10)
[ 0.  0.  0.  0.  0.  0.  1.  0.  0.  0.]
<matplotlib.image.AxesImage at 0x16eef82a630>
[Image: the first image of the batch rendered as a 28x28 grayscale digit]
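
To look at more than a single sample, the same batch can be laid out in a small grid; the snippet below is an illustrative extension of the code above, using the argmax of each one-hot label as the plot title.

# Plot the first 16 images of the batch in a 4x4 grid.
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for idx, ax in enumerate(axes.flat):
    ax.imshow(np.reshape(batch_X[idx], [28, 28]), cmap='gray')
    ax.set_title(int(np.argmax(batch_Y[idx])))   # decoded class label
    ax.axis('off')
plt.tight_layout()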

TensorFlow basics (hello world)

import tensorflow as tf
# A minimal TensorFlow "hello world"

# Create a constant op.
# This op is added as a node to the default graph.
#
# The value returned by the constructor represents the output of the constant op.
hello = tf.constant('Hello, TensorFlow!')
# Launch a tf session
sess = tf.Session()
# Run the graph
print(sess.run(hello))

b'Hello, TensorFlow!'
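
One detail worth noting: the session created above stays open until it is closed explicitly. An equivalent form uses a context manager so the session is released automatically (a small illustrative variant):

# Close the session created above to release its resources.
sess.close()

# Equivalent form: the context manager closes the session automatically.
with tf.Session() as sess:
    print(sess.run(hello))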

TensorFlow basics (basic operations)

Constant operations

# Basic operations with constants
a = tf.constant(2)
b = tf.constant(3)
# Launch the default graph
with tf.Session() as sess:
    print("a: %i" % sess.run(a), "b: %i" % sess.run(b))
    print("Addition with constants: %i" % sess.run(a+b))
    print("Multiplication with constants: %i" % sess.run(a*b))

a: 2 b: 3
Addition with constants: 5
Multiplication with constants: 6
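
The expressions a+b and a*b inside sess.run implicitly add tf.add and tf.multiply nodes to the default graph. The explicit form below is equivalent (an illustrative sketch):

# Build the same ops explicitly and run them in one call.
add = tf.add(a, b)
mul = tf.multiply(a, b)
with tf.Session() as sess:
    print(sess.run([add, mul]))  # [5, 6]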

Placeholder operations

# Basic operations with placeholders as graph inputs.
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
# Ops defined in the graph
add = tf.add(a, b)      # addition
mul = tf.multiply(a, b) # multiplication
# Launch the default graph
with tf.Session() as sess:
    # Run every op, feeding values for the placeholder inputs.
    print("Addition with variables: %i" % sess.run(add, feed_dict={a: 2, b: 3}))
    print("Multiplication with variables: %i" % sess.run(mul, feed_dict={a: 2, b: 3}))

Addition with variables: 5
Multiplication with variables: 6
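
Because the placeholders above were defined without a fixed shape, the same graph also accepts arrays at run time, with tf.add and tf.multiply applied element-wise (an illustrative example):

# Feed arrays instead of scalars into the same placeholders.
with tf.Session() as sess:
    print(sess.run(add, feed_dict={a: [1, 2, 3], b: [10, 20, 30]}))  # [11 22 33]
    print(sess.run(mul, feed_dict={a: [1, 2, 3], b: [10, 20, 30]}))  # [10 40 90]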

Matrix operations

# Create a constant op that produces a 1x2 matrix.
matrix1 = tf.constant([[3., 3.]])
# Create another constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])
# Op defined in the graph
product = tf.matmul(matrix1, matrix2) # matrix multiplication
with tf.Session() as sess:
    result = sess.run(product)
    print(result)

[[ 12.]]
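
The result is easy to cross-check with NumPy, since the product of a 1x2 and a 2x1 matrix is just 3*2 + 3*2 = 12 (illustrative check):

import numpy as np

# Same product computed with NumPy.
print(np.dot([[3., 3.]], [[2.], [2.]]))  # [[12.]]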

TensorFlow basics (Eager API)

from __future__ import absolute_import, division, print_function

import numpy as np
import tensorflow as tf
import tensorflow.contrib.eager as tfe
# Set up the Eager API
print("Setting Eager mode...")
tfe.enable_eager_execution()

Setting Eager mode...

Eager API: constant operations

# Define constant tensors
print("Define constant tensors")
a = tf.constant(2)
print("a = %i" % a)
b = tf.constant(3)
print("b = %i" % b)

Define constant tensors
a = 2
b = 3

# 執行操做不須要 tf.Session
print("Running operations, without tf.Session")
c = a + b
print("a + b = %i" % c)
d = a * b
print("a * b = %i" % d)

Running operations, without tf.Session
a + b = 5
a * b = 6

Eager API: tensor operations

# Fully compatible with NumPy
print("Mixing operations with Tensors and Numpy Arrays")

# Define constant tensors
a = tf.constant([[2., 1.],
                [1., 0.]], dtype=tf.float32)
print("Tensor:\n a = %s" % a)
b = np.array([[3., 0.],
             [5., 1.]], dtype=np.float32)
print("NumpyArray:\n b = %s" % b)

Mixing operations with Tensors and Numpy Arrays
Tensor:
a = tf.Tensor(
[[2. 1.]
[1. 0.]]
, shape=(2, 2), dtype=float32)
NumpyArray:
b = [[3. 0.]
[5. 1.]]


# Run the operations without needing a tf.Session
print("Running operations, without tf.Session")

c = a + b
print("a + b = %s" % c)

d = tf.matmul(a, b)
print("a * b = %s" % d)

Running operations, without tf.Session
a + b = tf.Tensor(
[[5. 1.]
[6. 1.]]
, shape=(2, 2), dtype=float32)
a * b = tf.Tensor(
[[11.  1.]
[ 3.  0.]]
, shape=(2, 2), dtype=float32)
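
When a plain NumPy array is needed back from an eager result, eager tensors expose a .numpy() method (a short sketch, assuming eager execution is enabled as above):

# Convert the eager matmul result back to a NumPy array.
d_np = d.numpy()
print(type(d_np))  # <class 'numpy.ndarray'>
print(d_np)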

# Iterate through the tensor
print("Iterate through Tensor 'a':")
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        print(a[i][j])

Iterate through Tensor 'a':
tf.Tensor(2.0, shape=(), dtype=float32)
tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor(0.0, shape=(), dtype=float32)

References

[TensorFlow-Examples](https://github.com/aymericdamien/TensorFlow-Examples)




This article was originally shared on the WeChat public account AI異構 (gh_ed66a0ffe20a).
