In this section we will build a linear regression model. Before that, let's take a look at the basic TensorFlow functions that will be used in the code:
Creating a random normal distribution

w is a variable of size 784*10, taking random values with a standard deviation of 0.01:
w = tf.Variable(tf.random_normal([784, 10], stddev=0.01))
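To see what such an initializer actually produces, here is a minimal sketch that runs the variable in a session and prints its values (the small 2*3 shape and the print statement are illustrative additions, not from the original):

import tensorflow as tf
w = tf.Variable(tf.random_normal([2, 3], stddev=0.01))
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(w))  # a 2x3 array whose entries cluster near 0 with spread ~0.01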
Computing the mean
b = tf.Variable([10, 20, 30, 40, 50, 60], name='t')
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(tf.reduce_mean(b)))
Output: 35
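tf.reduce_mean can also average along a single axis instead of over all elements. A minimal sketch (the 2x3 matrix here is an illustrative assumption, not from the original):

import tensorflow as tf
c = tf.Variable([[1., 2., 3.], [4., 5., 6.]], name='c')
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(tf.reduce_mean(c, 0)))  # mean of each column: [2.5 3.5 4.5]
    print(sess.run(tf.reduce_mean(c, 1)))  # mean of each row: [2. 5.]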
Argmax: the index of the largest element
a = [[0.1, 0.2, 0.3], [20, 2, 3]]
b = tf.Variable(a, name='b')
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(tf.argmax(b, 1)))
Output: [2 0] — for each row, the index of the largest value.
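The axis argument works the same way as in tf.reduce_mean. A sketch (reusing the same matrix; the example is an illustrative addition) showing argmax down each column instead:

import tensorflow as tf
b = tf.Variable([[0.1, 0.2, 0.3], [20, 2, 3]], name='b')
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(tf.argmax(b, 0)))  # [1 1 1]: the second row holds the largest value in every column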
Linear regression exercise
Problem statement: in linear regression we fit a straight line to the data points so that the error is minimized. In the example below we will create 101 data points.

trainX lies between -1 and +1, and trainY is three times trainX plus some random noise.
import tensorflow as tf
import numpy as np

trainX = np.linspace(-1, 1, 101)
trainY = 3 * trainX + np.random.randn(*trainX.shape) * 0.33
X = tf.placeholder("float")
Y = tf.placeholder("float")
The linear regression model is y_model = w * x (the prediction), and we need to learn w. The cost function is defined as (Y - y_model)^2. TensorFlow provides many optimizers that compute and update the gradients after each iteration while minimizing the specified cost function.

We will use GradientDescentOptimizer with a learning rate of 0.01 to minimize the cost function.
w = tf.Variable(0.0, name="weights")
y_model = tf.multiply(X, w)
cost = tf.pow(Y - y_model, 2)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
init = tf.initialize_all_variables()
# init = tf.global_variables_initializer()  # for newer versions of TensorFlow
with tf.Session() as sess:
    sess.run(init)
    for i in range(100):
        for (x, y) in zip(trainX, trainY):
            sess.run(train_op, feed_dict={X: x, Y: y})
    print(sess.run(w))
The final result should be around 3.
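As a quick sanity check, the learned weight can be used to predict on a few new inputs. A minimal sketch, assuming these lines are added inside the with block right after print(sess.run(w)) (the test inputs are illustrative, not part of the original code):

    w_learned = sess.run(w)             # scalar weight, close to 3.0 after training
    testX = np.array([-0.5, 0.0, 0.5])  # hypothetical test inputs
    print(w_learned * testX)            # predictions y = w*x, roughly [-1.5, 0.0, 1.5]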
The complete code is available here: https://github.com/sankit1/cv-tricks.com
Reference: https://cv-tricks.com/artificial-intelligence/deep-learning/deep-learning-frameworks/tensorflow/tensorflow-tutorial/