TensorFlow: what tf.train.Supervisor does

tf.train.Supervisor simplifies the code by saving you from implementing the restore step explicitly. Let's start with an example that does everything by hand.

import tensorflow as tf
import numpy as np
import os
log_path = r"D:\Source\model\linear"
log_name = "linear.ckpt"
# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but TensorFlow will
# figure that out for us.)
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Before starting, initialize the variables.  We will 'run' this first.
saver = tf.train.Saver()
init = tf.global_variables_initializer()

# Launch the graph.
sess = tf.Session()
sess.run(init)

if len(os.listdir(log_path)) != 0:  # a model already exists, so restore it directly
    saver.restore(sess, os.path.join(log_path, log_name))
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))
saver.save(sess, os.path.join(log_path, log_name))
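As an aside, a slightly more robust existence check than os.listdir is tf.train.latest_checkpoint, which returns the path of the newest checkpoint in a directory, or None if there is none. A minimal sketch, reusing the log_path, sess and saver from above:

# Hypothetical alternative to the os.listdir check above
ckpt = tf.train.latest_checkpoint(log_path)  # newest checkpoint prefix in log_path, or None
if ckpt is not None:
    saver.restore(sess, ckpt)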

The code above is a small modification of the demo on the TensorFlow website: if a saved model already exists, it is restored first and training continues from there. tf.train.Supervisor can simplify that step; see the code below.

import tensorflow as tf
import numpy as np
import os
log_path = r"D:\Source\model\supervisor"
log_name = "linear.ckpt"
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

saver = tf.train.Saver()
init = tf.global_variables_initializer()

sv = tf.train.Supervisor(logdir=log_path, init_op=init)  # logdir holds checkpoints and summaries (see the summary sketch after this block)
saver = sv.saver  # reuse the Supervisor's own saver
with sv.managed_session() as sess:  # looks for a checkpoint in logdir; if none is found, runs init_op
    for i in range(201):
        sess.run(train)
        if i % 20 == 0:
            print(i, sess.run(W), sess.run(b))
    saver.save(sess, os.path.join(log_path, log_name))
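Since logdir is also where the Supervisor writes TensorBoard summaries, you can hand it summaries explicitly via sv.summary_computed. A minimal sketch of that variation, assuming the same graph as above and a hypothetical scalar summary named "loss":

loss_summary = tf.summary.scalar("loss", loss)   # must be defined before building the Supervisor
sv = tf.train.Supervisor(logdir=log_path, init_op=init)
with sv.managed_session() as sess:
    for i in range(201):
        sess.run(train)
        if i % 20 == 0:
            # hand the computed summary to the Supervisor's writer for logdir
            sv.summary_computed(sess, sess.run(loss_summary))
    sv.saver.save(sess, os.path.join(log_path, log_name))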

As for restoring: tf.train.Supervisor(logdir=log_path, init_op=init) checks whether a model already exists in logdir. If it does, the Supervisor restores it automatically, with no explicit call to restore.
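For longer training runs the Supervisor can also take over saving: pass it a global_step variable and a save_model_secs interval, and its background thread periodically writes checkpoints into logdir, which the next managed_session picks up automatically. A minimal sketch under those assumptions, reusing the graph from above (this toy loop finishes in well under a second, so an explicit final save is kept):

# Assumes W, b, loss, optimizer and log_path from the example above.
global_step = tf.Variable(0, name="global_step", trainable=False)
train = optimizer.minimize(loss, global_step=global_step)
init = tf.global_variables_initializer()  # re-created so it also covers global_step

sv = tf.train.Supervisor(logdir=log_path,
                         init_op=init,
                         global_step=global_step,
                         save_model_secs=60)  # background checkpoint every 60 s
with sv.managed_session() as sess:
    step = 0
    while not sv.should_stop() and step < 201:
        _, step = sess.run([train, global_step])
    # final save for this short run; long jobs can rely on the background thread
    sv.saver.save(sess, sv.save_path, global_step=global_step)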

References

  1. TensorFlow official documentation
  2. tensorflow學習筆記(二十二):Supervisor