Reposted from: https://blog.csdn.net/gsww404/article/details/78605784
TensorBoard and the TensorFlow program run in separate processes: TensorBoard automatically reads the latest TensorFlow log files and displays the current state of the running TensorFlow program.
- Add record nodes: `tf.summary.scalar/image/histogram()`, etc.
- Merge the record nodes: `merged = tf.summary.merge_all()`
- Run the merged node: `summary = sess.run(merged)` to obtain the merged summary results
- Instantiate a log writer: `summary_writer = tf.summary.FileWriter(logdir, graph=sess.graph)`; passing `graph` at construction time writes the current computation graph to the log
- Call the writer's `add_summary(summary, global_step=i)` method to write all merged summaries to the event file
- Call the writer's `close()` method to flush the event file to disk; otherwise it is only flushed every 120 s
```python
# ...create a graph...
# Launch the graph in a session.
sess = tf.Session()
# Create a summary writer, add the 'graph' to the event file.
writer = tf.summary.FileWriter(logdir, sess.graph)
writer.close()  # flushes to disk on close; otherwise flushed only every 120 s
```
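Putting the steps above together, here is a minimal end-to-end sketch; the `x`/`loss` tensors and the `logs` directory are illustrative stand-ins, not part of the original program:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
loss = tf.square(x, name='loss')           # stand-in for a real loss tensor
tf.summary.scalar('loss', loss)            # 1. add a record node
merged = tf.summary.merge_all()            # 2. merge all record nodes

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs', graph=sess.graph)  # 3. writer + graph
    for i in range(100):
        summary = sess.run(merged, feed_dict={x: i * 0.1})    # 4. run the merged node
        writer.add_summary(summary, global_step=i)            # 5. write to the event file
    writer.close()
```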
tf.summary.scalar(name, tensor, collections=None, family=None)
Visualizes how scalar values change with the number of iterations during training: accuracy (val acc), loss (train/test loss), learning rate, per-layer weight and bias statistics (mean, std, max/min), and so on.

Input arguments:
- name: the name of this op node; TensorBoard also uses this name to label the plotted chart
- tensor: the variable to monitor, a real numeric Tensor containing a single value

Output:
- A scalar Tensor of type string, which contains a Summary protobuf.
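For example, a short sketch that logs a decaying learning rate as a scalar curve; the decay schedule here is purely illustrative:

```python
import tensorflow as tf

global_step = tf.Variable(0, trainable=False, name='global_step')
# hypothetical exponentially decaying learning rate, logged as a scalar curve
learning_rate = tf.train.exponential_decay(0.1, global_step,
                                           decay_steps=100, decay_rate=0.96)
tf.summary.scalar('learning_rate', learning_rate)
```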
tf.summary.image(name, tensor, max_outputs=3, collections=None, family=None)
Visualizes the training/test images or feature maps used in the current training step.

Input arguments:
- name: the name of this op node; TensorBoard also uses this name to label the rendered images
- tensor: a 4-D uint8 or float32 Tensor of shape [batch_size, height, width, channels], where channels is 1, 3, or 4
- max_outputs: max number of batch elements to generate images for

Output:
- A scalar Tensor of type string, which contains a Summary protobuf.
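A minimal sketch of logging a few input images per step; the 28x28 grayscale shape is an assumption (e.g. MNIST-style input):

```python
import tensorflow as tf

# hypothetical batch of 28x28 grayscale images
images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='images')
tf.summary.image('input_images', images, max_outputs=3)  # render at most 3 per step
```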
tf.summary.histogram(name, values, collections=None, family=None)
Visualizes the distribution of a tensor's values.

Input arguments:
- name: the name of this op node; TensorBoard also uses this name to label the plotted chart
- values: a real numeric Tensor of any shape, whose values are used to build the histogram

Output:
- A scalar Tensor of type string, which contains a Summary protobuf.
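For instance, a sketch that tracks the value distribution of a layer's weights; the layer shape is illustrative:

```python
import tensorflow as tf

# hypothetical weight matrix of a fully connected layer
W = tf.Variable(tf.truncated_normal([784, 100]), name='weight')
tf.summary.histogram('weight_distribution', W)
```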
tf.summary.merge_all(key=tf.GraphKeys.SUMMARIES)
- Merges all summaries collected in the default graph.
- Because a program typically defines many logging ops and calling them one by one is tedious, TensorFlow provides this function to collect all summary-generating ops, e.g. `merged = tf.summary.merge_all()`.
- The op does not execute immediately, so it must be run explicitly (`summary = sess.run(merged)`) to obtain the merged results.
- Finally, call the writer's `add_summary(summary, global_step=i)` method to write all merged summaries to the event file.
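When only a subset of summaries should be merged, `tf.summary.merge` takes an explicit list instead of collecting everything in the default graph; a sketch with illustrative tensors:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
train_summary = tf.summary.scalar('train_loss', tf.square(x))
test_summary = tf.summary.scalar('test_loss', tf.abs(x))

# merge only the selected summaries rather than everything in the graph
train_merged = tf.summary.merge([train_summary])
test_merged = tf.summary.merge([test_summary])
```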
- If a subdirectory of logdir contains data from another run (multiple event files), TensorBoard displays the data from all runs (mainly scalars). This is useful for comparing the model under different parameter settings and tuning the parameters for the best result!
- The upper curve is the loss over 200 iterations, the lower one over 400 iterations; the program is given at the end.
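A sketch of how two such runs might be produced, writing each to its own subdirectory so TensorBoard overlays their scalar curves (the directory names are illustrative):

```python
import tensorflow as tf

# one writer per run; TensorBoard overlays the scalar curves of both runs
writer_a = tf.summary.FileWriter('logs/run_200_iters')
writer_b = tf.summary.FileWriter('logs/run_400_iters')
```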
- Using name scopes gives the graph visualization a clear hierarchy, so the overall structure of the network is not drowned in detail.
- All nodes under the same name scope are collapsed into a single node, and only nodes in the top-level name scopes are displayed in the TensorBoard graph view.
- This is done with `tf.name_scope()` or `tf.variable_scope()`; see the program at the end, and the standalone sketch below.
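In isolation, the core idea looks like this (mirroring the `Inference` scope of the final program):

```python
import tensorflow as tf

# everything created inside the scope collapses into one 'Inference' node
with tf.name_scope('Inference'):
    W = tf.Variable(tf.truncated_normal([1]), name='weight')
    b = tf.Variable(tf.truncated_normal([1]), name='bias')
```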
tf.summary.FileWriter(logdir, graph=None, flush_secs=120, max_queue=10)
- Writes event logs (graph, scalar/image/histogram, event) to the specified directory.

Initialization arguments:
- logdir: the directory the events are written to
- graph: if `sess.graph` is passed at construction time, this is equivalent to calling the `add_graph()` method, and is used to visualize the computation graph
- flush_secs: how often, in seconds, to flush the added summaries and events to disk
- max_queue: maximum number of summaries or events pending to be written to disk before one of the 'add' calls blocks

Other common methods:
- `add_event(event)`: adds an event to the event file
- `add_graph(graph, global_step=None)`: adds a Graph to the event file; most users pass a graph in the constructor instead
- `add_summary(summary, global_step=None)`: adds a Summary protocol buffer to the event file; be sure to pass `global_step`
- `close()`: flushes the event file to disk and closes the file
- `flush()`: flushes the event file to disk
- `add_meta_graph(meta_graph_def, global_step=None)`
- `add_run_metadata(run_metadata, tag, global_step=None)`
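A brief sketch of the writer lifecycle with these parameters; the 10 s flush interval is an arbitrary illustration:

```python
import tensorflow as tf

with tf.Session() as sess:
    # flush pending events every 10 s instead of the default 120 s
    writer = tf.summary.FileWriter('logs', graph=sess.graph, flush_secs=10)
    writer.flush()  # force pending events to disk immediately
    writer.close()  # flush and close the event file
```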
Launching TensorBoard locally (Windows):
- Run the program to generate the event file in the specified directory (`logs`).
- In the directory containing `logs`, hold Shift, right-click, and choose to open a cmd window there.
- In cmd, start TensorBoard with `tensorboard --logdir=logs`. Note: the logs path does not need quotes; when `logs` contains multiple event files, TensorBoard generates scalar comparison plots, but the graph view only shows the latest result.
- Copy the printed URL (e.g. `http://DESKTOP-S2Q1MOS:6006`, which differs per machine) into a browser to open it.

Launching TensorBoard on a server:
- Run the program on the server (`python my_program.py`) to generate the event file in the specified directory (`logs`).
- In bash, start TensorBoard with `tensorboard --logdir=logs --port=8888`. Note: the logs path does not need quotes, and the port number must be one configured in the router beforehand.
- Copy the printed URL into a local browser to open it (`http://ubuntu16:8888`, replacing `ubuntu16` with the server's external IP address).
- The loss comparison plot for multiple events and the network structure graph (`graph`) were already shown above, so they are not repeated here.
- The network's training process and the final fit are shown at the very bottom.
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Prepare training data, whose distribution roughly follows y = 1.2x + 0.0
n_train_samples = 200
X_train = np.linspace(-5, 5, n_train_samples)
Y_train = 1.2*X_train + np.random.uniform(-1.0, 1.0, n_train_samples)  # add a little random noise

# Prepare test data to evaluate the model
n_test_samples = 50
X_test = np.linspace(-5, 5, n_test_samples)
Y_test = 1.2*X_test

# Settings for the parameter learning algorithm
learning_rate = 0.01
batch_size = 20
summary_dir = 'logs'

print('~~~~~~~~~~ start building the graph ~~~~~~~~')
# Feed training/test data into the network via placeholders
# shape=None means the shape is determined by the fed tensor
with tf.name_scope('Input'):
    X = tf.placeholder(dtype=tf.float32, shape=None, name='X')
    Y = tf.placeholder(dtype=tf.float32, shape=None, name='Y')

# Decision function (parameter initialization)
with tf.name_scope('Inference'):
    W = tf.Variable(initial_value=tf.truncated_normal(shape=[1]), name='weight')
    b = tf.Variable(initial_value=tf.truncated_normal(shape=[1]), name='bias')
    Y_pred = tf.multiply(X, W) + b

# Loss function (MSE)
with tf.name_scope('Loss'):
    loss = tf.reduce_mean(tf.square(Y_pred - Y), name='loss')
    tf.summary.scalar('loss', loss)

# Learning algorithm (mini-batch SGD)
with tf.name_scope('Optimization'):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# Initialize all variables
init = tf.global_variables_initializer()

# Merge the summary record nodes
merge = tf.summary.merge_all()

# Open a session and train
with tf.Session() as sess:
    sess.run(init)
    summary_writer = tf.summary.FileWriter(logdir=summary_dir, graph=sess.graph)
    for i in range(201):
        j = np.random.randint(0, 10)  # 200 training samples in total, split into ten batches [0, 9]
        X_batch = X_train[batch_size*j: batch_size*(j+1)]
        Y_batch = Y_train[batch_size*j: batch_size*(j+1)]
        _, summary, train_loss, W_pred, b_pred = sess.run(
            [optimizer, merge, loss, W, b],
            feed_dict={X: X_batch, Y: Y_batch})
        test_loss = sess.run(loss, feed_dict={X: X_test, Y: Y_test})
        # write all summaries to the event file
        summary_writer.add_summary(summary, global_step=i)
        print('step:{}, losses:{}, test_loss:{}, w_pred:{}, b_pred:{}'.format(
            i, train_loss, test_loss, W_pred[0], b_pred[0]))
        if i == 200:
            # plot the results
            plt.plot(X_train, Y_train, 'bo', label='Train data')
            plt.plot(X_test, Y_test, 'gx', label='Test data')
            plt.plot(X_train, X_train * W_pred + b_pred, 'r', label='Predicted data')
            plt.legend()
            plt.show()
    summary_writer.close()
```