1. tf.convert_to_tensor: the list passed in must have a fixed length; if it is a 2-D list, every inner list in the second dimension must have the same length.
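A minimal sketch of the fixed-length requirement (TF 1.x; the values are illustrative):

import tensorflow as tf

# A rectangular 2-D list converts cleanly: every inner list has length 3.
ok = tf.convert_to_tensor([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)

# A ragged 2-D list fails at graph-construction time, because the
# second dimension does not have a fixed length.
try:
    bad = tf.convert_to_tensor([[1, 2, 3], [4, 5]])
except ValueError as e:
    print("ValueError:", e)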
2. tf.layers.conv1d(): the kernel_size parameter gives the kernel's extent along the width axis; along the depth axis the kernel always spans the full input depth.
The input is [batch_size, width, depth] and the output is [batch_size, after_conv_width, filter_num] (with the default padding='valid' the width shrinks; padding='same' preserves it).
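A minimal shape check for item 2 (TF 1.x; batch size, width, and depth here are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, [8, 20, 50])  # [batch_size, width, depth]
# kernel_size=3 slides over the width axis; the kernel always spans
# the full depth of 50.
y = tf.layers.conv1d(x, filters=64, kernel_size=3, padding='valid')
print(y.shape)  # (8, 18, 64) -> [batch_size, after_conv_width, filter_num]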
3. tf.layers.conv2d(): the kernel_size parameter is a tuple giving the height and width of the kernel.
The input is [batch_size, height, width, channel_num] (channels_last, the default) and the output is [batch_size, after_conv_height, after_conv_width, filter_num].
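And the corresponding check for item 3 (again with illustrative sizes):

import tensorflow as tf

x = tf.placeholder(tf.float32, [8, 28, 28, 3])  # [batch_size, height, width, channel_num]
y = tf.layers.conv2d(x, filters=16, kernel_size=(5, 5), padding='valid')
print(y.shape)  # (8, 24, 24, 16) -> [batch_size, after_conv_height, after_conv_width, filter_num]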
4. tf.stack(): packs a list of tensors into a single higher-rank tensor. You can specify the axis to stack along; if you do not, axis 0 is used (see the sketch after item 5).
5. tf.unstack(): unpacks a tensor into a list of lower-rank tensors; the axis can be specified.
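A minimal sketch of items 4 and 5, ahead of the split/unstack example below:

import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])
with tf.Session() as sess:
    print(sess.run(tf.stack([a, b])))          # axis=0 (default): [[1 2 3], [4 5 6]]
    print(sess.run(tf.stack([a, b], axis=1)))  # [[1 4], [2 5], [3 6]]
    print(sess.run(tf.unstack(tf.stack([a, b]), axis=0)))  # back to [array([1, 2, 3]), array([4, 5, 6])]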
TensorFlow tf.split and tf.unstack example:
import tensorflow as tf

A = [[1, 2, 3], [4, 5, 6]]
a0 = tf.split(A, num_or_size_splits=3, axis=1)
a1 = tf.unstack(A, num=3, axis=1)
a2 = tf.split(A, num_or_size_splits=2, axis=0)
a3 = tf.unstack(A, num=2, axis=0)
with tf.Session() as sess:
    print(sess.run(a0))
    print(sess.run(a1))
    print(sess.run(a2))
    print(sess.run(a3))
[array([[1],[4]]), array([[2],[5]]), array([[3],[6]])]
[array([1, 4]), array([2, 5]), array([3, 6])]
[array([[1, 2, 3]]), array([[4, 5, 6]])]
[array([1, 2, 3]), array([4, 5, 6])]
tf.squeeze()
Removes dimensions of size 1.
For example:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
tf.shape(tf.squeeze(t))  # [2, 3]
You can also specify which dimensions to remove:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
tf.shape(tf.squeeze(t, [2, 4]))  # [1, 2, 3, 1]
6. tf.assign()
https://www.jianshu.com/p/fc5f0f971b14
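The linked post has the details; a minimal sketch of the basic usage:

import tensorflow as tf

v = tf.Variable(0, dtype=tf.int32)
update = tf.assign(v, v + 1)  # an op that writes the new value into v
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        print(sess.run(update))  # 1, 2, 3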
7. When a "variable already exists" error appears, wrap the creation in a scope. Note that tf.name_scope() does not affect the names of variables created with tf.get_variable(); use tf.variable_scope() (with reuse=True to share an existing variable).
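A minimal sketch of the reuse pattern (the scope and variable names are illustrative):

import tensorflow as tf

def make_weight():
    return tf.get_variable("w", shape=[2, 2],
                           initializer=tf.zeros_initializer())

with tf.variable_scope("layer"):
    w1 = make_weight()  # creates "layer/w"
with tf.variable_scope("layer", reuse=True):
    w2 = make_weight()  # reuses "layer/w" instead of raising an error
print(w1 is w2)  # True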
8. tf.multiply(): element-wise multiplication, as opposed to tf.matmul (see the sketch below).
a. If a matrix is multiplied by a scalar, every element of the matrix is multiplied by that scalar.
b. A scalar can also be multiplied by a scalar.
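A minimal sketch of the difference:

import tensorflow as tf

m = tf.constant([[1., 2.], [3., 4.]])
with tf.Session() as sess:
    print(sess.run(tf.multiply(m, m)))   # element-wise: [[1 4], [9 16]]
    print(sess.run(tf.multiply(m, 2.)))  # scalar broadcast: [[2 4], [6 8]]
    print(sess.run(tf.matmul(m, m)))     # matrix product: [[7 10], [15 22]]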
9. tf.contrib.rnn.static_bidirectional_rnn: combines the forward and backward passes. Input: [sequence_length, batch_size, hidden_size] (a length-sequence_length list of [batch_size, hidden_size] tensors); outputs: [sequence_length, batch_size, 2 * output_size]; forward_state: [batch_size, 2 * output_size]; backward_state: [batch_size, 2 * output_size].
encoder_outputs, forward_state, backward_state = rnn.static_bidirectional_rnn(
    forward_cell, backward_cell, input_content_emb,
    dtype=tf.float32, sequence_length=self.config.sequence_length)
10. tf.greater():
tf.greater takes two tensors as input, compares the corresponding elements of the two inputs, and returns the result of each comparison. When the two input tensors have different shapes, TensorFlow applies NumPy-style broadcasting.
11. tf.where()
The tf.where function takes three arguments. The first is the selection condition; wherever the condition is True, tf.where picks the value from the second argument, otherwise the value from the third. A combined sketch with tf.greater follows.
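Items 10 and 11 combine naturally; a minimal sketch:

import tensorflow as tf

a = tf.constant([1, 2, 3, 4])
b = tf.constant([4, 3, 2, 1])
with tf.Session() as sess:
    print(sess.run(tf.greater(a, b)))                  # [False False  True  True]
    print(sess.run(tf.greater(a, 2)))                  # broadcast: [False False  True  True]
    print(sess.run(tf.where(tf.greater(a, b), a, b)))  # element-wise max: [4 3 3 4]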
12. tf.train.exponential_decay(), with the following parameters (a sketch follows the list):
1. learning_rate: the initial learning rate
2. global_step: the current step count
3. decay_steps: how many steps between decays
4. decay_rate: the decay rate
5. staircase: whether to decay in discrete steps; if set to True, a common choice is decay_steps = len(dataset) / batch_size, i.e. decay once per epoch
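A minimal sketch with illustrative numbers (the decayed rate is learning_rate * decay_rate^(global_step / decay_steps)):

import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    learning_rate=0.1,   # initial learning rate
    global_step=global_step,
    decay_steps=100,     # decay once every 100 steps
    decay_rate=0.96,     # multiply the rate by 0.96 at each decay
    staircase=True)      # step-wise rather than continuous decay
# Passing global_step to minimize() increments it automatically, e.g.:
# train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)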
13. tf.contrib.layers.l2_regularizer(weight_decay)(weight) / tf.contrib.layers.l1_regularizer(weight_decay)(weight)
Regularization penalties added to the loss; for example:
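A minimal sketch (weight and weight_decay are illustrative names):

import tensorflow as tf

weight = tf.get_variable("weight", shape=[10, 5],
                         initializer=tf.truncated_normal_initializer())
weight_decay = 0.001
# Each regularizer returns a function; applying it to a tensor yields
# a scalar penalty that can be added to the loss.
l2_loss = tf.contrib.layers.l2_regularizer(weight_decay)(weight)
l1_loss = tf.contrib.layers.l1_regularizer(weight_decay)(weight)
# total_loss = data_loss + l2_loss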
14. tf.variable_scope() / tf.get_variable()
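A minimal sketch of how the scope prefixes variable names (names are illustrative):

import tensorflow as tf

with tf.variable_scope("encoder"):
    w = tf.get_variable("w", shape=[3, 3])
print(w.name)  # encoder/w:0 -- the scope name prefixes the variable name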
15. tf.abs(): absolute value.
16. tf.tile()
tf.tile(
    input,      # the input tensor
    multiples,  # how many times to repeat along each dimension
    name=None
)
import tensorflow as tf

a = tf.tile([1, 2, 3], [2])                    # repeat the vector twice
b = tf.tile([[1, 2], [3, 4], [5, 6]], [2, 3])  # rows x2, columns x3
with tf.Session() as sess:
    print(sess.run(a))  # [1 2 3 1 2 3]
    print(sess.run(b))  # shape (6, 6)
17. tf.einsum()
# Matrix multiplication
>>> einsum('ij,jk->ik', m0, m1)   # output[i,k] = sum_j m0[i,j] * m1[j,k]
# Dot product
>>> einsum('i,i->', u, v)         # output = sum_i u[i]*v[i]
# Outer product
>>> einsum('i,j->ij', u, v)       # output[i,j] = u[i]*v[j]
# Transpose
>>> einsum('ij->ji', m)           # output[j,i] = m[i,j]
# Batch matrix multiplication
>>> einsum('aij,ajk->aik', s, t)  # out[a,i,k] = sum_j s[a,i,j] * t[a,j,k]