Returns a function that applies L1 regularization; the signature of the returned function is func(weights).

Parameters:
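As a quick illustration of that func(weights) signature (a minimal sketch, assuming this refers to tf.contrib.layers.l1_regularizer; the scale of 0.1 and the weight values are picked arbitrarily):

import tensorflow as tf

weights = tf.constant([1.0, -2.0, 3.0])               # made-up weights
l1_fn = tf.contrib.layers.l1_regularizer(scale=0.1)   # returns func(weights)
penalty = l1_fn(weights)                              # scalar Tensor: 0.1 * (1 + 2 + 3) = 0.6

with tf.Session() as sess:
    print(sess.run(penalty))                          # roughly 0.6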
First, let's look at what tf.contrib.layers.l2_regularizer(weight_decay) actually does:
import tensorflow as tf

sess = tf.Session()
weight_decay = 0.1
tmp = tf.constant([0, 1, 2, 3], dtype=tf.float32)
"""
l2_reg = tf.contrib.layers.l2_regularizer(weight_decay)
a = tf.get_variable("I_am_a", regularizer=l2_reg, initializer=tmp)
"""
# ** equivalent to the two commented-out lines above
a = tf.get_variable("I_am_a", initializer=tmp)
a2 = tf.reduce_sum(a * a) * weight_decay / 2
a3 = tf.get_variable(a.name.split(":")[0] + "/Regularizer/l2_regularizer", initializer=a2)
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, a2)
# **
sess.run(tf.global_variables_initializer())
keys = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
for key in keys:
    print("%s : %s" % (key.name, sess.run(key)))
import tensorflow as tf

sess = tf.Session()
weight_decay = 0.1                                        # (1) define weight_decay
l2_reg = tf.contrib.layers.l2_regularizer(weight_decay)   # (2) define the l2_regularizer()
tmp = tf.constant([0, 1, 2, 3], dtype=tf.float32)
a = tf.get_variable("I_am_a", regularizer=l2_reg, initializer=tmp)
# (3) create the variable, passing l2_reg as the regularizer argument.
# Inspect the REGULARIZATION_LOSSES collection: supplying a regularizer
# adds the penalty computed from a to that collection.
print("Global Set:")
keys = tf.get_collection("variables")
for key in keys:
    print(key.name)
print("Regular Set:")
keys = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
for key in keys:
    print(key.name)
print("--------------------")
sess.run(tf.global_variables_initializer())
print(sess.run(a))
reg_set = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# (4) REGULARIZATION_LOSSES now holds every weight-decayed penalty; sum them all up.
l2_loss = tf.add_n(reg_set)
print("loss=%s" % (sess.run(l2_loss)))
"""
This prints 0.7, i.e.:
weight_decay * sigma(w^2) / 2 = 0.1 * (0*0 + 1*1 + 2*2 + 3*3) / 2 = 0.7
Writing this by hand is easy enough, but the API version looks more standard.
In a network model, simply add l2_loss to the loss (the loss grows, and
training then applies the decay naturally).
"""
Regularization relies heavily on collections. Below is the most primitive way to add regularization (add the penalty to the 'losses' collection right after declaring the variable; tf.GraphKeys.LOSSES works as well):
import tensorflow as tf
import numpy as np

def get_weights(shape, lambd):
    # Create the weight variable and register its L2 penalty in the 'losses' collection.
    var = tf.Variable(tf.random_normal(shape), dtype=tf.float32)
    tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(lambd)(var))
    return var

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
batch_size = 8
layer_dimension = [2, 10, 10, 10, 1]
n_layers = len(layer_dimension)

cur_lay = x
in_dimension = layer_dimension[0]
for i in range(1, n_layers):
    out_dimension = layer_dimension[i]
    weights = get_weights([in_dimension, out_dimension], 0.001)
    bias = tf.Variable(tf.constant(0.1, shape=[out_dimension]))
    cur_lay = tf.nn.relu(tf.matmul(cur_lay, weights) + bias)
    in_dimension = layer_dimension[i]

# The data term goes into the same collection, so one tf.add_n yields data loss + penalties.
mess_loss = tf.reduce_mean(tf.square(y_ - cur_lay))
tf.add_to_collection('losses', mess_loss)
loss = tf.add_n(tf.get_collection('losses'))
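To actually train on this combined loss, one can pick up x, y_, batch_size, and loss from the snippet above (a minimal sketch; the Adam optimizer, the 0.001 learning rate, and the random data are assumptions, not part of the original example):

train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    X = np.random.rand(batch_size, 2)    # made-up inputs
    Y = np.random.rand(batch_size, 1)    # made-up targets
    _, loss_val = sess.run([train_op, loss], feed_dict={x: X, y_: Y})
    print("loss = %s" % loss_val)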
Now look at the parameters: if the argument is None, the weights in GraphKeys.WEIGHTS are used instead. The function returns a scalar Tensor, and this scalar Tensor is also saved into GraphKeys.REGULARIZATION_LOSSES. The Tensor stores the method for computing the regularization loss.
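That behaviour matches tf.contrib.layers.apply_regularization(regularizer, weights_list=None) fairly closely; a minimal sketch under that assumption (the variable name and values are made up):

import tensorflow as tf

# A hand-made weight, registered in GraphKeys.WEIGHTS so it can be found
# when weights_list is left as None.
w = tf.Variable(tf.constant([1.0, 2.0, 3.0]), name="w")
tf.add_to_collection(tf.GraphKeys.WEIGHTS, w)

l2_reg = tf.contrib.layers.l2_regularizer(0.1)
reg_loss = tf.contrib.layers.apply_regularization(l2_reg, weights_list=None)
# reg_loss is a scalar Tensor and is also appended to GraphKeys.REGULARIZATION_LOSSES.

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(reg_loss))  # 0.1 * (1 + 4 + 9) / 2 = 0.7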
A Tensor in tensorflow stores the path (the method) for computing a value; only when we call run does the tensorflow backend walk that path and compute the value the Tensor corresponds to.
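A tiny illustration of that deferred evaluation (the constant values here are arbitrary):

import tensorflow as tf

c = tf.constant(2.0) * 3.0
print(c)                  # prints the Tensor object: just the recipe, no value yet

with tf.Session() as sess:
    print(sess.run(c))    # 6.0: the backend evaluates the graph here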
Now, all we have to do is add this regularization loss to our own loss function.
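For example (a minimal sketch; data_loss is a stand-in for whatever task loss you already have):

import tensorflow as tf

data_loss = tf.constant(1.0)  # stand-in for the real task loss, e.g. cross entropy
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_loss = tf.add_n([data_loss] + reg_losses)  # task loss plus every registered penalty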
If you define the weights by hand, you also have to add them to GraphKeys.WEIGHTS by hand; if you use a layer, you are spared that trouble, because it has already been taken care of for you. (It is still best to verify that tf.GraphKeys.WEIGHTS really contains all of the weights, to avoid getting burned.)
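One quick way to do that check, once the graph has been built (a sketch):

import tensorflow as tf

for w in tf.get_collection(tf.GraphKeys.WEIGHTS):
    print(w.name, w.shape)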
Using slim makes this much simpler:
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_regularizer=slim.l2_regularizer(weight_decay)):
    pass
In this case the penalties are added to the tf.GraphKeys.REGULARIZATION_LOSSES collection.
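Fleshing out the scope above into something runnable (a minimal sketch; the layer sizes, scope names, weight_decay value, and input shape are all made up):

import tensorflow as tf
import tensorflow.contrib.slim as slim

weight_decay = 0.0005
images = tf.placeholder(tf.float32, [None, 32, 32, 3])

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_regularizer=slim.l2_regularizer(weight_decay)):
    net = slim.conv2d(images, 64, [3, 3], scope='conv1')
    net = slim.fully_connected(slim.flatten(net), 10, scope='fc1')

# Every layer created inside the scope contributed a term to this collection:
reg_loss = tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))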