clip_by_value
relu
clip_by_norm
gradient clipping
import tensorflow as tf
a = tf.range(10)
a
<tf.Tensor: id=3, shape=(10,), dtype=int32, numpy=array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)>
# set elements of a smaller than 2 to 2
tf.maximum(a, 2)
<tf.Tensor: id=6, shape=(10,), dtype=int32, numpy=array([2, 2, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)>
# cap elements of a greater than 8 at 8
tf.minimum(a, 8)
<tf.Tensor: id=9, shape=(10,), dtype=int32, numpy=array([0, 1, 2, 3, 4, 5, 6, 7, 8, 8], dtype=int32)>
# clip elements of a to the range [2, 8]
tf.clip_by_value(a, 2, 8)
<tf.Tensor: id=14, shape=(10,), dtype=int32, numpy=array([2, 2, 2, 3, 4, 5, 6, 7, 8, 8], dtype=int32)>
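As the outputs above show, `tf.clip_by_value` behaves like a lower bound (`tf.maximum`) composed with an upper bound (`tf.minimum`). A minimal sketch verifying the equivalence (variable names `clipped` and `composed` are illustrative, not from the original):

```python
import tensorflow as tf

a = tf.range(10)
# clip_by_value(a, 2, 8) is equivalent to minimum(maximum(a, 2), 8)
clipped = tf.clip_by_value(a, 2, 8)
composed = tf.minimum(tf.maximum(a, 2), 8)
print(clipped.numpy())  # [2 2 2 3 4 5 6 7 8 8]
```

Either form works; `clip_by_value` simply expresses both bounds in one call.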
a = a - 5
a
<tf.Tensor: id=17, shape=(10,), dtype=int32, numpy=array([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=int32)>
tf.nn.relu(a)
<tf.Tensor: id=19, shape=(10,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 1, 2, 3, 4], dtype=int32)>
tf.maximum(a, 0)
<tf.Tensor: id=22, shape=(10,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 1, 2, 3, 4], dtype=int32)>
a = tf.random.normal([2, 2], mean=10)
a
<tf.Tensor: id=35, shape=(2, 2), dtype=float32, numpy= array([[ 8.630464, 10.737844], [ 9.764073, 10.382202]], dtype=float32)>
tf.norm(a)
<tf.Tensor: id=41, shape=(), dtype=float32, numpy=19.822044>
# scale a proportionally so that its norm becomes 15
aa = tf.clip_by_norm(a, 15)
aa
<tf.Tensor: id=58, shape=(2, 2), dtype=float32, numpy= array([[6.5309587, 8.125684 ], [7.388799 , 7.8565574]], dtype=float32)>
tf.norm(aa)
<tf.Tensor: id=64, shape=(), dtype=float32, numpy=15.0>
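The key property of `tf.clip_by_norm` is that it rescales all elements by the same factor, so the tensor's direction is preserved and only its magnitude changes. A short check of this (the seed is added here for reproducibility; the original run used unseeded random values):

```python
import tensorflow as tf

tf.random.set_seed(0)
a = tf.random.normal([2, 2], mean=10)   # norm ~20, well above 15
aa = tf.clip_by_norm(a, 15)

# the unit vectors (directions) of a and aa coincide; only the norm shrank
unit_a = a / tf.norm(a)
unit_aa = aa / tf.norm(aa)
print(float(tf.norm(aa)))                               # 15.0
print(float(tf.reduce_max(tf.abs(unit_a - unit_aa))))   # ~0
```

This direction-preserving property is exactly why norm clipping is safe for gradients: the update still points the same way, just with a bounded step size.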
Gradient Exploding or Vanishing
set lr=1
new_grads, total_norm = tf.clip_by_global_norm(grads, 25)
All gradient tensors are clipped together, yet the gradient direction of every tensor remains unchanged.
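A minimal sketch of the call above, using hypothetical random tensors in place of a model's actual gradients: `clip_by_global_norm` computes one joint norm over the whole list and rescales every tensor by the same factor, so their relative proportions and directions are untouched while the global norm is capped at 25.

```python
import tensorflow as tf

tf.random.set_seed(0)
# hypothetical stand-ins for a model's gradients (e.g. weight and bias grads)
grads = [tf.random.normal([3, 3]) * 100, tf.random.normal([3]) * 100]

# returns the clipped list and the global norm BEFORE clipping
new_grads, total_norm = tf.clip_by_global_norm(grads, 25)

print(float(total_norm))                        # joint norm before clipping
print(float(tf.linalg.global_norm(new_grads)))  # capped at 25 after clipping
```

In a real training loop, `grads` would come from `tape.gradient(loss, model.trainable_variables)`, and `new_grads` would be passed to `optimizer.apply_gradients`.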