The well-known UNet, and the encoder-decoder models we see all the time, share the same structure: the data is first downsampled (this is also called feature extraction), and the downsampled features are then restored to the original dimensions. The feature-extraction step is called "downsampling" and the restoration step is called "upsampling". This post focuses on downsampling and upsampling in neural networks and tries to summarize them. Please bear with me if anything is poorly written.
Pooling layers (average pooling, max pooling) and convolution
Average pooling layer
Max pooling layer
There are also some other pooling layers: nn.LPPool, nn.AdaptiveMaxPool, nn.AdaptiveAvgPool and nn.FractionalMaxPool2d.
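As a quick illustration of how these pooling layers downsample, here is a minimal PyTorch sketch (the tensor shapes below are made up for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                      # (N, C, H, W)
max_pool = nn.MaxPool2d(kernel_size=2, stride=2)
avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)
adaptive = nn.AdaptiveAvgPool2d(output_size=(8, 8))

print(max_pool(x).shape)  # torch.Size([1, 3, 16, 16])
print(avg_pool(x).shape)  # torch.Size([1, 3, 16, 16])
print(adaptive(x).shape)  # torch.Size([1, 3, 8, 8]) -- adaptive pooling targets a fixed output size
```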
Ordinary convolution (with stride > 1, a convolution downsamples on its own; see the sketch below)
There are also some more exotic convolutions; interested readers can look them up on their own.
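A minimal sketch of downsampling with an ordinary strided convolution (the channel counts and kernel size are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)
conv_down = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)  # stride=2 halves H and W
print(conv_down(x).shape)  # torch.Size([1, 16, 16, 16])
```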
There are many interpolation methods: staircase (nearest-neighbour) interpolation, linear interpolation, cubic spline interpolation, and so on.
I have already covered the numpy implementations in another article; to avoid repetition, readers who want the details can refer to 【插值方法及python實現】.
PyTorch implementation
torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None)
Upsamples given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
Parameters: size (target output size), scale_factor (multiplier for the spatial size), mode (the upsampling algorithm: 'nearest', 'linear', 'bilinear', 'bicubic' or 'trilinear'; default 'nearest'), align_corners (whether the corner pixels of input and output are aligned; only used by the linear modes).
Shape:
Input: $(N, C, W_{in})$, $(N, C, H_{in}, W_{in})$ or $(N, C, D_{in}, H_{in}, W_{in})$
Output: $(N, C, W_{out})$, $(N, C, H_{out}, W_{out})$ or $(N, C, D_{out}, H_{out}, W_{out})$
$D_{out}=\left\lfloor D_{in} \times \text{scale\_factor} \right\rfloor$
$H_{out}=\left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor$
$W_{out}=\left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor$
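As a quick check of these shape rules, a minimal sketch (the input shape is made up for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 16, 16)  # (N, C, H_in, W_in)
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
print(up(x).shape)             # torch.Size([1, 3, 32, 32]): H_out = W_out = 16 * 2
```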
Unpooling is the name commonly used in CNNs for the inverse of max pooling. The idea comes from 《Visualizing and Understanding Convolutional Networks》, published in 2013 by Matthew D. Zeiler and Rob Fergus of New York University: since max pooling is not invertible, an approximation is used to recover what the data looked like before the max pooling operation.
Put simply, remember the positions of the maxima while doing max pooling. For example, take a 3x3 matrix with a max pooling size of 2x2 and a stride of 1: unpooling puts the pooled values back at the remembered positions and sets all other positions to 0:
$$\left[\begin{array}{lll}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9
\end{array}\right] \xrightarrow{\text{max pooling}} \left[\begin{array}{ll}
5 & 6 \\
8 & 9
\end{array}\right] \xrightarrow{\text{unpooling}} \left[\begin{array}{lll}
0 & 0 & 0 \\
0 & 5 & 6 \\
0 & 8 & 9
\end{array}\right]$$
Method 1
```python
import tensorflow as tf

def unpool_with_with_argmax(pooled, ind, ksize=[1, 2, 2, 1]):
    """https://github.com/sangeet259/tensorflow_unpooling
    To unpool the tensor after max_pool_with_argmax.
    Arguments:
        pooled: the max pooled output tensor
        ind: argmax indices, the second output of max_pool_with_argmax
        ksize: ksize should be the same as what you have used to pool
    Returns:
        unpooled: the tensor after unpooling
    Some points to keep in mind:
        1. In tensorflow the indices in argmax are flattened, so that a maximum value
           at position [b, y, x, c] becomes flattened index ((b * height + y) * width + x) * channels + c
        2. Due to point 1, use broadcasting to appropriately place the values at their right locations!
    """
    # Get the shape of the tensor in the form of a list
    input_shape = pooled.get_shape().as_list()
    # Determine the output shape
    output_shape = (input_shape[0], input_shape[1] * ksize[1], input_shape[2] * ksize[2], input_shape[3])
    # Reshape into one giant tensor for better workability
    pooled_ = tf.reshape(pooled, [input_shape[0] * input_shape[1] * input_shape[2] * input_shape[3]])
    # The indices in argmax are flattened, so that a maximum value at position [b, y, x, c]
    # becomes flattened index ((b * height + y) * width + x) * channels + c.
    # Create a single unit extended cuboid of length batch_size, populating it with
    # continuous natural numbers from zero to batch_size
    batch_range = tf.reshape(tf.range(output_shape[0], dtype=ind.dtype), shape=[input_shape[0], 1, 1, 1])
    b = tf.ones_like(ind) * batch_range
    b_ = tf.reshape(b, [input_shape[0] * input_shape[1] * input_shape[2] * input_shape[3], 1])
    ind_ = tf.reshape(ind, [input_shape[0] * input_shape[1] * input_shape[2] * input_shape[3], 1])
    ind_ = tf.concat([b_, ind_], 1)
    ref = tf.Variable(tf.zeros([output_shape[0], output_shape[1] * output_shape[2] * output_shape[3]]))
    # Update the sparse matrix with the pooled values, it is a batch-wise operation
    unpooled_ = tf.scatter_nd_update(ref, ind_, pooled_)
    # Reshape the vector to get the final result
    unpooled = tf.reshape(unpooled_, [output_shape[0], output_shape[1], output_shape[2], output_shape[3]])
    return unpooled


original_tensor = tf.random_uniform([1, 4, 4, 3], maxval=100, dtype='float32', seed=2)
pooled_tensor, max_indices = tf.nn.max_pool_with_argmax(original_tensor, ksize=[1, 2, 2, 1],
                                                        strides=[1, 2, 2, 1], padding='SAME')
print(pooled_tensor.shape)  # (1, 2, 2, 3)
unpooled_tensor = unpool_with_with_argmax(pooled_tensor, max_indices)
print(unpooled_tensor.shape)  # (1, 4, 4, 3)
```
Method 2
```python
import tensorflow as tf
from tensorflow.python.ops import gen_nn_ops

inputs = tf.get_variable(name="a", shape=[64, 32, 32, 4], dtype=tf.float32,
                         initializer=tf.random_normal_initializer(mean=0, stddev=1))
# Max pooling
pool1 = tf.nn.max_pool(inputs, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print(pool1.shape)  # (64, 16, 16, 4)
# Max unpooling
grad = gen_nn_ops.max_pool_grad(inputs,  # the tensor before pooling, i.e. the input of max pool
                                pool1,   # the tensor after pooling, i.e. the output of max pool
                                pool1,   # the tensor to unpool; can be any tensor with the same shape as pool1
                                ksize=[1, 2, 2, 1],
                                strides=[1, 2, 2, 1],
                                padding='SAME')
print(grad.shape)  # (64, 32, 32, 4)
```
In TensorFlow 2.4, the official API already implements this for us:
tf.keras.layers.UpSampling2D(size=(2, 2), data_format=None, interpolation='nearest')
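A quick usage sketch of this layer (TF 2 eager style; the input shape is made up):

```python
import tensorflow as tf

x = tf.random.normal([1, 2, 2, 3])
y = tf.keras.layers.UpSampling2D(size=(2, 2), interpolation='nearest')(x)
print(y.shape)  # (1, 4, 4, 3)
```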
PyTorch version
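A minimal sketch of index-based unpooling in PyTorch, assuming nn.MaxPool2d with return_indices=True paired with nn.MaxUnpool2d:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 4, 4)
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

pooled, indices = pool(x)           # (1, 1, 2, 2) plus the argmax positions
restored = unpool(pooled, indices)  # (1, 1, 4, 4): zeros everywhere except at the max positions
print(restored.shape)
```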
Transposed convolution (transpose convolution) is also often called deconvolution. Unlike unpooling, upsampling an image with a transposed convolution is learnable. It is typically used to upsample the output of convolutional layers back to the resolution of the original image.
nn.ConvTranspose1d(in_channels=N, out_channels=2N, kernel_size=2*S, stride=S, padding=S//2 + S%2, output_padding=S%2)
nn.ConvTranspose2d
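A minimal sketch of 2x upsampling with nn.ConvTranspose2d (channel counts and kernel size are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 8, 8)
deconv = nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1)  # doubles H and W
print(deconv(x).shape)  # torch.Size([1, 8, 16, 16])
```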
The pixelshuffle algorithm takes a low-resolution (Low Resolution) input image of size [H, W] and turns it into a high-resolution (High Resolution) image of size [r*H, r*W] through the sub-pixel operation.
However, this high-resolution image is not produced directly by interpolation. Instead, a convolution first produces a feature map with $r^2$ channels (the same spatial size as the low-resolution input), and a periodic shuffling step then rearranges it into the high-resolution image, where $r$ is the upscaling factor, i.e. the magnification of the image.
[batch, height, width, channels * r * r] --> [batch, height * r, width * r, channels]
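In TensorFlow this channel-to-space rearrangement is also available as the built-in tf.nn.depth_to_space (the ordering of elements inside each r x r block may differ from the hand-written versions below, but the shape transform is the same); a minimal sketch:

```python
import tensorflow as tf

x = tf.random.normal([1, 4, 4, 3 * 2 * 2])  # channels = C * r * r with C=3, r=2
y = tf.nn.depth_to_space(x, block_size=2)
print(y.shape)  # (1, 8, 8, 3)
```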
TensorFlow implementation
```python
import tensorflow as tf

def _phase_shift(I, r):
    # Phase shift (periodic shuffling) operation
    bsize, a, b, c = I.get_shape().as_list()
    bsize = tf.shape(I)[0]  # Handling Dimension(None) type for undefined batch dim
    X = tf.reshape(I, (bsize, a, b, r, r))
    X = tf.transpose(X, (0, 1, 2, 4, 3))  # bsize, a, b, r, r
    X = tf.split(X, a, 1)  # a, [bsize, b, r, r]
    X = tf.concat([tf.squeeze(x, axis=1) for x in X], axis=2)  # bsize, b, a*r, r
    X = tf.split(X, b, 1)  # b, [bsize, a*r, r]
    X = tf.concat([tf.squeeze(x, axis=1) for x in X], axis=2)  # bsize, a*r, b*r
    return tf.reshape(X, (bsize, a * r, b * r, 1))

def PixelShuffle(X, r, color=False):
    if color:
        Xc = tf.split(X, 3, 3)
        X = tf.concat([_phase_shift(x, r) for x in Xc], axis=3)
    else:
        X = _phase_shift(X, r)
    return X

if __name__ == "__main__":
    X1 = tf.get_variable(name='X1', shape=[2, 8, 8, 4],
                         initializer=tf.random_normal_initializer(stddev=1.0), dtype=tf.float32)
    Y = PixelShuffle(X1, 2)
    print(Y.shape)  # (2, 16, 16, 1)

    X2 = tf.get_variable(name='X2', shape=[2, 8, 8, 4 * 3],
                         initializer=tf.random_normal_initializer(stddev=1.0), dtype=tf.float32)
    Y2 = PixelShuffle(X2, 2, color=True)
    print(Y2.shape)  # (2, 16, 16, 3)
```
PyTorch implementation
```python
import torch
import torch.nn as nn

input = torch.randn(size=(1, 9, 4, 4))
ps = nn.PixelShuffle(3)
output = ps(input)
print(output.size())  # torch.Size([1, 1, 12, 12])
```
NumPy implementation
```python
import numpy as np

def PS(I, r):
    # NumPy reference for periodic shuffling (intended for an [H, W, r*r] input, giving an [H*r, W*r, 1] output)
    assert len(I.shape) == 3
    assert r > 0
    r = int(r)
    O = np.zeros((I.shape[0] * r, I.shape[1] * r, I.shape[2] // (r * r)))
    for x in range(O.shape[0]):
        for y in range(O.shape[1]):
            for c in range(O.shape[2]):
                c += 1
                a = np.floor(x / r).astype("int")
                b = np.floor(y / r).astype("int")
                d = c * r * (y % r) + c * (x % r)
                print(a, b, d)
                O[x, y, c - 1] = I[a, b, d]
    return O
```
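A quick shape check of this reference function (the input is just random numbers; the print inside the loop will echo the index mapping):

```python
import numpy as np

I = np.random.rand(4, 4, 2 * 2)  # H=4, W=4, channels = r*r with r=2
O = PS(I, 2)
print(O.shape)                   # (8, 8, 1)
```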
For the 1D case, the corresponding transform is: (batch_size, width, channels * r) --> (batch_size, width * r, channels)
TensorFlow implementation
```python
import tensorflow as tf

def SubPixel1D(I, r):
    """One-dimensional subpixel upsampling layer.
    Assumes the input has shape (batch, width, r).
    """
    with tf.name_scope('subpixel'):
        X = tf.transpose(I, [2, 1, 0])  # (r, w, b)
        X = tf.batch_to_space_nd(X, [r], [[0, 0]])  # (1, r*w, b)
        X = tf.transpose(X, [2, 1, 0])
        return X

# Example
# ---------------------------------------------------
if __name__ == "__main__":
    inputs = tf.get_variable(name='input', shape=[64, 8192, 32],
                             initializer=tf.random_normal_initializer(stddev=1.0), dtype=tf.float32)
    upsample_SubPixel1D = SubPixel1D(I=inputs, r=2)
    print(upsample_SubPixel1D.shape)  # (64, 16384, 16)
```
PyTorch implementation
```python
import torch
import torch.nn as nn

class PixelShuffle1D(nn.Module):
    """1D pixel shuffler. https://arxiv.org/pdf/1609.05158.pdf
    Upscales sample length, downscales channel length.
    "short" is input, "long" is output.
    """
    def __init__(self, upscale_factor):
        super(PixelShuffle1D, self).__init__()
        self.upscale_factor = upscale_factor

    def forward(self, x):
        batch_size, channels, in_width = x.size()
        channels //= self.upscale_factor
        out_width = self.upscale_factor * in_width
        x = x.contiguous().view([batch_size, channels, self.upscale_factor, in_width])
        x = x.permute(0, 1, 3, 2).contiguous()
        x = x.view(batch_size, channels, out_width)
        return x
```
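A quick usage check of the module above (shapes chosen just for illustration):

```python
x = torch.randn(1, 4, 8)                # (batch, channels, width)
ps1d = PixelShuffle1D(upscale_factor=2)
print(ps1d(x).shape)                    # torch.Size([1, 2, 16])
```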
Sub-pixel or fractional convolution can be regarded as a special case of transposed convolution.
It can upsample to arbitrary sizes. It is not yet well known; I'll come back and complete this part once it becomes more popular.
Many of the APIs I shared here are still TensorFlow 1.x, mainly because TensorFlow 1 was what I used when I first started learning deep learning. I have since switched to PyTorch. Looking at TensorFlow today, version 2 has been out for more than a year and version 1 is effectively abandoned; version 2 does fix the old problems, but people move forward, and now that I'm comfortable with PyTorch, going back to learn TensorFlow 2 feels like something I'm rather reluctant to do. Moreover, TensorFlow 2 seems to be on the decline: apart from Google itself, fewer and fewer open-source projects use it. Keep going, TensorFlow, deep down I still like you; it's just that PyTorch is too convenient and its open-source community has become very strong.
【Docs】TensorFlow official documentation
【Docs】PyTorch official documentation
【Code】2D_subpixel
【Code】1D_pytorch-pixelshuffle1d
【Animation】Convolution animations