This week I studied the U-Net neural network. As I understand it, the U-Net paper was published by its authors after they achieved strong results in the ISBI challenge. Link to the original paper:
1. conv-relu

import tensorflow as tf  # TensorFlow 1.x API (tf.layers, tf.placeholder)

def conv_relu_layer(net, numfilters, name):
    # 3x3 convolution with ReLU; 'valid' padding shrinks each spatial dim by 2
    network = tf.layers.conv2d(net,
                               activation=tf.nn.relu,
                               filters=numfilters,
                               kernel_size=(3, 3),
                               padding='valid',
                               name="{}_conv_relu".format(name))
    return network
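With 'valid' padding, a 3x3 convolution trims one pixel from every border, so each spatial dimension shrinks by 2. A minimal sketch of that size bookkeeping (plain Python; the function name is mine, for illustration):

```python
def conv3x3_valid_out(size):
    # 'valid' 3x3 conv: output = input - kernel + 1 = input - 2
    return size - 3 + 1

# The paper's 572x572 input after the first two conv layers:
s = 572
s = conv3x3_valid_out(s)  # 570
s = conv3x3_valid_out(s)  # 568
print(s)  # 568
```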
2. maxpooling
def maxpool(net, name):
    # 2x2 max pooling with stride 2 halves the spatial size
    network = tf.layers.max_pooling2d(net,
                                      pool_size=(2, 2),
                                      strides=(2, 2),
                                      padding='valid',
                                      name="{}_maxpool".format(name))
    return network
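A 2x2 max pool with stride 2 halves each spatial dimension; sizes stay integral as long as the input is even. A quick sketch (helper name is mine):

```python
def maxpool_out(size):
    # 2x2 pool, stride 2, 'valid' padding: requires an even input size
    assert size % 2 == 0
    return size // 2

print(maxpool_out(568))  # 284
```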
3. up-conv
def up_conv(net, numfilters, name):
    # 2x2 transposed convolution with stride 2 doubles the spatial size
    network = tf.layers.conv2d_transpose(net,
                                         filters=numfilters,
                                         kernel_size=(2, 2),
                                         strides=(2, 2),
                                         padding='valid',
                                         activation=tf.nn.relu,
                                         name="{}_up_conv".format(name))
    return network
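Conversely, a transposed convolution with a 2x2 kernel and stride 2 under 'valid' padding exactly doubles the spatial size, undoing one pooling step. As a sketch (helper name is mine):

```python
def upconv_out(size):
    # conv2d_transpose, kernel 2, stride 2, 'valid': output = 2 * input
    return size * 2

print(upconv_out(28))  # 56
```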
4. copy-crop
def copy_crop(skip_connect, net):
    # center-crop the skip connection to the upsampled map's size, then concat
    skip_connect_shape = skip_connect.get_shape()
    net_shape = net.get_shape()
    offset_h = (skip_connect_shape[1].value - net_shape[1].value) // 2
    offset_w = (skip_connect_shape[2].value - net_shape[2].value) // 2
    size = [-1, net_shape[1].value, net_shape[2].value, -1]
    skip_connect_crop = tf.slice(skip_connect, [0, offset_h, offset_w, 0], size)
    concat = tf.concat([skip_connect_crop, net], axis=3)
    return concat
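The crop is needed because the skip tensor is larger than the upsampled one: for example, the deepest 64x64 skip map must be cut down to 56x56 before concatenation, leaving an equal margin on each side. A sketch of the offset arithmetic (helper name is mine):

```python
def crop_offset(skip_size, net_size):
    # symmetric (center) crop: equal margin on each side of the skip map
    assert (skip_size - net_size) % 2 == 0
    return (skip_size - net_size) // 2

print(crop_offset(64, 56))  # 4 pixels trimmed from each border
```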
5. conv1x1
def conv1x1(net, numfilters, name):
    # 1x1 convolution: per-pixel linear map from feature channels to class scores
    return tf.layers.conv2d(net,
                            filters=numfilters,
                            strides=(1, 1),
                            kernel_size=(1, 1),
                            padding='same',
                            name="{}_conv1x1".format(name))
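A 1x1 convolution is just a linear map applied independently at every pixel, here taking 64 feature channels down to 2 class scores. A toy single-pixel version in plain Python (all names and numbers are illustrative):

```python
def conv1x1_pixel(features, weights, bias):
    # one score per class: dot(features, class_weights) + class_bias
    return [sum(f * w for f, w in zip(features, ws)) + b
            for ws, b in zip(weights, bias)]

# 3 input channels -> 2 classes, at one pixel
feats = [1.0, 2.0, 3.0]
w = [[1.0, 0.0, 1.0],  # weight row for class 0
     [0.0, 2.0, 0.0]]  # weight row for class 1
b = [0.5, -1.0]
print(conv1x1_pixel(feats, w, b))  # [4.5, 3.0]
```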
# define input data
inputs = tf.placeholder(dtype=tf.float32, shape=(64, 572, 572, 3))

# contracting (downsampling) path
network = conv_relu_layer(inputs, numfilters=64, name='lev1_layer1')
skip_con1 = conv_relu_layer(network, numfilters=64, name='lev1_layer2')
network = maxpool(skip_con1, 'lev2_layer1')
network = conv_relu_layer(network, 128, 'lev2_layer2')
skip_con2 = conv_relu_layer(network, 128, 'lev2_layer3')
network = maxpool(skip_con2, 'lev3_layer1')
network = conv_relu_layer(network, 256, 'lev3_layer2')
skip_con3 = conv_relu_layer(network, 256, 'lev3_layer3')
network = maxpool(skip_con3, 'lev4_layer1')
network = conv_relu_layer(network, 512, 'lev4_layer2')
skip_con4 = conv_relu_layer(network, 512, 'lev4_layer3')
network = maxpool(skip_con4, 'lev5_layer1')
network = conv_relu_layer(network, 1024, 'lev5_layer2')
network = conv_relu_layer(network, 1024, 'lev5_layer3')

# expanding (upsampling) path
network = up_conv(network, 512, 'lev6_layer1')
network = copy_crop(skip_con4, network)
network = conv_relu_layer(network, numfilters=512, name='lev6_layer2')
network = conv_relu_layer(network, numfilters=512, name='lev6_layer3')

network = up_conv(network, 256, name='lev7_layer1')
network = copy_crop(skip_con3, network)
network = conv_relu_layer(network, 256, name='lev7_layer2')
network = conv_relu_layer(network, 256, 'lev7_layer3')

network = up_conv(network, 128, name='lev8_layer1')
network = copy_crop(skip_con2, network)
network = conv_relu_layer(network, 128, name='lev8_layer2')
network = conv_relu_layer(network, 128, 'lev8_layer3')

network = up_conv(network, 64, name='lev9_layer1')
network = copy_crop(skip_con1, network)
network = conv_relu_layer(network, 64, name='lev9_layer2')
network = conv_relu_layer(network, 64, name='lev9_layer3')
network = conv1x1(network, 2, name='lev9_layer4')
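Tracing the spatial sizes through the whole network reproduces the figures from the paper: a 572x572 input yields a 388x388 two-channel output map. A plain-Python sketch of that bookkeeping (function name is mine):

```python
def trace_unet(size, levels=4):
    # contracting path: two 3x3 'valid' convs (-2 each), then a 2x2 maxpool
    skips = []
    for _ in range(levels):
        size = size - 4          # two 3x3 valid convs
        skips.append(size)       # skip connection taken here
        size = size // 2         # 2x2 maxpool
    size = size - 4              # bottom level: two more convs
    # expanding path: up-conv (x2), crop + concat, two 3x3 valid convs
    for skip in reversed(skips):
        size = size * 2
        assert skip >= size      # skip map gets cropped down to `size`
        size = size - 4
    return size

print(trace_unet(572))  # 388
```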
The code above implements the U-Net architecture in TensorFlow.
[U-Net architecture diagram — image source: Baidu]
While studying, I came across an excellent project published by a GitHub user; the link is below:
Running test_predict.py visualizes the results predicted by the model.
Finally, data_version.py saves the test set and its predicted results as images, in a specified format and to a specified path.
Note that the program runs best on Ubuntu; getting it to run on Windows requires many changes.
U-Net is essentially an adaptation of the CNN. My previous understanding of CNNs was not very deep, so this time I made a point of studying them.
Through further reading, I learned that the strength of CNNs lies in:
1. Shallower convolutional layers have small receptive fields and readily learn local features.
2. Deeper convolutional layers have larger receptive fields and can learn more abstract features.
These abstract features are less sensitive to an object's size, position, and orientation, which helps improve classification performance.
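The growth of the receptive field with depth can be made concrete: each 3x3 convolution widens the receptive field by 2 times the accumulated stride, and each stride-2 pool doubles that stride. A sketch of the standard recurrence (function name is mine):

```python
def receptive_field(layers):
    # layers: list of (kernel, stride); returns the final unit's receptive field
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # grow by kernel extent at the current jump
        jump *= s              # strides compound multiplicatively
    return rf

# two 3x3 convs then a 2x2/stride-2 pool, repeated twice (as in U-Net's first levels)
stack = [(3, 1), (3, 1), (2, 2)] * 2
print(receptive_field(stack))  # 16
```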
A fundamental understanding of U-Net, though, still has to be grounded in the FCN. Over the next few days I don't plan to push further ahead; I mainly want to consolidate the basics.
That's this week's progress. Keep it up!