A Brief Summary of Multi-Modal Medical Image Segmentation

In medical imaging, multi-modal data provide information at multiple levels because the imaging mechanisms differ. A central problem in multi-modal image segmentation is how to fuse the information from the different modalities. This post records what I have read recently; criticism, corrections, and additions are welcome.

1. A review: Deep learning for medical image segmentation using multi-modality fusion (Array 2019)***

Classification of fusion strategies

A survey that divides fusion strategies into three broad categories according to where the fusion takes place: input-level, layer-level, and decision-level.

Datasets:

Several multi-modal datasets are covered.
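To make the three categories concrete, here is a minimal PyTorch sketch of where the fusion happens in each case; the toy conv blocks, channel sizes, and PET/CT naming are my own illustration, not taken from the survey.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Toy 2D conv + ReLU block shared by all three variants; purely illustrative.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())


class InputLevelFusion(nn.Module):
    """Modalities are stacked as channels of a single network (early fusion)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(conv_block(2, 16), nn.Conv2d(16, n_classes, 1))

    def forward(self, ct, pet):                       # ct, pet: (B, 1, H, W)
        return self.net(torch.cat([ct, pet], dim=1))  # fuse before the network


class LayerLevelFusion(nn.Module):
    """Separate encoders per modality; features merged inside the network."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_ct = conv_block(1, 16)
        self.enc_pet = conv_block(1, 16)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, ct, pet):
        fused = torch.cat([self.enc_ct(ct), self.enc_pet(pet)], dim=1)
        return self.head(fused)


class DecisionLevelFusion(nn.Module):
    """Independent networks per modality; predictions combined at the end."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net_ct = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, n_classes, 1))
        self.net_pet = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, n_classes, 1))

    def forward(self, ct, pet):
        # Average the per-modality class probabilities (one simple combiner).
        return 0.5 * (self.net_ct(ct).softmax(1) + self.net_pet(pet).softmax(1))
```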

2. Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer (TMI 2019)****

Abstract: layer-level fusion, U-Net, PET-CT, lung cancer

Method: one encoder each for CT and PET. At each layer, the features from the two branches are stacked and passed through a convolution to obtain fusion weights, which are multiplied element-wise with the concatenated features to give a weighted feature map.
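A rough PyTorch sketch of one such fusion unit at a single encoder scale follows; the 2D convolution, sigmoid weighting, and channel width are my assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn


class CoLearnedFusion(nn.Module):
    """One fusion unit at a single encoder scale: a convolution over the
    stacked CT/PET features predicts a fusion-weight map, which is applied
    element-wise to the concatenated features (assumed sigmoid squashing)."""
    def __init__(self, ch):
        super().__init__()
        self.weight_conv = nn.Sequential(
            nn.Conv2d(2 * ch, 2 * ch, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_ct, feat_pet):                 # each: (B, ch, H, W)
        stacked = torch.cat([feat_ct, feat_pet], dim=1)   # stack along channels
        weights = self.weight_conv(stacked)               # learned fusion map
        return stacked * weights                          # weighted feature map


# Usage at one resolution level of the two encoders (channel width assumed):
# fused = CoLearnedFusion(ch=32)(ct_features, pet_features)
```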


Experiment: lung cancer data; compared against layer-level fusion variants MB (multi-branch), MC (multi-channel), and FS (fused), with good results.

3. 3D FULLY CONVOLUTIONAL NETWORKS FOR CO-SEGMENTATION OF TUMORS ON PET-CT IMAGES (ISBI 2018)**

Abstract: decision-level, U-Net, graph cut, PET-CT, lung cancer

Method: separate networks for CT and PET; each outputs a probability map, and the two maps are then combined with a graph cut.
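As a sketch of the decision-level idea in Python/NumPy, the two probability maps can be merged into unary (data) costs for a graph-cut solver; the mixing weight, the negative-log form, and the omitted pairwise smoothness term are simplifications of mine, not the paper's formulation.

```python
import numpy as np


def cosegmentation_unaries(prob_ct, prob_pet, alpha=0.5, eps=1e-6):
    """Turn the two per-modality foreground probability maps into unary
    (data) costs that a graph-cut solver could minimise. `alpha` is an
    assumed mixing weight; the pairwise smoothness term is not shown."""
    p_fg = alpha * prob_ct + (1.0 - alpha) * prob_pet   # fused foreground prob.
    cost_fg = -np.log(p_fg + eps)        # cheap to label foreground where p_fg is high
    cost_bg = -np.log(1.0 - p_fg + eps)  # cheap to label background where p_fg is low
    return cost_fg, cost_bg


# prob_ct / prob_pet would be the sigmoid (or softmax foreground) outputs of
# the two independent networks on the same registered volume.
```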

Experiment: lung cancer data; compared against graph cut.

4. HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation (TMI 2019) ****

Abstract: layer-level, DenseNet, MRI, brain

Method: one network per modality, with the intermediate layers densely connected across the streams.
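A toy PyTorch sketch of the cross-modality dense connectivity for two streams; the depth, growth rate, and channel widths are placeholder choices of mine rather than HyperDense-Net's actual configuration.

```python
import torch
import torch.nn as nn


class HyperDenseBlock(nn.Module):
    """Two per-modality streams whose layers are densely connected both within
    and across streams: each new layer of each stream sees the outputs of all
    earlier layers of *both* streams."""
    def __init__(self, in_ch=1, growth=8, n_layers=3):
        super().__init__()
        self.layers_a = nn.ModuleList()
        self.layers_b = nn.ModuleList()
        ch = in_ch                        # channels accumulated so far, per stream
        for _ in range(n_layers):
            # Each new layer consumes every earlier feature map from both streams.
            self.layers_a.append(nn.Sequential(
                nn.Conv3d(2 * ch, growth, 3, padding=1), nn.ReLU()))
            self.layers_b.append(nn.Sequential(
                nn.Conv3d(2 * ch, growth, 3, padding=1), nn.ReLU()))
            ch += growth

    def forward(self, x_a, x_b):          # x_a, x_b: two MRI modalities, (B,1,D,H,W)
        feats_a, feats_b = [x_a], [x_b]
        for layer_a, layer_b in zip(self.layers_a, self.layers_b):
            dense_in = torch.cat(feats_a + feats_b, dim=1)  # cross-modal dense input
            feats_a.append(layer_a(dense_in))
            feats_b.append(layer_b(dense_in))
        return torch.cat(feats_a + feats_b, dim=1)          # all features, both streams
```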

Experiment: brain MRI (iSeg-2017, MRBrainS); compared against layer-level fusion variants: single dense path, dual dense path, and disentangled modalities with early fusion. Passing each modality through its own convolutions before concatenation gives a clear improvement over feeding the modalities directly as a two-channel input.

5. Deep learning for automatic tumour segmentation in PET/CT images of patients with head and neck cancers (MIDL 2019) *

Abstract: input-level, U-Net, PET-CT, head and neck

Method: the PET and CT images are fed as a two-channel input.

Experiment: head-and-neck tumors; compared against single-modality U-Nets. This dataset is the source of the HECKTOR challenge data.

6. Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network (Physics in Medicine & Biology 2019) **

Abstract: layer-level, V-Net, PET-CT, lung cancer

Method: two V-Nets first extract PET and CT features separately; the features are summed and passed through four convolutional layers to produce the result. A weighted cross-entropy loss is proposed to balance the influence of the two modalities.
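The sum-then-refine fusion is easy to sketch in PyTorch; the block below assumes both V-Net branches output feature maps of the same shape and leaves out the backbones and the exact form of the weighted cross-entropy loss, which this summary does not detail.

```python
import torch
import torch.nn as nn


class SumFusionHead(nn.Module):
    """Fusion part only: PET and CT branch features are summed element-wise
    and refined by four convolutional layers that produce the segmentation
    logits. The channel width `ch` and the 1x1x1 final layer are assumptions."""
    def __init__(self, ch=32, n_classes=2):
        super().__init__()
        layers = []
        for _ in range(3):                             # three conv + ReLU layers...
            layers += [nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU()]
        layers.append(nn.Conv3d(ch, n_classes, 1))     # ...plus a final conv for logits
        self.head = nn.Sequential(*layers)

    def forward(self, feat_pet, feat_ct):              # same shape: (B, ch, D, H, W)
        return self.head(feat_pet + feat_ct)           # element-wise sum fusion
```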

Experiment: lung cancer data; compared against several other fusion schemes, traditional methods, and single-modality V-Nets.


Comments and suggestions of related papers are welcome~
