22.01.2020 | Original Article
Deep multi-scale feature fusion for pancreas segmentation from CT images
Authors:
Zhanlan Chen, Xiuying Wang, Ke Yan, Jiangbin Zheng
Published in:
International Journal of Computer Assisted Radiology and Surgery, Issue 3/2020
Abstract
Purpose
Pancreas segmentation from computed tomography (CT) images is an important step in surgical procedures such as cancer detection and radiation treatment. While manual segmentation is time-consuming and operator-dependent, current computer-assisted segmentation methods are facing challenges posed by varying shapes and sizes. To address these challenges, this paper presents a multi-scale feature fusion (MsFF) model for accurate pancreas segmentation from CT images.
Methods
The proposed MsFF is built upon the well-established encoder–decoder framework. First, in the encoder stage, a squeeze-and-excitation module is incorporated to enhance feature learning by exploiting channel-wise interdependencies. Second, a hierarchical fusion module is introduced to better exploit both low-level and high-level features, retaining boundary information for the final prediction.
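The channel recalibration performed by a squeeze-and-excitation module can be sketched as follows. This is a minimal NumPy illustration of the general SE mechanism (global average pooling, a bottleneck of two fully connected layers, and sigmoid gating), not the authors' implementation; the weight shapes and reduction ratio are illustrative assumptions.

```python
import numpy as np

def squeeze_excite(x, w1, b1, w2, b2):
    """Squeeze-and-excitation over a feature map x of shape (C, H, W).

    Squeeze: global average pooling collapses each channel to a scalar.
    Excite: two fully connected layers (ReLU, then sigmoid) produce
    per-channel weights in (0, 1) that rescale the input channels.
    """
    z = x.mean(axis=(1, 2))                    # squeeze: (C,)
    s = np.maximum(0.0, w1 @ z + b1)           # reduction FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))   # expansion FC + sigmoid gate
    return x * s[:, None, None]                # channel-wise recalibration

# Toy example: 4 channels, reduction ratio 2 (hypothetical sizes)
rng = np.random.default_rng(0)
C, r = 4, 2
x = rng.standard_normal((C, 8, 8))
w1, b1 = rng.standard_normal((C // r, C)), np.zeros(C // r)
w2, b2 = rng.standard_normal((C, C // r)), np.zeros(C)
y = squeeze_excite(x, w1, b1, w2, b2)          # same shape as x
```

Because the gate values lie in (0, 1), each output channel is a damped copy of the corresponding input channel, which is how informative channels are emphasized relative to suppressed ones.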
Results
The proposed MsFF is evaluated on the NIH pancreas dataset and outperforms current state-of-the-art methods, achieving a mean Dice-Sorensen coefficient of 87.26% and a mean Volumetric Overlap Error of 22.67%.
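The two reported metrics can be computed from binary segmentation masks as follows; this is a standard-definition sketch (Dice = 2|A∩B| / (|A|+|B|), VOE = 1 − Jaccard), with toy masks as assumptions, not data from the paper.

```python
import numpy as np

def dice_and_voe(pred, gt):
    """Dice-Sorensen coefficient and Volumetric Overlap Error for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()       # |A ∩ B|
    union = np.logical_or(pred, gt).sum()        # |A ∪ B|
    dice = 2.0 * inter / (pred.sum() + gt.sum()) # 2|A∩B| / (|A| + |B|)
    voe = 1.0 - inter / union                    # VOE = 1 - Jaccard index
    return dice, voe

# Toy 2x3 masks: 2 overlapping voxels, 3 predicted, 3 ground-truth, union of 4
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
d, v = dice_and_voe(pred, gt)  # d = 2/3, v = 0.5
```

Note that Dice rewards overlap (higher is better) while VOE penalizes non-overlap (lower is better), which is why the paper reports a high Dice alongside a comparatively low VOE.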
Conclusion
The experimental results confirm that incorporating the squeeze-and-excitation and hierarchical fusion modules yields a net performance gain for the proposed MsFF.