28.03.2020 | Original Article

CTumorGAN: a unified framework for automatic computed tomography tumor segmentation

Authors: Shuchao Pang, Anan Du, Mehmet A. Orgun, Zhenmei Yu, Yunyun Wang, Yan Wang, Guanfeng Liu

Published in: European Journal of Nuclear Medicine and Molecular Imaging | Issue 10/2020


Abstract

Purpose

Unlike normal organ segmentation, automatic tumor segmentation is a more challenging task because tumors share similar visual characteristics with their surroundings, especially on computed tomography (CT) images with severely low contrast resolution, and because data acquisition procedures and devices vary widely in their individual characteristics. Consequently, most recently proposed methods are difficult to apply to a different tumor dataset with good results, and many tumor segmentation models fail to generalize beyond the datasets and modalities used in their original evaluation experiments.

Methods

To alleviate some of these problems, we propose a novel unified, end-to-end adversarial learning framework for the automatic segmentation of any kind of tumor from CT scans, called CTumorGAN, consisting of a Generator network and a Discriminator network. Specifically, the Generator attempts to produce segmentation results close to their corresponding gold standards, while the Discriminator aims to distinguish generated samples from real tumor ground truths. More importantly, we deliberately design modules that address well-known obstacles, e.g., severe class imbalance, small tumor localization, and label noise caused by poor expert annotation quality, and use these modules to guide the CTumorGAN training process through more effective multi-level supervision.
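The adversarial objective described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the real Generator and Discriminator are deep CNNs, while here the networks are reduced to plain functions on flattened pixel lists, and the least-squares adversarial terms and the `adv_weight` parameter are assumptions made for illustration only.

```python
# Sketch of a GAN-style segmentation objective (illustrative assumptions, see above).

def mse(pred, target):
    """Pixel-wise mean squared error between a predicted mask and its ground truth."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def discriminator_loss(d_real, d_fake):
    """The Discriminator learns to score real ground-truth masks as 1
    and Generator outputs as 0 (least-squares formulation, an assumption here)."""
    return (d_real - 1.0) ** 2 + d_fake ** 2

def generator_loss(pred_mask, gt_mask, d_fake, adv_weight=0.1):
    """The Generator minimizes pixel-wise error against the gold standard,
    plus an adversarial term that rewards fooling the Discriminator."""
    return mse(pred_mask, gt_mask) + adv_weight * (d_fake - 1.0) ** 2
```

In the actual framework both losses would be minimized alternately by gradient descent over the network parameters; this sketch only captures the shape of the min-max objective.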

Results

We conduct a comprehensive evaluation of diverse loss functions for tumor segmentation and find that mean squared error is more suitable for the CT tumor segmentation task. Furthermore, extensive experiments with multiple evaluation criteria on three well-established datasets, covering lung, kidney, and liver tumors, demonstrate that our CTumorGAN achieves stable and competitive performance compared with state-of-the-art approaches for CT tumor segmentation.
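To make the loss comparison concrete, the following minimal sketch contrasts pixel-wise mean squared error with soft Dice loss on a toy one-dimensional "slice". The mask values are made up for demonstration and are not taken from the paper's experiments; which loss behaves better in practice is exactly the empirical question the study evaluates.

```python
def mse_loss(pred, target):
    """Mean squared error over all pixels, the loss the study found most suitable."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss, a common alternative that focuses on region overlap."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

# One tumor pixel (index 3) in an 8-pixel slice, with a slightly fuzzy prediction:
gt = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
pred = [0.1, 0.0, 0.0, 0.6, 0.1, 0.0, 0.0, 0.0]
```

Because the tumor occupies only a tiny fraction of the slice, the two losses weight the same prediction error very differently, which is why the choice of loss matters under severe class imbalance.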

Conclusion

To overcome the key challenges arising from CT datasets and address the main problems in current deep learning-based methods, we propose a novel unified CTumorGAN framework, which generalizes effectively to any kind of tumor dataset with superior performance.
Metadata
Title
CTumorGAN: a unified framework for automatic computed tomography tumor segmentation
Authors
Shuchao Pang
Anan Du
Mehmet A. Orgun
Zhenmei Yu
Yunyun Wang
Yan Wang
Guanfeng Liu
Publication date
28.03.2020
Publisher
Springer Berlin Heidelberg
Published in
European Journal of Nuclear Medicine and Molecular Imaging / Issue 10/2020
Print ISSN: 1619-7070
Electronic ISSN: 1619-7089
DOI
https://doi.org/10.1007/s00259-020-04781-3