GAN-Based Bi-Modal Segmentation Using Mumford-Shah Loss: Application to Head and Neck Tumors in PET-CT Images

  • Conference paper
  • Head and Neck Tumor Segmentation (HECKTOR 2020)

Abstract

A deep model for PET-CT image segmentation is proposed, based on SegAN, a generative adversarial network (GAN) for medical image segmentation, and utilizing the Mumford-Shah (MS) loss functional. An improved V-net serves as the generator network, while the discriminator network shares the structure of the generator's encoder. Unlike a conventional V-net, the improved polyphase V-net-style network helps preserve boundary details. A multi-term loss function consisting of the MS loss and a multi-scale mean absolute error (MAE) was designed for the training scheme. The complementary information extracted via the MS loss improves the supervised segmentation task by regularizing pixel/voxel similarities, while the MAE, as the semantic term of the loss function, compensates for probable subdivisions into intra-tumor regions. The proposed method was applied to automatic segmentation of head and neck tumors and nodal metastases based on bi-modal information from PET and CT images, which can be valuable for automated metabolic tumor volume measurements as well as radiomics analyses. The proposed bi-modal method was trained on 201 PET-CT images from four centers and tested on 53 cases from a different center. Independently evaluated in the HECKTOR challenge, the proposed method achieved an average Dice similarity coefficient (DSC) of 67%, a precision of 73%, and a recall of 72%.
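The paper itself does not include code; as a rough illustration of the loss design described above, the following is a minimal NumPy sketch of a level-set Mumford-Shah loss (after Kim and Ye's formulation, which the paper builds on) combined with a simple MAE term. All names (`mumford_shah_loss`, `combined_loss`) and the weights `lam` and `alpha` are hypothetical choices for this sketch, and the MAE here is a plain single-scale voxel-wise term rather than the multi-scale feature-level MAE used by SegAN.

```python
import numpy as np

def mumford_shah_loss(image, softmax_probs, lam=1e-3):
    """Level-set Mumford-Shah loss for a 2D image (a sketch, not the paper's code).

    image: (H, W) intensity array.
    softmax_probs: (C, H, W) per-class membership probabilities.
    lam: weight of the total-variation (contour-length) regularizer.
    """
    eps = 1e-8
    data_term = 0.0
    tv_term = 0.0
    for y_n in softmax_probs:
        # class-average intensity c_n, softly weighted by membership y_n
        c_n = (image * y_n).sum() / (y_n.sum() + eps)
        # piecewise-constant fidelity: |x - c_n|^2 weighted by y_n
        data_term += ((image - c_n) ** 2 * y_n).sum()
        # total-variation regularizer approximating the contour length
        gy, gx = np.gradient(y_n)
        tv_term += np.sqrt(gx ** 2 + gy ** 2 + eps).sum()
    return data_term + lam * tv_term

def combined_loss(image, softmax_probs, target, alpha=0.5):
    """Hypothetical multi-term objective: MS loss plus a single-scale MAE term."""
    mae = np.abs(softmax_probs - target).mean()
    return alpha * mumford_shah_loss(image, softmax_probs) + (1 - alpha) * mae
```

In this sketch the MS term needs no ground truth, which is how it can act as an unsupervised regularizer alongside the supervised (MAE) term: a segmentation that matches the image's intensity regions scores lower than a non-informative one even before labels are consulted.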



Acknowledgments

The authors gratefully acknowledge Dr. Ivan Klyuzhin for his valuable support and feedback. This research was supported in part through computational resources and services provided by Microsoft and the Vice President Research and Innovation at the University of British Columbia.

Corresponding author

Correspondence to Arman Rahmim.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Yousefirizi, F., Rahmim, A. (2021). GAN-Based Bi-Modal Segmentation Using Mumford-Shah Loss: Application to Head and Neck Tumors in PET-CT Images. In: Andrearczyk, V., Oreiller, V., Depeursinge, A. (eds) Head and Neck Tumor Segmentation. HECKTOR 2020. Lecture Notes in Computer Science, vol 12603. Springer, Cham. https://doi.org/10.1007/978-3-030-67194-5_11

  • DOI: https://doi.org/10.1007/978-3-030-67194-5_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67193-8

  • Online ISBN: 978-3-030-67194-5
