Improving PET-CT Image Segmentation via Deep Multi-modality Data Augmentation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12450)

Abstract

Positron emission tomography (PET) combined with computed tomography (CT) is a widely accepted imaging modality for the staging, diagnosis, and treatment response monitoring of cancers. Deep learning based computer-aided diagnosis systems have achieved high accuracy in tumor segmentation on PET-CT images in recent years. PET images can be used to detect functional structures such as tumors, while CT images provide complementary anatomical information. For tumor detection with deep learning methods, multi-modality segmentation has been shown to be effective. In this work, we propose a generative adversarial network (GAN) based augmentation method that synthesizes paired multi-modality PET and CT data to improve the training of multi-modality segmentation methods. Our novelty lies in a semantic label augmentation method that provides latent information suitable for multi-modality synthesis. In addition, we introduce a 'Split U' structure that can generate both PET and CT modalities from a single latent input. Our experimental results demonstrate that the images synthesized by our method can be used to augment the training data for PET-CT segmentation.
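The 'Split U' idea described in the abstract — a shared encoder over the (augmented) semantic-label input, feeding two decoder branches that each synthesize one modality — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: dense layers stand in for convolutional blocks, and all names and layer sizes (`SplitUGenerator`, `in_dim`, `latent_dim`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # Stand-in "layer": a dense map with tanh activation, used here in
    # place of the convolutional encoder/decoder blocks of a real U-Net.
    return np.tanh(x @ w)

class SplitUGenerator:
    """Sketch of a 'Split U' generator: one shared encoder over the
    semantic-label input, then two decoder branches that synthesize the
    PET and CT modalities respectively from the same latent code."""

    def __init__(self, in_dim=64, latent_dim=32, out_dim=64):
        self.w_enc = rng.normal(0.0, 0.1, (in_dim, latent__dim := latent_dim))
        self.w_pet = rng.normal(0.0, 0.1, (latent_dim, out_dim))
        self.w_ct = rng.normal(0.0, 0.1, (latent_dim, out_dim))

    def forward(self, label_map):
        z = layer(label_map, self.w_enc)   # shared latent encoding
        pet = layer(z, self.w_pet)         # PET decoder branch
        ct = layer(z, self.w_ct)           # CT decoder branch
        return pet, ct

gen = SplitUGenerator()
label_map = rng.normal(size=(1, 64))       # flattened semantic-label input
pet, ct = gen.forward(label_map)
print(pet.shape, ct.shape)                 # (1, 64) (1, 64)
```

In a full implementation the two branches would be trained adversarially against discriminators on paired PET-CT data, so that one label input yields a spatially consistent PET-CT pair for augmentation.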



Author information

Correspondence to Lei Bi.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Cao, K., Bi, L., Feng, D., Kim, J. (2020). Improving PET-CT Image Segmentation via Deep Multi-modality Data Augmentation. In: Deeba, F., Johnson, P., Würfl, T., Ye, J.C. (eds) Machine Learning for Medical Image Reconstruction. MLMIR 2020. Lecture Notes in Computer Science, vol 12450. Springer, Cham. https://doi.org/10.1007/978-3-030-61598-7_14

  • DOI: https://doi.org/10.1007/978-3-030-61598-7_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61597-0

  • Online ISBN: 978-3-030-61598-7

  • eBook Packages: Computer Science (R0)
