Published in: International Journal of Computer Assisted Radiology and Surgery 10/2019

05.08.2019 | Original Article

Synthesis of CT images from digital body phantoms using CycleGAN

Authors: Tom Russ, Stephan Goerttler, Alena-Kathrin Schnurr, Dominik F. Bauer, Sepideh Hatamikia, Lothar R. Schad, Frank G. Zöllner, Khanlian Chung


Abstract

Purpose

The potential of medical image analysis with neural networks is limited by the restricted availability of extensive data sets. The incorporation of synthetic training data is one approach to bypass this shortcoming, as synthetic data offer accurate annotations and unlimited data size.

Methods

We evaluated eleven CycleGANs for the synthesis of computed tomography (CT) images from XCAT body phantoms. Image quality was assessed in terms of anatomical accuracy and realistic noise properties. We performed two studies: one exploring various network and training configurations, and one evaluating a task-based adaptation of the corresponding loss function.
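At the core of the CycleGAN approach used here is the cycle-consistency constraint, which makes unpaired phantom-to-CT translation possible. The following is a minimal, generic sketch of that term only (not the authors' actual network or their task-based loss); the generator names `G`, `F` and the weight `lam` follow the convention of the original CycleGAN paper and are assumptions, not details from this article.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """L1 cycle-consistency term of the CycleGAN objective:
    lam * (||F(G(x)) - x||_1 + ||G(F(y)) - y||_1), averaged per pixel.
    G maps domain X -> Y (e.g. phantom -> CT), F maps Y -> X."""
    loss_x = np.mean(np.abs(F(G(x)) - x))  # x should survive the round trip X -> Y -> X
    loss_y = np.mean(np.abs(G(F(y)) - y))  # y should survive the round trip Y -> X -> Y
    return lam * (loss_x + loss_y)

# Toy check: identity generators reconstruct perfectly, so the loss is 0.
x = np.random.rand(8, 8)
y = np.random.rand(8, 8)
ident = lambda a: a
print(cycle_consistency_loss(x, y, ident, ident))  # 0.0
```

In the full objective this term is added to the two adversarial losses; the cycle constraint is what prevents the generators from producing realistic but anatomically unrelated images.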

Results

The CycleGAN using the ResNet architecture and three XCAT input slices achieved the best overall performance in the configuration study. In the task-based study, the anatomical accuracy of the generated synthetic CTs remained high (\(\mathrm{SSIM} = 0.64\) and \(\mathrm{FSIM} = 0.76\)), while the generated noise texture closely matched real data, with a noise power spectrum correlation coefficient of \(\mathrm{NCC} = 0.92\). Using the dedicated loss function improved annotation accuracy by 65%. The feasibility of combined training on both real and synthetic data was demonstrated in a blood vessel segmentation task (dice similarity coefficient \(\mathrm{DSC} = 0.83 \pm 0.05\)).
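The segmentation result is reported as a Dice similarity coefficient. For reference, a minimal sketch of how the DSC is computed for two binary masks (the masks and function name here are illustrative, not taken from the article):

```python
import numpy as np

def dice_similarity(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 |A intersect B| / (|A| + |B|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Toy example: 2 overlapping voxels out of 3 + 3 labeled voxels.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_similarity(pred, truth), 2))  # 0.67
```

A DSC of 1 indicates perfect overlap between the predicted and ground-truth segmentations, so the reported \(0.83 \pm 0.05\) corresponds to substantial voxel-wise agreement.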

Conclusion

CT synthesis using CycleGAN is a feasible approach to generate realistic images from simulated XCAT phantoms. Synthetic CTs generated with a task-based loss function can be used in addition to real data to improve the performance of segmentation networks.
Metadata
Title
Synthesis of CT images from digital body phantoms using CycleGAN
Authors
Tom Russ
Stephan Goerttler
Alena-Kathrin Schnurr
Dominik F. Bauer
Sepideh Hatamikia
Lothar R. Schad
Frank G. Zöllner
Khanlian Chung
Publication date
05.08.2019
Publisher
Springer International Publishing
Published in
International Journal of Computer Assisted Radiology and Surgery / Issue 10/2019
Print ISSN: 1861-6410
Electronic ISSN: 1861-6429
DOI
https://doi.org/10.1007/s11548-019-02042-9
