Published in: Radiological Physics and Technology 3/2019

20.06.2019

Overview of image-to-image translation by use of deep neural networks: denoising, super-resolution, modality conversion, and reconstruction in medical imaging

Authors: Shizuo Kaji, Satoshi Kida


Abstract

Since the advent of deep convolutional neural networks (DNNs), computer vision has seen extremely rapid progress that has led to huge advances in medical imaging. Every year, many new methods are reported at conferences such as the International Conference on Medical Image Computing and Computer-Assisted Intervention and Machine Learning for Medical Image Reconstruction, or published online at the preprint server arXiv. There is a plethora of surveys on applications of neural networks in medical imaging (see [1] for a relatively recent comprehensive survey). This article does not aim to cover all aspects of the field, but focuses on a particular topic, image-to-image translation. Although the topic may not sound familiar, many seemingly unrelated applications turn out to be instances of image-to-image translation, including (1) noise reduction, (2) super-resolution, (3) image synthesis, and (4) reconstruction. The same underlying principles and algorithms work across these tasks. Our aim is to introduce some of the key ideas on this topic from a unified viewpoint, along with the core concepts and jargon specific to image processing with DNNs. An intuitive grasp of these core ideas and a knowledge of the technical terms will greatly help the reader to understand existing and future applications. Most recent applications built on image-to-image translation are based on one of two fundamental architectures, called pix2pix and CycleGAN, depending on whether the available training data are paired or unpaired (see Sect. 1.3). We provide code ([2, 3]) implementing these two architectures with various enhancements; it is available online under the permissive MIT license. We also provide a hands-on tutorial for training a denoising model with our code (see Sect. 6).
We hope that this article, together with the code, will provide both an overview and the details of the key algorithms, and that it will serve as a basis for the development of new applications.
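The paired/unpaired distinction mentioned in the abstract comes down to which loss the generator can be trained with. The following is a minimal numpy sketch contrasting the supervised L1 loss used in the pix2pix setting with the cycle-consistency loss used in the CycleGAN setting; toy linear maps stand in for the generator networks, and the adversarial terms of both frameworks are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generators": in practice these are deep CNNs; here plain linear maps
# stand in so the two loss formulations can be written down concretely.
A = rng.standard_normal((4, 4))   # G: domain X -> domain Y
B = np.linalg.inv(A)              # F: domain Y -> domain X (exact inverse of G)

def G(x):
    return x @ A.T

def F(y):
    return y @ B.T

x = rng.standard_normal((8, 4))   # a batch of "images" from domain X
y = rng.standard_normal((8, 4))   # an unrelated batch from domain Y

# Paired setting (pix2pix): each x has a known target, so a supervised
# L1 loss between G(x) and the target can be used directly.
y_paired = G(x) + 0.01 * rng.standard_normal((8, 4))   # simulated paired target
l1_loss = np.mean(np.abs(G(x) - y_paired))

# Unpaired setting (CycleGAN): no correspondence between x and y is assumed;
# instead, translating forth and back must return the input.
cycle_loss = np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))
```

Because F is here the exact inverse of G, the cycle-consistency loss is zero up to floating-point error, illustrating what CycleGAN training pushes the two generators toward.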
Footnotes
1
Various image filters are implemented in the free software Fiji [7], and we can easily try them out to see their characteristics.
 
2
Usually, a linear layer involves the constant term as well, so that it has the form \(x \mapsto Ax + b\) for \(b\in {\mathbb {R}}^m\). The term b is often referred to as the bias.
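This footnote's map \(x \mapsto Ax + b\) can be written down directly; the following is a minimal numpy sketch, where the weight matrix A and bias b are arbitrary example values.

```python
import numpy as np

# A linear ("fully connected") layer mapping R^3 -> R^2: x -> Ax + b.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])   # weights, shape (m, n) = (2, 3)
b = np.array([0.5, -0.5])          # bias term b in R^m

def linear_layer(x):
    return A @ x + b

out = linear_layer(np.array([1.0, 2.0, 3.0]))
print(out)   # -> [ 7.5 -1.5]
```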
 
3
Around the same time, very similar architectures such as UNIT, DiscoGAN, and DualGAN were introduced. Welander et al. [24] evaluated UNIT [25] and CycleGAN for transformation between T1- and T2-weighted MRI images and showed that these two frameworks performed almost equally well.
 
References
1. Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys. 2018;46:e1–36.
4. Lu L, Zheng Y, Carneiro G, Yang L, editors. Deep learning and convolutional neural networks for medical image computing—precision medicine, high performance and large-scale datasets. Advances in computer vision and pattern recognition. Springer; 2017.
5. Knoll F, Maier AK, Rueckert D, editors. Machine learning for medical image reconstruction—first international workshop, MLMIR 2018, held in conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings. Lecture Notes in Computer Science, vol 11074. Springer; 2018.
6. Litjens GJS, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak J, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.
7. Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, Tinevez J-Y, White DJ, Hartenstein V, Eliceiri K, Tomancak P, Cardona A. Fiji: an open-source platform for biological-image analysis. Nat Meth. 2012;9:676–82.
8. Nielsen MA. Neural networks and deep learning. Determination Press; 2018.
10. Safran I, Shamir O. Depth-width tradeoffs in approximating natural functions with neural networks. In: Proceedings of the 34th international conference on machine learning, vol 70, ICML'17. JMLR.org; 2017. p. 2979–87.
11. Scarselli F, Tsoi AC. Universal approximation using feedforward neural networks: a survey of some existing methods, and some new results. Neural Netw. 1998;11:15–37.
12. Lu Z, Pu H, Wang F, Hu Z, Wang L. The expressive power of neural networks: a view from the width. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R, editors. Advances in neural information processing systems 30. Red Hook: Curran Associates, Inc.; 2017. p. 6231–9.
14. Shi W, Caballero J, Huszar F, Totz J, Aitken AP, Bishop R, Rueckert D, Wang Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: 2016 IEEE conference on computer vision and pattern recognition, CVPR 2016, Las Vegas, NV, USA, June 27–30, 2016; 2016. p. 1874–83.
15. Odena A, Dumoulin V, Olah C. Deconvolution and checkerboard artifacts. Distill; 2016.
16. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Bach FR, Blei DM, editors. ICML, JMLR workshop and conference proceedings, vol 37. JMLR.org; 2015. p. 448–56.
17. Ulyanov D, Vedaldi A, Lempitsky VS. Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In: 2017 IEEE conference on computer vision and pattern recognition, CVPR 2017, Honolulu, HI, USA, July 21–26, 2017; 2017. p. 4105–13.
18. Fredrikson M, Jha S, Ristenpart T. Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, CCS '15, New York, NY, USA. ACM; 2015. p. 1322–33.
19. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention—MICCAI 2015—18th international conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III; 2015. p. 234–41.
20. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. In: Ghahramani Z, Welling M, Cortes C, Lawrence ND, Weinberger KQ, editors. Advances in neural information processing systems 27. Curran Associates, Inc.; 2014. p. 2672–80.
22. Isola P, Zhu J, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: 2017 IEEE conference on computer vision and pattern recognition, CVPR 2017, Honolulu, HI, USA, July 21–26, 2017; 2017. p. 5967–76.
23. Zhu J, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE international conference on computer vision, ICCV 2017, Venice, Italy, October 22–29, 2017; 2017. p. 2242–51.
24. Welander P, Karlsson S, Eklund A. Generative adversarial networks for image-to-image translation on multi-contrast MR images—a comparison of CycleGAN and UNIT; 2018. arXiv:1806.07777.
25. Liu M-Y, Breuel T, Kautz J. Unsupervised image-to-image translation networks. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R, editors. Advances in neural information processing systems 30. Red Hook: Curran Associates, Inc.; 2017. p. 700–8.
26. Gatys LA, Ecker AS, Bethge M. Image style transfer using convolutional neural networks. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR); June 2016. p. 2414–23.
27. Zhu BO, Liu JZ, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature. 2018;555:487–92.
28. Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville AC. Improved training of Wasserstein GANs. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R, editors. Advances in neural information processing systems 30. Red Hook: Curran Associates, Inc.; 2017. p. 5767–77.
29. Karras T, Aila T, Laine S, Lehtinen J. Progressive growing of GANs for improved quality, stability, and variation. In: 6th international conference on learning representations, ICLR 2018, Vancouver, BC, Canada, April 30–May 3, 2018, Conference track proceedings; 2018.
30. Miyato T, Kataoka T, Koyama M, Yoshida Y. Spectral normalization for generative adversarial networks. In: 6th international conference on learning representations, ICLR 2018, Vancouver, BC, Canada, April 30–May 3, 2018, Conference track proceedings; 2018.
32.
33. Yang Q, Yan P, Zhang Y, Yu H, Shi Y, Mou X, Kalra MK, Zhang Y, Sun L, Wang G. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss. IEEE Trans Med Imaging. 2018;37:1348–57.
34. You C, Yang Q, Shan H, Gjesteby L, Li G, Ju S, Zhang Z, Zhao Z, Zhang Y, Cong W, Wang G. Structurally-sensitive multi-scale deep neural network for low-dose CT denoising. IEEE Access. 2018;6:41839–55.
35.
36. Kang E, Koo HJ, Yang DH, Seo JB, Ye JC. Cycle-consistent adversarial denoising network for multiphase coronary CT angiography. Med Phys. 2019;46:550–62.
37. Timofte R, Smet VD, Gool LV. Anchored neighborhood regression for fast example-based super-resolution. In: 2013 IEEE international conference on computer vision; 2013. p. 1920–7.
38. Yang J, Wright JN, Huang TS, Ma Y. Image super-resolution as sparse representation of raw image patches. In: 2008 IEEE conference on computer vision and pattern recognition; 2008. p. 1–8.
39. Bevilacqua M, Roumy A, Guillemot C, Alberi-Morel M-L. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: BMVC; 2012.
40. Chang H, Yeung D-Y, Xiong Y. Super-resolution through neighbor embedding. In: Proceedings of the 2004 IEEE computer society conference on computer vision and pattern recognition, CVPR 2004, vol 1; 2004.
41. Umehara K, Ota J, Ishida T. Super-resolution imaging of mammograms based on the super-resolution convolutional neural network. Open J Med Imaging. 2017;7:180–95.
42. Umehara K, Ota J, Ishida T. Application of super-resolution convolutional neural network for enhancing image resolution in chest CT. J Digit Imaging. 2018;31(4):441–50.
43. Plenge E, Poot DHJ, Bernsen M, Kotek G, Houston G, Wielopolski P, van der Weerd L, Niessen WJ, Meijering E. Super-resolution methods in MRI: can they improve the trade-off between resolution, signal-to-noise ratio, and acquisition time? Magn Reson Med. 2012;68:1983–93.
44. Ledig C, Theis L, Huszar F, Caballero J, Cunningham A, Acosta A, Aitken AP, Tejani A, Totz J, Wang Z, Shi W. Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR. IEEE Computer Society; 2017. p. 105–14.
45. Sánchez I, Vilaplana V. Brain MRI super-resolution using generative adversarial networks. In: International conference on medical imaging with deep learning, Amsterdam, The Netherlands; 2018.
46. Chuquicusma MJM, Hussein S, Burt JR, Bagci U. How to fool radiologists with generative adversarial networks? A visual Turing test for lung cancer diagnosis. In: 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018); 2018. p. 240–4.
47. Frid-Adar M, Diamant I, Klang E, Amitai M, Goldberger J, Greenspan H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing. 2018;321:321–31.
48. Bermudez C, Plassard AJ, Davis LT, Newton AT, Resnick SM, Landman BA. Learning implicit brain MRI manifolds with deep learning. In: Proceedings of SPIE—the international society for optical engineering, vol 10574; 2018.
49. Madani A, Moradi M, Karargyris A, Syeda-Mahmood T. Chest x-ray generation and data augmentation for cardiovascular abnormality classification. In: Proceedings of SPIE 10574, Medical Imaging 2018: Image Processing, 105741M; 2018.
50. Korkinof D, Rijken T, O'Neill M, Yearsley J, Harvey H, Glocker B. High-resolution mammogram synthesis using progressive generative adversarial networks; 2018. arXiv:1807.03401.
51. Wolterink JM, Dinkla AM, Savenije MHF, Seevinck PR, van den Berg CAT, Išgum I. Deep MR to CT synthesis using unpaired data. In: Tsaftaris SA, Gooya A, Frangi AF, Prince JL, editors. Simulation and synthesis in medical imaging. Cham: Springer International Publishing; 2017. p. 14–23.
52. Hiasa Y, Otake Y, Takao M, Matsuoka T, Takashima K, Carass A, Prince J, Sugano N, Sato Y. Cross-modality image synthesis from unpaired data using CycleGAN: effects of gradient consistency loss and training data size. In: Goksel O, Oguz I, Gooya A, Burgos N, editors. Simulation and synthesis in medical imaging—third international workshop, SASHIMI 2018, held in conjunction with MICCAI 2018, Proceedings. Lecture Notes in Computer Science. Berlin: Springer; 2018. p. 31–41.
53. Zhang Z, Yang L, Zheng Y. Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network. In: 2018 IEEE/CVF conference on computer vision and pattern recognition; 2018. p. 9242–51.
54. Wu E, Wu K, Cox D, Lotter W. Conditional infilling GANs for data augmentation in mammogram classification. In: Stoyanov D, et al., editors. Image analysis for moving organ, breast, and thoracic images. RAMBO 2018, BIA 2018, TIA 2018. Lecture Notes in Computer Science, vol 11040. Cham: Springer; 2018. p. 98–106.
55. Mok TCW, Chung ACS. Learning data augmentation for brain tumor segmentation with coarse-to-fine generative adversarial networks. In: Crimi A, Bakas S, Kuijf H, Keyvan F, Reyes M, van Walsum T, editors. Brainlesion: glioma, multiple sclerosis, stroke and traumatic brain injuries. Cham: Springer International Publishing; 2019. p. 70–80.
56. Frid-Adar M, Klang E, Amitai M, Goldberger J, Greenspan H. Synthetic data augmentation using GAN for improved liver lesion classification. In: 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018); 2018. p. 289–93.
57. Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT image from MRI data using 3D fully convolutional networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, Papa JP, Nascimento JC, Loog M, Lu Z, Cardoso JS, Cornebise J, editors. Deep learning and data labeling for medical applications. Cham: Springer International Publishing; 2016. p. 170–8.
58. Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys. 2017;44:1408–19.
59. Kida S, Nakamoto T, Nakano M, Nawa K, Haga A, Kotoku J, Yamashita H, Nakagawa K. Cone beam computed tomography image quality improvement using a deep convolutional neural network. Cureus. 2018;10:e2548.
60. Ben-Cohen A, Klang E, Raskin SP, Soffer S, Ben-Haim S, Konen E, Amitai MM, Greenspan H. Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection. Eng Appl Artif Intell. 2019;78:186–94.
61. Kida S, Kaji S, Nawa K, Imae T, Nakamoto T, Ozaki S, Ohta T, Nozawa Y, Nakagawa K. Cone-beam CT to planning CT synthesis using generative adversarial networks; 2019. arXiv:1901.05773.
62. Rick Chang JH, Li C-L, Poczos B, Vijaya Kumar BVK, Sankaranarayanan AC. One network to solve them all—solving linear inverse problems using deep projection models. In: The IEEE international conference on computer vision (ICCV); Oct 2017.
63. Ulyanov D, Vedaldi A, Lempitsky VS. Deep image prior. In: Proceedings of CVPR 2018. IEEE Computer Society; 2018. p. 9446–54.
64. Adler J, Öktem O. Solving ill-posed inverse problems using iterative deep neural networks. Inverse Probl. 2017;33:124007.
65. Lucas A, Iliadis M, Molina R, Katsaggelos AK. Using deep neural networks for inverse problems in imaging: beyond analytical methods. IEEE Signal Process Mag. 2018;35:20–36.
66. Tokui S, Oono K, Hido S, Clayton J. Chainer: a next-generation open source framework for deep learning. In: Proceedings of workshop on machine learning systems (LearningSys) in the twenty-ninth annual conference on neural information processing systems (NIPS); 2015.
68. Tanno R, Worrall DE, Ghosh A, Kaden E, Sotiropoulos SN, Criminisi A, Alexander DC. Bayesian image quality transfer with CNNs: exploring uncertainty in dMRI super-resolution. In: Proceedings of medical image computing and computer assisted intervention—MICCAI 2017, Quebec City, QC, Canada, September 11–13; 2017. p. 611–9.
Metadata
Title
Overview of image-to-image translation by use of deep neural networks: denoising, super-resolution, modality conversion, and reconstruction in medical imaging
Authors
Shizuo Kaji
Satoshi Kida
Publication date
20.06.2019
Publisher
Springer Singapore
Published in
Radiological Physics and Technology / Issue 3/2019
Print ISSN: 1865-0333
Electronic ISSN: 1865-0341
DOI
https://doi.org/10.1007/s12194-019-00520-y
