Published in: European Journal of Nuclear Medicine and Molecular Imaging 13/2019

29.06.2019 | Review Article

Next generation research applications for hybrid PET/MR and PET/CT imaging using deep learning

Author: Greg Zaharchuk


Abstract

Introduction

Recently, there have been significant advances in machine learning and artificial intelligence (AI) centered on imaging-based applications such as computer vision. In particular, the tremendous power of deep learning algorithms, primarily based on convolutional neural network strategies, is becoming increasingly apparent and has already had a direct impact on the fields of radiology and nuclear medicine. While most early applications of computer vision to radiological imaging have focused on classifying images into disease categories, these methods can also be used to improve image quality. Hybrid imaging approaches, such as PET/MRI and PET/CT, are ideal settings in which to apply these methods.
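
To illustrate the distinction between classification and image-quality applications, the minimal PyTorch sketch below (an illustrative example, not drawn from the article; the layer sizes and names are assumptions) sets up a small convolutional network for an image-to-image task: it takes a degraded image and returns an improved estimate rather than a class label.

```python
import torch
import torch.nn as nn


class SimpleImageToImageCNN(nn.Module):
    """Minimal convolutional encoder-decoder that predicts a residual
    correction to its input, a common pattern for image-quality tasks
    (denoising, artifact reduction) as opposed to classification."""

    def __init__(self, channels: int = 1, features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual learning: the network outputs a correction that is added
        # back to the degraded input rather than synthesizing the image anew.
        return x + self.net(x)


if __name__ == "__main__":
    model = SimpleImageToImageCNN()
    low_quality = torch.randn(1, 1, 128, 128)   # e.g., a noisy 2-D slice
    improved = model(low_quality)
    print(improved.shape)  # torch.Size([1, 1, 128, 128])
```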

Methods

This review will give an overview of how AI can be applied directly to improve PET image quality, and how the additional use of anatomic information from CT and MRI can lead to further benefits. For PET, these performance gains can be used to shorten scan times, improving patient comfort and reducing motion artifacts, or to lower radiotracer doses. They also open up possibilities for dual-tracer studies, more frequent follow-up examinations, and new imaging indications. How to assess image quality, and the potential effects of bias in training and testing sets, will also be discussed.
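
To make the hybrid-input idea concrete, the following sketch (hypothetical layer sizes and names, not code from the review) stacks a low-dose PET slice with co-registered MRI contrasts as input channels to a small CNN that estimates a standard-dose PET image, and computes a peak signal-to-noise ratio (PSNR) of the kind commonly used when assessing image quality against a reference.

```python
import torch
import torch.nn as nn


def psnr(prediction: torch.Tensor, reference: torch.Tensor) -> float:
    """Peak signal-to-noise ratio in dB, one common image-quality metric."""
    mse = torch.mean((prediction - reference) ** 2)
    peak = reference.max()
    return (20 * torch.log10(peak) - 10 * torch.log10(mse)).item()


class MultiModalPETNet(nn.Module):
    """Toy network: a low-dose PET slice plus anatomic MRI contrasts enter as
    separate channels; the output is a single-channel standard-dose PET estimate."""

    def __init__(self, in_channels: int = 3, features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    # Channel 0: low-dose PET slice; channels 1-2: co-registered T1- and T2-weighted MRI.
    low_dose_pet = torch.rand(1, 1, 128, 128)
    t1, t2 = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
    inputs = torch.cat([low_dose_pet, t1, t2], dim=1)

    model = MultiModalPETNet(in_channels=3)
    estimate = model(inputs)

    standard_dose_pet = torch.rand(1, 1, 128, 128)  # reference, for illustration only
    print(f"PSNR vs. reference: {psnr(estimate, standard_dose_pet):.1f} dB")
```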

Conclusion

Harnessing the power of these new technologies to extract maximal information from hybrid PET imaging will open up new vistas for both research and clinical applications, with associated benefits in patient care.
Metadata
Title
Next generation research applications for hybrid PET/MR and PET/CT imaging using deep learning
Author
Greg Zaharchuk
Publication date
29.06.2019
Publisher
Springer Berlin Heidelberg
Published in
European Journal of Nuclear Medicine and Molecular Imaging / Issue 13/2019
Print ISSN: 1619-7070
Electronic ISSN: 1619-7089
DOI
https://doi.org/10.1007/s00259-019-04374-9
