
11.09.2024 | Original Paper

A Novel Network for Low-Dose CT Denoising Based on Dual-Branch Structure and Multi-Scale Residual Attention

Authors: Ju Zhang, Lieli Ye, Weiwei Gong, Mingyang Chen, Guangyu Liu, Yun Cheng

Published in: Journal of Imaging Informatics in Medicine


Abstract

Deep learning-based denoising of low-dose CT images has received great attention from both academic researchers and physicians in recent years and has shown important application value in clinical practice. In this work, a novel dual-branch, multi-scale residual attention-based network for low-dose CT image denoising is proposed. It adopts a dual-branch framework to extract and fuse image features at shallow and deep levels respectively, recovering image texture and structure information as fully as possible. An adaptive dynamic convolution block (ADCB) is proposed in the local information extraction layer. It effectively extracts detailed information from low-dose CT images and enables the network to better capture local details and texture features, thereby improving the denoising effect and image quality. A multi-scale edge enhancement attention block (MEAB) is proposed in the global information extraction layer to perform feature fusion through dilated convolution and a multi-dimensional attention mechanism. A multi-scale residual convolution block (MRCB) is proposed to integrate feature information and improve the robustness and generalization of the network. To demonstrate the effectiveness of the method, extensive comparison experiments are conducted and performance is evaluated on two publicly available datasets. The model achieves a PSNR of 29.3004, an SSIM of 0.8659, and an RMSE of 14.0284 on the AAPM-Mayo dataset. On the Qin_LUNG_CT dataset, it is evaluated at four noise levels (σ = 15, 30, 45, and 60) and achieves the best results. Ablation studies show that the proposed ADCB, MEAB, and MRCB modules significantly improve denoising performance. The source code is available at https://github.com/Ye111-cmd/LDMANet.
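For context on the reported figures, PSNR and RMSE are standard fidelity metrics computed between the denoised output and the normal-dose reference image. The following is a minimal sketch (not the authors' code) of how these two metrics are typically computed; the peak value `data_range` is an assumption that depends on how the CT intensities are scaled, which is why the same RMSE can correspond to different PSNR values:

```python
import numpy as np

def rmse(pred, target):
    """Root-mean-square error between two images of equal shape."""
    diff = pred.astype(np.float64) - target.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(pred, target, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = rmse(pred, target)
    if err == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(data_range / err)

# Toy example: a synthetic "clean" image and a noisy copy
rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))
noisy = clean + rng.normal(0, 14.0, size=clean.shape)
print(rmse(noisy, clean), psnr(noisy, clean))
```

Note that on a 0–255 scale an RMSE near 14 would give a PSNR of roughly 20·log10(255/14) ≈ 25.2 dB; the paper's reported pair (29.30 dB PSNR with 14.03 RMSE) implies a wider effective intensity range for the CT data than 255.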
Metadata
Title
A Novel Network for Low-Dose CT Denoising Based on Dual-Branch Structure and Multi-Scale Residual Attention
Authors
Ju Zhang
Lieli Ye
Weiwei Gong
Mingyang Chen
Guangyu Liu
Yun Cheng
Publication date
11.09.2024
Publisher
Springer International Publishing
Published in
Journal of Imaging Informatics in Medicine
Print ISSN: 2948-2925
Electronic ISSN: 2948-2933
DOI
https://doi.org/10.1007/s10278-024-01254-z
