07.02.2022

Lesion Segmentation in Gastroscopic Images Using Generative Adversarial Networks

Authors: Yaru Sun, Yunqi Li, Pengfei Wang, Dongzhi He, Zhiqiang Wang

Published in: Journal of Imaging Informatics in Medicine | Issue 3/2022
Abstract

The segmentation of lesion regions in gastroscopic images is highly important for the detection and treatment of early gastric cancer. This paper proposes a novel approach to gastric lesion segmentation based on generative adversarial training. First, a segmentation network is designed to generate accurate segmentation masks for gastric lesions. The network adds residual blocks to the encoding and decoding paths of U-Net, and cascaded dilated convolutions at the U-Net bottleneck. The residual connections promote information propagation, while the dilated convolutions integrate multi-scale context information. Meanwhile, a discriminator is trained to distinguish generated segmentation masks from real ones. The discriminator is a Markovian discriminator (PatchGAN), which classifies each N×N patch of the image as real or fake. During training, the adversarial mechanism iteratively optimizes the generator and the discriminator until they converge together. The experimental results show that the Dice score, accuracy, and recall are 86.6%, 91.9%, and 87.3%, respectively. These metrics are significantly better than those of existing models, which demonstrates the effectiveness of the method and its suitability for clinical diagnosis and treatment.
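The abstract mentions cascaded dilated convolutions at the U-Net bottleneck for multi-scale context. As an illustration of the underlying operation (not the paper's actual implementation; the function name and shapes are assumptions for this sketch), a dilated 2D convolution inserts gaps between kernel taps, enlarging the receptive field without adding parameters:

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid 2D convolution with a dilation factor.

    With dilation d, a k x k kernel covers an effective window of
    d*(k-1)+1 pixels per side, so stacking convolutions with growing
    dilation rates aggregates context at multiple scales.
    """
    kh, kw = kernel.shape
    # Effective window size after inserting (dilation - 1) gaps.
    eh = dilation * (kh - 1) + 1
    ew = dilation * (kw - 1) + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input at dilated positions, then correlate.
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

For example, a 3×3 kernel with dilation 2 covers a 5×5 window, so on a 7×7 input the valid output is 3×3, while the parameter count stays at nine.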
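The PatchGAN discriminator described in the abstract emits one real/fake score per N×N receptive-field patch rather than a single scalar per image, and the adversarial loss averages over that score map. A minimal sketch of this patch-wise loss, assuming sigmoid outputs and binary cross-entropy (a common choice; the paper's exact loss formulation is not given in the abstract):

```python
import numpy as np

def patchgan_bce(score_map, is_real):
    """Binary cross-entropy averaged over a PatchGAN score map.

    score_map: array of per-patch probabilities in (0, 1), one entry
    per N x N receptive-field patch of the input.
    is_real: whether the input was a real mask (target 1) or a
    generated mask (target 0).
    """
    eps = 1e-7  # guard against log(0)
    p = np.clip(np.asarray(score_map, dtype=float), eps, 1 - eps)
    target = 1.0 if is_real else 0.0
    losses = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return float(np.mean(losses))  # average over all patches
```

Averaging over patches means every local region of the mask contributes to the gradient, which is what pushes the generator toward locally sharp boundaries.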
Metadata
Title
Lesion Segmentation in Gastroscopic Images Using Generative Adversarial Networks
Authors
Yaru Sun
Yunqi Li
Pengfei Wang
Dongzhi He
Zhiqiang Wang
Publication date
07.02.2022
Publisher
Springer International Publishing
Published in
Journal of Imaging Informatics in Medicine / Issue 3/2022
Print ISSN: 2948-2925
Electronic ISSN: 2948-2933
DOI
https://doi.org/10.1007/s10278-022-00591-1