Published in: Graefe's Archive for Clinical and Experimental Ophthalmology 8/2020

02.05.2020 | Retinal Disorders

CycleGAN-based deep learning technique for artifact reduction in fundus photography

By: Tae Keun Yoo, Joon Yul Choi, Hong Kyu Kim


Abstract

Purpose

A low-quality fundus photograph with artifacts may lead to a false diagnosis. Recently, the cycle-consistent generative adversarial network (CycleGAN) was introduced as a tool to translate images without matched image pairs. Herein, we present a deep learning technique that automatically removes artifacts from fundus photographs using a CycleGAN model.

Methods

This study included a total of 2206 anonymized retinal images: 1146 with artifacts and 1060 without. We applied the CycleGAN model to color fundus photographs with a pixel resolution of 256 × 256 × 3. To evaluate the CycleGAN on an independent dataset, we randomly divided the data into training (90%) and test (10%) sets. Additionally, we adopted automated quality evaluation (AQE) to assess retinal image quality.
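The random 90%/10% partition described above can be sketched as follows; the file names are hypothetical placeholders, not the study's actual data:

```python
import random

def split_train_test(image_paths, test_fraction=0.1, seed=42):
    """Randomly partition image paths into training and test sets,
    mirroring the 90%/10% split described in the Methods."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # fixed seed for reproducibility
    n_test = round(len(paths) * test_fraction)
    return paths[n_test:], paths[:n_test]

# 2206 images -> 1985 training / 221 test (hypothetical file names).
images = [f"fundus_{i:04d}.png" for i in range(2206)]
train, test = split_train_test(images)
print(len(train), len(test))  # 1985 221
```

Splitting by whole images before training keeps the test set fully unseen, which is what makes the reported AQE improvements an independent check rather than a measure of memorization.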

Results

We observed that artifacts such as overall haze, edge haze, lashes, arcs, and uneven illumination were successfully reduced by the CycleGAN in the generated images, while the main retinal information was essentially retained. Furthermore, most generated images exhibited improved AQE grades compared with the original images with artifacts.

Conclusion

We conclude that the CycleGAN technique can effectively reduce artifacts and improve the quality of fundus photographs, which may help clinicians analyze low-quality fundus photographs. Future studies should improve the quality and resolution of the generated images to provide more detailed fundus photography.
Metadata
Title: CycleGAN-based deep learning technique for artifact reduction in fundus photography
Authors: Tae Keun Yoo, Joon Yul Choi, Hong Kyu Kim
Publication date: 02.05.2020
Publisher: Springer Berlin Heidelberg
Published in: Graefe's Archive for Clinical and Experimental Ophthalmology / Issue 8/2020
Print ISSN: 0721-832X
Electronic ISSN: 1435-702X
DOI: https://doi.org/10.1007/s00417-020-04709-5
