Ultra-wide-field Fundus Image Synthesis Using Various GAN Models

Ara Ko - Jeju National University, 102 Jejudaehak ro, Jeju 63243, Republic of Korea
Jungwon Cho - Jeju National University, 102 Jejudaehak ro, Jeju 63243, Republic of Korea

DOI: http://dx.doi.org/10.30630/joiv.6.3.1256


Many people lose their sight to diabetic retinopathy, a disease that is dangerous because vision cannot be restored to its pre-onset state once the disease has begun. Patients typically undergo fundus imaging, which captures the retina, and doctors use the resulting images to determine whether disease is present. Conventional fundus images cover only a narrow field of view, making accurate diagnosis difficult; with technological advances, however, ultra-wide-field fundus images that show a wider area of the retina have emerged. In deep learning research, many studies still rely on conventional fundus images because new data are scarce: for newer modalities such as ultra-wide-field imaging, data have often been difficult to obtain, so deep learning research has not progressed properly. Studies using ultra-wide-field fundus images have been conducted with only hundreds to around ten thousand images, so their deep learning performance is inevitably inferior to that achieved with large-scale datasets. In this study, synthetic data were created from ultra-wide-field fundus images using various GAN models to address this problem. Among the models compared, BEGAN produced images most similar to the real images in both qualitative and quantitative evaluation. However, it fell into mode collapse, producing the same output even when given new inputs. Since mode collapse in BEGAN may appear depending on the amount and size of the data, further studies using BEGAN are needed.
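The boundary equilibrium mechanism that distinguishes BEGAN from the other GAN variants can be sketched briefly. In BEGAN, the discriminator is an autoencoder, and a balance coefficient k_t weighs its reconstruction loss on real images against its loss on generated images; the convergence measure M combines the two. The following is a minimal sketch of the published BEGAN update rules with plain floats standing in for per-batch loss averages (the numeric values in the usage are purely illustrative, not results from this study):

```python
def began_step(loss_real, loss_fake, k, gamma=0.5, lambda_k=0.001):
    """One BEGAN equilibrium update (Berthelot et al., 2017).

    loss_real -- autoencoder reconstruction loss L(x) on real images
    loss_fake -- reconstruction loss L(G(z)) on generated images
    k         -- current equilibrium coefficient k_t, kept in [0, 1]
    gamma     -- diversity ratio controlling E[L(G(z))] / E[L(x)]
    """
    d_loss = loss_real - k * loss_fake        # discriminator objective L_D
    g_loss = loss_fake                        # generator objective L_G
    balance = gamma * loss_real - loss_fake   # equilibrium deviation
    k_next = min(max(k + lambda_k * balance, 0.0), 1.0)
    m_global = loss_real + abs(balance)       # convergence measure M
    return d_loss, g_loss, k_next, m_global

# Illustrative step: real images reconstruct worse than fakes early on,
# so k grows slightly to push the discriminator toward the fakes.
d, g, k, m = began_step(loss_real=0.8, loss_fake=0.2, k=0.0)
```

A falling M indicates training progress, but M alone does not detect mode collapse: a generator that emits one image repeatedly can still reconstruct well, which is consistent with the collapse observed in this study.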


Keywords: GAN; generative adversarial networks; ultra-wide-field fundus image; diabetic retinopathy; deep learning.

