Intelligence Eye for Blinds and Visually Impaired by Using Region-Based Convolutional Neural Network (R-CNN)

Lee Ruo Yee - Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
Hazalila Kamaludin - Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
Noor Zuraidin Mohd Safar - Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
Norfaradilla Wahid - Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
Noryusliza Abdullah - Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
Dwiny Meidelfi - Department of Information Technology, Politeknik Negeri Padang, West Sumatera, Indonesia


DOI: http://dx.doi.org/10.30630/joiv.5.4.735

Abstract


Intelligence Eye is an Android-based mobile application developed to help blind and visually impaired users detect light and objects. Its object recognition module uses a Region-based Convolutional Neural Network (R-CNN) to recognize objects, and its light detection module provides vibration feedback according to the sensed light value. Voice guidance is provided in the application to guide users and announce the results of object recognition. TensorFlow Lite is used to train the neural network model for object recognition, in conjunction with Extensible Markup Language (XML) and Java as the programming languages in Android Studio. For future work, the functionality of the Intelligence Eye application can be enhanced by increasing the object detection capacity of the object recognition module, adding a menu setting for vibration intensity to the light detection module, and supporting multiple languages in the voice guidance.
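The light detection module described above turns a sensed light value into vibration feedback. As a minimal sketch of that idea in Java (the application's stated language), the mapping from ambient light to vibration duration might look like the following; the class name, method name, and lux thresholds are illustrative assumptions, not values taken from the application:

```java
// Hypothetical mapping from ambient light level (lux) to vibration duration (ms).
// Thresholds below are assumptions for illustration only.
public class LightFeedback {

    // Darker surroundings produce a longer vibration, so a blind or
    // visually impaired user can "feel" how dark the environment is.
    public static long vibrationMillis(float lux) {
        if (lux < 10f)   return 400L; // very dark
        if (lux < 100f)  return 200L; // dim indoor light
        if (lux < 1000f) return 100L; // normal indoor light
        return 0L;                    // bright: no vibration needed
    }

    public static void main(String[] args) {
        System.out.println(vibrationMillis(5f));    // very dark room
        System.out.println(vibrationMillis(5000f)); // daylight
    }
}
```

On a real Android device, the lux value would come from the ambient light sensor (`Sensor.TYPE_LIGHT` via `SensorManager`) and the returned duration would be passed to the system `Vibrator` service; the pure function above keeps that mapping testable in isolation.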

Keywords


Mobile application; Android; light detection; object recognition.

