Image Prediction of Exact Science and Social Science Learning Content with Convolutional Neural Network

Mambang - Sari Mulia University, Banjarmasin, Indonesia
Finki Dona Marleny - University of Muhammadiyah Banjarmasin, Indonesia


Learning content can be identified through text, images, and videos. This study aims to predict the category of learning content hosted on YouTube. The images used are drawn from learning content in the exact sciences, such as mathematics, and in the social sciences, such as culture. Prediction is performed by building a convolutional neural network (CNN) model. The dataset was collected from learning content found on YouTube. The first assessment was performed with the RMSprop optimizer; a learning rate of 0.001 was used for all optimizers. Several other optimizers were also tested: Adam, Nadam, SGD, Adamax, Adadelta, Adagrad, and Ftrl. Training the CNN model on the image dataset with these optimizers yielded the highest accuracy with RMSprop, Adam, and Adamax. The experiments in this study still have shortcomings, such as the absence of a momentum component, which is used to improve the training speed and quality of neural networks. In later studies, a CNN model with a momentum component can be developed to obtain better training results and accuracy, and all optimizers available in Keras and TensorFlow can be used for comparison. This study concludes that images of learning content on YouTube can be modeled and classified. Further research can add image variables and a momentum component when testing CNN models.
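The optimizer comparison described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the network architecture, input size (64×64 RGB), and random stand-in data are all assumptions, since the abstract specifies only the optimizer names, the 0.001 learning rate, and the binary exact/social classification task.

```python
# Hypothetical sketch of a binary CNN classifier compared across the Keras
# optimizers named in the abstract, all at the stated learning rate of 0.001.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(64, 64, 3)):
    # Small convolutional stack; exact vs. social science -> one sigmoid unit.
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

# All eight optimizers tested in the study, with the shared learning rate.
# For the momentum component proposed as future work, SGD could instead be
# created as keras.optimizers.SGD(learning_rate=0.001, momentum=0.9).
optimizers = {
    "RMSprop": keras.optimizers.RMSprop(learning_rate=0.001),
    "Adam": keras.optimizers.Adam(learning_rate=0.001),
    "Nadam": keras.optimizers.Nadam(learning_rate=0.001),
    "SGD": keras.optimizers.SGD(learning_rate=0.001),
    "Adamax": keras.optimizers.Adamax(learning_rate=0.001),
    "Adadelta": keras.optimizers.Adadelta(learning_rate=0.001),
    "Adagrad": keras.optimizers.Adagrad(learning_rate=0.001),
    "Ftrl": keras.optimizers.Ftrl(learning_rate=0.001),
}

if __name__ == "__main__":
    # Tiny random stand-in for the YouTube image dataset.
    x = np.random.rand(8, 64, 64, 3).astype("float32")
    y = np.random.randint(0, 2, size=(8, 1))
    for name, opt in optimizers.items():
        model = build_model()
        model.compile(optimizer=opt, loss="binary_crossentropy",
                      metrics=["accuracy"])
        history = model.fit(x, y, epochs=1, verbose=0)
        print(name, history.history["loss"][0])
```

Each optimizer is given a freshly initialized copy of the same network so that accuracy differences reflect the optimizer rather than shared training state.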


Keywords: Image; exact and non-exact; learning content; CNN; deep learning.
