Boosting Vehicle Classification with Augmentation Techniques across Multiple YOLO Versions

Shao Xian Tan - Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia
Jia You Ong - Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia
Kah Ong Michael Goh - Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia
Connie Tee - Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia


DOI: http://dx.doi.org/10.62527/joiv.8.1.2313

Abstract


In recent years, computer vision has seen a surge of applications across various domains, including product and quality inspection, automated surveillance, and robotics. This study proposes techniques to enhance vehicle detection and classification using augmentation methods built on the YOLO (You Only Look Once) network. The primary objective of the trained model is to provide a local vehicle detection system for Malaysia that can detect vehicles manufactured in Malaysia, adapt to the country's specific environmental factors, and accommodate the varying lighting conditions prevalent there. The dataset used to develop and evaluate the proposed system was provided by a highway company, whose surveillance camera captured a comprehensive top-down view of the highway. Rigorous manual annotation was employed to ensure accurate labels, and various image augmentation techniques were applied to increase the dataset's diversity and improve the system's robustness. Experiments were conducted with different versions of the YOLO network (YOLOv5, YOLOv6, YOLOv7, and YOLOv8), each under varying hyperparameter settings, to identify the optimal configuration for the given dataset. The experimental results demonstrated the superiority of YOLOv8 over the other YOLO versions, achieving an impressive mean average precision of 97.9% for vehicle detection. Moreover, data augmentation effectively mitigated overfitting and data imbalance while adding diverse perspectives to the dataset. Future research can focus on optimizing computational efficiency for real-time applications and large-scale deployments.
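As a rough illustration of the augmentation approach the abstract describes (the paper does not publish its code, so the function names below are illustrative), the following sketch shows two typical augmentations for a YOLO-style detection dataset: a horizontal flip that keeps YOLO-format bounding boxes consistent with the mirrored image, and a brightness jitter that mimics varying lighting conditions.

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an image and its YOLO-format boxes.

    Each box is (class_id, x_center, y_center, width, height), with all
    coordinates normalized to [0, 1] as in YOLO label files.
    """
    flipped = image[:, ::-1, :].copy()  # mirror the width axis
    # Only the x-center changes under a horizontal flip: x' = 1 - x.
    mirrored = [(c, 1.0 - xc, yc, w, h) for (c, xc, yc, w, h) in boxes]
    return flipped, mirrored

def jitter_brightness(image, factor):
    """Scale pixel intensities and clip to the valid uint8 range."""
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

# Minimal demo on a dummy frame with one labeled vehicle.
frame = np.full((480, 640, 3), 100, dtype=np.uint8)
labels = [(0, 0.25, 0.5, 0.1, 0.2)]
flipped_frame, flipped_labels = hflip_with_boxes(frame, labels)
print(flipped_labels[0][1])  # x-center mirrors from 0.25 to 0.75
darker = jitter_brightness(frame, 0.5)
```

In practice such transforms are applied on the fly during training (frameworks such as Ultralytics YOLO expose flip and HSV-jitter hyperparameters for this), but the geometry shown here, mirroring the image while remapping box centers, is the core idea.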

Keywords


Vehicle detection; vehicle classification; object detection; YOLO; computer vision


References


P. K. Bhaskar and S.-P. Yong, “Image processing based vehicle detection and tracking method,” in 2014 International Conference on Computer and Information Sciences (ICCOINS), IEEE, Jun. 2014, pp. 1–5. doi: 10.1109/ICCOINS.2014.6868357.

O. O. Khalifa, S. Khan, M. R. Islam, and A. Suleiman, “Malaysia Vehicle License Plate Recognition,” The International Arab Journal of Information Technology, vol. 4, no. 4, pp. 359–364, 2007, Accessed: Jan. 23, 2024. [Online]. Available: https://iajit.org/PDF/vol.4,no.4/10-Othman.pdf

Cui, Wang, Wang, Liu, Yuan, and Wang, “Preceding Vehicle Detection Using Faster R-CNN Based on Speed Classification Random Anchor and Q-Square Penalty Coefficient,” Electronics (Basel), vol. 8, no. 9, p. 1024, 2019, doi: 10.3390/electronics8091024.

Z. Rahman, A. M. Ami, and M. A. Ullah, “A Real-Time Wrong-Way Vehicle Detection Based on YOLO and Centroid Tracking,” in 2020 IEEE Region 10 Symposium (TENSYMP), IEEE, 2020, pp. 916–920. doi: 10.1109/TENSYMP50017.2020.9230463.

J. J. Ng, K. O. M. Goh, and C. Tee, “Traffic Impact Assessment System using Yolov5 and ByteTrack,” Journal of Informatics and Web Engineering, vol. 2, no. 2, pp. 168–188, 2023, doi: 10.33093/jiwe.2023.2.2.13.

D. Zhao, Y. Chen, and L. Lv, “Deep Reinforcement Learning With Visual Attention for Vehicle Classification,” IEEE Trans Cogn Dev Syst, vol. 9, no. 4, pp. 356–367, 2017, doi: 10.1109/tcds.2016.2614675.

R. Cheng, RuiJingXiaoQu, SongMen, and WenLing, “A survey: Comparison between Convolutional Neural Network and YOLO in image identification,” J Phys Conf Ser, vol. 1453, no. 1, p. 012139, 2020, doi: 10.1088/1742-6596/1453/1/012139.

F. Joiya, “Object Detection: YOLO vs Faster R-CNN,” International Research Journal of Modernization in Engineering Technology and Science, pp. 1911–1915, 2022, doi: 10.56726/irjmets30226.

V. Viswanatha, R. K. Chandana, and A. C. Ramachandra, “Real Time Object Detection System with YOLO and CNN Models: A Review,” Journal of Xi'an University of Architecture & Technology, pp. 144–151, 2022, doi: 10.37896/JXAT14.07/315415.

S. Srivastava, A. V. Divekar, C. Anilkumar, I. Naik, V. Kulkarni, and V. Pattabiraman, “Comparative analysis of deep learning image detection algorithms,” J Big Data, vol. 8, no. 1, 2021, doi: 10.1186/s40537-021-00434-w.

R. M. Alamgir et al., “Performance Analysis of YOLO-based Architectures for Vehicle Detection from Traffic Images in Bangladesh,” in 2022 25th International Conference on Computer and Information Technology (ICCIT), IEEE, Dec. 2022, pp. 982–987. doi: 10.1109/ICCIT57492.2022.10055758.

M. A. Bin Zuraimi and F. H. Kamaru Zaman, “Vehicle Detection and Tracking using YOLO and DeepSORT,” in 2021 IEEE 11th IEEE Symposium on Computer Applications & Industrial Electronics (ISCAIE), IEEE, Apr. 2021, pp. 23–29. doi: 10.1109/ISCAIE51753.2021.9431784.

B. Neupane, T. Horanont, and J. Aryal, “Real-Time Vehicle Classification and Tracking Using a Transfer Learning-Improved Deep Learning Network,” Sensors (Basel), vol. 22, no. 10, p. 3813, May 2022, doi: 10.3390/s22103813.

G. S. A. Mohammed, N. Mat Diah, Z. Ibrahim, and N. Jamil, “Vehicle detection and classification using three variations of you only look once algorithm,” International Journal of Reconfigurable and Embedded Systems (IJRES), vol. 12, no. 3, p. 442, 2023, doi: 10.11591/ijres.v12.i3.pp442-452.

K. Liu, H. Tang, S. He, Q. Yu, Y. Xiong, and N. Wang, “Performance Validation of Yolo Variants for Object Detection,” in Proceedings of the 2021 International Conference on Bioinformatics and Intelligent Computing, New York, NY, USA: ACM, Jan. 2021, pp. 239–243. doi: 10.1145/3448748.3448786.

N. Aloufi, A. Alnori, V. Thayananthan, and A. Basuhail, “Object Detection Performance Evaluation for Autonomous Vehicles in Sandy Weather Environments,” Applied Sciences, vol. 13, no. 18, p. 10249, 2023, doi: 10.3390/app131810249.

J. Terven, D.-M. Córdova-Esparza, and J.-A. Romero-González, “A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS,” Mach Learn Knowl Extr, vol. 5, no. 4, pp. 1680–1716, 2023, doi: 10.3390/make5040083.

T. Diwan, G. Anirudh, and J. V Tembhurne, “Object detection using YOLO: challenges, architectural successors, datasets and applications,” Multimed Tools Appl, vol. 82, no. 6, pp. 9243–9275, 2023, doi: 10.1007/s11042-022-13644-y.

T. S. Gunawan, I. M. M. Ismail, M. Kartiwi, and N. Ismail, “Performance Comparison of Various YOLO Architectures on Object Detection of UAV Images,” in 2022 IEEE 8th International Conference on Smart Instrumentation, Measurement and Applications (ICSIMA), IEEE, Sep. 2022, pp. 257–261. doi: 10.1109/ICSIMA55652.2022.9928938.

I. S. Gillani et al., “Yolov5, Yolo-x, Yolo-r, Yolov7 Performance Comparison: A Survey,” in Artificial Intelligence and Fuzzy Logic System, Academy and Industry Research Collaboration Center (AIRCC), Sep. 2022, pp. 17–28. doi: 10.5121/csit.2022.121602.

Q. M. Chung, T. D. Le, T. V. Dang, N. D. Vo, T. V. Nguyen, and K. Nguyen, “Data Augmentation Analysis in Vehicle Detection from Aerial Videos,” in 2020 RIVF International Conference on Computing and Communication Technologies (RIVF), IEEE, Oct. 2020, pp. 1–3. doi: 10.1109/RIVF48685.2020.9140740.

S.-Y. Lin and H.-Y. Li, “Integrated Circuit Board Object Detection and Image Augmentation Fusion Model Based on YOLO,” Front Neurorobot, vol. 15, p. 762702, Nov. 2021, doi: 10.3389/fnbot.2021.762702.

G. Dai, L. Hu, and J. Fan, “DA-ActNN-YOLOV5: Hybrid YOLO v5 Model with Data Augmentation and Activation of Compression Mechanism for Potato Disease Identification,” Comput Intell Neurosci, vol. 2022, p. 6114061, Sep. 2022, doi: 10.1155/2022/6114061.

J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Jun. 2016, pp. 779–788. doi: 10.1109/CVPR.2016.91.

A. H. Eichelberger, E. R. Teoh, and A. T. McCartt, “Vehicle choices for teenage drivers: A national survey of U.S. parents,” J Safety Res, vol. 55, pp. 1–5, 2015, doi: 10.1016/j.jsr.2015.07.006.

S. C. Wong, A. Gatt, V. Stamatescu, and M. D. McDonnell, “Understanding Data Augmentation for Classification: When to Warp?,” in 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Nov. 2016, pp. 1–6. doi: 10.1109/DICTA.2016.7797091.

G. Jocher et al., “ultralytics/yolov5: Initial Release,” Zenodo, Jun. 2020, Accessed: Jan. 23, 2024. [Online]. Available: https://doi.org/10.5281/zenodo.4679653

C. Li et al., “YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications,” arXiv preprint arXiv:2209.02976, Sep. 2022, Accessed: Jan. 23, 2024. [Online]. Available: https://doi.org/10.48550/arXiv.2209.02976

C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv preprint arXiv:2207.02696, Jul. 2022, Accessed: Jan. 23, 2024. [Online]. Available: https://doi.org/10.48550/arXiv.2207.02696

G. Jocher, A. Chaurasia, and J. Qiu, “YOLO by Ultralytics (Version 8.0.0). Computer software.” Accessed: Jan. 23, 2024. [Online]. Available: https://github.com/ultralytics/ultralytics