Deep Learning-based Utility Pole Safety Assessment from Visual Data
DOI: http://dx.doi.org/10.62527/joiv.8.4.3039
Abstract
Utility poles are crucial infrastructure components, yet efficiently assessing their safety and verifying that they adhere to clearance guidelines, which specify the minimum distance between a pole and any surrounding objects, remains a challenge: the current manual inspection process is time-consuming, costly, and often subjective. This work proposes an automated deep learning-based solution for detecting utility poles and measuring clearance distance. The biggest challenge was the lack of a comprehensive pole dataset; we therefore collected a dataset of utility poles in varied backgrounds, environments, and conditions, and compared and applied data augmentation techniques to compensate for its limited size. The proposed approach consists of two main stages: (1) pole detection and differentiation and (2) pole distance measurement. In the first stage, multiple object detection models are compared on our utility pole dataset; the output of the best-performing model is then used to estimate the distance between the two detected pole objects. The results show that our pipeline with the YOLOv8 model outperforms SSD, achieving 83% accuracy in classifying pole compliance, and that the system can detect and estimate clearance violations accurately even with limited data. The success of the pipeline opens opportunities for future research: obtaining depth information from additional sensors or deep learning models could enhance the detection module, and scaling the approach to large utility pole networks while retaining real-time performance could lead to improved autonomous infrastructure maintenance.
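As a rough illustration of the two-stage pipeline described above, the sketch below runs a YOLOv8 detector (via the ultralytics package) and estimates the clearance as the Euclidean distance between the centres of the two highest-confidence detected boxes, converted to metres by an assumed calibration factor. The weight file name, the pixels-per-metre factor, the clearance threshold, and the choice of the two top detections are illustrative assumptions, not values or steps taken from the paper.

# Minimal sketch of the two-stage pipeline: YOLOv8 detection followed by a
# clearance check. Weight file, calibration factor, and threshold below are
# assumed for illustration only.
import math
from ultralytics import YOLO

PIXELS_PER_METRE = 40.0   # assumed calibration factor (image-dependent)
MIN_CLEARANCE_M = 1.0     # assumed minimum clearance from the guidelines

def box_centre(xyxy):
    # Centre of an [x1, y1, x2, y2] bounding box.
    x1, y1, x2, y2 = xyxy
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def check_clearance(image_path, weights="yolov8n_pole.pt"):
    # Stage 1: detect pole objects in the image.
    model = YOLO(weights)
    boxes = model(image_path)[0].boxes
    if len(boxes) < 2:
        raise ValueError("Need at least two detections to measure clearance")

    # Take the two highest-confidence detections as the pole pair (assumption).
    top2 = boxes.conf.argsort(descending=True)[:2]
    cx1, cy1 = box_centre(boxes.xyxy[top2[0]].tolist())
    cx2, cy2 = box_centre(boxes.xyxy[top2[1]].tolist())

    # Stage 2: Euclidean pixel distance converted to metres, compared
    # against the assumed minimum clearance.
    dist_m = math.hypot(cx2 - cx1, cy2 - cy1) / PIXELS_PER_METRE
    return dist_m >= MIN_CLEARANCE_M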