Enhancing Land Management through U-Net Deep Learning: A Case Study on Climate-Related Land Degradation in Berembun Forest Reserve in Malaysia

Yee Jian Chew - Multimedia University, Jalan Ayer Keroh Lama, 75450, Melaka, Malaysia
Shih Yin Ooi - Multimedia University, Jalan Ayer Keroh Lama, 75450, Melaka, Malaysia
Sheriza Mohd-Razali - Universiti Putra Malaysia, Seri Kembangan 43400, Malaysia
Ying Han Pang - Multimedia University, Jalan Ayer Keroh Lama, 75450, Melaka, Malaysia
Zheng You Lim - Multimedia University, Jalan Ayer Keroh Lama, 75450, Melaka, Malaysia


DOI: http://dx.doi.org/10.62527/joiv.8.4.2948

Abstract


In the face of accelerating climate change, effective management of land resources demands innovative technological approaches. This study, conducted in the Berembun Forest Reserve, Jelebu, Malaysia, leverages advances in geospatial technology and machine learning to address the pressing issue of land degradation, focusing on forested areas vulnerable to landslides. Using high-resolution Unmanned Aerial Vehicle (UAV) imagery, a U-Net convolutional neural network is employed for the precise classification and early detection of landslide-induced land degradation. Through a systematic analysis of 15 high-quality UAV images of 5472 x 3647 pixels, each segmented into 256 x 256-pixel patches, the U-Net model demonstrated strong performance, achieving a mean Intersection-over-Union (IoU) of 0.9466. These findings underscore the model's potential to enhance land management practices by providing timely and cost-effective landslide detection. Adopting such deep learning techniques marks a pivotal shift towards more sustainable and resilient land management strategies in the era of climate change. This research demonstrates the practical application of machine learning in environmental monitoring and paves the way for future innovations. Directions for further research include integrating additional spectral bands, addressing environmental variability, and extending the approach to diverse landscapes to improve environmental monitoring, conservation, and resilience strategies. Developing automated frameworks for real-time data processing and model deployment could further advance the field, enabling more responsive and efficient land management practices.
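The workflow summarized above, tiling large UAV frames into 256 x 256-pixel patches and scoring the U-Net's binary landslide masks with mean IoU, can be illustrated with a short sketch. The snippet below is not the authors' implementation; it assumes the open-source patchify package and NumPy, and the image and mask arrays it operates on are hypothetical placeholders.

    # Minimal sketch of the patch-based evaluation pipeline described in the
    # abstract (an illustration, not the authors' code). Assumes the open-source
    # "patchify" package; `image`, `pred_masks`, and `true_masks` are
    # hypothetical inputs supplied by the caller.
    import numpy as np
    from patchify import patchify  # pip install patchify

    PATCH = 256  # patch size used in the study

    def to_patches(image: np.ndarray, size: int = PATCH) -> np.ndarray:
        """Split an H x W x 3 UAV frame into non-overlapping size x size patches."""
        h, w, _ = image.shape
        image = image[: h - h % size, : w - w % size]       # drop the ragged border
        patches = patchify(image, (size, size, 3), step=size)
        return patches.reshape(-1, size, size, 3)

    def mean_iou(pred_masks: np.ndarray, true_masks: np.ndarray, eps: float = 1e-7) -> float:
        """Mean Intersection-over-Union over a batch of binary masks (N x H x W)."""
        pred, truth = pred_masks.astype(bool), true_masks.astype(bool)
        inter = np.logical_and(pred, truth).sum(axis=(1, 2))
        union = np.logical_or(pred, truth).sum(axis=(1, 2))
        return float(np.mean((inter + eps) / (union + eps)))

In practice, the patches would be fed to a trained U-Net (for example, one built with a Keras-based segmentation library), and the predicted masks would be compared against manually labelled ground truth to obtain the reported mean IoU.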

Keywords


Machine learning; Climate change; Geospatial monitoring; Land degradation; UAV imagery; U-Net model; Malaysia

