Implementation of 2D LiDAR and Camera for Object and Distance Detection Based on ROS

Agus Mulyanto - Faculty of Engineering and Computer Science, Universitas Teknokrat Indonesia, Lampung, Indonesia
Rohmat Indra Borman - Faculty of Engineering and Computer Science, Universitas Teknokrat Indonesia, Lampung, Indonesia
Purwono Prasetyawan - Faculty of Engineering and Computer Science, Universitas Teknokrat Indonesia, Lampung, Indonesia
A Sumarudin - Informatics Department, Politeknik Negeri Indramayu, Indramayu, Indonesia



DOI: http://dx.doi.org/10.30630/joiv.4.4.466

Abstract


Advanced driver assistance systems (ADAS) address the problem of protecting people from vehicle collisions. The collision warning system is a crucial part of ADAS, guarding against accidents caused by fatigue, drowsiness, and other human errors. Multiple sensors, such as cameras, radar, and light detection and ranging (LiDAR), have been widely used in ADAS for environment perception. When fusing two such sensors, the relative orientation and translation between them must be taken into account. This paper discusses a real-time collision warning system that uses a 2D LiDAR and a camera for environment perception and estimates the distance (depth) and angle of obstacles. We propose a fusion of the two sensors, camera and 2D LiDAR, to obtain the distance and angle of an obstacle in front of the vehicle, implemented on an NVIDIA Jetson Nano using the Robot Operating System (ROS). This requires a calibration process between the camera and the 2D LiDAR, which is presented in Section III. Integration and testing are then carried out using static and dynamic scenarios in the relevant environment. For fusion, we implement a conversion from degrees to coordinates. Based on the experiments, we obtained an average distance error of 0.197 meters.
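
The degree-to-coordinate conversion used in the fusion step can be illustrated with a small ROS node. The sketch below is a minimal illustration, not the authors' implementation: it assumes the default /scan topic published by the rplidar_ros driver cited in the references and a hypothetical camera horizontal field of view of 62.2 degrees. It converts each 2D LiDAR return from polar form (angle, range) to Cartesian coordinates (x, y) so a beam can later be matched against the horizontal position of a camera detection.

#!/usr/bin/env python
# Minimal sketch of the degree-to-coordinate conversion for camera/2D-LiDAR
# fusion. The topic name and camera field of view are assumptions made for
# illustration, not values taken from the paper.
import math

import rospy
from sensor_msgs.msg import LaserScan

CAMERA_HFOV_DEG = 62.2  # assumed horizontal field of view of the camera

def scan_callback(scan):
    """Convert each LiDAR beam to (x, y), keeping beams inside the camera FOV."""
    half_fov = math.radians(CAMERA_HFOV_DEG / 2.0)
    for i, r in enumerate(scan.ranges):
        if not (scan.range_min <= r <= scan.range_max):
            continue  # skip invalid returns (inf/NaN fail this comparison)
        angle = scan.angle_min + i * scan.angle_increment
        if abs(angle) > half_fov:
            continue  # beam falls outside the camera's view
        # Polar -> Cartesian, ROS convention: x forward, y to the left.
        x = r * math.cos(angle)
        y = r * math.sin(angle)
        rospy.loginfo("obstacle at %.2f m, %.1f deg -> (x=%.2f, y=%.2f)",
                      r, math.degrees(angle), x, y)

if __name__ == "__main__":
    rospy.init_node("lidar_camera_fusion_sketch")
    rospy.Subscriber("/scan", LaserScan, scan_callback)  # rplidar_ros default
    rospy.spin()

Matching a beam's angle against the bounding box of a camera detection would then give the obstacle's distance and angle, as described above.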

Keywords


LiDAR, camera, object detection, distance detection, ROS


References


G. A. Kumar, J. H. Lee, J. Hwang, J. Park, S. H. Youn, and S. Kwon, “LiDAR and camera fusion approach for object distance estimation in self-driving vehicles,” Symmetry (Basel), vol. 12, no. 2, 2020, doi: 10.3390/sym12020324.

A. Rangesh and M. M. Trivedi, “No Blind Spots: Full-Surround Multi-Object Tracking for Autonomous Vehicles Using Cameras and LiDARs,” IEEE Trans. Intell. Veh., vol. 4, no. 4, pp. 588–599, 2019, doi: 10.1109/TIV.2019.2938110.

R. H. Rasshofer and K. Gresser, “Automotive radar and lidar systems for next generation driver assistance functions,” Adv. Radio Sci., vol. 3, pp. 205–209, 2005, doi: 10.5194/ars-3-205-2005.

Q. Li, L. Chen, M. Li, S. L. Shaw, and A. Nüchter, “A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios,” IEEE Trans. Veh. Technol., vol. 63, no. 2, pp. 540–555, 2014, doi: 10.1109/TVT.2013.2281199.

H. Y. Lin, J. M. Dai, L. T. Wu, and L. Q. Chen, “A vision-based driver assistance system with forward collision and overtaking detection,” Sensors (Switzerland), vol. 20, no. 18, pp. 1–19, 2020, doi: 10.3390/s20185139.

M. M. William et al., “Traffic Signs Detection and Recognition System using Deep Learning,” Proc. 2019 IEEE 9th Int. Conf. Intell. Comput. Inf. Syst. (ICICIS 2019), pp. 160–166, 2019, doi: 10.1109/ICICIS46948.2019.9014763.

A. Mulyanto, R. I. Borman, P. Prasetyawan, W. Jatmiko, and P. Mursanto, “Real-Time Human Detection and Tracking Using Two Sequential Frames for Advanced Driver Assistance System,” 2019, doi: 10.1109/ICICoS48119.2019.8982396.

C. W. Lai, H. Y. Lin, and W. L. Tai, “Vision based ADAS for forward vehicle detection using convolutional neural networks and motion tracking,” VEHITS 2019 - Proc. 5th Int. Conf. Veh. Technol. Intell. Transp. Syst., pp. 297–304, 2019, doi: 10.5220/0007626902970304.

A. Ziebinski, R. Cupek, D. Grzechca, and L. Chruszczyk, “Review of advanced driver assistance systems (ADAS),” AIP Conf. Proc., vol. 1906, 2017, doi: 10.1063/1.5012394.

X. Zhao, P. Sun, Z. Xu, H. Min, and H. Yu, “Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications,” 2020. [Online]. Available: https://scholarworks.utrgv.edu/cs_fac/22.

F. Zhang, D. Clarke, and A. Knoll, “Vehicle detection based on LiDAR and camera fusion,” 2014 17th IEEE Int. Conf. Intell. Transp. Syst. (ITSC 2014), pp. 1620–1625, 2014, doi: 10.1109/ITSC.2014.6957925.

P. Wei, L. Cagle, T. Reza, J. Ball, and J. Gafford, “LiDAR and camera detection fusion in a real-time industrial multi-sensor collision avoidance system,” Electron., vol. 7, no. 6, 2018, doi: 10.3390/electronics7060084.

S. Gatesichapakorn, J. Takamatsu, and M. Ruchanurucks, “ROS based Autonomous Mobile Robot Navigation using 2D LiDAR and RGB-D Camera,” 2019 1st Int. Symp. Instrumentation, Control, Artif. Intell. Robot. (ICA-SYMP 2019), pp. 151–154, 2019, doi: 10.1109/ICA-SYMP.2019.8645984.

L. Zhou and Z. Deng, “A new algorithm for the extrinsic calibration of a 2D LIDAR and a camera,” Meas. Sci. Technol., vol. 25, no. 6, 2014, doi: 10.1088/0957-0233/25/6/065107.

W. Dong and V. Isler, “A novel method for the extrinsic calibration of a 2D laser rangefinder and a camera,” IEEE Sens. J., vol. 18, no. 10, pp. 4200–4211, 2018, doi: 10.1109/JSEN.2018.2819082.

P. Viola and M. J. Jones, “Robust Real-Time Face Detection,” Int. J. Comput. Vis., vol. 57, no. 2, pp. 137–154, 2004, doi: 10.1023/B:VISI.0000013087.49260.fb.

N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), 2005, vol. 1, pp. 886–893, doi: 10.1109/CVPR.2005.177.

P. F. Felzenszwalb, R. B. Girshick, and D. McAllester, “Cascade object detection with deformable part models,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, 2010, pp. 2241–2248, doi: 10.1109/CVPR.2010.5539906.

Z. Zou, Z. Shi, Y. Guo, and J. Ye, “Object Detection in 20 Years: A Survey,” arXiv preprint arXiv:1905.05055, pp. 1–39, May 2019. [Online]. Available: http://arxiv.org/abs/1905.05055.

R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587, doi: 10.1109/CVPR.2014.81.

K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 9, pp. 1904–1916, 2015, doi: 10.1109/TPAMI.2015.2389824.

R. Girshick, “Fast R-CNN,” Proc. IEEE Int. Conf. Comput. Vis., pp. 1440–1448, 2015, doi: 10.1109/ICCV.2015.169.

S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, 2017, doi: 10.1109/TPAMI.2016.2577031.

T. Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 936–944, doi: 10.1109/CVPR.2017.106.

K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2980–2988, doi: 10.1109/ICCV.2017.322.

J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 779–788, 2016, doi: 10.1109/CVPR.2016.91.

J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” Proc. 30th IEEE Conf. Comput. Vis. Pattern Recognition (CVPR 2017), pp. 6517–6525, 2017, doi: 10.1109/CVPR.2017.690.

J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv preprint arXiv:1804.02767, 2018.

A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” 2020. [Online]. Available: http://arxiv.org/abs/2004.10934.

W. Liu et al., “SSD: Single Shot MultiBox Detector,” in European Conference on Computer Vision – ECCV 2016, 2016, vol. 9905, pp. 21–37, doi: 10.1007/978-3-319-46448-0_2.

T. Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, “Focal Loss for Dense Object Detection,” Proc. IEEE Int. Conf. Comput. Vis., pp. 2999–3007, 2017, doi: 10.1109/ICCV.2017.324.

P. Soviany and R. T. Ionescu, “Optimizing the trade-off between single-stage and two-stage deep object detectors using image difficulty prediction,” Proc. 2018 20th Int. Symp. Symb. Numer. Algorithms Sci. Comput. (SYNASC 2018), pp. 209–214, 2018, doi: 10.1109/SYNASC.2018.00041.

“ROS.org | About ROS.” https://www.ros.org/about-ros/ (accessed Oct. 27, 2020).

M. Amy et al., “Towards Adaptive Fault Tolerance on ROS for Advanced Driver Assistance Systems,” HAL Id: hal-01707514, 2018.

J. Lussereau et al., “Integration of ADAS algorithm in a Vehicle Prototype,” IEEE Int. Work. Adv. Robot. its Soc. Impacts (ARSO), 2015.

“SSD-Mobilenet-v2.” https://nvidia.box.com/shared/static/jcdewxep8vamzm71zajcovza938lygre.gz.

“Open Images Dataset V6.” https://storage.googleapis.com/openimages/web/visualizer/index.html?set=train&type=detection&c=%2Fm%2F0k4j (accessed Nov. 20, 2020).

“pytorch-ssd-mobilenet.jpg.” https://raw.githubusercontent.com/dusty-nv/jetson-inference/dev/docs/images/pytorch-ssd-mobilenet.jpg.

“GitHub - robopeak/rplidar_ros.” https://github.com/robopeak/rplidar_ros (accessed Nov. 20, 2020).