
Simegnew Alaba et al.

Autonomous driving requires a perception system that is accurate, robust, and fast enough for real-time decision-making in the driving environment, and object detection is central to that capability. Perception systems, particularly for 2D object detection and classification, have advanced rapidly with the emergence of deep learning (DL) in computer vision (CV). However, 2D object detection lacks depth information, which is crucial for understanding the driving environment. 3D object detection is therefore fundamental to the perception systems of autonomous driving and robotics, as it estimates objects' locations in 3D space. The CV community has paid growing attention to 3D object detection, driven by the progress of DL models and the need for accurate object localization. Nevertheless, 3D object detection remains challenging because of scale changes, limited 3D sensor information, and occlusions. Researchers have turned to multiple sensors to address these problems and further improve the performance of the perception system. This survey reviews multisensor (camera, radar, and LiDAR) fusion-based 3D object detection methods. Fully autonomous vehicles must be equipped with multiple sensors for robust and reliable driving; the camera, LiDAR, and radar sensors, along with their respective advantages and disadvantages, are presented. Relevant datasets are then summarized, and state-of-the-art multisensor fusion-based methods are reviewed. Finally, challenges, open issues, and possible research directions are discussed.
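To make the fusion idea concrete, below is a minimal sketch of one common camera-LiDAR fusion step: projecting LiDAR points onto the image plane so that a 2D detection can be assigned a depth estimate. This is an illustrative example only, not a method from the survey; the calibration values (intrinsic matrix K and LiDAR-to-camera extrinsics R, t) and the function names are placeholder assumptions.

```python
# Minimal sketch: project LiDAR points into the image and recover a
# depth estimate for a 2D bounding box. All calibration values below
# are illustrative placeholders, not taken from any real dataset.
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project (N, 3) LiDAR points into pixel coordinates.

    K: (3, 3) camera intrinsic matrix.
    R, t: rotation (3, 3) and translation (3,) from LiDAR to camera frame.
    Returns (M, 2) pixel coordinates and (M,) depths for points
    lying in front of the camera.
    """
    # Transform points from the LiDAR frame into the camera frame.
    points_cam = points_lidar @ R.T + t
    # Keep only points with positive depth (in front of the camera).
    points_cam = points_cam[points_cam[:, 2] > 0.0]
    # Perspective projection: apply intrinsics, then divide by depth.
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]
    return pixels, points_cam[:, 2]

def depth_for_box(pixels, depths, box):
    """Median LiDAR depth of points inside a 2D box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    mask = ((pixels[:, 0] >= x1) & (pixels[:, 0] <= x2) &
            (pixels[:, 1] >= y1) & (pixels[:, 1] <= y2))
    return float(np.median(depths[mask])) if mask.any() else None

if __name__ == "__main__":
    # Toy calibration: identity extrinsics, generic pinhole intrinsics.
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    # Synthetic point cloud in front of the camera (z is depth here).
    cloud = np.random.uniform([-5, -2, 2], [5, 2, 40], size=(1000, 3))
    px, z = project_lidar_to_image(cloud, K, R, t)
    print("depth in box:", depth_for_box(px, z, (300, 220, 340, 260)))
```

In this simple scheme, the camera contributes the 2D detection and the LiDAR contributes metric depth; fusion methods covered in the survey differ mainly in where along the pipeline (raw data, features, or detections) this combination happens.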