SuperFusion: Multilevel LiDAR-Camera Fusion for Long-Range HD Map Generation, by Hao Dong and 6 other authors
Abstract: High-definition (HD) semantic map generation of the environment is an essential component of autonomous driving. Existing methods have achieved good performance on this task by fusing different sensor modalities, such as LiDAR and camera. However, current works fuse only at the raw-data or network-feature level and consider only short-range HD map generation, which limits their deployment in realistic autonomous driving applications. In this paper, we focus on building HD maps at short range, i.e., within 30 m, and also on predicting long-range HD maps up to 90 m, which downstream path planning and control tasks require to improve the smoothness and safety of autonomous driving. To this end, we propose a novel network named SuperFusion, which fuses LiDAR and camera data at multiple levels. We use LiDAR depth to improve image depth estimation and use image features to guide long-range LiDAR feature prediction. We benchmark SuperFusion on the nuScenes dataset and a self-recorded dataset and show that it outperforms state-of-the-art baseline methods by large margins at all distance intervals. Additionally, we apply the generated HD map to a downstream path planning task, demonstrating that the long-range HD maps predicted by our method lead to better path planning for autonomous vehicles. Our code has been released at this https URL.
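To make the feature-level fusion idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of combining camera and LiDAR features that have already been projected into a shared bird's-eye-view (BEV) grid, followed by a small segmentation head for HD map elements. All module names, channel sizes, and the BEV grid resolution are assumptions for illustration only.

```python
# Hypothetical sketch of BEV-level camera-LiDAR feature fusion.
# Assumes both inputs are already lifted/projected into the same BEV grid.
import torch
import torch.nn as nn


class SimpleBEVFusion(nn.Module):
    def __init__(self, cam_channels=64, lidar_channels=64, out_channels=128, num_classes=3):
        super().__init__()
        # Fuse concatenated camera-BEV and LiDAR-BEV features with a small conv block.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + lidar_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        # Per-class logits for HD map elements (e.g., lane divider, road boundary, pedestrian crossing).
        self.head = nn.Conv2d(out_channels, num_classes, kernel_size=1)

    def forward(self, cam_bev, lidar_bev):
        # cam_bev, lidar_bev: (B, C, H, W) features on the same BEV grid.
        fused = self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))
        return self.head(fused)


# Dummy usage on an assumed BEV grid (sizes are illustrative, not from the paper).
cam = torch.randn(1, 64, 200, 600)
lidar = torch.randn(1, 64, 200, 600)
logits = SimpleBEVFusion()(cam, lidar)
print(logits.shape)  # torch.Size([1, 3, 200, 600])
```

This sketch shows only a single fusion point; the multilevel fusion described in the abstract (LiDAR depth guiding image depth estimation, and image features guiding long-range LiDAR feature prediction) would add cross-modal interactions at earlier stages of each branch.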
Submission history
From: Hao Dong
[v1] Mon, 28 Nov 2022 18:59:02 UTC (9,138 KB)
[v2] Thu, 16 Mar 2023 16:01:22 UTC (11,528 KB)
[v3] Thu, 31 Oct 2024 15:01:41 UTC (21,364 KB)