A Survey for Foundation Models in Autonomous Driving



Authors: Haoxiang Gao, Zhongruo Wang, Yaqian Li, Kaiwen Long, Ming Yang, Yiqing Shen

Abstract: The advent of foundation models has revolutionized the fields of natural language processing and computer vision, paving the way for their application in autonomous driving (AD). This survey presents a comprehensive review of more than 40 research papers, demonstrating the role of foundation models in enhancing AD. Large language models contribute to planning and simulation in AD, particularly through their proficiency in reasoning, code generation, and translation. In parallel, vision foundation models are increasingly adapted for critical tasks such as 3D object detection and tracking, as well as creating realistic driving scenarios for simulation and testing. Multi-modal foundation models, integrating diverse inputs, exhibit exceptional visual understanding and spatial reasoning, crucial for end-to-end AD. This survey not only provides a structured taxonomy, categorizing foundation models based on their modalities and functionalities within the AD domain, but also delves into the methods employed in current research. It identifies the gaps between existing foundation models and cutting-edge AD approaches, thereby charting future research directions and proposing a roadmap for bridging these gaps.

Submission history

From: Yiqing Shen
[v1]
Fri, 2 Feb 2024 02:44:59 UTC (771 KB)
[v2]
Wed, 21 Aug 2024 17:02:21 UTC (1,610 KB)
[v3]
Sat, 31 Aug 2024 02:28:20 UTC (735 KB)
[v4]
Thu, 5 Sep 2024 03:38:08 UTC (881 KB)


