Helm.ai, a leading provider of AI software for high-end ADAS, Level 4 autonomous driving, and robotics, today announced the launch of a multi-sensor generative AI foundation model for simulating the entire autonomous vehicle stack. WorldGen-1 synthesizes highly realistic sensor and perception data across multiple modalities and perspectives simultaneously, extrapolates sensor data from one modality to another, and predicts the behavior of the ego-vehicle and other agents in the driving environment. These AI-based simulation capabilities streamline the development and validation of autonomous driving systems.
WorldGen-1 leverages innovations in generative DNN architectures and Deep Teaching, Helm.ai's highly efficient unsupervised training technology, and is trained on thousands of hours of diverse driving data covering every layer of the autonomous driving stack, including vision, perception, lidar, and odometry.
WorldGen-1 simultaneously generates highly realistic sensor data for surround-view cameras, semantic segmentation at the perception layer, lidar front-view, lidar bird’s-eye-view, and the ego-vehicle path in physical coordinates. By generating sensor, perception, and path data consistently across the entire AV stack, WorldGen-1 accurately replicates potential real-world situations from the perspective of the self-driving vehicle. This comprehensive sensor simulation capability enables the generation of high-fidelity, multi-sensor labeled data for resolving and validating myriad challenging corner cases.
Furthermore, WorldGen-1 can extrapolate from real camera data to multiple other modalities, including semantic segmentation, lidar front-view, lidar bird’s-eye-view, and the path of the ego vehicle. This capability allows for the augmentation of existing camera-only datasets into synthetic multi-sensor datasets, increasing the richness of camera-only datasets and reducing data collection costs.
Beyond sensor simulation and extrapolation, WorldGen-1 can predict, from an observed input sequence, the behaviors of pedestrians, vehicles, and the ego-vehicle in relation to the surrounding environment, generating realistic temporal sequences up to minutes in length. This enables AI generation of a wide range of potential scenarios, including rare corner cases. WorldGen-1 can model multiple potential outcomes from the same observed input, demonstrating its capacity for advanced multi-agent planning and prediction. WorldGen-1’s understanding of the driving environment and its predictive capability make it a valuable tool for intent prediction and path planning, both as a means of development and validation and as the core technology for making real-time driving decisions.
“Combining innovation in generative AI architectures with our Deep Teaching technology yields a highly scalable and capital-efficient form of generative AI. With WorldGen-1, we’re taking another step towards closing the sim-to-real gap for autonomous driving, which is the key to streamlining and unifying the development and validation of high-end ADAS and L4 systems. We’re providing automakers with a tool to accelerate development, improve safety, and dramatically reduce the gap between simulation and real-world testing,” said Helm.ai’s CEO and Co-Founder, Vladislav Voroninski.
“Generating data from WorldGen-1 is like creating a vast collection of diverse digital siblings of real-world driving environments at the level of richness of the full AV sensor stack, replete with smart agents that think and predict like humans, enabling us to tackle the most complex challenges in autonomous driving,” added Voroninski.
The post Helm.ai Introduces WorldGen-1 first appeared on AI-Tech Park.