Nullmax, an AI firm specializing in autonomous driving, hosted its 2024 tech conference to officially unveil its new generation of autonomous driving technology, Nullmax Intelligence (NI). The new technology features vision-only, map-free, and end-to-end multimodal capabilities to advance automotive intelligence.
The NI System includes an innovative multimodal model and a brain-inspired safety model, endowing vehicles with sensory capabilities akin to seeing, hearing, and reading. It outputs visual results, scene descriptions, and driving behaviors. With this system, Nullmax aims to achieve full-scenario autonomous driving by 2025 and expand AI capabilities to fields such as passenger transportation, cargo delivery, and robotics, enabling interaction with the physical world through visual observation and cognitive thinking.
Advancing Intelligence and Accelerating Evolution
In recent years, automotive intelligence has developed rapidly, with autonomous driving application scenarios gradually expanding and advanced functions being applied in urban environments. However, challenges such as heavy reliance on rule-based programming, poor generalization, high costs, and rigid performance have limited the widespread adoption and scaling of autonomous driving.
For instance, urban navigation-on-pilot functions often demonstrate cautious, rigid behavior and depend heavily on LiDAR and HD map information, which limits their applicability to specific regions or roads. Additionally, high-end functions are typically limited to luxury or premium vehicle models. Similarly, the range of unmanned driving applications remains limited, hindering their value expansion.
At the launch event, Nullmax introduced its new generation of autonomous driving technology, Nullmax Intelligence. This system addresses industry challenges in a smarter, more human-like manner. Beyond visual inputs, the NI System supports the integration of sound, text, and gesture information through end-to-end multimodal model inference. It also features a brain-inspired neural network for safety. As a result, the NI System is capable of outputting visual perception results, scene descriptions, and driving behavior information.
The NI System’s architecture allows it to process various inputs like images, sounds, and texts similarly to human cognition while possessing biological instincts to react to environmental conditions. This results in higher levels of safety, intelligence, and flexibility.
Nullmax Intelligence integrates high-level research in static perception, dynamic perception, and temporal fusion, including work accepted by top computer vision conferences such as CVPR 2024 and ECCV 2024. It deploys Yan1.2, the first non-Attention-based general-purpose large multimodal model in China, on vehicles, and collaborates with the YanSi Brain-inspired Research Institute to construct a brain-like neural network.
Pure Vision, Map-Free, and Multimodal Model
A key feature of Nullmax Intelligence is its support for a vision-only, map-free, multimodal solution for full-scenario autonomous driving applications. Without relying on LiDAR or stereo cameras, Nullmax can perform precise obstacle detection and 3D reconstruction using visual perception and generate real-time local maps for navigation, achieving true map-free operation without high-precision maps.
The large multimodal approach is built primarily on vision, supplemented by other sensor inputs, and is capable of outputting various information such as static and dynamic perception results, scene language descriptions, and driving behavior actions. This comprehensive capability offers exceptional generalization, supports full-scenario applications, and requires less computational power, with sparse compute of under 100 TOPS sufficient for full-scenario driving conditions.
The post Nullmax Launches ‘Nullmax Intelligence’ first appeared on AI-Tech Park.