ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting, by Shaofei Cai and 6 other authors
Abstract: Vision-language models (VLMs) have excelled in multimodal tasks, but adapting them to embodied decision-making in open-world environments presents challenges. One critical issue is bridging the gap between discrete entities in low-level observations and the abstract concepts required for effective planning. A common solution is building hierarchical agents, where VLMs serve as high-level reasoners that break down tasks into executable sub-tasks, typically specified using language. However, language cannot communicate detailed spatial information. We propose visual-temporal context prompting, a novel communication protocol between VLMs and policy models. This protocol leverages object segmentation from past observations to guide policy-environment interactions. Using this approach, we train ROCKET-1, a low-level policy that predicts actions based on concatenated visual observations and segmentation masks, supported by real-time object tracking from SAM-2. Our method unlocks the potential of VLMs, enabling them to tackle complex tasks that demand spatial reasoning. Experiments in Minecraft show that our approach enables agents to achieve previously unattainable tasks, with a $\mathbf{76}\%$ absolute improvement in open-world interaction performance. Code and demos are now available on the project page: this https URL.
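As a rough illustration of the visual-temporal context prompting idea described in the abstract, the sketch below concatenates each past observation with its object segmentation mask along the channel axis to form the input a low-level policy would consume. This is a minimal sketch under stated assumptions: the function name `build_prompt_window`, the 4-channel layout, and the array shapes are hypothetical and are not taken from the ROCKET-1 codebase.

```python
# Illustrative sketch only: builds a "visual-temporal context prompt" by pairing
# each past RGB observation with a binary segmentation mask of the target object
# (e.g. one propagated frame-to-frame by an object tracker such as SAM-2).
# All names and shapes are assumptions for illustration.

import numpy as np

def build_prompt_window(frames, masks):
    """Stack a short history of (RGB frame, object mask) pairs.

    frames: list of (H, W, 3) uint8 arrays -- past visual observations.
    masks:  list of (H, W) arrays with values in {0, 1} -- segmentation of the
            target object in each frame.
    Returns an array of shape (T, H, W, 4): RGB plus one mask channel per
    timestep, which a low-level policy could map to the next action.
    """
    assert len(frames) == len(masks)
    window = []
    for frame, mask in zip(frames, masks):
        # Broadcast the mask to (H, W, 1) and scale to the uint8 range.
        mask_channel = (mask[..., None] * 255).astype(np.uint8)
        window.append(np.concatenate([frame, mask_channel], axis=-1))
    return np.stack(window, axis=0)

# Toy usage with random data standing in for real Minecraft observations.
T, H, W = 4, 128, 128
frames = [np.random.randint(0, 256, (H, W, 3), dtype=np.uint8) for _ in range(T)]
masks = [np.random.randint(0, 2, (H, W), dtype=np.uint8) for _ in range(T)]
prompt = build_prompt_window(frames, masks)
print(prompt.shape)  # (4, 128, 128, 4)
```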
Submission history
From: Shaofei Cai
[v1] Wed, 23 Oct 2024 13:26:59 UTC (26,563 KB)
[v2] Thu, 14 Nov 2024 12:29:41 UTC (26,510 KB)