GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration

by Naoki Wake and 4 other authors

Abstract: We introduce a pipeline that enhances a general-purpose Vision Language Model, GPT-4V(ision), to facilitate one-shot visual teaching for robotic manipulation. This system analyzes videos of humans performing tasks and outputs executable robot programs that incorporate insights into affordances. The process begins with GPT-4V analyzing the videos to obtain textual explanations of environmental and action details. A GPT-4-based task planner then encodes these details into a symbolic task plan. Subsequently, vision systems spatially and temporally ground the task plan in the videos. Objects are identified using an open-vocabulary object detector, and hand-object interactions are analyzed to pinpoint moments of grasping and releasing. This spatiotemporal grounding allows for the gathering of affordance information (e.g., grasp types, waypoints, and body postures) critical for robot execution. Experiments across various scenarios demonstrate the method’s efficacy in enabling real robots to operate from one-shot human demonstrations. Meanwhile, quantitative tests have revealed instances of hallucination in GPT-4V, highlighting the importance of incorporating human supervision within the pipeline. The prompts of GPT-4V/GPT-4 are available at this project page: this https URL
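The abstract describes a three-stage pipeline: video-to-text analysis with GPT-4V, symbolic task planning with GPT-4, and spatiotemporal grounding with vision models. As a rough illustration only, the following minimal Python sketch shows how such stages might be chained together; every class, function, and field name here is a hypothetical placeholder for readability, not the authors' released code, prompts, or API.

from dataclasses import dataclass, field

# Hypothetical data structures mirroring the pipeline stages described in the abstract.
# All names and values below are illustrative placeholders.

@dataclass
class VideoAnalysis:
    """Textual explanation of the environment and the human's actions (GPT-4V stage)."""
    environment_description: str
    action_description: str

@dataclass
class TaskStep:
    """One step of the symbolic task plan (GPT-4 stage)."""
    action: str                      # e.g. "grasp", "move", "release"
    target_object: str
    affordance: dict = field(default_factory=dict)  # grasp type, waypoints, posture

def analyze_video_with_gpt4v(video_path: str) -> VideoAnalysis:
    # Placeholder for the stage where GPT-4V turns the demonstration video
    # into textual descriptions of the environment and the demonstrated actions.
    return VideoAnalysis(
        environment_description="A cup and a tray on a table.",
        action_description="The human picks up the cup and places it on the tray.",
    )

def plan_task_with_gpt4(analysis: VideoAnalysis) -> list[TaskStep]:
    # Placeholder for the GPT-4-based planner that encodes the textual
    # description into a symbolic, executable sequence of robot actions.
    return [
        TaskStep(action="grasp", target_object="cup"),
        TaskStep(action="move", target_object="cup"),
        TaskStep(action="release", target_object="cup"),
    ]

def ground_plan_in_video(plan: list[TaskStep], video_path: str) -> list[TaskStep]:
    # Placeholder for spatiotemporal grounding: an open-vocabulary detector
    # localizes each object, hand-object interaction analysis pinpoints grasp
    # and release moments, and affordance information is attached to each step.
    for step in plan:
        step.affordance = {"grasp_type": "power", "waypoints": [(0.4, 0.1, 0.2)]}
    return plan

if __name__ == "__main__":
    analysis = analyze_video_with_gpt4v("demo.mp4")
    plan = ground_plan_in_video(plan_task_with_gpt4(analysis), "demo.mp4")
    for step in plan:
        print(step)

In the paper's pipeline, human supervision would sit between these stages to catch hallucinated descriptions before execution on the robot.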

Submission history

From: Naoki Wake
[v1] Mon, 20 Nov 2023 18:54:39 UTC (14,945 KB)
[v2] Mon, 6 May 2024 10:18:21 UTC (1,346 KB)
[v3] Mon, 19 Aug 2024 01:59:54 UTC (1,630 KB)
[v4] Thu, 26 Sep 2024 18:35:52 UTC (1,934 KB)


