TAAT: Think and Act from Arbitrary Texts in Text2Motion

Runqi Wang, Caoyuan Ma, Guopeng Li, Zheng Wang

Abstract: Text to Motion aims to generate human motions from texts. Existing settings assume that the texts include action labels, which limits flexibility in practical scenarios. This paper extends the task with the more realistic assumption that the texts are arbitrary: in our setting, arbitrary texts include both the existing action texts, composed of action labels, and newly introduced scene texts without explicit action labels. To address this practical issue, we extend the action texts in the HUMANML3D dataset with additional scene texts, creating a new dataset, HUMANML3D++. Concurrently, we propose a simple framework that extracts action representations from arbitrary texts using a Large Language Model (LLM) and then generates motions from them. Furthermore, we enhance the existing evaluation methodologies to address their inadequacies. Extensive experiments under different application scenarios validate the effectiveness of the proposed framework on both the existing and the proposed datasets. The results indicate that Text to Motion in this realistic setting is very challenging, opening this practical direction to new research. Our dataset and code will be released.
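The framework the abstract describes is a two-stage pipeline: an LLM first maps an arbitrary (e.g., scene) text to an explicit action description, and a conventional text-to-motion model then generates motion from that description. Below is a minimal sketch of this idea; the prompt wording and the names `query_llm`, `motion_generator`, and `extract_action_text` are illustrative assumptions, not the paper's actual interface.

```python
# Sketch of the two-stage pipeline from the abstract:
# (1) an LLM turns an arbitrary (scene) text into an explicit action text,
# (2) an off-the-shelf text-to-motion model generates motion from it.
# All names and the prompt below are hypothetical placeholders.

PROMPT_TEMPLATE = (
    "The following text describes a scene without naming an action. "
    "Infer the single most plausible human action and describe it as a "
    "short action sentence.\n\nScene: {scene}\nAction:"
)

def extract_action_text(scene_text: str, query_llm) -> str:
    """Stage 1: use an LLM to map an arbitrary text to an action text."""
    return query_llm(PROMPT_TEMPLATE.format(scene=scene_text)).strip()

def text_to_motion(scene_text: str, query_llm, motion_generator):
    """Full pipeline: arbitrary text -> action text -> motion sequence."""
    action_text = extract_action_text(scene_text, query_llm)
    # Stage 2: any text-to-motion model conditioned on action texts
    # (e.g., one trained on HumanML3D-style labels) consumes the result.
    return motion_generator(action_text)
```

Injecting `query_llm` and `motion_generator` as callables keeps the sketch agnostic to the particular LLM and motion model, which matches the abstract's claim that the framework is simple and modular.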

Submission history

From: Runqi Wang
[v1] Tue, 23 Apr 2024 04:54:32 UTC (22,592 KB)
[v2] Thu, 6 Jun 2024 07:46:24 UTC (22,592 KB)
[v3] Tue, 27 Aug 2024 13:36:12 UTC (32,978 KB)


