Text-guided Controllable Mesh Refinement for Interactive 3D Modeling



By Yun-Chun Chen and 5 other authors

Abstract: We propose a novel technique for adding geometric details to an input coarse 3D mesh, guided by a text prompt. Our method is composed of three stages. First, we generate a single-view RGB image conditioned on the input coarse geometry and the input text prompt. This single-view image generation step allows the user to pre-visualize the result and offers stronger conditioning for subsequent multi-view generation. Second, we use our novel multi-view normal generation architecture to jointly generate six different views of the normal images. The joint view generation reduces inconsistencies and leads to sharper details. Third, we optimize our mesh with respect to all views and generate a fine, detailed geometry as output. The resulting method produces an output within seconds and offers explicit user control over the coarse structure, pose, and desired details of the resulting 3D mesh.
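The three-stage pipeline described in the abstract can be sketched in code. This is purely an illustrative skeleton of the data flow, not the authors' implementation: the function names (`generate_single_view`, `generate_multiview_normals`, `refine_mesh`), the array shapes, and all function bodies are assumptions introduced here, with placeholders standing in for the actual generative models and mesh optimizer.

```python
import numpy as np

# Hypothetical sketch of the paper's three-stage pipeline.
# All bodies are placeholders; the real stages are learned models.

def generate_single_view(coarse_mesh, prompt, size=256):
    """Stage 1: generate a single-view RGB image conditioned on the
    coarse geometry and the text prompt (placeholder: random image)."""
    rng = np.random.default_rng(0)
    return rng.random((size, size, 3))

def generate_multiview_normals(rgb_image, num_views=6):
    """Stage 2: jointly generate normal maps for six views so the
    views stay consistent (placeholder: unit +z normals)."""
    h, w, _ = rgb_image.shape
    normals = np.zeros((num_views, h, w, 3))
    normals[..., 2] = 1.0  # every normal points along +z
    return normals

def refine_mesh(coarse_mesh, normal_views, steps=10):
    """Stage 3: optimize vertex positions against all normal views
    (placeholder: returns the input vertices unchanged)."""
    return coarse_mesh.copy()

# Coarse input: 100 vertices of a user-provided mesh (illustrative).
coarse = np.zeros((100, 3))
rgb = generate_single_view(coarse, "a dragon with scales")   # pre-visualization
normals = generate_multiview_normals(rgb)                    # six joint views
refined = refine_mesh(coarse, normals)                       # detailed output
```

The division into stages mirrors the abstract: the single-view image gives the user an early preview and a stronger conditioning signal before the more expensive multi-view and optimization steps run.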

Submission history

From: Yun-Chun Chen
[v1] Mon, 3 Jun 2024 17:59:43 UTC (6,327 KB)
[v2] Wed, 11 Sep 2024 00:42:37 UTC (6,227 KB)


