TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation
Hritik Bansal and 5 other authors
Abstract: Most text-to-video (T2V) generative models produce single-scene video clips that depict an entity performing a particular action (e.g., 'a red panda climbing a tree'). However, generating multi-scene videos is pertinent since they are ubiquitous in the real world (e.g., 'a red panda climbing a tree' followed by 'the red panda sleeps on the top of the tree'). To generate multi-scene videos from a pretrained T2V model, we introduce a simple and effective Time-Aligned Captions (TALC) framework. Specifically, we enhance the text-conditioning mechanism in the T2V architecture to recognize the temporal alignment between the video scenes and the scene descriptions. For instance, we condition the visual features of the earlier and later scenes of the generated video on the representations of the first scene description (e.g., 'a red panda climbing a tree') and the second scene description (e.g., 'the red panda sleeps on the top of the tree'), respectively. As a result, we show that the T2V model can generate multi-scene videos that adhere to the multi-scene text descriptions while remaining visually consistent (e.g., in entity and background). Further, we finetune the pretrained T2V model with multi-scene video-text data using the TALC framework. We show that the TALC-finetuned model outperforms the baseline with a relative gain of 29% in the overall score, which averages visual consistency and text adherence as judged by human evaluation.
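To make the time-aligned conditioning idea concrete, here is a minimal sketch of how per-scene caption embeddings might be routed to the frames of their corresponding scene, so that each frame's cross-attention context comes from its own scene description rather than a single concatenated caption. This is an illustrative reconstruction, not the paper's released code; the function name `talc_text_context` and the fixed frames-per-scene split are assumptions, and the actual TALC implementation operates inside a specific T2V backbone.

```python
# Hedged sketch of time-aligned text conditioning, assuming a diffusion-style
# T2V backbone whose cross-attention accepts a per-frame text context tensor.
# Names below (talc_text_context, frames_per_scene) are illustrative only.
import torch


def talc_text_context(
    scene_text_embs: list[torch.Tensor],  # one (tokens, dim) embedding per scene caption
    frames_per_scene: list[int],          # number of video frames each scene spans
) -> torch.Tensor:
    """Build a (num_frames, tokens, dim) context tensor in which every frame
    attends only to the caption embedding of the scene it belongs to."""
    assert len(scene_text_embs) == len(frames_per_scene)
    per_frame = []
    for emb, n_frames in zip(scene_text_embs, frames_per_scene):
        # Repeat this scene's caption embedding for each of its frames.
        per_frame.append(emb.unsqueeze(0).expand(n_frames, -1, -1))
    return torch.cat(per_frame, dim=0)


# Example: two scene captions, 8 frames each -> 16 frames in total.
tokens, dim = 77, 768
scene_embs = [torch.randn(tokens, dim), torch.randn(tokens, dim)]
context = talc_text_context(scene_embs, frames_per_scene=[8, 8])
print(context.shape)  # torch.Size([16, 77, 768])
```

The design choice this sketches is that temporal alignment is enforced purely through the conditioning signal: frames belonging to the first scene see only the first caption's representation, and frames of the second scene see only the second caption's, while the shared backbone keeps the entity and background consistent across scenes.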
Submission history
From: Hritik Bansal
[v1] Tue, 7 May 2024 21:52:39 UTC (1,889 KB)
[v2] Wed, 15 May 2024 21:44:31 UTC (1,889 KB)
[v3] Sat, 25 May 2024 01:13:26 UTC (1,849 KB)
[v4] Fri, 8 Nov 2024 05:45:45 UTC (8,012 KB)