PDPP: Projected Diffusion for Procedure Planning in Instructional Videos, by Hanlin Wang and 3 other authors
Abstract: In this paper, we study the problem of procedure planning in instructional videos, which aims to make a plan (i.e., a sequence of actions) given the current visual observation and the desired goal. Previous works cast this as a sequence modeling problem and leverage either intermediate visual observations or language instructions as supervision for autoregressive planning, resulting in complex learning schemes and expensive annotation costs. To avoid annotating intermediate supervision and the error accumulation caused by autoregressive planning, we propose a diffusion-based framework, coined PDPP, that directly models the distribution of the whole action sequence, using only the task label as supervision. Our core idea is to treat procedure planning as a distribution fitting problem under the given observations, thus transforming the planning problem into a sampling process from this distribution during inference. The diffusion-based modeling approach also effectively addresses the uncertainty issue in procedure planning. Based on PDPP, we further apply joint training so that a single model can generate plans with varying horizon lengths and requires fewer training parameters. We instantiate PDPP with three popular diffusion models and investigate a series of condition-introducing methods in our framework, including condition embeddings, MoEs, two-stage prediction, and a classifier-free guidance strategy. Finally, we apply PDPP to the Visual Planners for human Assistance problem, in which the goal is specified in natural language rather than by a visual observation. We conduct experiments on challenging datasets of different scales, and our PDPP model achieves state-of-the-art performance on multiple metrics, even when compared with strongly supervised counterparts. These results further demonstrate the effectiveness and generalization ability of our model.
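To make the core idea concrete, below is a minimal sketch (not the authors' code) of modeling the whole action sequence as a conditional distribution and recovering a plan by iterative denoising at inference time, conditioned on the start/goal observations and a task label. All names, dimensions, the MLP denoiser, the noise schedule, and the sampler are illustrative assumptions rather than the paper's actual architecture or hyperparameters.

```python
# Hypothetical sketch of diffusion-based procedure planning, assuming a
# DDPM-style schedule and an x0-predicting denoiser; not the PDPP implementation.
import torch
import torch.nn as nn

T_DIFF = 50          # number of diffusion steps (assumed)
HORIZON = 4          # plan length, i.e. number of actions (assumed)
ACTION_DIM = 105     # size of the action vocabulary (assumed)
COND_DIM = 1536      # start obs + goal obs/goal text + task-label embedding (assumed)

class DenoiseMLP(nn.Module):
    """Predicts the clean action sequence from a noisy one plus conditions."""
    def __init__(self):
        super().__init__()
        self.step_emb = nn.Embedding(T_DIFF, 128)
        self.net = nn.Sequential(
            nn.Linear(HORIZON * ACTION_DIM + COND_DIM + 128, 512),
            nn.GELU(),
            nn.Linear(512, HORIZON * ACTION_DIM),
        )

    def forward(self, x_t, cond, t):
        h = torch.cat([x_t.flatten(1), cond, self.step_emb(t)], dim=-1)
        return self.net(h).view(-1, HORIZON, ACTION_DIM)

# Linear noise schedule (a common default; the paper may use a different one).
betas = torch.linspace(1e-4, 0.02, T_DIFF)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_loss(model, x0, cond):
    """One training step: corrupt the ground-truth plan, predict it back."""
    t = torch.randint(0, T_DIFF, (x0.size(0),))
    noise = torch.randn_like(x0)
    ab = alphas_bar[t].view(-1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise
    return nn.functional.mse_loss(model(x_t, cond, t), x0)

@torch.no_grad()
def sample_plan(model, cond):
    """Inference: start from noise and iteratively denoise into a plan."""
    x = torch.randn(cond.size(0), HORIZON, ACTION_DIM)
    for t in reversed(range(T_DIFF)):
        t_batch = torch.full((cond.size(0),), t, dtype=torch.long)
        x0_hat = model(x, cond, t_batch)
        if t > 0:
            # Re-noise the current estimate to the previous noise level (naive sampler).
            ab = alphas_bar[t - 1]
            x = ab.sqrt() * x0_hat + (1 - ab).sqrt() * torch.randn_like(x0_hat)
        else:
            x = x0_hat
    return x.argmax(dim=-1)   # discrete action index for each plan step
```

Under this sketch, planning different horizons with one jointly trained model would amount to padding or masking the action-sequence tensor, and classifier-free guidance would be added by occasionally dropping the condition during training and mixing conditional and unconditional predictions at sampling time; both are assumptions about how the abstract's described components could fit together.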
Submission history
From: Hanlin Wang
[v1] Sun, 26 Mar 2023 10:50:16 UTC (1,146 KB)
[v2] Sun, 23 Jul 2023 09:41:51 UTC (1,146 KB)
[v3] Wed, 22 Jan 2025 09:50:01 UTC (1,148 KB)