MoDGS: Dynamic Gaussian Splatting from Casually-captured Monocular Videos, by Qingming Liu and 6 other authors
Abstract: In this paper, we propose MoDGS, a new pipeline to render novel views of dynamic scenes from a casually captured monocular video. Previous monocular dynamic NeRF or Gaussian Splatting methods rely strongly on rapid camera movement in the input to establish multiview consistency, and they struggle to reconstruct dynamic scenes from casually captured videos whose cameras are static or move slowly. To address this challenging setting, MoDGS adopts recent single-view depth estimation methods to guide the learning of the dynamic scene. We further propose a novel 3D-aware initialization method to learn a reasonable deformation field, and a new robust depth loss to guide the learning of dynamic scene geometry. Comprehensive experiments demonstrate that MoDGS renders high-quality novel-view images of dynamic scenes from just a casually captured monocular video, outperforming state-of-the-art methods by a significant margin. The code will be publicly available.
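The abstract's "robust depth loss" supervises rendered depth with single-view depth predictions, which are only defined up to an unknown per-image scale and shift. A common way to make such supervision robust is to align the predicted depth to the rendered depth with a closed-form least-squares scale and shift before taking a residual. The sketch below illustrates that generic idea; the function name and the exact residual are assumptions for illustration, not necessarily the loss used in MoDGS.

```python
import numpy as np

def scale_shift_invariant_depth_loss(rendered: np.ndarray,
                                     predicted: np.ndarray) -> float:
    """Generic robust depth loss (illustrative, not the MoDGS loss).

    Aligns a monocular depth prediction to the rendered depth with a
    per-image scale s and shift t solved in closed form, then takes the
    mean absolute residual.
    """
    d = predicted.reshape(-1).astype(np.float64)
    r = rendered.reshape(-1).astype(np.float64)
    # Solve min_{s,t} || s*d + t - r ||^2 via least squares.
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, r, rcond=None)
    aligned = s * d + t
    return float(np.abs(aligned - r).mean())

# If the prediction is an affine transform of the true depth,
# the loss is (numerically) zero after alignment.
r = np.array([1.0, 2.0, 3.0, 4.0])
p = (r - 1.0) / 2.0  # scale/shift-distorted prediction
print(scale_shift_invariant_depth_loss(r, p))
```

Because the alignment is recomputed per image, the loss tolerates the arbitrary scale of single-view depth estimators while still penalizing geometric disagreement.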
Submission history
From: Qingming Liu
[v1] Sat, 1 Jun 2024 13:20:46 UTC (8,720 KB)
[v2] Tue, 29 Oct 2024 09:50:00 UTC (18,678 KB)