Moving Object Segmentation: All You Need Is SAM (and Flow), by Junyu Xie and 3 other authors
Abstract: The objective of this paper is motion segmentation — discovering and segmenting the moving objects in a video. This is a much-studied area with numerous careful, and sometimes complex, approaches and training schemes, including self-supervised learning, learning from synthetic datasets, object-centric representations, amodal representations, and many more. Our interest in this paper is to determine whether the Segment Anything model (SAM) can contribute to this task. We investigate two models for combining SAM with optical flow that harness the segmentation power of SAM and the ability of flow to discover and group moving objects. In the first model, we adapt SAM to take optical flow, rather than RGB, as input. In the second, SAM takes RGB as input, and flow is used as a segmentation prompt. These surprisingly simple methods, without any further modifications, outperform all previous approaches by a considerable margin on both single- and multi-object benchmarks. We also extend these frame-level segmentations to sequence-level segmentations that maintain object identity. Again, this simple model achieves outstanding performance across multiple moving object segmentation benchmarks.
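The second variant described in the abstract — RGB into SAM, with flow used as a segmentation prompt — can be illustrated with a minimal sketch. Everything below is an assumption for illustration: `flow_to_prompts` is a hypothetical helper (not from the paper), the flow field is synthetic rather than computed by a flow network, and the selected points would be passed to a real SAM predictor as point prompts.

```python
import numpy as np

def flow_to_prompts(flow, num_points=3):
    """Pick the peak flow-magnitude pixels as point prompts.

    flow: (H, W, 2) array of per-pixel (dx, dy) optical flow.
    Returns a (num_points, 2) array of (x, y) prompt coordinates,
    which could be fed to a SAM-style predictor as point prompts.
    """
    mag = np.linalg.norm(flow, axis=-1)                 # (H, W) motion magnitude
    top = np.argsort(mag.ravel())[::-1][:num_points]    # strongest motion first
    ys, xs = np.unravel_index(top, mag.shape)
    return np.stack([xs, ys], axis=-1)                  # (x, y) point order

# Synthetic example: one square region moves right, the rest is static.
H, W = 64, 64
flow = np.zeros((H, W, 2), dtype=np.float32)
flow[20:40, 20:40] = (3.0, 0.0)                         # the "moving object"

prompts = flow_to_prompts(flow)
# All prompts land inside the moving region, so a segmenter prompted
# with them would be steered toward the moving object.
assert all(20 <= x < 40 and 20 <= y < 40 for x, y in prompts)
```

This only shows the prompting idea; the paper's actual pipeline uses a learned optical flow estimator and the full SAM model, and its first variant instead feeds the flow field itself to SAM in place of RGB.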
Submission history
From: Junyu Xie
[v1] Thu, 18 Apr 2024 17:59:53 UTC (38,648 KB)
[v2] Thu, 21 Nov 2024 20:28:33 UTC (26,926 KB)