Diffusion-based RGB-D Semantic Segmentation with Deformable Attention Transformer, by Minh Bui and Kostas Alexis
Abstract: Vision-based perception and reasoning are essential for scene understanding in any autonomous system. RGB and depth images are commonly used to capture both the semantic and geometric features of the environment. Developing methods to reliably interpret this data is critical for real-world applications, where noisy measurements are often unavoidable. In this work, we introduce a diffusion-based framework to address the RGB-D semantic segmentation problem. Additionally, we demonstrate that using a Deformable Attention Transformer as the encoder for depth images effectively captures the characteristics of invalid regions in depth measurements. Our generative framework shows a greater capacity to model the underlying distribution of RGB-D images, achieving robust performance in challenging scenarios with significantly less training time than discriminative methods. Experimental results indicate that our approach achieves state-of-the-art performance on both the NYUv2 and SUN-RGBD datasets overall, and especially on their most challenging images. Our project page will be available at this https URL
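The abstract casts segmentation as a generative diffusion problem. As a rough illustration of what that means, the sketch below shows a standard DDPM-style forward process applied to per-pixel class logits, conditioned on nothing; all function and variable names are illustrative assumptions, not the paper's actual architecture or training setup.

```python
import numpy as np

# Hedged sketch: a toy DDPM forward process over per-pixel segmentation
# logits. In the paper's framework a network would learn to reverse this
# noising, conditioned on RGB and depth features; that network is omitted.

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and cumulative alpha products (standard DDPM)."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas_cumprod = np.cumprod(1.0 - betas)
    return betas, alphas_cumprod

def q_sample(x0, t, alphas_cumprod, noise):
    """Forward process q(x_t | x_0): blend clean logits with Gaussian noise."""
    a = alphas_cumprod[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise

# Toy example: a 4x4 "segmentation map" with 3 classes, one-hot encoded.
rng = np.random.default_rng(0)
x0 = np.eye(3)[rng.integers(0, 3, size=(4, 4))]   # shape (4, 4, 3)
_, acp = make_schedule()
noise = rng.standard_normal(x0.shape)

x_early = q_sample(x0, 0, acp, noise)    # nearly the clean map
x_late = q_sample(x0, 999, acp, noise)   # nearly pure Gaussian noise
```

At small t the sample stays close to the clean segmentation map; at large t it is essentially noise, which is the regime the learned reverse (denoising) process starts from at inference time.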
Submission history
From: Minh Quang Bui
[v1] Mon, 23 Sep 2024 15:23:01 UTC (11,851 KB)
[v2] Fri, 27 Sep 2024 13:32:18 UTC (11,849 KB)