RSDiff: Remote Sensing Image Generation from Text Using Diffusion Model


Authors: Ahmad Sebaq and one other author


Abstract: The generation and enhancement of satellite imagery are critical in remote sensing, requiring high-quality, detailed images for accurate analysis. This research introduces a two-stage diffusion model methodology for synthesizing high-resolution satellite images from textual prompts. The pipeline comprises a Low-Resolution Diffusion Model (LRDM) that generates initial images based on text inputs and a Super-Resolution Diffusion Model (SRDM) that refines these images into high-resolution outputs. The LRDM merges text and image embeddings within a shared latent space, capturing essential scene content and structure. The SRDM then enhances these images, focusing on spatial features and visual clarity. Experiments conducted using the Remote Sensing Image Captioning Dataset (RSICD) demonstrate that our method outperforms existing models, producing satellite images with accurate geographical details and improved spatial resolution.
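The pipeline described in the abstract amounts to a cascaded sampling loop: the LRDM denoises pure noise into a text-conditioned low-resolution image, which is then upsampled and refined by the SRDM. The sketch below illustrates that control flow in PyTorch. The `TextEncoder`, `CondUNet`, and `ddpm_sample` names, the 64x64 to 256x256 resolutions, and the noise schedule are all illustrative assumptions, not the paper's actual architecture; conditioning the SRDM on the low-resolution output is shown here as a simple channel-wise concatenation, one common choice for super-resolution diffusion models.

```python
# Minimal sketch of a two-stage text-to-image diffusion cascade (LRDM -> SRDM).
# All module names, resolutions, and hyperparameters are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Stand-in text encoder: maps caption token ids to one embedding vector."""
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, tokens):            # (B, T) -> (B, dim)
        return self.embed(tokens).mean(dim=1)

class CondUNet(nn.Module):
    """Placeholder denoiser: predicts noise from a noisy image and a text embedding
    (a real model would be a U-Net with timestep embeddings)."""
    def __init__(self, in_channels=3, out_channels=3, dim=256):
        super().__init__()
        self.cond = nn.Linear(dim, in_channels)
        self.net = nn.Conv2d(in_channels, out_channels, 3, padding=1)

    def forward(self, x, t, text_emb):
        bias = self.cond(text_emb)[:, :, None, None]   # inject text condition
        return self.net(x + bias)                      # noise estimate

@torch.no_grad()
def ddpm_sample(model, shape, text_emb, steps=50, cond_img=None):
    """Simplified DDPM ancestral sampling with a linear beta schedule."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)
    for t in reversed(range(steps)):
        # For super-resolution, concatenate the low-res image as extra channels.
        inp = x if cond_img is None else torch.cat([x, cond_img], dim=1)
        eps = model(inp, t, text_emb)
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

text_encoder = TextEncoder()
lrdm = CondUNet(in_channels=3)             # stage 1: text -> low-resolution image
srdm = CondUNet(in_channels=6)             # stage 2: noisy image + low-res condition

tokens = torch.randint(0, 10000, (1, 16))  # dummy caption tokens
emb = text_encoder(tokens)

low_res = ddpm_sample(lrdm, (1, 3, 64, 64), emb)
upsampled = nn.functional.interpolate(low_res, scale_factor=4, mode="bilinear")
high_res = ddpm_sample(srdm, upsampled.shape, emb, cond_img=upsampled)
print(low_res.shape, high_res.shape)       # (1, 3, 64, 64) and (1, 3, 256, 256)
```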

Submission history

From: Ahmad Sebaq
[v1] Sun, 3 Sep 2023 09:34:49 UTC (10,182 KB)
[v2] Sat, 5 Oct 2024 08:42:15 UTC (10,600 KB)



Source link

By stp2y
