SOEDiff: Efficient Distillation for Small Object Editing



View a PDF of the paper titled SOEDiff: Efficient Distillation for Small Object Editing, by Yiming Wu and 5 other authors


Abstract: In this paper, we delve into a new task known as small object editing (SOE), which focuses on text-based image inpainting within a constrained, small-sized area. Despite the remarkable success achieved by current image inpainting approaches, their application to the SOE task generally results in failure cases such as Object Missing, Text-Image Mismatch, and Distortion. These failures stem from the limited use of small-sized objects in training datasets and the downsampling operations employed by U-Net models, both of which hinder accurate generation. To overcome these challenges, we introduce a novel training-based approach, SOEDiff, aimed at enhancing the capability of baseline models such as StableDiffusion in editing small-sized objects while minimizing training costs. Specifically, our method involves two key components: SO-LoRA, which efficiently fine-tunes low-rank matrices, and a Cross-Scale Score Distillation loss, which leverages high-resolution predictions from the pre-trained teacher diffusion model. Our method yields significant improvements on test datasets collected from MSCOCO and OpenImage, validating its effectiveness in small object editing. In particular, when comparing SOEDiff with the SD-I model on the OpenImage-f dataset, we observe a 0.99 improvement in CLIP-Score and a reduction of 2.87 in FID.
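The two components named in the abstract, low-rank fine-tuning of the base model (SO-LoRA) and a cross-scale score distillation loss against a high-resolution teacher, can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation: the class and function names, the rank and scaling values, and the bilinear downsampling used to align the teacher's high-resolution noise prediction with the student's scale are all assumptions made here for illustration.

```python
# Minimal sketch of the two ideas the abstract names; all names, ranks, and
# the downsampling choice are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pre-trained weights stay frozen; only A and B train
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low_rank = F.linear(F.linear(x, self.lora_a), self.lora_b)
        return self.base(x) + self.scale * low_rank


def cross_scale_distillation_loss(student_eps_lowres: torch.Tensor,
                                  teacher_eps_highres: torch.Tensor) -> torch.Tensor:
    """Match the student's low-resolution noise prediction to the (detached)
    high-resolution teacher prediction, resized to the student's spatial size."""
    target = F.interpolate(teacher_eps_highres,
                           size=student_eps_lowres.shape[-2:],
                           mode="bilinear", align_corners=False)
    return F.mse_loss(student_eps_lowres, target.detach())


if __name__ == "__main__":
    # Toy usage: adapt one linear layer and compute the distillation term.
    layer = LoRALinear(nn.Linear(64, 64), rank=4)
    print(layer(torch.randn(2, 64)).shape)                 # torch.Size([2, 64])
    student = torch.randn(1, 4, 32, 32, requires_grad=True)
    teacher = torch.randn(1, 4, 64, 64)
    print(cross_scale_distillation_loss(student, teacher))  # scalar loss
```

In this reading, only the low-rank matrices receive gradients, keeping training cost small, while the distillation term pulls the edited small region toward what the teacher predicts at higher resolution; the actual loss weighting and where the adapters attach inside the U-Net are details left to the paper.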

Submission history

From: Yiming Wu [view email]
[v1]
Wed, 15 May 2024 06:14:31 UTC (19,995 KB)
[v2]
Thu, 25 Jul 2024 21:30:41 UTC (34,221 KB)
[v3]
Tue, 31 Dec 2024 09:33:28 UTC (48,804 KB)


