Fusion is all you need: Face Fusion for Customized Identity-Preserving Image Synthesis
by Salaheldin Mohamed and 2 other authors
Abstract: Text-to-image (T2I) models have significantly advanced the development of artificial intelligence, enabling the generation of high-quality images in diverse contexts based on specific text prompts. However, existing T2I-based methods often struggle to accurately reproduce the appearance of individuals from a reference image and to create novel representations of those individuals in various settings. To address this, we leverage the pre-trained UNet from Stable Diffusion to incorporate the target face image directly into the generation process. Our approach diverges from prior methods that depend on fixed encoders or static face embeddings, which often fail to bridge encoding gaps. Instead, we capitalize on the UNet's sophisticated encoding capabilities to process reference images across multiple scales. By altering the cross-attention layers of the UNet, we effectively fuse individual identities into the generative process. This strategic integration of facial features across various scales not only enhances the robustness and consistency of the generated images but also facilitates efficient multi-reference and multi-identity generation. Our method sets a new benchmark in identity-preserving image generation, delivering state-of-the-art results in similarity metrics while maintaining prompt alignment.
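The abstract does not spell out how the altered cross-attention fuses the reference identity; one common way to inject extra conditioning into a diffusion UNet's cross-attention is to concatenate keys and values derived from the reference-face features with those from the text prompt. The sketch below illustrates that idea in plain numpy; the concatenation mechanism, function names, and shapes are assumptions for illustration, not the paper's confirmed implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fused_cross_attention(image_q, text_kv, face_kv, d_k):
    """Illustrative (assumed) fused cross-attention: keys/values from the
    text prompt and from reference-face features are concatenated, so the
    image queries attend jointly to both conditioning sources.

    image_q : (n_queries, d_k) query features from the UNet's image stream
    text_kv : (n_text, d_k)    key/value features from the text encoder
    face_kv : (n_face, d_k)    key/value features from the reference face
    """
    kv = np.concatenate([text_kv, face_kv], axis=0)   # fused context
    scores = image_q @ kv.T / np.sqrt(d_k)            # scaled dot-product
    weights = softmax(scores, axis=-1)                # attention over fused KV
    return weights @ kv                               # attended features

# Toy usage: 4 image queries, 3 text tokens, 2 face tokens, dim 8.
out = fused_cross_attention(np.ones((4, 8)), np.zeros((3, 8)),
                            np.ones((2, 8)), d_k=8)
```

In a real UNet block, `image_q`, `text_kv`, and `face_kv` would each pass through learned linear projections, and the fusion would be applied at every cross-attention layer and resolution scale, matching the multi-scale integration the abstract describes.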
Submission history
From: Salaheldin Mohamed [view email]
[v1]
Fri, 27 Sep 2024 19:31:04 UTC (4,589 KB)
[v2]
Wed, 2 Oct 2024 07:56:31 UTC (13,359 KB)