Diffusion Lens: Interpreting Text Encoders in Text-to-Image Pipelines

By Michael Toker and 4 other authors

Abstract: Text-to-image (T2I) diffusion models use a latent representation of a text prompt to guide the image generation process. However, the process by which the encoder produces the text representation is not well understood. We propose the Diffusion Lens, a method for analyzing the text encoder of T2I models by generating images from its intermediate representations. Using the Diffusion Lens, we perform an extensive analysis of two recent T2I models. Exploring compound prompts, we find that complex scenes describing multiple objects are composed progressively and more slowly than simple scenes. Exploring knowledge retrieval, we find that representing uncommon concepts requires further computation compared to common concepts, and that knowledge retrieval is gradual across layers. Overall, our findings provide valuable insights into the text encoder component of T2I pipelines.
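The core idea can be sketched in code: run the prompt through the text encoder, take the hidden state after an intermediate layer instead of the final output, and hand that to the diffusion model as the conditioning embedding. The sketch below assumes a Hugging Face `diffusers` `StableDiffusionPipeline` with a CLIP text encoder; it is an illustrative reconstruction, not the authors' implementation, and applying the encoder's final layer norm to intermediate states is likewise an assumption made so they match the distribution the diffusion model expects.

```python
def layers_to_probe(num_layers: int, stride: int = 4) -> list[int]:
    """Pick a spread of encoder layers to visualize, always including the last."""
    layers = list(range(0, num_layers, stride))
    if layers[-1] != num_layers - 1:
        layers.append(num_layers - 1)
    return layers


def diffusion_lens(pipe, prompt: str, layer: int):
    """Generate an image from the text encoder's hidden state after `layer`.

    `pipe` is assumed to be a diffusers StableDiffusionPipeline (hypothetical
    usage; the paper's own pipeline and layer handling may differ).
    """
    import torch

    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        out = pipe.text_encoder(
            tokens.input_ids.to(pipe.device), output_hidden_states=True
        )
        # hidden_states[0] is the embedding output; index i+1 follows block i.
        h = out.hidden_states[layer + 1]
        # Assumption: normalize intermediate states with the encoder's final
        # layer norm so they resemble what the U-Net was trained on.
        h = pipe.text_encoder.text_model.final_layer_norm(h)
        return pipe(prompt_embeds=h).images[0]
```

One would then call `diffusion_lens(pipe, "a red cube on a blue sphere", k)` for each `k` in `layers_to_probe(...)` and compare the resulting images to see how the scene is composed across layers.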

Submission history

From: Michael Toker
[v1]
Sat, 9 Mar 2024 09:11:49 UTC (21,951 KB)
[v2]
Mon, 21 Oct 2024 09:38:03 UTC (33,620 KB)


