Unity by Diversity: Improved Representation Learning in Multimodal VAEs


View a PDF of the paper titled Unity by Diversity: Improved Representation Learning in Multimodal VAEs, by Thomas M. Sutter and 7 other authors


Abstract: Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation. Current architectures either share the encoder output, the decoder input, or both across modalities to learn a shared representation, imposing hard constraints on the model. In this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior that softly guides each modality's latent representation towards a shared aggregate posterior. This approach yields a superior latent representation and allows each encoding to better preserve information from its uncompressed original features. In extensive experiments on multiple benchmark datasets and two challenging real-world datasets, we show improved learned latent representations and imputation of missing data modalities compared to existing methods.
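The soft constraint described above can be illustrated with a small numerical sketch. The following is not the paper's implementation; it is a minimal, hedged illustration (plain NumPy, made-up posterior parameters) of one plausible reading: each modality's Gaussian posterior is regularized by a Monte Carlo estimate of its KL divergence to a uniform mixture-of-experts aggregate built from all modality posteriors, rather than being forced through a single shared encoder output.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_logpdf(x, mu, sigma):
    # Log-density of a diagonal Gaussian, summed over latent dimensions.
    return np.sum(
        -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2),
        axis=-1,
    )

def moe_prior_kl(mus, sigmas, n_samples=256):
    """Monte Carlo estimate of KL(q_m || MoE aggregate) for each modality m.

    mus, sigmas: arrays of shape (M, D) -- one diagonal-Gaussian posterior
    per modality. The aggregate prior is the uniform mixture of all M
    posteriors, so each modality is only *softly* pulled toward the others.
    """
    M, D = mus.shape
    kls = []
    for m in range(M):
        # Sample z ~ q_m (reparameterization-style draw).
        z = mus[m] + sigmas[m] * rng.standard_normal((n_samples, D))
        log_qm = gaussian_logpdf(z, mus[m], sigmas[m])
        # Mixture log-density: logsumexp over components, minus log M.
        comp = np.stack([gaussian_logpdf(z, mus[k], sigmas[k]) for k in range(M)])
        log_moe = np.logaddexp.reduce(comp, axis=0) - np.log(M)
        kls.append(np.mean(log_qm - log_moe))
    return np.array(kls)

# Three hypothetical modalities with 2-D latent posteriors.
mus = np.array([[0.0, 0.0], [1.0, -1.0], [0.5, 0.5]])
sigmas = np.ones_like(mus)
print(moe_prior_kl(mus, sigmas))  # one soft penalty per modality
```

Because each posterior appears as a mixture component with weight 1/M, every per-modality penalty is bounded above by log M, so no single modality can be pulled arbitrarily hard toward the aggregate; this is one way the soft constraint differs from sharing a single encoder output.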

Submission history

From: Thomas M. Sutter [view email]
[v1]
Fri, 8 Mar 2024 13:29:46 UTC (11,252 KB)
[v2]
Thu, 30 May 2024 11:55:49 UTC (20,876 KB)
[v3]
Fri, 31 May 2024 15:14:43 UTC (20,879 KB)
[v4]
Fri, 1 Nov 2024 10:19:01 UTC (20,880 KB)
[v5]
Tue, 7 Jan 2025 17:42:16 UTC (24,242 KB)


