View a PDF of the paper titled Enhancing Cross-Modal Contextual Congruence for Crowdfunding Success using Knowledge-infused Learning, by Trilok Padhi and 4 other authors
Abstract: The digital landscape continually evolves with multimodality, enriching the online experience for users. Creators and marketers aim to weave subtle contextual cues from various modalities into congruent content that engages users with a harmonious message. This interplay of multimodal cues is often a crucial factor in attracting users' attention. However, the richness of multimodality presents a challenge for computational modeling, as the semantic contextual cues spanning modalities must be unified to capture the true holistic meaning of the multimodal content. This contextual meaning is critical to attracting user engagement, as it conveys the intended message of the brand or organization. In this work, we incorporate external commonsense knowledge from knowledge graphs to enhance the representation of multimodal data using compact Visual Language Models (VLMs) and to predict the success of multimodal crowdfunding campaigns. Our results show that external commonsense knowledge bridges the semantic gap between the text and image modalities, and that the enhanced knowledge-infused representations improve the predictive performance for campaign success over baselines without knowledge. Our findings highlight the significance of contextual congruence in online multimodal content for engaging and successful crowdfunding campaigns.
Submission history
From: Ugur Kursuncu [view email]
[v1]
Tue, 6 Feb 2024 00:51:27 UTC (41,019 KB)
[v2]
Sun, 17 Nov 2024 21:40:50 UTC (30,168 KB)