RDRec: Rationale Distillation for LLM-based Recommendation



By Xinfeng Wang and 3 other authors

Abstract: Large language model (LLM)-based recommender models, which bridge users and items through textual prompts for effective semantic reasoning, have gained considerable attention. However, few methods consider the underlying rationales behind interactions, such as user preferences and item attributes, which limits the reasoning capability of LLMs for recommendation. This paper proposes a rationale distillation recommender (RDRec), a compact model designed to learn rationales generated by a larger language model (LM). By leveraging rationales from reviews related to users and items, RDRec builds sharper user and item profiles for recommendation. Experiments show that RDRec achieves state-of-the-art (SOTA) performance in both top-N and sequential recommendation. Our source code is released at this https URL.
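The distillation pipeline the abstract describes, where a larger LM generates rationales (user preferences and item attributes) from reviews and a compact model is trained to reproduce them, can be sketched as follows. This is a minimal illustration, not the paper's released code; the function names, prompt wording, and the toy teacher stand-in are all assumptions.

```python
# Hypothetical sketch of rationale-distillation data construction:
# a larger LM (the "teacher") turns each review into a rationale,
# and each rationale becomes a training target for a compact model.

def build_rationale_prompt(review: str) -> str:
    """Ask the larger LM to extract the rationale behind an interaction."""
    return (
        "Review: " + review + "\n"
        "Explain the user preference and item attribute "
        "that motivated this interaction."
    )

def distillation_examples(interactions, teacher):
    """Pair each (user, item, review) with a teacher-generated rationale."""
    examples = []
    for user, item, review in interactions:
        rationale = teacher(build_rationale_prompt(review))  # larger-LM call
        examples.append({
            "input": f"user {user} interacted with item {item}",
            "target": rationale,  # the compact model learns to produce this
        })
    return examples

# Toy teacher standing in for a real LLM call (illustrative only)
toy_teacher = lambda prompt: "prefers lightweight gear; item is ultralight"
data = distillation_examples([("u1", "i9", "Great ultralight tent")], toy_teacher)
print(data[0]["target"])
```

In practice the teacher would be a prompted large LM and the examples would fine-tune a smaller sequence-to-sequence recommender, but the data flow is the same: reviews in, (interaction, rationale) pairs out.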

Submission history

From: Xinfeng Wang [view email]
[v1]
Fri, 17 May 2024 07:22:02 UTC (7,671 KB)
[v2]
Fri, 14 Jun 2024 05:07:32 UTC (7,668 KB)
[v3]
Wed, 8 Jan 2025 11:21:12 UTC (7,671 KB)


