Steering Conversational Large Language Models for Long Emotional Support Conversations


Abstract: In this study, we address the challenge of enabling large language models (LLMs) to consistently adhere to emotional support strategies in extended conversations. We focus on the steerability of the Llama-2 and Llama-3 suite of models, examining their ability to maintain these strategies throughout interactions. To assess this, we introduce the Strategy Relevant Attention (SRA) metric, which quantifies the model's adherence to the prompted strategy through attention maps. To facilitate our study, we create a strategy-conditioned synthetic conversational dataset derived from the ESConv dataset. Building on the SRA metric, we propose several baselines as well as a fine-tuned model that significantly enhances the steerability of the base model in following the prompted strategy throughout the conversation. The code and data are publicly available on our GitHub.
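To make the idea of an attention-based adherence score concrete, below is a minimal sketch of an SRA-style measurement: the share of attention that the generated response tokens place on the strategy span of the prompt, averaged over layers and heads. This is an illustration under stated assumptions, not the paper's exact definition; the model name, prompt, and span indexing are hypothetical.

```python
# Hedged sketch of a Strategy Relevant Attention (SRA)-style score.
# Assumption: SRA is approximated as the attention mass that response tokens
# direct toward the strategy tokens in the prompt, averaged over layers/heads.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # illustrative; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    attn_implementation="eager",  # needed so attention weights are returned
    output_attentions=True,
)
model.eval()

prompt = (
    "Strategy: Reflection of feelings.\n"
    "Seeker: I failed my exam and feel awful.\n"
    "Supporter:"
)
response = " It sounds like you're really disappointed in yourself right now."

# Locate the strategy span and the response span inside the full token sequence.
strategy_len = len(
    tokenizer("Strategy: Reflection of feelings.", add_special_tokens=False)["input_ids"]
)
prompt_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
full_ids = tokenizer(prompt + response, return_tensors="pt")["input_ids"]

with torch.no_grad():
    out = model(full_ids)

# out.attentions: one (batch, heads, seq, seq) tensor per layer.
att = torch.stack(out.attentions).mean(dim=(0, 2))  # average layers and heads
resp_start = prompt_ids.shape[1]
strategy_slice = slice(1, 1 + strategy_len)  # skip BOS, cover the strategy tokens

# SRA-style score: average attention mass from response tokens onto strategy tokens.
sra = att[0, resp_start:, strategy_slice].sum(dim=-1).mean().item()
print(f"strategy-relevant attention: {sra:.4f}")
```

In this toy setup, a higher score would indicate that the model is attending more to the prompted strategy while generating the response; the paper's metric and aggregation details may differ.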

Submission history

From: Navid Madani [view email]
[v1]
Fri, 16 Feb 2024 05:03:01 UTC (1,043 KB)
[v2]
Sun, 15 Sep 2024 15:58:45 UTC (2,323 KB)


