An Analysis and Mitigation of the Reversal Curse



Authors: Ang Lv, Kaiyi Zhang, Shufang Xie, Quan Tu, Yuhan Chen, Ji-Rong Wen, Rui Yan

Abstract: Recent research has observed a noteworthy phenomenon in large language models (LLMs), referred to as the “reversal curse.” The reversal curse is that, for two entities $a$ and $b$ connected by a relation $R$ and its inverse $R^{-1}$, LLMs excel at handling sequences of the form “$aRb$” but struggle to process “$bR^{-1}a$,” whether in generation or comprehension. For instance, GPT-4 can accurately respond to the query “Tom Cruise’s mother is?” with “Mary Lee Pfeiffer,” but it struggles to provide a satisfactory answer when asked “Mary Lee Pfeiffer’s son is?” In this paper, we undertake the first study of how the reversal curse arises in LLMs. Our investigations reveal that the reversal curse can stem from the specific training objective, which becomes particularly evident in the widespread use of next-token prediction in most causal language models. We hope this initial investigation can draw more attention to the reversal curse, as well as to other underlying limitations of current LLMs.
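To make the abstract's point about next-token prediction concrete, here is a minimal sketch (not taken from the paper) of why the causal-LM objective is direction-sensitive. It enumerates the (context → target) pairs that next-token prediction extracts from a forward-order fact and checks whether the reverse-direction conditional ever appears among them; the token strings are illustrative placeholders.

```python
# Minimal illustration: next-token prediction trains P(x_t | x_<t) for every
# position t of a training sequence. Listing those conditionals for a
# forward-order fact shows that the reverse-order conditional is never among
# them, so it receives no direct training signal.

forward_fact = ["Tom", "Cruise", "'s", "mother", "is", "Mary", "Lee", "Pfeiffer"]
reverse_query = ["Mary", "Lee", "Pfeiffer", "'s", "son", "is"]  # wants "Tom" next

# (context, target) pairs produced by the next-token objective on the forward fact
training_pairs = [
    (tuple(forward_fact[:t]), forward_fact[t]) for t in range(1, len(forward_fact))
]

for context, target in training_pairs:
    print(f"trains P({target!r} | {' '.join(context)!r})")

# The conditional the reverse query needs, P("Tom" | "Mary Lee Pfeiffer 's son is"),
# never occurs above: no context beginning with "Mary" is paired with target "Tom".
needed = (tuple(reverse_query), "Tom")
print("Reverse conditional seen during training:", needed in set(training_pairs))
```

Under this view, fitting the forward sentence perfectly says nothing about the reverse conditional, which is the asymmetry the paper attributes to the training objective.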

Submission history

From: Ang Lv
[v1] Mon, 13 Nov 2023 17:01:12 UTC (920 KB)
[v2] Thu, 16 Nov 2023 08:35:05 UTC (865 KB)
[v3] Sun, 10 Nov 2024 10:24:33 UTC (1,674 KB)


