sDPO: Don’t Use Your Data All at Once



By Dahyun Kim and 6 other authors


Abstract: As the development of large language models (LLMs) progresses, aligning them with human preferences has become increasingly important. We propose stepwise DPO (sDPO), an extension of the recently popularized direct preference optimization (DPO) for alignment tuning. This approach divides the available preference datasets and uses them in a stepwise manner, rather than employing them all at once. We demonstrate that this facilitates the use of more precisely aligned reference models within the DPO training framework. Furthermore, sDPO yields a more performant final model, even outperforming other popular LLMs with more parameters.
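The stepwise recipe described in the abstract can be summarized in a short sketch. The snippet below is an illustrative outline only, not the authors' code: it assumes a hypothetical train_dpo helper standing in for an ordinary DPO training run, and preference_chunks standing in for the divided preference datasets. The key point is that each chunk is used in turn, with the previous step's aligned model serving as the reference model for the next step.

    # Minimal sketch of stepwise DPO (sDPO), under the assumptions above.
    # `train_dpo(policy, reference, data)` is a hypothetical helper that
    # performs one standard DPO training run and returns the updated model.

    def sdpo(sft_model, preference_chunks, train_dpo):
        """Align `sft_model` with DPO, consuming the preference data stepwise."""
        reference = sft_model      # step 1: the reference is the initial SFT model
        policy = sft_model
        for chunk in preference_chunks:
            # Train the policy against the current, more precisely aligned reference
            policy = train_dpo(policy=policy, reference=reference, data=chunk)
            # The aligned model from this step becomes the next step's reference
            reference = policy
        return policy

In contrast, vanilla DPO would call train_dpo once on the full dataset with the SFT model as the fixed reference; the stepwise loop is what lets later steps use a better-aligned reference model.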

Submission history

From: Chanjun Park
[v1] Thu, 28 Mar 2024 09:56:04 UTC (1,513 KB)
[v2] Mon, 7 Oct 2024 04:21:15 UTC (8,117 KB)


