Reinforcement Learning from Human Feedback: Whose Culture, Whose Values, Whose Perspectives?


by Kristian González Barman and 2 other authors


Abstract: We argue for the epistemic and ethical advantages of pluralism in Reinforcement Learning from Human Feedback (RLHF) in the context of Large Language Models (LLMs). Drawing on social epistemology and pluralist philosophy of science, we suggest ways in which RLHF can be made more responsive to human needs and how we can address challenges along the way. The paper concludes with an agenda for change, i.e., concrete, actionable steps to improve LLM development.

Submission history

From: Kristian González Barman
[v1] Tue, 2 Jul 2024 08:07:27 UTC (417 KB)
[v2] Fri, 17 Jan 2025 09:17:30 UTC (547 KB)


