View a PDF of the paper titled Reinforcement Learning from Human Feedback: Whose Culture, Whose Values, Whose Perspectives?, by Kristian González Barman and 2 other authors
Abstract: We argue for the epistemic and ethical advantages of pluralism in Reinforcement Learning from Human Feedback (RLHF) in the context of Large Language Models (LLMs). Drawing on social epistemology and pluralist philosophy of science, we suggest ways in which RLHF can be made more responsive to human needs and how the challenges along the way can be addressed. The paper concludes with an agenda for change, i.e. concrete, actionable steps to improve LLM development.
Submission history
From: Kristian Gonzalez Barman [view email]
[v1]
Tue, 2 Jul 2024 08:07:27 UTC (417 KB)
[v2]
Fri, 17 Jan 2025 09:17:30 UTC (547 KB)