Balancing Act: Prioritization Strategies for LLM-Designed Restless Bandit Rewards



By Shresth Verma and 3 other authors


Abstract: LLMs are increasingly used to design reward functions based on human preferences in Reinforcement Learning (RL). We focus on LLM-designed rewards for Restless Multi-Armed Bandits, a framework for allocating limited resources among agents. In applications such as public health, this approach empowers grassroots health workers to tailor automated allocation decisions to community needs. In the presence of multiple agents, altering the reward function based on human preferences can impact subpopulations very differently, leading to complex tradeoffs and a multi-objective resource allocation problem. We are the first to present a principled method, termed Social Choice Language Model, for dealing with these tradeoffs in LLM-designed rewards for multi-agent planners in general and restless bandits in particular. The novel part of our model is a transparent and configurable selection component, called an adjudicator, external to the LLM that controls complex tradeoffs via a user-selected social welfare function. Our experiments demonstrate that our model reliably selects more effective, aligned, and balanced reward functions compared to purely LLM-based approaches.
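The adjudicator described in the abstract can be read as a simple selection loop: given several LLM-proposed reward functions, evaluate the planner's per-subpopulation outcomes under each, aggregate them with the user-selected social welfare function, and keep the maximizer. The sketch below illustrates that pattern only; the names (adjudicate, evaluate_subgroups, the welfare functions) are assumptions for illustration, not the paper's actual implementation or API.

```python
# Hypothetical sketch of the adjudicator pattern: an LLM proposes candidate
# reward functions, and a transparent selection component external to the LLM
# scores them with a user-selected social welfare function.
# All names here are illustrative assumptions, not the paper's API.

import math
from typing import Callable, List

# A reward candidate maps an arm's state/features to a scalar reward.
RewardFn = Callable[[dict], float]


def utilitarian(utilities: List[float]) -> float:
    """Sum of per-subgroup utilities."""
    return sum(utilities)


def egalitarian(utilities: List[float]) -> float:
    """Utility of the worst-off subgroup (Rawlsian criterion)."""
    return min(utilities)


def nash_welfare(utilities: List[float]) -> float:
    """Product of utilities, computed in log space for numerical stability."""
    return sum(math.log(max(u, 1e-9)) for u in utilities)


def adjudicate(
    candidates: List[RewardFn],
    evaluate_subgroups: Callable[[RewardFn], List[float]],
    welfare: Callable[[List[float]], float],
) -> RewardFn:
    """Select the candidate reward whose simulated per-subgroup utilities
    maximize the chosen social welfare function."""
    scored = [(welfare(evaluate_subgroups(reward)), reward) for reward in candidates]
    return max(scored, key=lambda pair: pair[0])[1]
```

In use, `candidates` would come from LLM prompts encoding the human preference, `evaluate_subgroups` would run the restless-bandit planner (e.g., a Whittle-index policy) in simulation and return one utility per subpopulation, and the `welfare` argument is the transparent, user-configurable knob that controls the tradeoff.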

Submission history

From: Shresth Verma [view email]
[v1]
Thu, 22 Aug 2024 03:54:08 UTC (4,008 KB)
[v2]
Sun, 15 Sep 2024 07:16:38 UTC (4,008 KB)
[v3]
Thu, 16 Jan 2025 08:44:22 UTC (3,201 KB)


