AgentForge: A Flexible Low-Code Platform for Reinforcement Learning Agent Design, by Francisco Erivaldo Fernandes Junior and 1 other authors
Abstract: Developing a reinforcement learning (RL) agent often involves identifying values for numerous parameters, covering the policy, reward function, environment, and agent-internal architecture. Since these parameters are interrelated in complex ways, optimizing them is a black-box problem that proves especially challenging for nonexperts. Although existing optimization-as-a-service platforms (e.g., Vizier and Optuna) can handle such problems, they are impractical for RL systems, since the user must manually map each parameter to a distinct component, which makes the effort cumbersome. It also requires an understanding of the optimization process, which limits these systems' application beyond the machine learning field and restricts access in areas such as cognitive science, which models human decision-making. To tackle these challenges, the paper presents AgentForge, a flexible low-code platform for optimizing any parameter set across an RL system. Available at this https URL, it allows an optimization problem to be defined in a few lines of code and handed to any of the interfaced optimizers. With AgentForge, the user can optimize the parameters either individually or jointly. The paper presents an evaluation of its performance on a challenging vision-based RL problem.
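To make the manual-mapping burden concrete, the sketch below shows what tuning RL parameters directly with Optuna (one of the optimizers named above) typically looks like: every policy, reward, and environment parameter must be wired to its own suggest_* call inside an objective function. This is not AgentForge's API; the parameter names and the evaluate_agent placeholder are hypothetical stand-ins for a real training-and-evaluation run.

```python
# A minimal sketch of manual parameter mapping with Optuna (not AgentForge's API).
# evaluate_agent is a hypothetical placeholder; a real objective would train the
# RL agent with the sampled values and return its mean episodic return.
import optuna


def evaluate_agent(learning_rate: float, gamma: float, reward_scale: float) -> float:
    # Placeholder score so the sketch runs end to end.
    return -((learning_rate - 3e-4) ** 2 + (gamma - 0.99) ** 2 + (reward_scale - 1.0) ** 2)


def objective(trial: optuna.Trial) -> float:
    # Each parameter of the policy, reward function, and environment is mapped
    # by hand to a separate search-space definition.
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    gamma = trial.suggest_float("gamma", 0.9, 0.9999)
    reward_scale = trial.suggest_float("reward_scale", 0.1, 10.0, log=True)
    return evaluate_agent(learning_rate, gamma, reward_scale)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

AgentForge's contribution, per the abstract, is to let such a problem be declared in a few lines of code and routed to any interfaced optimizer, with parameters optimized individually or jointly, instead of hand-writing this mapping.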
Submission history
From: Francisco Erivaldo Fernandes Junior
[v1] Fri, 25 Oct 2024 12:53:33 UTC (5,021 KB)
[v2] Mon, 6 Jan 2025 07:32:59 UTC (1,163 KB)
[v3] Thu, 9 Jan 2025 15:12:04 UTC (1,163 KB)