Researchers improved AI agent performance on unfamiliar tasks using ‘Dungeons and Dragons’

Organizations interested in deploying AI agents must first fine-tune them, especially in workflows that often feel rote. While some organizations want agents that only perform one kind of task in one workflow, sometimes agents need to be brought into new environments with the hope that they adapt. 

Researchers from the Beijing University of Posts and Telecommunications have unveiled a new method, AgentRefine. It teaches agents to self-correct, leading to more generalized and adaptive AI agents. 

The researchers said that current tuning methods limit agents to the same tasks as their training dataset, or “held-in” tasks, and that they do not perform as well in “held-out,” or new, environments. Because they follow only the rules laid out in the training data, agents trained with these frameworks struggle to “learn” from their mistakes and cannot be made into general agents that can be brought into new workflows. 

To combat that limitation, AgentRefine aims to create more generalized agent training datasets that enable the model to learn from mistakes and fit into new workflows. In a new paper, the researchers said that AgentRefine’s goal is “to develop generalized agent-tuning data and establish the correlation between agent generalization and self-refinement.” If agents can self-correct, they will not perpetuate the errors they learned or carry those mistakes into other environments where they’re deployed. 

“We find that agent-tuning on the self-refinement data enhances the agent to explore more viable actions while meeting bad situations, thereby resulting in better generalization to new agent environments,” the researchers write. 

AI agent training inspired by D&D

Taking their cue from the tabletop roleplaying game Dungeons & Dragons, the researchers created personas, scripts for the agent to follow, and challenges. And yes, there is a Dungeon Master (DM). 

They divided data construction for AgentRefine into three areas: script generation, trajectory generation and verification. 

In script generation, the model creates a script, or guide, with information on the environment, tasks and actions personas can take. (The researchers tested AgentRefine using Llama-3-8B-Instruct, Llama-3-70B-Instruct, Mistral-7B-Instruct-v0.3, GPT-4o-mini and GPT-4o.)

During the trajectory stage, the model acts as both DM and player and generates agent data that contains errors. It assesses the actions it can take and then checks whether they contain errors. The last stage, verification, checks the script and trajectory, giving the agents trained on this data the ability to self-correct.
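
As a rough illustration of that pipeline, the sketch below strings the three stages together as prompts to a single model. The `generate` helper, the prompt wording and the VALID/INVALID check are hypothetical stand-ins for illustration; the paper’s actual data-construction code is not shown here.

```python
# Hypothetical sketch of AgentRefine-style data construction.
# The `generate` helper, prompts and validity check are assumptions,
# not the authors' implementation.

def generate(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM (e.g., Llama-3 or GPT-4o)."""
    raise NotImplementedError

def build_refinement_sample(task_seed: str) -> dict:
    # 1. Script generation: draft an environment, persona and allowed actions.
    script = generate(
        f"Write a game-style script for the task '{task_seed}': describe the "
        "environment, the persona, and the actions the persona may take."
    )

    # 2. Trajectory generation: the model plays both DM and player, producing a
    #    turn-by-turn trajectory that includes mistaken actions and their corrections.
    trajectory = generate(
        "Acting as both DM and player for the script below, produce a turn-by-turn "
        "trajectory that includes at least one mistaken action and its self-correction.\n"
        f"{script}"
    )

    # 3. Verification: confirm the script and trajectory are consistent and that
    #    every error is eventually corrected; invalid samples would be discarded.
    verdict = generate(
        "Check the following script and trajectory for consistency and confirm that "
        f"every error is later corrected. Answer VALID or INVALID.\n{script}\n{trajectory}"
    )

    return {"script": script, "trajectory": trajectory, "valid": verdict.strip() == "VALID"}
```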

Better and more diverse task abilities

The researchers found that agents trained using the AgentRefine method and dataset performed better on diverse tasks and adapted to new scenarios. These agents self-correct more often, redirecting their actions and decision-making to avoid errors and becoming more robust in the process. 

In particular, AgentRefine improved the performance of all the models on held-out tasks. 

Enterprises must make agents more task-adaptable so that they don’t simply repeat what they’ve learned and can become better decision-makers. Orchestrator agents not only “direct traffic” for multiple agents but also determine whether agents have completed tasks based on user requests. 

OpenAI’s o3 offers “program synthesis,” which could improve task adaptability. Other orchestration and training frameworks, like Microsoft’s Magentic-One, set actions for supervisor agents to learn when to move tasks to different agents. 


