MAGIC: Generating Self-Correction Guideline for In-Context Text-to-SQL, by Arian Askari and 2 other authors
Abstract: Self-correction in text-to-SQL is the process of prompting a large language model (LLM) to revise its previously generated, incorrect SQL. It commonly relies on self-correction guidelines crafted manually by human experts, which are not only labor-intensive to produce but also limited by humans' ability to identify all potential error patterns in LLM responses. We introduce MAGIC, a novel multi-agent method that automates the creation of the self-correction guideline. MAGIC uses three specialized agents: a manager, a correction, and a feedback agent. These agents collaborate on the failures of an LLM-based method on the training set to iteratively generate and refine a self-correction guideline tailored to LLM mistakes, mirroring human processes but without human involvement. Our extensive experiments show that MAGIC's guideline outperforms guidelines created by human experts. We empirically find that the guideline produced by MAGIC makes the corrections more interpretable, providing insights into the reasons behind the failures and successes of LLMs in self-correction. All agent interactions are publicly available at this https URL.
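The abstract describes the method only at a high level: three agents (manager, correction, feedback) iterate over the base model's training-set failures and distill a reusable self-correction guideline. The sketch below illustrates one way such a loop could look; the function names, prompts, failure-record fields, and the `call_llm` stub are all hypothetical placeholders, not the authors' actual prompts or implementation.

```python
# Minimal sketch of a MAGIC-style guideline-generation loop (assumptions only,
# not the paper's implementation). `call_llm` stands in for any text-in/text-out
# LLM client; `failures` is a list of training cases the base method got wrong.
from typing import Callable, Dict, List


def magic_guideline_loop(
    failures: List[Dict[str, str]],
    call_llm: Callable[[str], str],
    iterations: int = 3,
) -> str:
    guideline = ""  # self-correction guideline, refined across iterations
    for _ in range(iterations):
        for case in failures:
            # Manager agent: decides how the failure should be approached.
            plan = call_llm(
                "Plan a correction for this failed SQL.\n"
                f"Question: {case['question']}\n"
                f"Wrong SQL: {case['predicted_sql']}\n"
                f"Current guideline:\n{guideline}"
            )
            # Correction agent: proposes a revised SQL following the plan.
            revised_sql = call_llm(f"Revise the SQL following this plan:\n{plan}")
            # Feedback agent: compares against the gold SQL and distills a rule.
            rule = call_llm(
                f"Compare the revision {revised_sql!r} with the gold SQL "
                f"{case['gold_sql']!r} and state a general rule that avoids "
                "this kind of mistake."
            )
            # Manager agent: folds the new rule into the evolving guideline.
            guideline = call_llm(
                "Merge this rule into the guideline, removing duplicates:\n"
                f"Rule: {rule}\nGuideline:\n{guideline}"
            )
    return guideline
```

Under these assumptions, the returned guideline would then be prepended to the self-correction prompt at inference time, replacing a hand-written expert guideline.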
Submission history
From: Arian Askari
[v1] Tue, 18 Jun 2024 15:06:06 UTC (666 KB)
[v2] Mon, 16 Dec 2024 13:52:51 UTC (666 KB)
[v3] Sat, 21 Dec 2024 16:25:28 UTC (672 KB)