AMBROSIA: A Benchmark for Parsing Ambiguous Questions into Database Queries
Irina Saparina and Mirella Lapata
Abstract: Practical semantic parsers are expected to understand user utterances and map them to executable programs, even when these are ambiguous. We introduce a new benchmark, AMBROSIA, which we hope will inform and inspire the development of text-to-SQL parsers capable of recognizing and interpreting ambiguous requests. Our dataset contains questions showcasing three different types of ambiguity (scope ambiguity, attachment ambiguity, and vagueness), their interpretations, and corresponding SQL queries. In each case, the ambiguity persists even when the database context is provided. This is achieved through a novel approach that involves controlled generation of databases from scratch. We benchmark various LLMs on AMBROSIA, revealing that even the most advanced models struggle to identify and interpret ambiguity in questions.
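To make the notion of ambiguity that persists despite database context concrete, here is a minimal sketch of scope ambiguity in text-to-SQL. The schema, data, and question are invented for illustration and are not drawn from the AMBROSIA dataset; a single natural-language question maps to two different, equally valid SQL interpretations:

```python
# Hypothetical illustration of scope ambiguity in text-to-SQL.
# Schema and data are invented for this sketch, not taken from AMBROSIA.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE reads (student_id INTEGER, book_id INTEGER);
INSERT INTO students VALUES (1, 'Ann'), (2, 'Bo');
INSERT INTO books VALUES (1, 'Dune'), (2, 'Emma');
INSERT INTO reads VALUES (1, 1), (1, 2), (2, 1);
""")

# Question: "Which books did the students read?"
# Interpretation A (existential scope): books read by at least one student.
any_student = cur.execute("""
    SELECT DISTINCT b.title FROM books b
    JOIN reads r ON r.book_id = b.id
""").fetchall()

# Interpretation B (universal scope): books read by every student,
# expressed via relational division (double NOT EXISTS).
every_student = cur.execute("""
    SELECT b.title FROM books b
    WHERE NOT EXISTS (
        SELECT 1 FROM students s
        WHERE NOT EXISTS (
            SELECT 1 FROM reads r
            WHERE r.student_id = s.id AND r.book_id = b.id))
""").fetchall()

print(any_student)    # both books appear
print(every_student)  # only the book that all students read
```

Note that inspecting the database does not resolve the question: both queries are well-formed against the same schema and return different answers, which is the kind of persistent ambiguity the benchmark is designed to probe.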
Submission history
From: Irina Saparina
[v1] Thu, 27 Jun 2024 10:43:04 UTC (9,310 KB)
[v2] Thu, 31 Oct 2024 13:59:05 UTC (1,281 KB)