Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries

By Kiran Vodrahalli and 23 other authors

Abstract: We introduce Michelangelo: a minimal, synthetic, and unleaked long-context reasoning evaluation for large language models which is also easy to automatically score. This evaluation is derived via a novel, unifying framework for evaluations over arbitrarily long contexts which measure the model’s ability to do more than retrieve a single piece of information from its context. The central idea of the Latent Structure Queries framework (LSQ) is to construct tasks which require a model to “chisel away” the irrelevant information in the context, revealing a latent structure in the context. To verify a model’s understanding of this latent structure, we query the model for details of the structure. Using LSQ, we produce three diagnostic long-context evaluations across code and natural-language domains intended to provide a stronger signal of long-context language model capabilities. We perform evaluations on several state-of-the-art models and demonstrate both that a) the proposed evaluations are high-signal and b) that there is significant room for improvement in synthesizing long-context information.
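To make the LSQ idea concrete, the sketch below shows one way such a task could be generated: a small number of relevant operations on a latent structure (here, a Python list) are interleaved with arbitrary amounts of irrelevant filler, and the model is then queried for the final state of the structure. This is only an illustrative sketch based on the abstract's description, not the authors' actual benchmark code; the names `build_lsq_example`, `FILLER`, `num_ops`, and `filler_per_op` are hypothetical.

```python
# Minimal sketch of an LSQ-style synthetic long-context task (illustrative only).
# Relevant operations on a latent list are buried in irrelevant filler text;
# the query asks for the final list, which is easy to score automatically.

import random

FILLER = [
    "The weather report mentioned light rain in the afternoon.",
    "A committee meeting was rescheduled to next Tuesday.",
    "The museum extended its opening hours for the exhibition.",
]

def build_lsq_example(num_ops=10, filler_per_op=50, seed=0):
    rng = random.Random(seed)
    latent = []            # the latent structure the model must track
    context_lines = []

    for step in range(num_ops):
        # Relevant line: an operation that updates the latent structure.
        if latent and rng.random() < 0.3:
            latent.pop()
            context_lines.append("Operation: remove the last item from the list.")
        else:
            item = f"item_{step}"
            latent.append(item)
            context_lines.append(f"Operation: append '{item}' to the list.")

        # Irrelevant lines the model must "chisel away".
        context_lines.extend(rng.choice(FILLER) for _ in range(filler_per_op))

    prompt = "\n".join(context_lines) + "\n\nQuestion: What are the final contents of the list?"
    answer = list(latent)  # gold answer for automatic scoring
    return prompt, answer

if __name__ == "__main__":
    prompt, answer = build_lsq_example()
    print(prompt[:300], "...")
    print("Gold answer:", answer)
```

Because the filler is independent of the latent structure, the context length can be scaled arbitrarily (via `filler_per_op`) without changing the gold answer, which is the property that lets such evaluations probe long-context synthesis rather than single-fact retrieval.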

Submission history

From: Kiran Vodrahalli
[v1] Thu, 19 Sep 2024 10:38:01 UTC (296 KB)
[v2] Fri, 20 Sep 2024 00:47:33 UTC (296 KB)


