Estimating the Hallucination Rate of Generative AI

Authors: Andrew Jesson, Nicolas Beltran-Velez, Quentin Chu, Sweta Karlekar, Jannik Kossen, Yarin Gal, John P. Cunningham, David Blei

Abstract: This paper presents a method for estimating the hallucination rate for in-context learning (ICL) with generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and a prediction question and asked to generate a response. One interpretation of ICL assumes that the CGM computes the posterior predictive of an unknown Bayesian model, which implicitly defines a joint distribution over observable datasets and latent mechanisms. This joint distribution factorizes into two components: the model prior over mechanisms and the model likelihood of datasets given a mechanism. With this perspective, we define a hallucination as a generated response to the prediction question with low model likelihood given the mechanism. We develop a new method that takes an ICL problem and estimates the probability that a CGM will generate a hallucination. Our method only requires generating prediction questions and responses from the CGM and evaluating its response log probability. We empirically evaluate our method using large language models for synthetic regression and natural language ICL tasks.
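To make the quantities named in the abstract concrete, here is a minimal Monte Carlo sketch of a hallucination-rate estimate for a single ICL problem. It is an illustration under stated assumptions, not the paper's exact estimator: `generate`, `log_prob`, and `log_lik_threshold` are hypothetical placeholders for a CGM's sampling interface, its response-scoring interface, and a user-chosen log-likelihood cutoff below which a response counts as a hallucination.

```python
# Hedged sketch: Monte Carlo estimate of a hallucination rate for one
# ICL problem (dataset D, prediction question x). NOT the paper's exact
# algorithm: `generate`, `log_prob`, and `log_lik_threshold` are
# hypothetical stand-ins for a CGM's sampler, its response scorer, and
# a chosen log-likelihood cutoff.

def estimate_hallucination_rate(generate, log_prob, dataset, question,
                                log_lik_threshold, n_samples=100):
    """Return the fraction of sampled responses whose log probability
    under the model falls below `log_lik_threshold`."""
    n_low = 0
    for _ in range(n_samples):
        response = generate(dataset, question)         # y ~ p(y | D, x)
        score = log_prob(dataset, question, response)  # log p(y | D, x)
        if score < log_lik_threshold:
            n_low += 1
    return n_low / n_samples
```

Note that the abstract says the method also generates the prediction questions themselves from the CGM; a fuller version of this sketch would sample those questions as well before scoring responses.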

Submission history

From: Andrew Jesson
[v1] Tue, 11 Jun 2024 17:01:52 UTC (9,578 KB)
[v2] Sun, 13 Oct 2024 21:23:58 UTC (9,966 KB)
[v3] Thu, 31 Oct 2024 19:10:41 UTC (11,241 KB)


