Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models, by Gabriel Y. Arteaga and 1 other authors
Abstract: Uncertainty estimation is a necessary component when deploying AI in high-risk settings such as autonomous driving, medicine, or insurance. Large Language Models (LLMs) have surged in popularity in recent years, but they are prone to hallucinations, which can cause serious harm in high-risk settings. Despite their success, LLMs are expensive to train and run: they require large amounts of computation and memory, which prevents the use of ensembling methods in practice. In this work, we present a novel method that enables fast and memory-efficient training of LLM ensembles. We show that the resulting ensembles can detect hallucinations and are a viable approach in practice, as only a single GPU is needed for training and inference.
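The abstract does not spell out the scoring mechanism, but ensemble-based uncertainty estimates such as predictive entropy and mutual information are a common way to flag hallucinations. The sketch below is an illustrative assumption, not the authors' implementation: it takes next-token distributions from several ensemble members and computes these two scores; all function names and the toy data are hypothetical.

```python
# Illustrative sketch (not the paper's code): ensemble-based uncertainty
# scores that could be thresholded to flag potential hallucinations.
import numpy as np

def ensemble_uncertainty(member_probs: np.ndarray):
    """member_probs: shape (n_members, vocab_size); each row is one
    ensemble member's next-token probability distribution."""
    eps = 1e-12
    mean_probs = member_probs.mean(axis=0)                      # ensemble-averaged distribution
    predictive_entropy = -np.sum(mean_probs * np.log(mean_probs + eps))
    member_entropies = -np.sum(member_probs * np.log(member_probs + eps), axis=1)
    expected_entropy = member_entropies.mean()                  # average per-member entropy
    mutual_information = predictive_entropy - expected_entropy  # disagreement between members
    return predictive_entropy, mutual_information

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake distributions from a 4-member ensemble over a toy 10-token vocabulary.
    logits = rng.normal(size=(4, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    pe, mi = ensemble_uncertainty(probs)
    print(f"predictive entropy = {pe:.3f}, mutual information = {mi:.3f}")
    # High mutual information (strong disagreement) could be flagged as a
    # potential hallucination; any threshold would be tuned on validation data.
```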
Submission history
From: Gabriel Yanci Arteaga
[v1] Wed, 4 Sep 2024 13:59:38 UTC (2,995 KB)
[v2] Fri, 6 Dec 2024 12:39:00 UTC (2,997 KB)