LLM Reading Tea Leaves: Automatically Evaluating Topic Models with Large Language Models



Authors: Xiaohao Yang and 4 other authors

Abstract: Topic modeling is a widely used tool for unsupervised text analysis. However, comprehensively evaluating a topic model remains challenging. Existing evaluation methods are either not comparable across different models (e.g., perplexity) or capture only one aspect of a model at a time (e.g., topic quality or document representation quality), which is insufficient to reflect overall model performance. In this paper, we propose WALM (Word Agreement with Language Model), a new evaluation method for topic modeling that jointly considers the semantic quality of document representations and topics, leveraging the power of large language models (LLMs). Extensive experiments involving different types of topic models show that WALM aligns with human judgment and can serve as a complement to existing evaluation methods, bringing a new perspective to topic modeling. Our software package is available at this https URL.
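As a rough illustration of the word-agreement idea, one could score how well a topic model's top words for a document overlap with keywords an LLM generates for the same document. The function name, the word lists, and the Jaccard overlap used here are illustrative assumptions, not the paper's exact metric.

```python
def word_agreement(topic_words, llm_keywords):
    """Hypothetical stand-in for a WALM-style agreement score:
    Jaccard overlap between a topic model's top words for a document
    and LLM-generated keywords for the same document."""
    a, b = set(topic_words), set(llm_keywords)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical example: top words a topic model assigns to a document
# versus keywords an LLM produces when asked to summarize that document.
model_words = ["market", "stock", "trade", "price", "bank"]
llm_words = ["stock", "market", "economy", "price", "investor"]
score = word_agreement(model_words, llm_words)  # higher = closer agreement
```

In practice such a score would be averaged over a corpus, and a softer semantic match (e.g., embedding similarity between words) could replace the exact-match overlap used in this sketch.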

Submission history

From: Xiaohao Yang
[v1]
Thu, 13 Jun 2024 11:19:50 UTC (15,126 KB)
[v2]
Tue, 14 Jan 2025 01:21:55 UTC (8,934 KB)


