A Benchmark for Long-Form Medical Question Answering



View a PDF of the paper titled A Benchmark for Long-Form Medical Question Answering, by Pedram Hosseini and 6 other authors


Abstract: There is a lack of benchmarks for evaluating large language models (LLMs) in long-form medical question answering (QA). Most existing medical QA evaluation benchmarks focus on automatic metrics and multiple-choice questions. While valuable, these benchmarks fail to fully capture or assess the complexities of real-world clinical applications where LLMs are being deployed. Furthermore, existing studies on evaluating long-form answer generation in medical QA are primarily closed-source, lacking access to human medical expert annotations, which makes it difficult to reproduce results and enhance existing baselines. In this work, we introduce a new publicly available benchmark featuring real-world consumer medical questions with long-form answer evaluations annotated by medical doctors. We performed pairwise comparisons of responses from various open and closed-source medical and general-purpose LLMs based on criteria such as correctness, helpfulness, harmfulness, and bias. Additionally, we performed a comprehensive LLM-as-a-judge analysis to study the alignment between human judgments and LLMs. Our preliminary results highlight the strong potential of open LLMs in medical QA compared to leading closed models. Code & Data: this https URL
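To make the evaluation setup concrete, the sketch below is a hypothetical illustration, not the authors' released code, of how pairwise preference labels from medical doctors and from an LLM judge might be aggregated into win rates and a simple human-judge agreement rate. The example records, field names, and helper functions are placeholder assumptions.

```python
"""Hypothetical sketch of pairwise preference aggregation and human-judge agreement.

Not the benchmark's actual implementation; data and names are illustrative only.
"""

from collections import Counter

# Hypothetical pairwise annotations: for each consumer medical question,
# a medical doctor and an LLM judge each pick the better of two model answers
# ("A", "B", or "tie") under a criterion such as correctness or helpfulness.
EXAMPLES = [
    {"question": "Is it safe to take ibuprofen with high blood pressure?",
     "answer_a": "...", "answer_b": "...",
     "human_label": "A", "judge_label": "A"},
    {"question": "What causes recurring migraines?",
     "answer_a": "...", "answer_b": "...",
     "human_label": "B", "judge_label": "tie"},
]


def win_rates(labels):
    """Fraction of comparisons won by each side, with ties counted separately."""
    counts = Counter(labels)
    total = len(labels) or 1
    return {side: counts.get(side, 0) / total for side in ("A", "B", "tie")}


def agreement(human_labels, judge_labels):
    """Raw agreement rate between human and LLM-judge preference labels."""
    if not human_labels:
        return 0.0
    matches = sum(h == j for h, j in zip(human_labels, judge_labels))
    return matches / len(human_labels)


if __name__ == "__main__":
    human = [ex["human_label"] for ex in EXAMPLES]
    judge = [ex["judge_label"] for ex in EXAMPLES]
    print("Human win rates:", win_rates(human))
    print("Judge win rates:", win_rates(judge))
    print("Human-judge agreement:", agreement(human, judge))
```

In practice one would report such agreement per criterion (correctness, helpfulness, harmfulness, bias) and per judge model to study how closely an LLM judge tracks medical experts.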

Submission history

From: Pedram Hosseini
[v1] Thu, 14 Nov 2024 22:54:38 UTC (8,532 KB)
[v2] Tue, 19 Nov 2024 21:04:38 UTC (6,705 KB)


