Depth $F_1$: Improving Evaluation of Cross-Domain Text Classification by Measuring Semantic Generalizability

[Submitted on 20 Jun 2024]

Authors: Parker Seegmiller and 2 other authors

Abstract: Recent evaluations of cross-domain text classification models aim to measure the ability of a model to obtain domain-invariant performance in a target domain given labeled samples in a source domain. The primary strategy for this evaluation relies on assumed differences between source domain samples and target domain samples in benchmark datasets. This evaluation strategy fails to account for the similarity between source and target domains, and may mask when models fail to transfer learning to specific target samples which are highly dissimilar from the source domain. We introduce Depth $F_1$, a novel cross-domain text classification performance metric. Designed to be complementary to existing classification metrics such as $F_1$, Depth $F_1$ measures how well a model performs on target samples which are dissimilar from the source domain. We motivate this metric using standard cross-domain text classification datasets and benchmark several recent cross-domain text classification models, with the goal of enabling in-depth evaluation of the semantic generalizability of cross-domain text classification models.
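To make the abstract's idea concrete, below is a minimal Python sketch of a depth-restricted $F_1$. It is an illustration under stated assumptions, not the paper's actual metric: the function names (`centroid_depth`, `depth_f1`), the use of cosine similarity to the source-domain centroid as a stand-in depth function, and the lowest-quantile cutoff are all hypothetical choices for this sketch. The paper defines Depth $F_1$ via a notion of data depth over embeddings, which this centroid-similarity proxy only approximates.

```python
# A minimal sketch of a "depth-restricted" F1, assuming:
#  - source_emb / target_emb are sentence-embedding matrices from any encoder,
#  - depth of a target sample w.r.t. the source domain is approximated by
#    cosine similarity to the source centroid (higher = more source-like),
#  - the metric is F1 computed only over the least-deep (most dissimilar)
#    target samples.
# The paper's actual depth function and aggregation may differ.
import numpy as np
from sklearn.metrics import f1_score


def centroid_depth(source_emb: np.ndarray, target_emb: np.ndarray) -> np.ndarray:
    """Proxy depth: cosine similarity of each target embedding to the
    source-domain centroid."""
    centroid = source_emb.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    target_norm = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    return target_norm @ centroid


def depth_f1(y_true, y_pred, depths, quantile=0.25, average="binary"):
    """F1 over the target samples whose depth falls in the lowest `quantile`,
    i.e. those most dissimilar from the source domain."""
    cutoff = np.quantile(depths, quantile)
    mask = np.asarray(depths) <= cutoff
    return f1_score(np.asarray(y_true)[mask], np.asarray(y_pred)[mask], average=average)
```

In use, one would embed source and target texts with the same encoder, compute `depths = centroid_depth(source_emb, target_emb)`, and report `depth_f1(y_true, y_pred, depths)` alongside the overall $F_1$; a large gap between the two would indicate that strong aggregate performance is masking failures on target samples far from the source domain, which is the evaluation gap the abstract describes.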

Submission history

From: Parker Seegmiller
[v1]
Thu, 20 Jun 2024 19:35:17 UTC (2,036 KB)


