A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios



By Samuel Ackerman and 3 other authors

Abstract: We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of a model's answers to meaning-preserving variants of its input. Benchmark datasets are constructed by introducing naturally occurring, non-malicious perturbations, or by generating semantically equivalent paraphrases of input questions or statements. We further propose a novel metric for assessing a model's robustness, and demonstrate its benefits in the non-adversarial scenario through empirical evaluation of several models on the created datasets.
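The paper's own metric is not reproduced here, but the underlying idea of non-adversarial robustness — insensitivity of answers to meaning-preserving input variants — can be illustrated with a minimal sketch. The function below computes a simple consistency rate (a hypothetical stand-in, not the authors' proposed metric): the fraction of paraphrased questions for which the model returns the same answer as for the original question.

```python
def consistency_rate(original_answer: str, variant_answers: list[str]) -> float:
    """Fraction of meaning-preserving variants whose answer matches the
    answer given to the original question. 1.0 means fully consistent.

    This is an illustrative toy measure, not the metric proposed in the paper.
    """
    if not variant_answers:
        return 1.0  # no variants to disagree with
    matches = sum(answer == original_answer for answer in variant_answers)
    return matches / len(variant_answers)

# Toy usage: the model answered "Paris" to the original question, and was
# then asked three semantically equivalent paraphrases.
score = consistency_rate("Paris", ["Paris", "Paris", "Lyon"])
print(round(score, 2))  # 0.67
```

A real evaluation along these lines would aggregate such per-question scores over a benchmark of perturbed or paraphrased inputs, and would typically compare normalized answers (e.g. case-folded, or matched by semantic equivalence) rather than raw strings.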

Submission history

From: Samuel Ackerman
[v1] Sun, 4 Aug 2024 08:43:09 UTC (1,179 KB)
[v2] Sun, 6 Oct 2024 08:58:57 UTC (1,180 KB)


