JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning

By Chang Gao and 3 other authors

Abstract: Instruction tuning is vital for enhancing the performance of large language models (LLMs), but existing text-to-text methods, referred to as TextTuning, struggle with issues such as generalization, robustness, and controllability due to their lack of explicit task structures. We introduce JsonTuning, a structure-to-structure approach that uses JSON structures to represent tasks. This method improves generalization by clarifying task elements and their relations, boosts robustness by minimizing ambiguity, and enhances controllability by allowing precise control over outputs. We conduct an extensive comparative analysis between JsonTuning and TextTuning using various language models and benchmarks. Our findings reveal that JsonTuning consistently surpasses TextTuning in terms of performance, robustness, and controllability across different scenarios. By overcoming the limitations of TextTuning, JsonTuning demonstrates significant potential for developing more effective and reliable LLMs capable of handling diverse scenarios.
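
To make the contrast concrete, here is a minimal sketch in Python of how a TextTuning example might be recast as a JsonTuning-style structure-to-structure example. The field names ("instruction", "options", "output_schema") and the sample question are illustrative assumptions for this sketch, not the paper's actual schema.

import json

# TextTuning: task elements are flattened into free-form text, so the
# model must infer the structure (question, options, answer format) on
# its own, which invites ambiguity.
text_input = (
    "Answer the question. Question: What is the capital of France? "
    "Options: (A) Paris (B) Lyon"
)
text_output = "(A) Paris"

# JsonTuning-style sketch: task elements and their relations are made
# explicit as JSON, and the expected output structure is declared up
# front, so the response can be controlled and parsed reliably.
# NOTE: these field names are hypothetical; the paper defines its own schema.
json_input = {
    "instruction": "Answer the question.",
    "question": "What is the capital of France?",
    "options": {"A": "Paris", "B": "Lyon"},
    "output_schema": {"answer": "one of the option keys"},
}
json_output = {"answer": "A"}

# Both sides are serialized as JSON strings for tuning.
print(json.dumps(json_input))
print(json.dumps(json_output))

Under this framing, "controllability" amounts to the output schema being part of the input: the model is trained to emit exactly the declared structure rather than free-form text.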

Submission history

From: Chang Gao
[v1] Wed, 4 Oct 2023 16:44:23 UTC (113 KB)
[v2] Mon, 19 Feb 2024 13:13:28 UTC (117 KB)
[v3] Fri, 24 May 2024 13:44:12 UTC (93 KB)
[v4] Tue, 14 Jan 2025 12:55:27 UTC (109 KB)


