Chain-of-Instructions: Compositional Instruction Tuning on Large Language Models



Authors: Shirley Anugrah Hayati and 6 other authors

Abstract: Fine-tuning large language models (LLMs) on a large and diverse collection of instructions improves a model's generalization across tasks, even unseen ones. However, most existing instruction datasets contain only single instructions, and models tuned on them struggle to follow complex instructions composed of multiple subtasks. In this work, we propose a novel concept of compositional instructions called chain-of-instructions (CoI), where the output of one instruction becomes the input to the next, like a chain. Unlike the conventional practice of solving single-instruction tasks, our proposed method encourages a model to solve each subtask step by step until the final answer is reached. CoI-tuning (i.e., fine-tuning with CoI instructions) improves the model's ability to handle instructions composed of multiple subtasks as well as unseen composite tasks such as multilingual summarization. Overall, our study finds that simple CoI-tuning of existing instruction data can provide consistent generalization to solving more complex, unseen, and longer chains of instructions.
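The chaining idea at the heart of CoI is easy to picture as a pipeline over model calls. Below is a minimal sketch in Python, assuming a hypothetical `generate(instruction, text)` helper that stands in for an LLM inference call; it illustrates the input-output chaining described in the abstract, not the paper's actual training or inference code.

```python
# Minimal sketch of chain-of-instructions (CoI) composition: the output of
# each instruction becomes the input to the next. `generate` below is a
# hypothetical placeholder for a model call, not part of any released code.

def generate(instruction: str, text: str) -> str:
    """Placeholder for an LLM call that applies `instruction` to `text`."""
    raise NotImplementedError("Replace with your model's inference call.")

def solve_chain(instructions: list[str], initial_input: str) -> str:
    """Apply each instruction in sequence, chaining outputs to inputs."""
    current = initial_input
    for step, instruction in enumerate(instructions, start=1):
        # Each subtask is solved step by step until the final answer.
        current = generate(instruction, current)
        print(f"Subtask {step}: {instruction!r} -> {current[:60]}...")
    return current

# Example composite task resembling multilingual summarization:
# first summarize an English document, then translate the summary.
chain = [
    "Summarize the following text in one sentence.",
    "Translate the following text into French.",
]
# final_answer = solve_chain(chain, source_document)
```

Under this framing, a two-subtask chain such as the one above corresponds to the "unseen composite tasks" the abstract mentions: each link is an ordinary single instruction, and only their composition is new.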

Submission history

From: Shirley Anugrah Hayati
[v1] Sun, 18 Feb 2024 10:10:40 UTC (3,092 KB)
[v2] Mon, 24 Jun 2024 22:43:57 UTC (4,752 KB)
[v3] Fri, 3 Jan 2025 22:50:35 UTC (7,476 KB)


