Seed-Free Synthetic Data Generation Framework for Instruction-Tuning LLMs: A Case Study in Thai

[Submitted on 23 Nov 2024]

Authors: Parinthapat Pengpun and 3 other authors

Abstract: We present a synthetic data approach for instruction-tuning large language models (LLMs) for low-resource languages in a data-efficient manner, specifically focusing on Thai. We identify three key properties that contribute to the effectiveness of instruction-tuning datasets: fluency, diversity, and cultural context. We propose a seed-data-free framework for generating synthetic instruction-tuning data that incorporates these essential properties. Our framework employs an LLM to generate diverse topics, retrieve relevant contexts from Wikipedia, and create instructions for various tasks, such as question answering, summarization, and conversation. The experimental results show that our best-performing synthetic dataset, which incorporates all three key properties, achieves competitive performance using only 5,000 instructions when compared to state-of-the-art Thai LLMs trained on hundreds of thousands of instructions. Our code and dataset are publicly available at this https URL.
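
To make the three-stage pipeline described in the abstract concrete, here is a minimal Python sketch of a topic-then-context-then-instruction generation loop. It is not the authors' implementation: the `generate` and `retrieve_wikipedia` callables, the prompts, and the task list are illustrative assumptions; only the three stages, the task types, and the 5,000-instruction budget come from the abstract.

```python
# A minimal sketch of the seed-free generation pipeline, assuming the caller
# supplies an LLM text-generation function and a Wikipedia retrieval function.
from typing import Callable, List, Dict
import random

TASKS = ["question answering", "summarization", "conversation"]  # tasks named in the abstract

def build_synthetic_dataset(
    generate: Callable[[str], str],            # any LLM completion function (assumption)
    retrieve_wikipedia: Callable[[str], str],  # returns an article excerpt for a topic (assumption)
    n_instructions: int = 5000,                # the paper's best dataset uses 5,000 instructions
) -> List[Dict[str, str]]:
    dataset: List[Dict[str, str]] = []
    while len(dataset) < n_instructions:
        # 1) Diversity: ask the LLM for a fresh topic instead of relying on seed data.
        topic = generate("Suggest one diverse topic relevant to Thai culture. Answer with the topic only.")
        # 2) Cultural context and fluency: ground the instruction in retrieved Wikipedia text.
        context = retrieve_wikipedia(topic)
        if not context:
            continue  # skip topics with no usable article
        # 3) Create an instruction-response pair for a randomly chosen task type.
        task = random.choice(TASKS)
        instruction = generate(
            f"Using the passage below, write a Thai {task} instruction and its answer.\n\n{context}"
        )
        dataset.append({"topic": topic, "task": task, "instruction": instruction})
    return dataset
```

In this reading, each of the three dataset properties maps to one stage: topic generation drives diversity, Wikipedia retrieval supplies cultural context, and the final LLM rewrite is responsible for fluency.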

Submission history

From: Peerat Limkonchotiwat
[v1] Sat, 23 Nov 2024 07:50:59 UTC (8,438 KB)


