View a PDF of the paper titled SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking, by Zhuang Li and 5 other authors
Abstract: Recent studies have shown that maintaining a consistent response style from human experts and enhancing data quality in training sets can significantly improve the performance of fine-tuned Large Language Models (LLMs) while reducing the number of training examples needed. However, the precise definition of style and the relationship between style, data quality, and LLM performance remain unclear. This research identifies two key stylistic elements in responses: linguistic form and semantic surprisal. We find that, among training data of comparable quality, higher consistency in these response elements leads to better LLM performance. Inspired by this, we introduce Style Consistency-Aware Response Ranking (SCAR), which automatically prioritizes instruction-response pairs in the training set based on their response stylistic consistency. By selecting the most style-consistent examples, sometimes as few as 0.7% of the full dataset, the fine-tuned LLMs can match or even surpass the performance of models trained on the entire dataset in coding and open-ended question-answering benchmarks. Code and data are available at this https URL.
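The selection step the abstract describes, namely ranking instruction-response pairs by a style-consistency score and keeping only a small top fraction (e.g. 0.7%), can be sketched as follows. This is a minimal illustration, not the paper's implementation: `style_consistency_score` stands in for SCAR's trained ranker, and the toy scores are invented for the example.

```python
def select_top_fraction(pairs, score_fn, fraction=0.007):
    """Rank instruction-response pairs by a style-consistency score
    and keep the top `fraction` of the dataset (default 0.7%)."""
    ranked = sorted(pairs, key=score_fn, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]

# Toy scores standing in for the learned SCAR ranker (hypothetical values).
pairs = [("inst1", "resp1"), ("inst2", "resp2"), ("inst3", "resp3")]
toy_scores = {"resp1": 0.2, "resp2": 0.9, "resp3": 0.5}

# Keep the top ~third of this 3-example toy set: the highest-scoring pair.
selected = select_top_fraction(pairs, lambda p: toy_scores[p[1]], fraction=0.34)
print(selected)  # -> [('inst2', 'resp2')]
```

Models are then fine-tuned only on the selected subset, which, per the abstract, can match or surpass training on the full dataset.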
Submission history
From: Zhuang Li [view email]
[v1]
Sun, 16 Jun 2024 10:10:37 UTC (1,104 KB)
[v2]
Mon, 1 Jul 2024 14:55:01 UTC (1,104 KB)
[v3]
Sat, 6 Jul 2024 09:29:54 UTC (1,104 KB)
[v4]
Wed, 10 Jul 2024 08:22:10 UTC (1,104 KB)
[v5]
Wed, 2 Oct 2024 16:46:54 UTC (1,562 KB)