Zyda: A 1.3T Dataset for Open Language Modeling

Authors: Yury Tokpanov and 6 other authors

Abstract: The size of large language models (LLMs) has scaled dramatically in recent years, and their computational and data requirements have surged correspondingly. State-of-the-art language models, even at relatively small sizes, typically require training on at least a trillion tokens. This rapid advancement has eclipsed the growth of open-source datasets available for large-scale LLM pretraining. In this paper, we introduce Zyda (Zyphra Dataset), a permissively licensed dataset comprising 1.3 trillion tokens, assembled by integrating several major, respected open-source datasets into a single, high-quality corpus. We apply rigorous filtering and deduplication processes, both within and across datasets, to maintain and enhance the quality derived from the original datasets. Our evaluations show that Zyda not only competes favorably with other open datasets such as Dolma, FineWeb, and RefinedWeb, but also substantially improves the performance of comparable models from the Pythia suite. Our rigorous data processing significantly enhances Zyda's effectiveness, allowing it to outperform even the best of its constituent datasets when they are used independently.
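The abstract summarizes the pipeline as rigorous filtering and deduplication, both within and across the constituent datasets, but does not specify the algorithm here. As a rough illustration, below is a minimal, self-contained Python sketch of one standard approach to cross-dataset fuzzy deduplication, MinHash over word shingles; the shingle size, permutation count, similarity threshold, and example documents are illustrative assumptions, not Zyda's published configuration.

import hashlib

NUM_PERM = 128       # number of hash permutations (assumed, a common choice)
SHINGLE_SIZE = 5     # word 5-grams (assumed)
THRESHOLD = 0.8      # estimated-Jaccard cutoff for "duplicate" (assumed)

def shingles(text, n=SHINGLE_SIZE):
    # Overlapping word n-grams; very short texts yield a single shingle.
    words = text.lower().split()
    for i in range(max(len(words) - n + 1, 1)):
        yield " ".join(words[i:i + n])

def minhash_signature(text):
    # One seeded hash family per permutation; keep the minimum hash value
    # of any shingle under each seed. The fraction of equal components
    # between two signatures estimates the Jaccard similarity of the
    # underlying shingle sets.
    sig = []
    for seed in range(NUM_PERM):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        ))
    return tuple(sig)

def estimated_jaccard(a, b):
    return sum(x == y for x, y in zip(a, b)) / NUM_PERM

def dedup(docs):
    # Quadratic scan for clarity; a real trillion-token pipeline would
    # bucket signatures with locality-sensitive hashing so that only
    # candidate pairs are ever compared.
    kept, signatures = [], []
    for doc in docs:
        sig = minhash_signature(doc)
        if all(estimated_jaccard(sig, s) < THRESHOLD for s in signatures):
            kept.append(doc)
            signatures.append(sig)
    return kept

if __name__ == "__main__":
    corpus = [
        "large language models require careful filtering and deduplication of"
        " their pretraining corpora because repeated text wastes compute and"
        " can degrade the quality of the final model",
        "large language models require careful filtering and deduplication of"
        " their pretraining corpora because repeated text wastes compute and"
        " can degrade the quality of the final network",
        "zyda combines several respected open source datasets into a single"
        " corpus using consistent filtering across all of them",
    ]
    # The first two documents differ by one word, so their estimated
    # similarity should exceed the threshold and one is dropped: prints 2.
    print(len(dedup(corpus)))

The quadratic comparison above is only for readability; at trillion-token scale, signatures are bucketed with locality-sensitive hashing so that near-duplicates land in the same bucket and only those candidates are compared.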

Submission history

From: Quentin Anthony
[v1] Tue, 4 Jun 2024 05:47:17 UTC (1,690 KB)
[v2] Tue, 3 Sep 2024 19:11:11 UTC (1,692 KB)


