Scaling Efficient LLMs



Abstract: Trained LLMs are typically sparse in that most of the parameters are zero, raising questions about efficiency. In response, we inquire into efficient LLMs, i.e., those with the fewest parameters that achieve the desired accuracy on a training corpus. Specifically, we compare theoretical and empirical estimates for training loss to obtain upper and lower bounds on the number of unique sequences in a natural training corpus as a function of its size. Our result implies (1) to double the number of skills represented in a training corpus, the corpus must scale more than fourfold; (2) for efficient LLMs, the number of parameters N and the size D of a natural training corpus scale as $N \propto D^{0.44}$; (3) if the number of parameters of an LLM is smaller than the number of unique sequences in the training corpus, scaling up can uncover emergent skills.
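As a rough illustration of the arithmetic behind these claims, here is a minimal Python sketch (not the authors' code). The 0.44 exponent is taken directly from the abstract; the 0.5 skill exponent used for the "more than fourfold" claim is an assumed placeholder consistent with the stated bound, and the function names are hypothetical.

```python
# Illustrative sketch of the scaling arithmetic quoted in the abstract.
# Assumptions: N ∝ D^0.44 for efficient LLMs (from the abstract), and skills
# growing no faster than D^0.5 (placeholder consistent with "more than fourfold").

def scale_params(n0: float, d_ratio: float, exponent: float = 0.44) -> float:
    """Parameter count after scaling the corpus by d_ratio, given N ∝ D^exponent."""
    return n0 * d_ratio ** exponent

def corpus_growth_to_double_skills(skill_exponent: float = 0.5) -> float:
    """Corpus growth factor needed to double skills if skills ∝ D^skill_exponent."""
    return 2.0 ** (1.0 / skill_exponent)

if __name__ == "__main__":
    # Doubling the corpus raises the efficient parameter count by roughly 36%.
    print(scale_params(n0=1.0, d_ratio=2.0))        # ≈ 1.36
    # With skills ∝ D^0.5, doubling skills needs a 4x corpus; the paper's
    # bound says strictly more than 4x.
    print(corpus_growth_to_double_skills())         # 4.0
```

Under these assumptions, doubling the corpus buys only about a 36% increase in the efficient parameter count, while doubling the represented skills requires at least a fourfold larger corpus.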

Submission history

From: B.N. Kausik
[v1] Thu, 22 Feb 2024 18:06:19 UTC (184 KB)
[v2] Mon, 6 Jan 2025 18:25:13 UTC (309 KB)
[v3] Tue, 7 Jan 2025 16:58:05 UTC (309 KB)


