Performance Law of Large Language Models



By Chuhan Wu and one other author

Abstract: Guided by the belief in the scaling law, large language models (LLMs) have achieved impressive performance in recent years. However, the scaling law only gives a qualitative estimate of loss, which is influenced by various factors such as model architecture, data distribution, tokenizer, and computation precision. Thus, estimating the real performance of LLMs under different training settings, rather than their loss, may be quite useful in practical development. In this article, we present an empirical equation named the “Performance Law” to directly predict the MMLU score of an LLM, a widely used metric of the general capability of LLMs in real-world conversations and applications. Based on only a few key hyperparameters of the LLM architecture and the size of the training data, we obtain quite accurate MMLU predictions for various LLMs of diverse sizes and architectures developed by different organizations in different years. The performance law can be used to guide the choice of LLM architecture and the effective allocation of computational resources without extensive experiments.
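The abstract does not state the equation itself, but the workflow it describes can be illustrated with a small sketch. The snippet below is a hypothetical, minimal example, not the paper's actual performance law: the feature set (layer count, hidden size, training tokens), the log-linear functional form, and all data and coefficients are assumptions made purely for illustration of how an empirical MMLU predictor could be fit from a few architecture and data descriptors.

```python
# Hypothetical sketch of fitting a "performance law"-style MMLU predictor.
# NOTE: the features, functional form, and toy data below are assumptions for
# illustration only; they are NOT the equation or coefficients from the paper.
import numpy as np

# Toy records: (num_layers, hidden_size, training_tokens_in_trillions, observed MMLU)
records = [
    (32, 4096,  2.0, 63.0),
    (40, 5120,  3.0, 68.0),
    (80, 8192, 15.0, 79.0),
    (24, 2048,  1.0, 45.0),
]

# Log-linear design matrix: MMLU ~ w0 + w1*log(L) + w2*log(h) + w3*log(T)
X = np.array([[1.0, np.log(l), np.log(h), np.log(t)] for l, h, t, _ in records])
y = np.array([m for *_, m in records])

# Least-squares fit of the coefficients on the observed scores
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_mmlu(num_layers: int, hidden_size: int, tokens_trillions: float) -> float:
    """Predict an MMLU score from a few architecture/data descriptors (illustrative only)."""
    feats = np.array([1.0, np.log(num_layers), np.log(hidden_size), np.log(tokens_trillions)])
    return float(feats @ w)

# Example: a hypothetical 48-layer, 6144-dim model trained on 6T tokens
print(f"Predicted MMLU: {predict_mmlu(48, 6144, 6.0):.1f}")
```

The point of the sketch is only the shape of the procedure the abstract describes: log-transform a handful of descriptors, fit an empirical formula to observed scores, and then predict a new model's MMLU without running expensive training or evaluation; the actual hyperparameters and coefficients are given in the full paper.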

Submission history

From: Chuhan Wu
[v1] Mon, 19 Aug 2024 11:09:12 UTC (128 KB)
[v2] Fri, 23 Aug 2024 12:14:18 UTC (128 KB)
[v3] Tue, 10 Sep 2024 02:12:29 UTC (129 KB)
[v4] Fri, 13 Sep 2024 12:28:45 UTC (129 KB)


