Efficient Parallelization Layouts for Large-Scale Distributed Model Training



Authors: Johannes Hagemann and 4 other authors

Abstract: Efficiently training large language models requires parallelizing across hundreds of hardware accelerators and invoking various compute and memory optimizations. When combined, many of these strategies have complex interactions regarding the final training efficiency. Prior work tackling this problem did not have access to the latest set of optimizations, such as FlashAttention or sequence parallelism. In this work, we conduct a comprehensive ablation study of possible training configurations for large language models. We distill this large study into several key recommendations for the most efficient training. For instance, we find that using a micro-batch size of 1 usually enables the most efficient training layouts. Larger micro-batch sizes necessitate activation checkpointing or higher degrees of model parallelism and also lead to larger pipeline bubbles. Our most efficient configurations enable us to achieve state-of-the-art training efficiency results over a range of model sizes, most notably a Model FLOPs utilization of 70.5% when training a Llama 13B model.
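As a concrete reference for the headline metric, the sketch below shows how Model FLOPs Utilization (MFU) is commonly computed, assuming the standard 6 * N * D approximation for training FLOPs per token; the paper's exact accounting may differ, and the example numbers are purely illustrative, not taken from the paper.

# Minimal sketch of Model FLOPs Utilization (MFU), the efficiency metric the
# abstract reports (70.5% for Llama 13B). Assumes the common 6 * N * D
# approximation for training FLOPs; the paper's exact accounting may differ.

def model_flops_utilization(n_params, tokens_per_second, n_devices, peak_flops_per_device):
    """Fraction of the hardware's theoretical peak FLOPs spent on model math."""
    achieved_flops_per_s = 6 * n_params * tokens_per_second  # forward + backward passes
    peak_flops_per_s = n_devices * peak_flops_per_device
    return achieved_flops_per_s / peak_flops_per_s

# Purely illustrative numbers: a 13B-parameter model on 256 accelerators,
# each with a 312 TFLOP/s BF16 peak.
mfu = model_flops_utilization(13e9, 700_000, 256, 312e12)
print(f"MFU: {mfu:.1%}")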

Submission history

From: Konstantin Dobler
[v1] Thu, 9 Nov 2023 18:59:38 UTC (443 KB)
[v2] Sun, 10 Dec 2023 14:56:18 UTC (444 KB)
[v3] Tue, 24 Sep 2024 15:42:51 UTC (472 KB)


