View a PDF of the paper titled How to use model architecture and training environment to estimate the energy consumption of DL training, by Santiago del Rey and 3 other authors
Abstract: To raise awareness of the huge impact Deep Learning (DL) has on the environment, several works have tried to estimate the energy consumption and carbon footprint of DL-based systems across their life cycle. However, the estimations for energy consumption in the training stage usually rely on assumptions that have not been thoroughly tested. This study aims to move past these assumptions by leveraging the relationship between energy consumption and two relevant design decisions in DL training: model architecture and training environment. To investigate these relationships, we collect multiple metrics related to energy efficiency and model correctness during the models’ training. Then, we outline the trade-offs between the measured energy consumption and the models’ correctness regarding model architecture, and their relationship with the training environment. Finally, we study the training’s power consumption behavior and propose four new energy estimation methods. Our results show that selecting the proper model architecture and training environment can reduce energy consumption dramatically (up to 80.72%) at the cost of negligible decreases in correctness. Also, we find evidence that GPUs should scale with the models’ computational complexity for better energy efficiency. Furthermore, we show that current energy estimation methods are unreliable and propose alternatives that are twice as precise.
Submission history
From: Santiago del Rey [view email]
[v1]
Fri, 7 Jul 2023 12:07:59 UTC (318 KB)
[v2]
Tue, 18 Jul 2023 10:54:51 UTC (318 KB)
[v3]
Wed, 3 Jan 2024 15:20:31 UTC (170 KB)
[v4]
Thu, 21 Nov 2024 19:09:26 UTC (1,451 KB)