[Submitted on 19 Dec 2024]
TinyLLM: A Framework for Training and Deploying Language Models at the Edge Computers, by Savitha Viswanadh Kandala and 1 other author
Abstract: Language models have gained significant interest due to their general-purpose capabilities, which appear to emerge as models are scaled to increasingly larger parameter sizes. However, these large models impose stringent requirements on computing systems, demanding significant memory and processing resources for inference. This makes inference on mobile and edge devices challenging, often requiring remotely hosted models to be invoked via network calls. Remote inference, in turn, introduces issues such as latency, unreliable network connectivity, and privacy concerns. To address these challenges, we explored the possibility of deviating from the trend of increasing model size. Instead, we hypothesize that much smaller models (~30-120M parameters) can outperform their larger counterparts on specific tasks by carefully curating the data used for pre-training and fine-tuning. We investigate this within the context of deploying edge-device models to support sensing applications. Through a systematic study, we trained several foundational models and found that small models can run locally on edge devices, achieving high token rates and accuracy. Based on these findings, we developed a framework that allows users to train foundational models tailored to their specific applications and deploy them at the edge.
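The abstract reports that models in the ~30-120M parameter range achieve high token rates when run locally. As a rough illustration of how such on-device throughput might be measured, the sketch below times greedy generation for a small causal LM using the Hugging Face transformers API. This is not the authors' code; the checkpoint name is a placeholder assumption chosen only because it falls in the stated parameter range.

```python
# Minimal sketch (not the authors' implementation): measure local token
# throughput for a small causal LM, as a proxy for edge-device inference speed.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint (~70M parameters, within the 30-120M range the
# abstract discusses); the paper's own models are not assumed here.
MODEL_NAME = "EleutherAI/pythia-70m"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

prompt = "Sensor reading summary:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    start = time.perf_counter()
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    elapsed = time.perf_counter() - start

# Count only newly generated tokens, excluding the prompt.
new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens} tokens in {elapsed:.2f}s -> {new_tokens / elapsed:.1f} tokens/s")
```

On a typical edge-class CPU, a model of this size fits comfortably in memory and generates without a GPU, which is the regime the paper targets.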
Submission history
From: Savitha Viswanadh Kandala
[v1] Thu, 19 Dec 2024 12:28:27 UTC (3,759 KB)