EMS-SD: Efficient Multi-sample Speculative Decoding for Accelerating Large Language Models



Authors: Yunsheng Ni and 4 other authors

Abstract: Speculative decoding has emerged as a pivotal technique for accelerating inference in Large Language Models (LLMs). Despite recent research aimed at improving prediction efficiency, multi-sample speculative decoding has been overlooked because the number of accepted tokens varies across samples within a batch during the verification phase. The vanilla method adds padding tokens to keep the number of new tokens consistent across samples; however, this increases computational and memory-access overhead and thereby reduces the speedup ratio. We propose a novel method that resolves the inconsistency in the number of tokens accepted by different samples without increasing memory or compute overhead. Furthermore, our method handles inconsistent prediction tokens across samples without adding padding tokens. Extensive experiments demonstrate the efficacy of our method. Our code is available at this https URL.
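A minimal sketch (not the authors' implementation) of the trade-off the abstract describes: when samples in a batch accept different numbers of draft tokens, vanilla batching pads every sample to the longest acceptance, while a padding-free scheme processes only each sample's own accepted tokens. Function names and the per-sample acceptance counts are illustrative assumptions.

```python
# Illustrative comparison of padded vs padding-free token counts in one
# verification step of multi-sample speculative decoding. `accepted` holds
# the number of draft tokens each sample in the batch accepted.

def vanilla_padded_tokens(accepted):
    """Vanilla batching: every sample is padded to the longest acceptance,
    so the batch processes max(accepted) tokens per sample."""
    return len(accepted) * max(accepted)

def padding_free_tokens(accepted):
    """Padding-free batching: each sample contributes only its own accepted
    tokens; per-sample bookkeeping (e.g. KV-cache offsets) replaces padding."""
    return sum(accepted)

accepted = [1, 4, 2, 3]  # tokens accepted per sample in one step
print(vanilla_padded_tokens(accepted))  # 16 tokens processed (with padding)
print(padding_free_tokens(accepted))    # 10 tokens processed (no padding)
```

The gap between the two counts (here 16 vs 10) is pure padding overhead, which grows with batch size and with the variance of acceptance lengths — the inefficiency the proposed method removes.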

Submission history

From: Yunsheng Ni
[v1]
Mon, 13 May 2024 08:24:21 UTC (221 KB)
[v2]
Mon, 14 Oct 2024 02:55:33 UTC (242 KB)


