SpaceByte: Towards Deleting Tokenization from Large Language Modeling
Kevin Slagle
Abstract: Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity. To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling. SpaceByte consists of a byte-level Transformer model, with larger Transformer blocks inserted in the middle of the layers. We find that performance is significantly improved by applying these larger blocks only after certain bytes, such as space characters, which typically denote word boundaries. Our experiments show that for a fixed training and inference compute budget, SpaceByte outperforms other byte-level architectures and roughly matches the performance of tokenized Transformer architectures.
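The abstract's core idea, running cheap byte-level blocks everywhere and larger blocks only at word-boundary bytes, can be illustrated with a short sketch. The following PyTorch snippet is a minimal sketch, not the paper's implementation: the class name SpaceByteSketch, the dimensions d_local and d_global, the rule "a boundary byte is any byte that follows a space", the reuse of one local block before and after the global block, and the omission of causal attention masks are all simplifying assumptions made here for brevity.

    import torch
    import torch.nn as nn

    class SpaceByteSketch(nn.Module):
        """Hypothetical sketch: a cheap local block on every byte, a larger
        global block applied only at word-boundary bytes (e.g. after spaces).
        Causal masking and other training details are omitted for brevity."""

        def __init__(self, d_local=384, d_global=768, n_heads=6):
            super().__init__()
            self.embed = nn.Embedding(256, d_local)  # one embedding per byte value
            self.local_block = nn.TransformerEncoderLayer(
                d_local, n_heads, dim_feedforward=4 * d_local, batch_first=True)
            self.up = nn.Linear(d_local, d_global)
            self.global_block = nn.TransformerEncoderLayer(
                d_global, n_heads, dim_feedforward=4 * d_global, batch_first=True)
            self.down = nn.Linear(d_global, d_local)
            self.lm_head = nn.Linear(d_local, 256)

        def forward(self, byte_ids):                      # byte_ids: (1, seq_len)
            h = self.local_block(self.embed(byte_ids))    # cheap block on every byte
            # Assumed boundary rule: a boundary byte is any byte that follows a space.
            is_space = byte_ids == ord(" ")
            boundary = torch.zeros_like(is_space)
            boundary[:, 1:] = is_space[:, :-1]
            idx = boundary[0].nonzero(as_tuple=True)[0]   # positions of boundary bytes
            if idx.numel() > 0:
                g = self.global_block(self.up(h[:, idx])) # large block, few positions
                h = h.clone()
                h[:, idx] = h[:, idx] + self.down(g)      # merge global info back in
            h = self.local_block(h)                       # another cheap byte-level pass
            return self.lm_head(h)                        # next-byte logits

    # Usage: next-byte logits for a short byte sequence.
    ids = torch.tensor([list(b"byte level models")])
    logits = SpaceByteSketch()(ids)                       # shape (1, 17, 256)

The point of the sketch is the asymmetry: every byte passes through the small local block, while only the comparatively rare boundary bytes pass through the larger global block, which is how the abstract's "larger blocks only after certain bytes, such as space characters" keeps byte-level modeling within a tokenizer-like compute budget.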
Submission history
From: Kevin Slagle
[v1] Mon, 22 Apr 2024 17:59:29 UTC (107 KB)
[v2] Thu, 23 May 2024 16:41:41 UTC (109 KB)
[v3] Sun, 6 Oct 2024 02:17:26 UTC (109 KB)