Reorganizing attention-space geometry with expressive attention, by Claudius Gros
Abstract: Attention regulates information transfer between tokens. For this, query and key vectors are compared, typically in terms of a scalar product, $\mathbf{Q}^T\mathbf{K}$, together with a subsequent softmax normalization. In geometric terms, the standard dot-product attention (DPA) leads to large/small attention weights for parallel/antiparallel queries and keys. Here we study expressive attention (EA), which is based on $(\mathbf{Q}^T\mathbf{K})^2$, the squared dot product. In this case, attention is enhanced when query and key are either parallel or antiparallel, and suppressed for orthogonal configurations. EA can be introduced into any attention-based code without additional compute costs or memory requirements. For a series of autoregressive prediction tasks, we find that expressive attention performs at least as well as vanilla DPA. With increasing task complexity, EA is observed to outperform DPA by increasing margins, which also holds for multi-task settings. For a given model size, EA manages to achieve 100% performance for a range of complexity levels not accessible to DPA. Our results show that it is possible to reorganize the geometry of the matching condition in the space of attention heads without loss of performance.
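The abstract states that EA replaces the DPA logit $\mathbf{Q}^T\mathbf{K}$ with its square before the softmax normalization. The following is a minimal PyTorch sketch of that idea for contrast with vanilla DPA; the scaling factor, causal masking, and tensor shapes are illustrative assumptions and are not specified in the abstract.

```python
import torch
import torch.nn.functional as F


def expressive_attention(Q, K, V):
    """Sketch of expressive attention (EA): logits are the squared
    query-key dot products, so parallel and antiparallel query/key
    pairs both receive large weights, while orthogonal pairs are
    suppressed. Q, K, V have shape (batch, heads, seq_len, d_head).
    """
    d = Q.size(-1)
    seq_len = Q.size(-2)
    scores = torch.matmul(Q, K.transpose(-2, -1))   # Q^T K
    scores = scores.pow(2) / d                      # (Q^T K)^2, scaling assumed
    # Causal mask for autoregressive prediction (assumed setting).
    mask = torch.triu(
        torch.ones(seq_len, seq_len, dtype=torch.bool, device=Q.device),
        diagonal=1,
    )
    scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return torch.matmul(weights, V)


def dot_product_attention(Q, K, V):
    """Vanilla DPA for comparison: softmax over scaled Q^T K."""
    d = Q.size(-1)
    seq_len = Q.size(-2)
    scores = torch.matmul(Q, K.transpose(-2, -1)) / d**0.5
    mask = torch.triu(
        torch.ones(seq_len, seq_len, dtype=torch.bool, device=Q.device),
        diagonal=1,
    )
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.matmul(F.softmax(scores, dim=-1), V)
```

As the sketch illustrates, switching from DPA to EA amounts to squaring the attention logits, so no extra parameters, memory, or matrix multiplications are introduced.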
Submission history
From: Claudius Gros
[v1] Fri, 26 Jul 2024 08:41:58 UTC (351 KB)
[v2] Wed, 8 Jan 2025 09:30:47 UTC (360 KB)