Positional Encoding Helps Recurrent Neural Networks Handle a Large Vocabulary

Abstract: This study reports an unintuitive finding that positional encoding enhances learning in recurrent neural networks (RNNs). Positional encoding is a high-dimensional representation of the time indices of input data. Most famously, positional encoding complements Transformer neural networks, which lack an inherent mechanism for representing data order. By contrast, RNNs can encode the temporal information of data points on their own, which renders their use of positional encoding seemingly redundant. Nonetheless, investigations on synthetic benchmarks reveal an advantage of coupling positional encoding with RNNs, especially when handling a large vocabulary that yields low-frequency tokens. Further scrutiny reveals that these low-frequency tokens destabilize the gradients of vanilla RNNs, and that positional encoding resolves this instability. These results shed new light on the utility of positional encoding beyond its canonical role as a timekeeper for Transformers.
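
For concreteness, the sketch below illustrates the kind of coupling the abstract describes: sinusoidal positional encodings (the representation popularized by Transformers) added to token embeddings before a recurrent layer. The choice of an LSTM, the additive combination, and the dimensions are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch, assuming sinusoidal positional encodings and a standard
# PyTorch LSTM; the paper's actual architecture and hyperparameters may differ.
import math
import torch
import torch.nn as nn


def sinusoidal_positional_encoding(seq_len: int, dim: int) -> torch.Tensor:
    """Return a (seq_len, dim) matrix of high-dimensional time-index representations."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)      # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32)
                         * (-math.log(10000.0) / dim))                       # (dim/2,)
    pe = torch.zeros(seq_len, dim)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe


class PositionallyEncodedRNN(nn.Module):
    """Token embeddings plus positional encoding, fed to a recurrent layer."""

    def __init__(self, vocab_size: int, dim: int, hidden: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer ids
        x = self.embed(tokens)                                    # (batch, seq_len, dim)
        pe = sinusoidal_positional_encoding(x.size(1), x.size(2)).to(x.device)
        x = x + pe                                                # inject time indices
        h, _ = self.rnn(x)
        return self.out(h)                                        # next-token logits


# Usage: a large vocabulary (hence many low-frequency tokens) is the regime the
# abstract highlights; the vocabulary size here is arbitrary.
model = PositionallyEncodedRNN(vocab_size=50_000, dim=128, hidden=256)
logits = model(torch.randint(0, 50_000, (4, 32)))                 # (4, 32, 50000)
```
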

Submission history

From: Takashi Morita
[v1] Wed, 31 Jan 2024 23:32:20 UTC (2,839 KB)
[v2] Mon, 17 Jun 2024 04:34:10 UTC (813 KB)
[v3] Tue, 18 Jun 2024 04:53:53 UTC (1,016 KB)
[v4] Thu, 10 Oct 2024 16:40:57 UTC (1,257 KB)


