RRWKV: Capturing Long-range Dependencies in RWKV

This paper has been withdrawn by Leilei Wang

Abstract: Owing to its impressive dot-product attention, the Transformer has been the dominant architecture across natural language processing (NLP) tasks. The Receptance Weighted Key Value (RWKV) architecture recently adopted a non-transformer design to eliminate the main drawback of dot-product attention, namely memory and computational complexity that scale quadratically with sequence length. Although RWKV exploits a linear tensor-product attention mechanism and achieves parallelized computation through its time-sequential mode, it struggles to capture long-range dependencies because of its limited ability to look back at earlier information, in contrast to the full pairwise interactions available in the standard Transformer. This paper therefore devises the Retrospected Receptance Weighted Key Value (RRWKV) architecture, which incorporates a retrospecting ability into RWKV so that information can be absorbed effectively while memory and computational efficiency are preserved.
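
To make the contrast concrete, the sketch below is a minimal NumPy rendering of the baseline RWKV-style "WKV" recurrence (it is not code from this paper, and the retrospecting mechanism added by RRWKV is not reproduced here). It shows how each token's output is produced from a fixed-size running numerator and denominator in O(1) per step, rather than by attending over all previous tokens; the tensor names k, v, w, u follow the publicly described RWKV formulation and are otherwise assumptions for illustration.

```python
# Minimal sketch, assuming the standard RWKV "WKV" recurrence (not the paper's code):
# each step updates a running numerator/denominator, so cost per token is O(1)
# in sequence length, unlike dot-product attention's O(T) per token.
import numpy as np

def wkv_sequential(k, v, w, u):
    """k, v: (T, C) keys and values; w: (C,) non-negative per-channel decay;
    u: (C,) bonus applied to the current token. Returns (T, C) outputs."""
    T, C = k.shape
    out = np.empty((T, C))
    num = np.zeros(C)            # decayed sum of exp(k_i) * v_i over past tokens
    den = np.zeros(C)            # decayed sum of exp(k_i) over past tokens
    decay = np.exp(-w)           # e^{-w}, applied once per time step
    for t in range(T):
        e_cur = np.exp(u + k[t])                     # current token with bonus u
        out[t] = (num + e_cur * v[t]) / (den + e_cur)
        num = decay * num + np.exp(k[t]) * v[t]      # fold current token into the state
        den = decay * den + np.exp(k[t])
    return out

# toy usage
rng = np.random.default_rng(0)
T, C = 6, 4
y = wkv_sequential(rng.standard_normal((T, C)), rng.standard_normal((T, C)),
                   np.abs(rng.standard_normal(C)), rng.standard_normal(C))
print(y.shape)  # (6, 4)
```

A production implementation would also track a running maximum of the exponents for numerical stability. The point of the sketch is only that the state carried between steps is a fixed-size summary of the past, which is precisely the property that limits how far back RWKV can look and that RRWKV's retrospecting mechanism is intended to address.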

Submission history

From: Leilei Wang
[v1] Thu, 8 Jun 2023 13:17:06 UTC (41 KB)
[v2] Fri, 9 Jun 2023 02:56:20 UTC (41 KB)
[v3] Wed, 11 Sep 2024 05:31:10 UTC (1 KB) (withdrawn)
[v4] Fri, 13 Sep 2024 08:58:47 UTC (1 KB) (withdrawn)
