VisualRWKV: Exploring Recurrent Neural Networks for Visual Language Models



Authors: Haowen Hou, Peigen Zeng, Fei Ma, Fei Richard Yu

Abstract: Visual Language Models (VLMs) have progressed rapidly with the recent success of large language models. However, there have been few attempts to incorporate efficient linear Recurrent Neural Network (RNN) architectures into VLMs. In this study, we introduce VisualRWKV, the first application of a linear RNN model to multimodal learning tasks, leveraging the pre-trained RWKV language model. We propose data-dependent recurrence and sandwich prompts to enhance modeling capability, along with a 2D image scanning mechanism to enrich the processing of visual sequences. Extensive experiments demonstrate that VisualRWKV achieves competitive performance compared with Transformer-based models such as LLaVA-1.5 on various benchmarks. Compared to LLaVA-1.5, VisualRWKV is 3.98 times faster and saves 54% of GPU memory at an inference length of 24K tokens. To facilitate further research and analysis, we have made the checkpoints and associated code publicly available at the following GitHub repository: see this https URL.
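The abstract does not specify how the 2D image scanning mechanism is implemented. As a minimal sketch of the general idea, the hypothetical `scan_2d` helper below flattens a grid of image-patch embeddings into several 1-D scan orders (forward/backward row-major and column-major), which a linear RNN could then consume as sequences; the function name and the choice of four scan directions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def scan_2d(patches):
    """Flatten an (H, W, D) grid of patch embeddings into four 1-D scan
    orders: row-major, reversed row-major, column-major, and reversed
    column-major. (Hypothetical sketch; the paper's exact scheme may differ.)"""
    H, W, D = patches.shape
    row_major = patches.reshape(H * W, D)                      # left-to-right, top-to-bottom
    col_major = patches.transpose(1, 0, 2).reshape(H * W, D)   # top-to-bottom, left-to-right
    return [row_major, row_major[::-1], col_major, col_major[::-1]]

# Example: a 2x3 grid of 4-dimensional patch embeddings
grid = np.arange(2 * 3 * 4, dtype=np.float32).reshape(2, 3, 4)
scans = scan_2d(grid)
```

Feeding multiple scan orders to a recurrent model is one common way to give a strictly sequential architecture access to 2D spatial context, since each direction exposes a different set of neighboring patches early in the sequence.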

Submission history

From: Haowen Hou
[v1] Wed, 19 Jun 2024 09:07:31 UTC (1,773 KB)
[v2] Tue, 17 Dec 2024 09:46:19 UTC (1,775 KB)
[v3] Thu, 19 Dec 2024 05:26:14 UTC (1,775 KB)


