SpikeCLIP: A Contrastive Language-Image Pretrained Spiking Neural Network
Tianlong Li and 8 other authors
Abstract: Spiking Neural Networks (SNNs) have emerged as a promising alternative to conventional Artificial Neural Networks (ANNs), demonstrating comparable performance in both visual and linguistic tasks while offering the advantage of improved energy efficiency. Despite these advancements, the integration of linguistic and visual features into a unified representation through spike trains poses a significant challenge, and the application of SNNs to multimodal scenarios remains largely unexplored. This paper presents SpikeCLIP, a novel framework designed to bridge the modality gap in spike-based computation. Our approach employs a two-step recipe: an “alignment pre-training” to align features across modalities, followed by a “dual-loss fine-tuning” to refine the model’s performance. Extensive experiments reveal that SNNs achieve results on par with ANNs while substantially reducing energy consumption across various datasets commonly used for multimodal model evaluation. Furthermore, SpikeCLIP maintains robust image classification capabilities, even when dealing with classes that fall outside predefined categories. This study marks a significant advancement in the development of energy-efficient and biologically plausible multimodal learning systems.
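The abstract does not give the loss definitions, so the following is only a rough illustrative sketch of what the two-step recipe could look like: a hypothetical alignment term that pulls spike-based features toward pretrained CLIP features, and a CLIP-style symmetric contrastive term combined with it during fine-tuning. The function names, the MSE form of the alignment loss, and the `alpha` weighting are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Normalize feature vectors to unit length for cosine-similarity logits.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def alignment_loss(student_feats, teacher_feats):
    # Step 1 (hypothetical form): mean-squared error between normalized
    # spike-based student features and pretrained teacher (CLIP) features.
    return np.mean((l2_normalize(student_feats) - l2_normalize(teacher_feats)) ** 2)

def contrastive_loss(img_feats, txt_feats, temperature=0.07):
    # CLIP-style symmetric InfoNCE over a batch of paired image/text features:
    # each image should match its own caption and vice versa.
    img = l2_normalize(img_feats)
    txt = l2_normalize(txt_feats)
    logits = img @ txt.T / temperature
    labels = np.arange(len(logits))

    def xent(lg):
        # Numerically stable cross-entropy with the diagonal as targets.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lg)), labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

def dual_loss(img_feats, txt_feats, teacher_img, teacher_txt, alpha=0.5):
    # Step 2 (hypothetical weighting): combine the contrastive objective with
    # the alignment terms so fine-tuning does not drift from the teacher.
    return contrastive_loss(img_feats, txt_feats) + alpha * (
        alignment_loss(img_feats, teacher_img)
        + alignment_loss(txt_feats, teacher_txt)
    )
```

In practice the student encoders here would be spiking networks whose rate-coded outputs are read out as real-valued features; the sketch only shows how the two loss terms compose.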
Submission history
From: Tianlong Li
[v1] Tue, 10 Oct 2023 09:57:17 UTC (934 KB)
[v2] Thu, 12 Oct 2023 03:23:40 UTC (934 KB)
[v3] Tue, 10 Sep 2024 06:36:25 UTC (7,137 KB)