Window-based Channel Attention for Wavelet-enhanced Learned Image Compression
Heng Xu and 3 other authors
Abstract: Learned Image Compression (LIC) models have achieved superior rate-distortion performance to traditional codecs. Existing LIC models use CNN, Transformer, or mixed CNN-Transformer architectures as basic blocks. However, limited by shifted-window attention, Swin-Transformer-based LIC exhibits restricted growth of receptive fields, which impairs its ability to model large objects in image compression. To address this issue and improve performance, we incorporate window partitioning into channel attention for the first time, obtaining large receptive fields and capturing more global information. Because channel attention alone hinders the learning of local information, it is important to extend the attention mechanisms in existing Transformer codecs to spatial-channel attention, establishing multiple receptive fields: global correlations are captured with large receptive fields, while local correlations remain finely characterized with small ones. We also incorporate the discrete wavelet transform into our Spatial-Channel Hybrid (SCH) framework for efficient frequency-dependent down-sampling and further enlargement of receptive fields. Experimental results demonstrate that our method achieves state-of-the-art performance, reducing BD-rate by 18.54%, 23.98%, 22.33%, and 24.71% on four standard datasets compared to VTM-23.1.
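The core idea — computing channel attention inside non-overlapping spatial windows — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class name, window size, and layer choices are assumptions. The key difference from standard window attention is that the affinity matrix is C x C (over channels) rather than N x N (over window positions).

```python
import torch
import torch.nn as nn


class WindowChannelAttention(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's code):
    channel attention computed independently within each spatial window."""

    def __init__(self, dim: int, window_size: int = 8):
        super().__init__()
        self.window_size = window_size
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C), with H and W divisible by the window size.
        B, H, W, C = x.shape
        ws = self.window_size
        # Partition into non-overlapping windows: (B*nW, ws*ws, C).
        x = x.view(B, H // ws, ws, W // ws, ws, C)
        win = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)
        q, k, v = self.qkv(win).chunk(3, dim=-1)
        # Channel attention: C x C affinity instead of spatial N x N.
        attn = (q.transpose(-2, -1) @ k) * (q.shape[-2] ** -0.5)
        attn = attn.softmax(dim=-1)
        out = (attn @ v.transpose(-2, -1)).transpose(-2, -1)
        out = self.proj(out)
        # Reverse the window partition back to (B, H, W, C).
        out = out.view(B, H // ws, W // ws, ws, ws, C)
        return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)


if __name__ == "__main__":
    layer = WindowChannelAttention(dim=16, window_size=8)
    y = layer(torch.randn(2, 32, 32, 16))
    print(tuple(y.shape))  # shape is preserved: (2, 32, 32, 16)
```

Since the C x C attention cost is independent of window area, larger windows enlarge the effective receptive field without the quadratic spatial cost of standard window attention.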
Submission history
From: Heng Xu
[v1] Sat, 21 Sep 2024 10:08:52 UTC (11,833 KB)
[v2] Wed, 9 Oct 2024 12:47:39 UTC (9,261 KB)
[v3] Thu, 10 Oct 2024 14:10:44 UTC (9,264 KB)
[v4] Fri, 11 Oct 2024 03:59:09 UTC (8,849 KB)