arXiv:2408.05421v1 Announce Type: new
Abstract: Existing multimodal human action recognition approaches are either computationally expensive, which limits their applicability in real-time scenarios, or fail to exploit the spatio-temporal information of multiple data modalities. In this work, we present an efficient pose-driven attention-guided multimodal network (EPAM-Net) for action recognition in videos. Specifically, we adapt X3D networks for both the RGB and pose streams to capture spatio-temporal features from RGB videos and their skeleton sequences. The skeleton features are then used to help the visual stream focus on key frames and their salient spatial regions via a spatio-temporal attention block. Finally, the scores of the two streams are fused for final classification. Experimental results show that our method achieves competitive performance on the NTU RGB-D 60 and NTU RGB-D 120 benchmark datasets. Moreover, our model provides a 6.2–9.9x reduction in FLOPs (floating-point operations, measured in multiply-adds) and a 9–9.6x reduction in the number of network parameters. The code will be available at https://github.com/ahmed-nady/Multimodal-Action-Recognition.
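To make the described design concrete, below is a minimal PyTorch sketch of the two-stream, pose-guided attention idea from the abstract: two 3D-CNN streams, a spatio-temporal attention block in which pose features gate the RGB features, and late score fusion. All names (EPAMNet, STAttention), the stand-in convolutional backbones, and every hyperparameter here are illustrative assumptions, not the authors' implementation; the actual model uses X3D backbones, for which see the linked repository.

```python
import torch
import torch.nn as nn


class STAttention(nn.Module):
    """Spatio-temporal attention: pose features gate RGB features (assumed form)."""

    def __init__(self, channels: int):
        super().__init__()
        # Temporal attention: one weight per frame, from spatially pooled pose features.
        self.temporal = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())
        # Spatial attention: one weight per spatio-temporal location.
        self.spatial = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, rgb_feat: torch.Tensor, pose_feat: torch.Tensor) -> torch.Tensor:
        # Both inputs: (B, C, T, H, W), assumed spatially and temporally aligned.
        t_att = self.temporal(pose_feat.mean(dim=(3, 4), keepdim=True))  # (B, 1, T, 1, 1)
        s_att = self.spatial(pose_feat)                                  # (B, 1, T, H, W)
        return rgb_feat * t_att * s_att


class EPAMNet(nn.Module):
    """Hypothetical two-stream network with pose-guided attention and score fusion."""

    def __init__(self, num_classes: int = 60, channels: int = 64):
        super().__init__()
        # Single Conv3d layers stand in for the X3D backbones of the paper;
        # each produces a (B, C, T, H, W) feature volume.
        self.rgb_backbone = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.pose_backbone = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.attention = STAttention(channels)
        self.rgb_head = nn.Linear(channels, num_classes)
        self.pose_head = nn.Linear(channels, num_classes)

    def forward(self, rgb: torch.Tensor, pose_maps: torch.Tensor) -> torch.Tensor:
        rgb_feat = self.rgb_backbone(rgb)
        pose_feat = self.pose_backbone(pose_maps)
        # Pose stream guides the visual stream toward key frames and regions.
        rgb_feat = self.attention(rgb_feat, pose_feat)
        # Global average pooling, then per-stream classification scores.
        rgb_score = self.rgb_head(rgb_feat.mean(dim=(2, 3, 4)))
        pose_score = self.pose_head(pose_feat.mean(dim=(2, 3, 4)))
        return rgb_score + pose_score  # late score fusion


if __name__ == "__main__":
    model = EPAMNet()
    rgb = torch.randn(2, 3, 16, 56, 56)   # (B, C, T, H, W) RGB clip
    pose = torch.randn(2, 3, 16, 56, 56)  # skeletons rendered as heatmap "videos"
    print(model(rgb, pose).shape)          # torch.Size([2, 60])
```

Note that representing skeletons as heatmap volumes fed to a 3D CNN is one plausible reading of "pose stream"; the fusion here simply sums the two streams' logits, which is one common way to implement score-level fusion.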