Multimodal Alignment with Cross-Attentive GRUs for Fine-Grained Video Understanding

Source: arXiv

Abstract

Fine-grained video classification requires understanding complex spatio-temporal and semantic cues that often exceed the capacity of a single modality. In this paper, we propose a multimodal framework that fuses video, image, and text representations using GRU-based sequence encoders and cross-modal attention mechanisms. The model is trained with either a classification or a regression loss, depending on the task, and is further regularized through feature-level augmentation and autoencoding techniques. To evaluate the generality of our framework, we conduct experiments on two challenging benchmarks: the DVD dataset for real-world violence detection and the Aff-Wild2 dataset for valence-arousal estimation. Our results demonstrate that the proposed fusion strategy significantly outperforms unimodal baselines, with cross-attention and feature augmentation contributing notably to robustness and performance.
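As a rough illustration of the fusion strategy the abstract describes, below is a minimal PyTorch-style sketch in which each modality is encoded by a GRU and the video stream attends over the text stream through cross-modal attention. All names, feature dimensions, and the pool-and-concatenate fusion head are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class CrossAttentiveGRUFusion(nn.Module):
    # Hypothetical two-modality variant; the paper fuses video, image, and text.
    def __init__(self, video_dim=1024, text_dim=768, hidden_dim=256,
                 num_heads=4, num_classes=2):
        super().__init__()
        # GRU sequence encoders, one per modality (dimensions assumed)
        self.video_gru = nn.GRU(video_dim, hidden_dim, batch_first=True)
        self.text_gru = nn.GRU(text_dim, hidden_dim, batch_first=True)
        # Cross-modal attention: video queries attend over text keys/values
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                                batch_first=True)
        # A linear head for classification; a 2-D regression head would
        # replace it for valence-arousal estimation
        self.head = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, video_feats, text_feats):
        # video_feats: (B, T_v, video_dim); text_feats: (B, T_t, text_dim)
        v, _ = self.video_gru(video_feats)
        t, _ = self.text_gru(text_feats)
        attended, _ = self.cross_attn(query=v, key=t, value=t)
        # Mean-pool over time, then fuse by concatenation
        fused = torch.cat([v.mean(dim=1), attended.mean(dim=1)], dim=-1)
        return self.head(fused)

# Toy usage with random features: batch of 2, 16 video frames, 8 text tokens
model = CrossAttentiveGRUFusion()
logits = model(torch.randn(2, 16, 1024), torch.randn(2, 8, 768))
print(logits.shape)  # torch.Size([2, 2])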

Namho Kim, Junhwa Kim

Computing Technology, Computer Technology

Namho Kim, Junhwa Kim. Multimodal Alignment with Cross-Attentive GRUs for Fine-Grained Video Understanding [EB/OL]. (2025-07-04) [2025-07-21]. https://arxiv.org/abs/2507.03531.