SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering
Audio-Visual Question Answering (AVQA) is a challenging task that involves answering questions based on both auditory and visual information in videos. A significant challenge is interpreting complex multi-modal scenes, which include both visual objects and sound sources, and connecting them to the given question. In this paper, we introduce the Source-aware Semantic Representation Network (SaSR-Net), a novel model designed for AVQA. SaSR-Net utilizes source-wise learnable tokens to efficiently capture and align audio-visual elements with the corresponding question. It streamlines the fusion of audio and visual information using spatial and temporal attention mechanisms to identify answers in multi-modal scenes. Extensive experiments on the Music-AVQA and AVQA-Yang datasets show that SaSR-Net outperforms state-of-the-art AVQA methods.
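To make the abstract's central idea concrete, below is a minimal PyTorch sketch of how source-wise learnable tokens might cross-attend to audio and visual features so that each token can specialize in one sound source or visual object. All module names, dimensions, and the fusion details here are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class SourceAwareTokens(nn.Module):
    """Hypothetical sketch: learnable per-source tokens attending to two modalities."""
    def __init__(self, dim=512, num_sources=8, num_heads=8):
        super().__init__()
        # One learnable embedding per hypothesized audio-visual source.
        self.tokens = nn.Parameter(torch.randn(num_sources, dim) * 0.02)
        # Cross-attention: tokens are queries, modality features are keys/values.
        self.attn_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, audio_feats, visual_feats):
        # audio_feats:  (B, Ta, dim) per-frame audio embeddings
        # visual_feats: (B, Tv, dim) per-patch/per-frame visual embeddings
        B = audio_feats.size(0)
        q = self.tokens.unsqueeze(0).expand(B, -1, -1)      # (B, S, dim)
        a, _ = self.attn_a(q, audio_feats, audio_feats)     # audio view of each source
        v, _ = self.attn_v(q, visual_feats, visual_feats)   # visual view of each source
        # Fuse the two modality views into one source-aware representation.
        return self.fuse(torch.cat([a, v], dim=-1))         # (B, S, dim)

# Usage: the fused source tokens could then be matched against a question
# embedding (e.g., via dot-product attention) to select answer-relevant sources.
model = SourceAwareTokens()
audio = torch.randn(2, 60, 512)
visual = torch.randn(2, 196, 512)
out = model(audio, visual)  # (2, 8, 512)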
Yiyang Nan, Xiangliang Zhang, Tianyu Yang, Lisen Dai, Yapeng Tian, Zhenwen Liang
Yiyang Nan, Xiangliang Zhang, Tianyu Yang, Lisen Dai, Yapeng Tian, Zhenwen Liang. SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering. arXiv preprint (2024-11-07) [accessed 2025-04-27]. https://arxiv.org/abs/2411.04933