Unified Attention Modeling for Efficient Free-Viewing and Visual Search via Shared Representations
Computational human attention modeling in free-viewing and task-specific settings is often studied separately, with limited exploration of whether a common representation exists between them. This work investigates this question and proposes a neural network architecture that builds upon the Human Attention Transformer (HAT) to test this hypothesis. Our results demonstrate that free-viewing and visual search can efficiently share a common representation, allowing a model trained on free-viewing attention to transfer its knowledge to task-driven visual search with a performance drop of only 3.86% in the predicted fixation scanpaths, measured by the semantic sequence score (SemSS), a metric that reflects the similarity between predicted and human scanpaths. This transfer reduces computational costs by 92.29% in terms of GFLOPs and 31.23% in terms of trainable parameters.
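To make the shared-representation idea concrete, below is a minimal PyTorch-style sketch: a single backbone encodes the image for both free-viewing and visual-search fixation prediction, and transfer is done by freezing the free-viewing-trained backbone and training only a lightweight search head. All module names, layer sizes, and the freezing strategy are illustrative assumptions, not the paper's actual HAT-based architecture.

```python
# Minimal sketch of a shared representation with task-specific heads.
# Everything here is an illustrative assumption, not the paper's HAT design.
import torch
import torch.nn as nn


class SharedAttentionModel(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared representation: trained once on free-viewing data,
        # then reused as-is for visual search.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv2d(64, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Small task-specific heads predicting a fixation map per task.
        self.free_viewing_head = nn.Conv2d(feat_dim, 1, kernel_size=1)
        self.search_head = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, image, task="free_viewing"):
        feats = self.backbone(image)  # same features serve both tasks
        head = self.free_viewing_head if task == "free_viewing" else self.search_head
        return head(feats)


model = SharedAttentionModel()

# Transfer step: freeze the shared backbone so only the small search head
# is trainable, which is what reduces trainable parameters and compute.
for p in model.backbone.parameters():
    p.requires_grad = False

image = torch.randn(1, 3, 224, 224)
fixation_map = model(image, task="search")
print(fixation_map.shape)  # torch.Size([1, 1, 28, 28])
```

In this sketch, the compute and parameter savings come from reusing the frozen backbone for search rather than training a second full model, mirroring the transfer setup the abstract describes.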
Fatma Youssef Mohammed, Kostas Alexis
Computing Technology, Computer Technology
Fatma Youssef Mohammed, Kostas Alexis. Unified Attention Modeling for Efficient Free-Viewing and Visual Search via Shared Representations [EB/OL]. (2025-06-03) [2025-06-23]. https://arxiv.org/abs/2506.02764.