Universal Retrieval for Multimodal Trajectory Modeling
Trajectory data, capturing human actions and environmental states across various modalities, holds significant potential for enhancing AI agent capabilities, particularly in GUI environments. However, modeling trajectory-level representations remains a significant challenge that has not been systematically addressed, despite the explosive growth of trajectory data. In this work, we introduce Multimodal Trajectory Retrieval, bridging the gap between universal retrieval and agent-centric trajectory modeling. We construct the Unified Agent Trajectory Dataset (UATD) from annotated demonstrations and states across diverse real-world scenarios. Building on this, we present GAE-Bench, a benchmark containing a large number of trajectory-based retrieval pairs. In addition, we propose GAE-Retriever, a multimodal retrieval framework built on vision-language models that incorporates optimized contrastive learning through token selection and the GradCache mechanism. Comprehensive evaluations across multiple datasets show that GAE-Retriever consistently outperforms strong baselines in retrieval recall, highlighting its effectiveness in advancing multimodal trajectory retrieval.
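The abstract mentions contrastive learning combined with the GradCache mechanism, which trades recomputation for memory so large in-batch negative sets fit on limited hardware. Below is a minimal sketch, assuming a PyTorch setup; the `Encoder` class and `grad_cache_step` function are illustrative placeholders and not the authors' implementation. Embeddings are first computed chunk-by-chunk without an autograd graph, the full-batch InfoNCE loss and its gradients with respect to the embeddings are cached, and each chunk is then re-encoded to backpropagate those cached gradients through the encoder.

```python
import torch
import torch.nn.functional as F


class Encoder(torch.nn.Module):
    """Stand-in for a (vision-)language encoder mapping inputs to normalized embeddings."""
    def __init__(self, dim_in: int = 32, dim_out: int = 16):
        super().__init__()
        self.proj = torch.nn.Linear(dim_in, dim_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(x), dim=-1)


def grad_cache_step(encoder, queries, targets, chunk_size=8, temperature=0.05):
    # 1) Forward pass in chunks with no graph, caching embeddings only.
    with torch.no_grad():
        q_emb = torch.cat([encoder(c) for c in queries.split(chunk_size)])
        t_emb = torch.cat([encoder(c) for c in targets.split(chunk_size)])

    # 2) Full-batch InfoNCE loss on cached embeddings to get d(loss)/d(embedding).
    q_emb.requires_grad_(True)
    t_emb.requires_grad_(True)
    logits = q_emb @ t_emb.T / temperature
    labels = torch.arange(logits.size(0))
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    q_grads, t_grads = q_emb.grad.detach(), t_emb.grad.detach()

    # 3) Re-encode each chunk with the graph enabled and backprop the cached gradients,
    #    so parameter gradients match full-batch training at chunk-level memory cost.
    for chunk, grad in zip(queries.split(chunk_size), q_grads.split(chunk_size)):
        encoder(chunk).backward(gradient=grad)
    for chunk, grad in zip(targets.split(chunk_size), t_grads.split(chunk_size)):
        encoder(chunk).backward(gradient=grad)

    return loss.item()


if __name__ == "__main__":
    enc = Encoder()
    opt = torch.optim.AdamW(enc.parameters(), lr=1e-3)
    q = torch.randn(64, 32)  # e.g. trajectory-query features (placeholder data)
    t = torch.randn(64, 32)  # e.g. candidate trajectory features (placeholder data)
    opt.zero_grad()
    print("loss:", grad_cache_step(enc, q, t))
    opt.step()
```

The key point of the design is that the loss sees all in-batch negatives at once, while each encoder forward/backward pass only ever holds one chunk in memory.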
Xuan Zhang, Ziyan Jiang, Rui Meng, Yifei Leng, Zhenbang Xiao, Zora Zhiruo Wang, Yanyi Shang, Dehan Kong
Computing Technology, Computer Technology
Xuan Zhang, Ziyan Jiang, Rui Meng, Yifei Leng, Zhenbang Xiao, Zora Zhiruo Wang, Yanyi Shang, Dehan Kong. Universal Retrieval for Multimodal Trajectory Modeling [EB/OL]. (2025-06-27) [2025-07-17]. https://arxiv.org/abs/2506.22056.