
Efficient Temporal Tokenization for Mobility Prediction with Large Language Models

Source: arXiv

Abstract

We introduce RHYTHM (Reasoning with Hierarchical Temporal Tokenization for Human Mobility), a framework that leverages large language models (LLMs) as spatio-temporal predictors and trajectory reasoners. RHYTHM partitions trajectories into daily segments encoded as discrete tokens with hierarchical attention, capturing both daily and weekly dependencies while substantially reducing the sequence length. Token representations are enriched with pre-computed prompt embeddings via a frozen LLM, enhancing the model's ability to capture interdependencies without extensive computational overhead. By freezing the LLM backbone, RHYTHM achieves significant computational efficiency. Evaluation on three real-world datasets demonstrates a 2.4% improvement in accuracy, a 5.0% gain on weekends, and a 24.6% reduction in training time compared to state-of-the-art methods.
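To make the tokenization idea concrete, the following is a minimal sketch of a two-level (intra-day, then inter-day) attention scheme as described in the abstract: each day of a trajectory is compressed into a single token, so weekly attention runs over a far shorter sequence. Everything here is an illustrative assumption based only on the abstract; the class name, layer counts, mean-pooling, and all dimensions are hypothetical, not the authors' implementation.

```python
# Sketch of hierarchical temporal tokenization (assumptions, not RHYTHM's code).
import torch
import torch.nn as nn

class HierarchicalTemporalTokenizer(nn.Module):
    """Compress a multi-day trajectory into one token per day, then
    model cross-day (weekly) dependencies with self-attention."""

    def __init__(self, d_model: int = 256, steps_per_day: int = 48, n_heads: int = 4):
        super().__init__()
        self.steps_per_day = steps_per_day
        # Intra-day encoder: attends over the time steps within one day.
        day_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.intra_day = nn.TransformerEncoder(day_layer, num_layers=2)
        # Inter-day encoder: attends over the much shorter sequence of day tokens.
        week_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.inter_day = nn.TransformerEncoder(week_layer, num_layers=2)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, days * steps_per_day, d_model) embedded trajectory.
        b, t, d = traj.shape
        days = t // self.steps_per_day
        # Fold days into the batch so intra-day attention sees one day at a time.
        per_day = traj.reshape(b * days, self.steps_per_day, d)
        encoded = self.intra_day(per_day)
        # Pool each day into a single "day token" (the paper's actual
        # discretization scheme may differ; mean-pooling is a placeholder).
        day_tokens = encoded.mean(dim=1).reshape(b, days, d)
        # Weekly dependencies: attention over `days` tokens instead of
        # `days * steps_per_day` raw steps, shrinking the sequence length.
        return self.inter_day(day_tokens)

# Example: one week sampled every 30 minutes -> sequence length 7*48 becomes 7.
model = HierarchicalTemporalTokenizer()
week = torch.randn(2, 7 * 48, 256)
print(model(week).shape)  # torch.Size([2, 7, 256])
```

Under these assumptions, the day tokens could then be concatenated with the pre-computed prompt embeddings from the frozen LLM backbone, so only the small tokenizer and prediction head would require gradient updates.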

Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang

Subjects: computing technology; computer technology

Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang. Efficient Temporal Tokenization for Mobility Prediction with Large Language Models [EB/OL]. (2025-07-18) [2025-08-05]. https://arxiv.org/abs/2507.14017.
