
Generating Long Semantic IDs in Parallel for Recommendation

Source: arXiv

Abstract

Semantic ID-based recommendation models tokenize each item into a small number of discrete tokens that preserve specific semantics, leading to better performance, scalability, and memory efficiency. While recent models adopt a generative approach, they often suffer from inefficient inference due to the reliance on resource-intensive beam search and multiple forward passes through the neural sequence model. As a result, the length of semantic IDs is typically restricted (e.g., to just 4 tokens), limiting their expressiveness. To address these challenges, we propose RPG, a lightweight framework for semantic ID-based recommendation. The key idea is to produce unordered, long semantic IDs, allowing the model to predict all tokens in parallel. We train the model to predict each token independently using a multi-token prediction loss, directly integrating semantics into the learning objective. During inference, we construct a graph connecting similar semantic IDs and guide decoding to avoid generating invalid IDs. Experiments show that scaling up semantic ID length to 64 enables RPG to outperform generative baselines by an average of 12.6% in NDCG@10, while also improving inference efficiency. Code is available at: https://github.com/facebookresearch/RPG_KDD2025.
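The core ideas from the abstract (one parallel forward pass producing logits for every token position, an independent cross-entropy loss per position, and beam-search-free scoring of candidate IDs) can be sketched as follows. This is a toy illustration under assumed shapes, not the authors' implementation; the graph-constrained decoding over valid IDs is only hinted at by restricting scoring to a candidate set.

```python
import numpy as np

# Toy sketch (NOT the RPG codebase): an item's semantic ID is a long,
# unordered sequence of NUM_POS discrete tokens, each drawn from a
# codebook of size VOCAB. Sizes here are illustrative assumptions.
rng = np.random.default_rng(0)
NUM_POS, VOCAB = 8, 16

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# A single forward pass yields logits for all positions at once,
# so every token of the ID is predicted in parallel.
logits = rng.normal(size=(NUM_POS, VOCAB))  # stand-in for model output
probs = softmax(logits)

# Multi-token prediction loss: independent cross-entropy per position,
# averaged over the ID's token positions.
target_id = rng.integers(0, VOCAB, size=NUM_POS)  # ground-truth semantic ID
loss = -np.log(probs[np.arange(NUM_POS), target_id]).mean()

# Inference: a candidate ID is scored by summing its per-position token
# log-probabilities, avoiding beam search entirely. In RPG, a graph over
# similar semantic IDs (omitted here) limits candidates to valid item IDs.
def score(candidate):
    return np.log(probs[np.arange(NUM_POS), candidate]).sum()

candidates = [rng.integers(0, VOCAB, size=NUM_POS) for _ in range(5)]
best = max(candidates, key=score)
```

Because each position is scored independently, ranking a candidate set costs one forward pass plus a cheap lookup-and-sum, which is where the claimed inference-efficiency gain over autoregressive beam search comes from.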

Yupeng Hou, Jiacheng Li, Ashley Shin, Jinsung Jeon, Abhishek Santhanam, Wei Shao, Kaveh Hassani, Ning Yao, Julian McAuley

Subject: Computing Technology; Computer Science

Yupeng Hou, Jiacheng Li, Ashley Shin, Jinsung Jeon, Abhishek Santhanam, Wei Shao, Kaveh Hassani, Ning Yao, Julian McAuley. Generating Long Semantic IDs in Parallel for Recommendation [EB/OL]. (2025-06-06) [2025-08-02]. https://arxiv.org/abs/2506.05781.