Reinforcing User Interest Evolution in Multi-Scenario Learning for recommender systems

Source: arXiv
Abstract

In real-world recommendation systems, users engage in a variety of scenarios, such as homepages, search pages, and related-recommendation pages, each reflecting different aspects of what users focus on. However, user interests may be inconsistent across scenarios, due to differences in decision-making processes and preference expression. This variability complicates unified modeling, making multi-scenario learning a significant challenge. To address this, we propose a novel reinforcement learning approach that captures user preferences by modeling how user interests evolve across multiple scenarios. Our method employs Double Q-learning to enhance next-item prediction accuracy and uses Q-values to optimize a contrastive learning loss, further improving model performance. Experimental results demonstrate that our approach surpasses state-of-the-art methods in multi-scenario recommendation tasks. Our work offers a fresh perspective on multi-scenario modeling and highlights promising directions for future research.
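The abstract names two ingredients: a Double Q-learning update for next-item prediction and a contrastive loss optimized with Q-values. The PyTorch sketch below illustrates one plausible reading of that combination; the module names, tensor shapes, and the Q-value weighting scheme are all assumptions, since the abstract does not specify the actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet(nn.Module):
    # Hypothetical critic: maps a user-state embedding to Q-values
    # over candidate items (next-item prediction as action selection).
    def __init__(self, state_dim: int, num_items: int):
        super().__init__()
        self.fc = nn.Linear(state_dim, num_items)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.fc(state)

def double_q_loss(q_online, q_target, state, action, reward, next_state, gamma=0.99):
    # Double Q-learning: the online network selects the next action,
    # the target network evaluates it, which reduces the overestimation
    # bias of standard Q-learning.
    q_sa = q_online(state).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_action = q_online(next_state).argmax(dim=1, keepdim=True)
        next_q = q_target(next_state).gather(1, next_action).squeeze(1)
        target = reward + gamma * next_q
    return F.mse_loss(q_sa, target)

def q_weighted_contrastive_loss(anchor, positive, negatives, q_weight, tau=0.1):
    # InfoNCE-style loss whose per-example terms are scaled by detached
    # Q-values -- one plausible reading of "optimizes contrastive
    # learning loss using Q-value"; the weighting scheme is assumed.
    pos = F.cosine_similarity(anchor, positive) / tau                        # (B,)
    neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1) / tau  # (B, K)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)                       # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (q_weight.detach() * per_example).mean()

In this sketch the Q-value weight is detached so the contrastive gradient does not flow back through the critic; the actual coupling between the two objectives in the paper may differ.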

Zhijian Feng, Wenhao Zheng, Xuanji Xiao

Computing Technology; Computer Technology

Zhijian Feng, Wenhao Zheng, Xuanji Xiao. Reinforcing User Interest Evolution in Multi-Scenario Learning for recommender systems [EB/OL]. (2025-06-21) [2025-07-16]. https://arxiv.org/abs/2506.17682.
