LLM-Enhanced Multimodal Fusion for Cross-Domain Sequential Recommendation
Cross-Domain Sequential Recommendation (CDSR) predicts user behavior by leveraging historical interactions across multiple domains, focusing on modeling cross-domain preferences and capturing both intra- and inter-sequence item relationships. We propose LLM-Enhanced Multimodal Fusion for Cross-Domain Sequential Recommendation (LLM-EMF), a novel approach that enriches textual information with Large Language Model (LLM) knowledge and improves recommendation performance by fusing visual and textual data. Using a frozen CLIP model, we generate image and text embeddings, thereby enriching item representations with multimodal data. A multiple attention mechanism jointly learns both single-domain and cross-domain preferences, effectively capturing complex user interests across diverse domains. Evaluations on four e-commerce datasets demonstrate that LLM-EMF consistently outperforms existing methods in modeling cross-domain user preferences, highlighting the effectiveness of multimodal data integration for sequential recommendation. Our source code will be released.
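The pipeline described above, where frozen CLIP text and image embeddings are projected into a shared space, fused, and passed through attention over the interaction sequence, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the projection weights, dimensions, and single-head attention here are placeholder assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_and_attend(text_emb, image_emb, w_text, w_img):
    """Toy multimodal fusion + self-attention over an item sequence.

    text_emb, image_emb: (seq_len, clip_dim) frozen CLIP embeddings
    w_text, w_img:       (clip_dim, d) illustrative projection weights
    """
    # Project each modality into a shared space and fuse by summation
    # (an assumption; the paper's fusion may differ).
    fused = text_emb @ w_text + image_emb @ w_img          # (seq_len, d)
    # Scaled dot-product self-attention across sequence positions.
    d = fused.shape[-1]
    scores = softmax(fused @ fused.T / np.sqrt(d))         # (seq_len, seq_len)
    return scores @ fused                                  # (seq_len, d)

# Usage with random stand-ins for CLIP ViT-B/32 (512-d) embeddings:
rng = np.random.default_rng(0)
seq_len, clip_dim, d = 10, 512, 64
text = rng.standard_normal((seq_len, clip_dim))
image = rng.standard_normal((seq_len, clip_dim))
w_t = rng.standard_normal((clip_dim, d)) * 0.02
w_i = rng.standard_normal((clip_dim, d)) * 0.02
out = fuse_and_attend(text, image, w_t, w_i)
print(out.shape)  # (10, 64)
```

In the full method, such attention would be applied jointly over single-domain and cross-domain sequences to learn the two preference views described in the abstract.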
Wangyu Wu, Zhenhong Chen, Xianglin Qiu, Siqi Song, Xiaowei Huang, Fei Ma, Jimin Xiao
Computing technology, computer technology
Wangyu Wu, Zhenhong Chen, Xianglin Qiu, Siqi Song, Xiaowei Huang, Fei Ma, Jimin Xiao. LLM-Enhanced Multimodal Fusion for Cross-Domain Sequential Recommendation [EB/OL]. (2025-06-22) [2025-07-02]. https://arxiv.org/abs/2506.17966.