
LLM-Enhanced Reranking for Complementary Product Recommendation

Source: arXiv
Abstract

Complementary product recommendation, which aims to suggest items that are used together to enhance customer value, is a crucial yet challenging task in e-commerce. While existing graph neural network (GNN) approaches have made significant progress in capturing complex product relationships, they often struggle with the accuracy-diversity tradeoff, particularly for long-tail items. This paper introduces a model-agnostic approach that leverages Large Language Models (LLMs) to enhance the reranking of complementary product recommendations. Unlike previous works that use LLMs primarily for data preprocessing and graph augmentation, our method applies LLM-based prompting strategies directly to rerank candidate items retrieved from existing recommendation models, eliminating the need for model retraining. Through extensive experiments on public datasets, we demonstrate that our approach effectively balances accuracy and diversity in complementary product recommendations, with at least 50% lift in accuracy metrics and 2% lift in diversity metrics on average for the top recommended items across datasets.
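The abstract describes prompting an LLM to rerank candidate items retrieved by an existing recommender, with no retraining. A minimal sketch of that pipeline is given below; the prompt wording, the `rerank` helper, and the stub `fake_llm` are illustrative assumptions, not the paper's actual prompting strategy.

```python
# Hypothetical sketch of LLM-based reranking for complementary-product
# candidates. The prompt text and parsing logic are assumptions for
# illustration; the paper's concrete prompting strategies may differ.

def build_rerank_prompt(query_item, candidates):
    """Ask an LLM to reorder candidates by complementarity with query_item."""
    lines = [
        f"A customer is buying: {query_item}.",
        "Rerank these candidate add-on products from most to least",
        "complementary. Answer with the numbers only, comma-separated:",
    ]
    lines += [f"{i + 1}. {c}" for i, c in enumerate(candidates)]
    return "\n".join(lines)

def rerank(query_item, candidates, llm):
    """Rerank `candidates` via `llm`; fall back to the original order
    for any indices the model omits or garbles (keeps the method safe
    as a post-hoc reranker on top of an existing retriever)."""
    reply = llm(build_rerank_prompt(query_item, candidates))
    order = []
    for token in reply.split(","):
        token = token.strip()
        if token.isdigit() and 1 <= int(token) <= len(candidates):
            idx = int(token) - 1
            if idx not in order:
                order.append(idx)
    # Append anything the LLM left out, preserving retriever order.
    order += [i for i in range(len(candidates)) if i not in order]
    return [candidates[i] for i in order]

# Stub standing in for a real LLM API call.
def fake_llm(prompt):
    return "2, 1, 3"

print(rerank("tent", ["hiking boots", "sleeping bag", "camping stove"], fake_llm))
# → ['sleeping bag', 'hiking boots', 'camping stove']
```

Because the reranker only consumes a candidate list and emits a permutation, it is model-agnostic in the sense the abstract claims: any GNN or other retriever can supply the candidates unchanged.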

Zekun Xu, Yudi Zhang

Subject: Computing Technology, Computer Technology

Zekun Xu, Yudi Zhang. LLM-Enhanced Reranking for Complementary Product Recommendation [EB/OL]. (2025-07-22) [2025-08-10]. https://arxiv.org/abs/2507.16237.
