National Preprint Platform

Recommendation Algorithm Based on Attentional Factorization Machine and Deep Neural Network

DeepAFM: Attentional Factorization-Machine based on Deep Neural Network for Recommendation

Abstract

Recommendation algorithms have long been a research hotspot. They use historical user behavior data, combined with user features and other information, and analyze it through algorithmic models to provide personalized recommendations to users. Recommendation algorithms can improve user satisfaction and experience, and can also raise the conversion rate and returns of e-commerce, advertising, and other businesses. This paper addresses a shortcoming of the DeepFM algorithm: all feature interactions share the same weight, so unimportant feature interactions waste computation and hinder model performance. Drawing on the design of the Attention Net in the AFM model, we propose the improved DeepAFM (Attentional Factorization-Machine based on Deep Neural Network) algorithm, which assigns a weight to each cross feature in the second-order part. This alleviates feature sparsity, effectively learns cross-relationships between features, avoids manual feature engineering, reduces wasted computation, and improves model efficiency and accuracy. Comparative experiments against six other baseline algorithms on two public datasets show that DeepAFM achieves better accuracy, and is also more efficient and more interpretable.
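The abstract describes combining an AFM-style attention net, which weights each second-order cross feature, with a DeepFM-style deep component. A minimal NumPy sketch of one such forward pass is shown below; all parameter names and shapes are illustrative assumptions, the first-order linear term is omitted for brevity, and the attention formulation follows the standard AFM design (softmax over a one-hidden-layer scorer on pairwise Hadamard products).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def deepafm_forward(embeds, W_att, h_att, p, dnn_w1, dnn_w2):
    """Hedged sketch of a DeepAFM-style forward pass (parameters hypothetical).

    embeds: (m, k) dense embeddings of the m input fields.
    W_att, h_att: attention-net parameters, shapes (k, t) and (t,).
    p: (k,) projection of the attention-pooled interaction vector to a scalar.
    dnn_w1, dnn_w2: weights of a tiny one-hidden-layer deep component.
    """
    # Pairwise element-wise (Hadamard) products of field embeddings
    pairs = np.array([embeds[i] * embeds[j]
                      for i, j in combinations(range(len(embeds)), 2)])
    # Attention net: a_ij = softmax(h^T relu(W v_ij)) per cross feature
    scores = np.maximum(pairs @ W_att, 0.0) @ h_att
    att = np.exp(scores - scores.max())
    att /= att.sum()
    # Attention-weighted second-order interaction term
    second_order = p @ (att[:, None] * pairs).sum(axis=0)
    # Deep part: MLP over the concatenated embeddings (as in DeepFM)
    hidden = np.maximum(embeds.reshape(-1) @ dnn_w1, 0.0)
    deep = hidden @ dnn_w2
    # Sigmoid output, e.g. a click-through-rate estimate in (0, 1)
    return 1.0 / (1.0 + np.exp(-(second_order + deep)))
```

Compared with plain DeepFM, the only structural change here is the `att` vector: instead of summing all pairwise products with equal weight, each cross feature is rescaled by a learned softmax weight, so uninformative interactions can be suppressed.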

阿荣、漆涛

Computing Technology; Computer Technology

recommendation algorithm; deep learning; attention mechanisms

阿荣,漆涛.基于注意力因子分解机和深度神经网络的推荐算法[EB/OL].(2023-03-13)[2025-08-06].http://www.paper.edu.cn/releasepaper/content/202303-131.
