National Preprint Platform

Tuning Language Models for Robust Prediction of Diverse User Behaviors

Source: arXiv
Abstract

Predicting user behavior is essential for intelligent assistant services, yet deep learning models often struggle to capture long-tailed behaviors. Large language models (LLMs), with their pretraining on vast corpora containing rich behavioral knowledge, offer promise. However, existing fine-tuning approaches tend to overfit to frequent "anchor" behaviors, reducing their ability to predict less common "tail" behaviors. In this paper, we introduce BehaviorLM, a progressive fine-tuning approach that addresses this issue. In the first stage, LLMs are fine-tuned on anchor behaviors while preserving general behavioral knowledge. In the second stage, fine-tuning uses a balanced subset of all behaviors based on sample difficulty to improve tail behavior predictions without sacrificing anchor performance. Experimental results on two real-world datasets demonstrate that BehaviorLM robustly predicts both anchor and tail behaviors and effectively leverages LLM behavioral knowledge to master tail behavior prediction with few-shot examples.
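The abstract does not give implementation details for the second stage. As a rough illustration of the stated idea (selecting a difficulty-balanced subset across behavior classes so tail behaviors are not drowned out by anchors), here is a minimal sketch. The function name, data layout, and the per-sample difficulty score (e.g., the stage-1 model's loss on that sample) are assumptions for illustration, not the authors' actual procedure.

```python
from collections import defaultdict


def balanced_subset(samples, cap):
    """Select up to `cap` hardest samples per behavior class.

    `samples` is a list of (behavior_label, difficulty) pairs, where
    difficulty is a hypothetical per-sample score (higher = harder,
    e.g., the stage-1 model's loss). Capping each class at the same
    size balances frequent anchor behaviors against rare tail ones.
    """
    by_label = defaultdict(list)
    for label, difficulty in samples:
        by_label[label].append((label, difficulty))

    subset = []
    for group in by_label.values():
        group.sort(key=lambda s: s[1], reverse=True)  # hardest first
        subset.extend(group[:cap])
    return subset
```

With a cap of 2, an anchor behavior contributing many easy samples and a tail behavior contributing one hard sample end up with comparable representation in the stage-2 fine-tuning set.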

Fanjin Meng, Jingtao Ding, Jiahui Gong, Chen Yang, Hong Chen, Zuojian Wang, Haisheng Lu, Yong Li

Subject: Computing Technology, Computer Technology

Fanjin Meng, Jingtao Ding, Jiahui Gong, Chen Yang, Hong Chen, Zuojian Wang, Haisheng Lu, Yong Li. Tuning Language Models for Robust Prediction of Diverse User Behaviors [EB/OL]. (2025-05-23) [2025-06-07]. https://arxiv.org/abs/2505.17682.
