HIPPO-Video: Simulating Watch Histories with Large Language Models for Personalized Video Highlighting
The exponential growth of video content has made personalized video highlighting an essential task, as user preferences are highly variable and complex. Existing video datasets, however, often lack personalization, relying on isolated videos or simple text queries that fail to capture the intricacies of user behavior. In this work, we introduce HIPPO-Video, a novel dataset for personalized video highlighting, created using an LLM-based user simulator to generate realistic watch histories reflecting diverse user preferences. The dataset includes 2,040 (watch history, saliency score) pairs, covering 20,400 videos across 170 semantic categories. To validate our dataset, we propose HiPHer, a method that leverages these personalized watch histories to predict preference-conditioned segment-wise saliency scores. Through extensive experiments, we demonstrate that our method outperforms existing generic and query-based approaches, showcasing its potential for highly user-centric video highlighting in real-world scenarios.
Jeongeun Lee, Youngjae Yu, Dongha Lee
Computing Technology; Computer Technology
Jeongeun Lee, Youngjae Yu, Dongha Lee. HIPPO-Video: Simulating Watch Histories with Large Language Models for Personalized Video Highlighting [EB/OL]. (2025-07-22) [2025-08-10]. https://arxiv.org/abs/2507.16873.