
Investigating Algorithmic Bias in YouTube Shorts

Source: arXiv
English Abstract

The rapid growth of YouTube Shorts, now serving over 2 billion monthly users, reflects a global shift toward short-form video as a dominant mode of online content consumption. This study investigates algorithmic bias in YouTube Shorts' recommendation system by analyzing how watch-time duration, topic sensitivity, and engagement metrics influence content visibility and drift. We focus on three content domains: the South China Sea dispute, the 2024 Taiwan presidential election, and general YouTube Shorts content. Using generative AI models, we classified 685,842 videos across relevance, topic category, and emotional tone. Our results reveal a consistent drift away from politically sensitive content toward entertainment-focused videos. Emotion analysis shows a systematic preference for joyful or neutral content, while engagement patterns indicate that highly viewed and liked videos are disproportionately promoted, reinforcing popularity bias. This work provides the first comprehensive analysis of algorithmic drift in YouTube Shorts based on textual content, emotional tone, topic categorization, and varying watch-time conditions. These findings offer new insights into how algorithmic design shapes content exposure, with implications for platform transparency and information diversity.
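The abstract describes classifying recommended videos by topic and emotional tone and then tracking how the share of politically sensitive content shifts across recommendation steps. As an illustration only, the short Python sketch below shows one way such topic drift could be measured from already-labeled records; the field layout, topic labels, and sample data are hypothetical and are not taken from the paper's dataset or code.

# Hypothetical sketch: measuring topic drift across recommendation hops.
# The labels and sample records are illustrative, not from the study's data.
from collections import Counter

# Each record: (recommendation hop, topic label assigned by a classifier,
# e.g. a generative-AI model as described in the abstract).
labeled_videos = [
    (0, "politics"), (0, "politics"), (0, "news"),
    (1, "politics"), (1, "entertainment"), (1, "entertainment"),
    (2, "entertainment"), (2, "entertainment"), (2, "music"),
]

def topic_share_by_hop(records, topic):
    """Return the fraction of videos with the given topic at each hop."""
    totals, hits = Counter(), Counter()
    for hop, label in records:
        totals[hop] += 1
        if label == topic:
            hits[hop] += 1
    return {hop: hits[hop] / totals[hop] for hop in sorted(totals)}

if __name__ == "__main__":
    shares = topic_share_by_hop(labeled_videos, "politics")
    # A declining politically sensitive share across hops would indicate
    # drift toward entertainment content, the pattern the paper reports.
    for hop, share in shares.items():
        print(f"hop {hop}: politics share = {share:.2f}")

In this sketch the drift signal is simply the hop-by-hop change in a topic's share; the paper's actual analysis additionally conditions on watch-time duration, emotional tone, and engagement metrics.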

Mert Can Cakmak, Nitin Agarwal, Diwash Poudel

Computing Technology, Computer Technology

Mert Can Cakmak, Nitin Agarwal, Diwash Poudel. Investigating Algorithmic Bias in YouTube Shorts [EB/OL]. (2025-07-07) [2025-07-16]. https://arxiv.org/abs/2507.04605.
