Emergence of Hierarchical Emotion Organization in Large Language Models
As large language models (LLMs) increasingly power conversational agents, understanding how they model users' emotional states is critical for ethical deployment. Inspired by emotion wheels -- a psychological framework positing that emotions are organized hierarchically -- we analyze probabilistic dependencies between emotional states in model outputs. We find that LLMs naturally form hierarchical emotion trees that align with human psychological models, and that larger models develop more complex hierarchies. We also uncover systematic biases in emotion recognition across socioeconomic personas, with compounding misclassifications for intersectional, underrepresented groups. Human studies reveal striking parallels, suggesting that LLMs internalize aspects of social perception. Beyond highlighting emergent emotional reasoning in LLMs, our results point to the potential of cognitively grounded theories for developing better model evaluations.
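The abstract does not spell out how the hierarchy is recovered from probabilistic dependencies; the sketch below is purely illustrative and is not the authors' method. It assumes a hypothetical co-occurrence matrix over emotion labels (all labels, counts, and thresholds are invented) and applies a simple subsumption heuristic: emotion A is placed above emotion B when A is very likely given B but B is not likely given A.

```python
# Illustrative sketch only (not the paper's implementation): infer parent-child
# links between emotion labels from conditional probabilities P(row | column),
# using a subsumption heuristic: A is a parent of B if P(A|B) is high while
# P(B|A) is low. All labels, counts, and thresholds below are hypothetical.

import numpy as np

emotions = ["joy", "contentment", "pride", "sadness", "grief"]

# Hypothetical symmetric co-occurrence counts over model outputs; the diagonal
# holds each label's marginal count.
counts = np.array([
    [100, 30, 20,  2,  1],   # joy
    [ 30, 35,  5,  1,  0],   # contentment
    [ 20,  5, 25,  0,  0],   # pride
    [  2,  1,  0, 60, 15],   # sadness
    [  1,  0,  0, 15, 20],   # grief
], dtype=float)

def conditional(counts):
    """P(row | column): co-occurrence normalized by each column's marginal."""
    return counts / counts.diagonal()[None, :]

def infer_hierarchy(counts, labels, hi=0.6, lo=0.5):
    """Return (parent, child) pairs under the subsumption heuristic."""
    p = conditional(counts)
    edges = []
    for i, parent in enumerate(labels):
        for j, child in enumerate(labels):
            if i != j and p[i, j] >= hi and p[j, i] < lo:
                edges.append((parent, child))
    return edges

if __name__ == "__main__":
    # With the toy counts above this prints, e.g., joy -> contentment,
    # joy -> pride, and sadness -> grief.
    for parent, child in infer_hierarchy(counts, emotions):
        print(f"{parent} -> {child}")
```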
Bo Zhao, Maya Okawa, Eric J. Bigelow, Rose Yu, Tomer Ullman, Ekdeep Singh Lubana, Hidenori Tanaka
Computing Technology, Computer Technology
Bo Zhao, Maya Okawa, Eric J. Bigelow, Rose Yu, Tomer Ullman, Ekdeep Singh Lubana, Hidenori Tanaka. Emergence of Hierarchical Emotion Organization in Large Language Models [EB/OL]. (2025-07-12) [2025-07-25]. https://arxiv.org/abs/2507.10599