Modeling Subjectivity in Cognitive Appraisal with Language Models
As the use of language models in interdisciplinary, human-centered studies grows, expectations of model capabilities continue to evolve. Beyond excelling at conventional tasks, models are increasingly expected to perform well on user-centric measurements involving confidence and human (dis)agreement -- factors that reflect subjective preferences. While the modeling of subjectivity plays an essential role in cognitive science and has been extensively studied there, it remains under-explored within the NLP community. In light of this gap, we explore how language models can harness subjectivity by conducting comprehensive experiments and analyses across various scenarios using both fine-tuned models and prompt-based large language models (LLMs). Our quantitative and qualitative experimental results indicate that existing post-hoc calibration approaches often fail to produce satisfactory results. However, our findings reveal that personality traits and demographic information are critical for measuring subjectivity. Furthermore, our in-depth analysis offers valuable insights for future research and development in interdisciplinary studies of NLP and cognitive science.
Hainiu Xu, Yulan He, Petr Slovak, Desmond C. Ong, Yuxiang Zhou
Subjects: information dissemination; science of knowledge dissemination; scientific research
Hainiu Xu, Yulan He, Petr Slovak, Desmond C. Ong, Yuxiang Zhou. Modeling Subjectivity in Cognitive Appraisal with Language Models [EB/OL]. (2025-03-14) [2025-08-02]. https://arxiv.org/abs/2503.11381