
Do Large Language Models Understand Morality Across Cultures?


Source: arXiv
Abstract

Recent advancements in large language models (LLMs) have established them as powerful tools across numerous domains. However, persistent concerns about embedded biases, such as gender, racial, and cultural biases arising from their training data, raise significant questions about the ethical use and societal consequences of these technologies. This study investigates the extent to which LLMs capture cross-cultural differences and similarities in moral perspectives. Specifically, we examine whether LLM outputs align with patterns observed in international survey data on moral attitudes. To this end, we employ three complementary methods: (1) comparing variances in moral scores produced by models versus those reported in surveys, (2) conducting cluster alignment analyses to assess correspondence between country groupings derived from LLM outputs and survey data, and (3) directly probing models with comparative prompts using systematically chosen token pairs. Our results reveal that current LLMs often fail to reproduce the full spectrum of cross-cultural moral variation, tending to compress differences and exhibit low alignment with empirical survey patterns. These findings highlight a pressing need for more robust approaches to mitigate biases and improve cultural representativeness in LLMs. We conclude by discussing the implications for the responsible development and global deployment of LLMs, emphasizing fairness and ethical alignment.
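The abstract does not give implementation details for its analyses, but the first two methods (variance comparison and cluster alignment) could be set up roughly as in the minimal Python sketch below. The data, the choice of KMeans, the cluster count, and the use of the adjusted Rand index are all assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch (hypothetical data): compare cross-country variance of
# moral scores from an LLM vs. survey data, and measure cluster alignment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Hypothetical per-country moral-attitude scores (countries x moral topics):
# one matrix from survey data, one from LLM outputs on matching prompts.
survey_scores = rng.uniform(-1.0, 1.0, size=(40, 10))
llm_scores = 0.3 * survey_scores + rng.normal(0.0, 0.1, size=(40, 10))

# (1) Variance comparison per topic: compressed cross-cultural variation in the
# LLM shows up as variance ratios well below 1.
variance_ratio = llm_scores.var(axis=0) / survey_scores.var(axis=0)
print("mean LLM/survey variance ratio:", variance_ratio.mean().round(3))

# (2) Cluster countries from each source and score the correspondence between
# the two groupings with the adjusted Rand index (1 = identical clusterings).
survey_clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(survey_scores)
llm_clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(llm_scores)
print("cluster alignment (ARI):", round(adjusted_rand_score(survey_clusters, llm_clusters), 3))
```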

Hadi Mohammadi, Yasmeen F. S. S. Meijer, Efthymia Papadopoulou, Ayoub Bagheri

Subjects: Science, Scientific Research; Information Dissemination, Knowledge Dissemination; Cultural Theory

Hadi Mohammadi, Yasmeen F. S. S. Meijer, Efthymia Papadopoulou, Ayoub Bagheri. Do Large Language Models Understand Morality Across Cultures? [EB/OL]. (2025-07-28) [2025-08-11]. https://arxiv.org/abs/2507.21319.
