|国家预印本平台
首页|Cultural Bias in Large Language Models: Evaluating AI Agents through Moral Questionnaires

Cultural Bias in Large Language Models: Evaluating AI Agents through Moral Questionnaires

Cultural Bias in Large Language Models: Evaluating AI Agents through Moral Questionnaires

来源:Arxiv_logoArxiv
英文摘要

Are AI systems truly representing human values, or merely averaging across them? Our study suggests a concerning reality: Large Language Models (LLMs) fail to represent diverse cultural moral frameworks despite their linguistic capabilities. We expose significant gaps between AI-generated and human moral intuitions by applying the Moral Foundations Questionnaire across 19 cultural contexts. Comparing multiple state-of-the-art LLMs' origins against human baseline data, we find these models systematically homogenize moral diversity. Surprisingly, increased model size doesn't consistently improve cultural representation fidelity. Our findings challenge the growing use of LLMs as synthetic populations in social science research and highlight a fundamental limitation in current AI alignment approaches. Without data-driven alignment beyond prompting, these systems cannot capture the nuanced, culturally-specific moral intuitions. Our results call for more grounded alignment objectives and evaluation metrics to ensure AI systems represent diverse human values rather than flattening the moral landscape.

Simon Münker

文化理论科学、科学研究

Simon Münker.Cultural Bias in Large Language Models: Evaluating AI Agents through Moral Questionnaires[EB/OL].(2025-07-31)[2025-08-02].https://arxiv.org/abs/2507.10073.点此复制

评论