The Moral Deficiency Effect in AI Decision-Making: Mechanisms and Intervention Strategies (人工智能决策的道德缺失效应及其机制与应对策略)
As artificial intelligence (AI) assumes an increasingly prominent role in high-stakes decision-making, the ethical challenges it raises have become a pressing concern. This paper systematically investigates the moral deficiency effect in AI decision-making by integrating mind perception theory with moral dualism. Through this framework, we identify a dual-path psychological mechanism and propose targeted intervention strategies.

Study 1 explored the limitations of AI in moral judgment using scenarios rooted in the Chinese socio-cultural context. Across three representative situations (educational, age, and gender discrimination), the moral response scores for AI-generated decisions were significantly lower than for those made by human agents. These findings not only align with existing Western research on AI's moral judgment deficits but also suggest that the moral deficiency effect generalizes across cultures.

To understand why this deficiency occurs, Study 2 investigated the underlying psychological mechanisms. Drawing on mind perception theory and moral dualism, we proposed a dual-path mediation model involving perceived agency and perceived experience. Three sub-studies first tested the two mediators separately and then assessed their combined effects. Using experimental mediation, we provide the first causal evidence of how the decision-maker's identity (AI vs. human) interacts with dimensions of mind perception. Specifically, when participants perceived an AI as having greater agency and experience, their moral approval of its decisions significantly increased, an effect not observed with human decision-makers. Structural equation modeling further confirmed a synergistic effect between the two paths, indicating that their combined explanatory power exceeds that of either path alone.
This suggests that, in real-world settings, moral responses to AI are shaped simultaneously by both cognitive pathways.

Building on these mechanistic insights, Study 3 tested intervention strategies to mitigate the AI-induced moral deficiency effect. In a double-blind, randomized controlled experiment, we evaluated two approaches: anthropomorphic design and mental expectancy enhancement. Both strategies significantly improved moral responses by increasing participants' perceptions of the AI's agency and experience, and a combined intervention produced a stronger effect than either strategy alone. Although the interventions target different elements (one focuses on the AI system, the other on human cognition), both operate through the shared mechanism of mind perception. In doing so, they enhance moral accountability for an AI's unethical behavior, offering a practical pathway to address moral deficiencies in AI decision-making.

Ultimately, this research makes a novel contribution to the field of "algorithmic ethics." Unlike traditional approaches that emphasize technical design principles and fairness algorithms, our study adopts a psychological perspective centered on the human recipient of AI-driven decisions. Practically, we propose actionable intervention strategies grounded in mind perception, while our synergistic model provides a robust framework for AI ethical governance. Collectively, these findings deepen the understanding of moral judgment in AI contexts, guide the development of algorithmic accountability systems, and support the optimization of human-AI collaboration, thereby establishing a critical psychological foundation for the ethical deployment of AI.
胡小勇、李穆峰、李悦、李凯、喻丰
Department of Psychology, Wuhan University; Faculty of Psychology, Southwest University; Department of Psychology, Wuhan University; Department of Psychology, Wuhan University; Department of Psychology, Wuhan University
Computing Technology, Computer Technology; Automation Technology and Economy; Fundamental Theory of Automation
Keywords: artificial intelligence; moral deficiency effect; mind perception; anthropomorphism; expectation adjustment
胡小勇, 李穆峰, 李悦, 李凯, 喻丰. 人工智能决策的道德缺失效应及其机制与应对策略 [EB/OL]. (2025-09-07) [2025-09-12]. https://chinaxiv.org/abs/202509.00059.