
人工智能主管提出的道德行为建议更少被遵从

Employees adhere less to advice on moral behavior from artificial intelligence supervisors than from human supervisors

赵一骏 许丽颖 喻丰

摘要 (Abstract)

The rapid development of artificial intelligence (AI) has brought major changes to organizations, where AI now takes on supervisory roles that can directly influence employee behavior. Six progressive scenario experiments (N = 1642) examined how people respond differently to advice on moral behavior given by AI versus human supervisors, along with the underlying psychological mechanism and its boundary conditions. Results showed that people adhered less to advice on moral behavior from AI supervisors than from human supervisors (Experiments 1a–5). This occurred because people felt less evaluation apprehension when interacting with AI supervisors (Experiments 2–3). Moreover, the stronger an individual's anthropomorphism tendency, or the more anthropomorphized the AI supervisor, the more people adhered to the AI supervisor's advice on moral behavior (Experiments 4–5). These findings help to better understand people's reactions to AI supervisors in organizations, reveal the limitations of AI supervisors in moral guidance, and provide practical guidance and improvement strategies for deploying AI leadership in organizational management.

Abstract

The use of artificial intelligence (AI) in organizations has evolved from that of a tool to that of a supervisor. Although previous research has examined people's reactions to AI supervisors in general, few studies have investigated the effectiveness of AI supervisors, specifically whether individuals adhere to their advice on moral behavior. The present research compares employees' adherence to advice on moral behavior given by AI and human supervisors, and identifies the psychological mechanism and boundary conditions behind the differences. To test our hypotheses, we conducted six experiments and three pilot experiments (N = 1642, including 179 pilot-experiment participants) involving different types of moral behavior in organizations, such as participating in activities to help the disabled, volunteering for environmental protection or child welfare, and making charitable donations after disasters or for colleagues in difficulty. Experiments 1a and 1b used a single-factor, two-level, between-subjects design: 180 participants were randomly assigned to one of two conditions (advice on moral behavior given by a human versus an AI supervisor), and their adherence to the supervisor's advice was measured across scenarios. Experiment 2 followed the same design, with additional measures of evaluation apprehension and perceived mind to test their mediating roles. To establish a causal chain between the mediator and the dependent variable and to demonstrate the robustness of our findings, Experiment 3 examined the underlying mechanism with a 2 (supervisor: human versus AI) × 2 (evaluation apprehension: high versus low) between-subjects design. Experiments 4 and 5 tested the moderating role of anthropomorphism: Experiment 4 measured participants' tendency to anthropomorphize, and Experiment 5 manipulated the anthropomorphism of the AI supervisor.
As predicted, participants were less likely to follow the moral advice of an AI supervisor than that of a human supervisor (Experiments 1a–5). The robustness of this finding was demonstrated by the diversity of our scenario settings and samples, and we excluded the potential effects of perceived rationality, negative emotions, feeling exploited, perceived autonomy, and several individual differences (pilot experiments and Experiments 1a–1b). In addition, this research identified evaluation apprehension as the mechanism underlying employees' adherence to advice from different supervisors: participants believed they would receive less social judgment and evaluation from an AI supervisor than from a human supervisor, and were consequently less willing to adhere to the AI's advice (Experiments 2–5). The research also demonstrated the moderating effect of anthropomorphism (Experiments 4–5). In Experiment 4, individuals with a high tendency to anthropomorphize showed no significant difference in adherence to advice on moral behavior from human versus AI supervisors, whereas participants with a low anthropomorphism tendency adhered more to a human supervisor than to an AI supervisor. In Experiment 5, participants adhered more to an AI supervisor with a human-like name and communication style than to a mechanized AI supervisor.
The study contributes to the literature on AI leadership by highlighting the limitations of AI supervisors in providing advice on moral behavior. The results also confirm the phenomenon of algorithm aversion in the moral domain, indicating that people hesitate to accept AI involvement in moral decision-making, even in an advisory role. The study further identifies evaluation apprehension as a factor influencing adherence to AI advice: individuals may be less likely to follow AI advice because they are less concerned about potential social judgment when interacting with AI supervisors. Finally, anthropomorphism may be a useful approach to enhance the effectiveness of AI supervisors.

Keywords

artificial intelligence supervisor / advice adherence / advice on moral behavior / evaluation apprehension / anthropomorphism

Cite this article

赵一骏,许丽颖,喻丰.人工智能主管提出的道德行为建议更少被遵从[EB/OL].(2024-09-04)[2026-04-05].https://chinaxiv.org/abs/202409.00082.

Subject classification

Computing technology, computer technology / Science, scientific research

First published: 2024-09-04