Multi-Agent LLMs as Ethics Advocates in AI-Based Systems
Incorporating ethics into the requirements elicitation process is essential for creating ethically aligned systems. Although manual elicitation of ethics requirements is effective, it requires diverse input from multiple stakeholders, which can be challenging given time and resource constraints; moreover, ethics is often given low priority during requirements elicitation. This study proposes a framework for generating draft ethics requirements by introducing an ethics advocate agent into a multi-agent LLM setting. The agent critiques and provides input on ethical issues based on the system description. The framework is evaluated through two case studies from different contexts, demonstrating that it captures the majority of the ethics requirements identified by researchers in 30-minute interviews and introduces several additional relevant requirements. The evaluation also reveals reliability issues in generating ethics requirements, underscoring the need for human feedback in this sensitive domain. We believe this work can facilitate the broader adoption of ethics in the requirements engineering process, ultimately leading to more ethically aligned products.
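The abstract describes an architecture in which an ethics advocate agent critiques requirement drafts against the system description. The sketch below illustrates one plausible way such a critique-and-revise loop could be wired; it is not the authors' implementation, and `call_llm`, the agent prompts, and the two-round loop are assumptions made purely for illustration.

```python
# Minimal sketch (assumed design, not the paper's implementation) of a
# requirements analyst agent paired with an ethics advocate agent.
# `call_llm` is a hypothetical wrapper around any chat-completion API.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical LLM wrapper; replace with a real chat-completion call."""
    raise NotImplementedError

def requirements_analyst(system_description: str, feedback: str = "") -> str:
    """Drafts (or revises) requirements for the described system."""
    prompt = f"System description:\n{system_description}\n"
    if feedback:
        prompt += f"\nEthics feedback to address:\n{feedback}\n"
    prompt += "\nWrite a numbered list of requirements."
    return call_llm("You are a requirements analyst.", prompt)

def ethics_advocate(system_description: str, draft_requirements: str) -> str:
    """Critiques the draft and proposes ethics requirements
    (e.g., fairness, privacy, transparency, accountability)."""
    prompt = (
        f"System description:\n{system_description}\n\n"
        f"Draft requirements:\n{draft_requirements}\n\n"
        "Identify ethical issues and propose additional ethics requirements."
    )
    return call_llm("You are an ethics advocate reviewing AI system requirements.", prompt)

def elicit_with_ethics_advocate(system_description: str, rounds: int = 2) -> str:
    """Analyst drafts requirements; the ethics advocate critiques; the analyst revises."""
    draft = requirements_analyst(system_description)
    for _ in range(rounds):
        critique = ethics_advocate(system_description, draft)
        draft = requirements_analyst(system_description, feedback=critique)
    # The final draft is a starting point for human review, consistent with the
    # paper's emphasis on human feedback in this sensitive domain.
    return draft
```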
Asma Yamani, Malak Baslyman, Moataz Ahmed
Computing Technology; Computer Technology
Asma Yamani, Malak Baslyman, Moataz Ahmed. Multi-Agent LLMs as Ethics Advocates in AI-Based Systems [EB/OL]. (2025-07-11) [2025-07-25]. https://arxiv.org/abs/2507.08392.