National Preprint Platform

A Safe Harbor for AI Evaluation and Red Teaming


Source: arXiv
Abstract

Independent evaluation and red teaming are critical for identifying the risks posed by generative AI systems. However, the terms of service and enforcement strategies used by prominent AI companies to deter model misuse can disincentivize good-faith safety evaluations. This causes some researchers to fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal. Although some companies offer researcher access programs, they are an inadequate substitute for independent research access, as they have limited community representation, receive inadequate funding, and lack independence from corporate incentives. We propose that major AI developers commit to providing a legal and technical safe harbor, indemnifying public interest safety research and protecting it from the threat of account suspensions or legal reprisal. These proposals emerged from our collective experience conducting safety, privacy, and trustworthiness research on generative AI systems, where norms and incentives could be better aligned with public interests, without exacerbating model misuse. We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.

Zheng-Xin Yong, Suhas Kotha, Patrick Chao, Diyi Yang, Ashwin Ramaswami, Daniel Kang, Yangsibo Huang, Alexander Robey, Shayne Longpre, Sayash Kapoor, Arvind Narayanan, Xianjun Yang, Ruoxi Jia, Aviya Skowron, Kevin Klyman, Borhane Blili-Hamelin, Weiyan Shi, Peter Henderson, Rishi Bommasani, Percy Liang, Sandy Pentland, Reid Southen, Yi Zeng

Subjects: Science, Scientific Research; Computing Technology, Computer Technology

Zheng-Xin Yong, Suhas Kotha, Patrick Chao, Diyi Yang, Ashwin Ramaswami, Daniel Kang, Yangsibo Huang, Alexander Robey, Shayne Longpre, Sayash Kapoor, Arvind Narayanan, Xianjun Yang, Ruoxi Jia, Aviya Skowron, Kevin Klyman, Borhane Blili-Hamelin, Weiyan Shi, Peter Henderson, Rishi Bommasani, Percy Liang, Sandy Pentland, Reid Southen, Yi Zeng. A Safe Harbor for AI Evaluation and Red Teaming [EB/OL]. (2024-03-07) [2025-07-23]. https://arxiv.org/abs/2403.04893.
