Designing a Dashboard for Transparency and Control of Conversational AI

Source: arXiv
Abstract

Conversational LLMs function as black box systems, leaving users guessing about why they see the output they do. This lack of transparency is potentially problematic, especially given concerns around bias and truthfulness. To address this issue, we present an end-to-end prototype, connecting interpretability techniques with user experience design, that seeks to make chatbots more transparent. We begin by showing evidence that a prominent open-source LLM has a "user model": examining the internal state of the system, we can extract data related to a user's age, gender, education level, and socioeconomic status. Next, we describe the design of a dashboard that accompanies the chatbot interface, displaying this user model in real time. The dashboard can also be used to control the user model and the system's behavior. Finally, we discuss a study in which users conversed with the instrumented system. Our results suggest that users appreciate seeing internal states, which helped them expose biased behavior and increased their sense of control. Participants also made valuable suggestions that point to future directions for both design and machine learning research. The project page and video demo of our TalkTuner system are available at https://bit.ly/talktuner-project-page
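The abstract's claim that user attributes can be extracted from the model's internal state is commonly realized with linear probes: small classifiers trained on a model's hidden activations. The sketch below is a minimal, hypothetical illustration of that idea using synthetic vectors in place of real LLM activations (the paper's actual probing setup, model, and data are not specified here); the probe is plain logistic regression fit by gradient descent.

```python
import numpy as np

# Toy stand-in for LLM hidden states: in the real setting, each row would be
# an activation vector from the chatbot while it converses with a labeled user.
rng = np.random.default_rng(0)
d, n = 64, 400                      # hidden-state dimension, number of examples

# Synthetic "activations": two classes (e.g. a binary user attribute)
# separated along one random direction, plus unit Gaussian noise.
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, direction)

# Train a logistic-regression probe with gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(acts @ w + b)))    # predicted probability of class 1
    w -= 0.5 * (acts.T @ (p - labels)) / n   # gradient step on weights
    b -= 0.5 * np.mean(p - labels)           # gradient step on bias

preds = (acts @ w + b) > 0                   # probe's class predictions
accuracy = float(np.mean(preds == labels))
```

If the probe reaches high accuracy, the attribute is linearly decodable from the activations, which is the kind of evidence the paper uses to argue the LLM maintains a user model; the learned direction `w` can then also serve as a handle for controlling that attribute.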

Oam Patel, Olivia Seow, Martin Wattenberg, Aoyu Wu, Jan Riecke, Nicholas Castillo Marin, Trevor DePodesta, Catherine Yeh, Fernanda Viégas, Kenneth Li, Shivam Raval, Yida Chen

Subjects: information and knowledge dissemination; computing and computer technology; education

Oam Patel, Olivia Seow, Martin Wattenberg, Aoyu Wu, Jan Riecke, Nicholas Castillo Marin, Trevor DePodesta, Catherine Yeh, Fernanda Viégas, Kenneth Li, Shivam Raval, Yida Chen. Designing a Dashboard for Transparency and Control of Conversational AI [EB/OL]. (2024-06-12) [2025-05-16]. https://arxiv.org/abs/2406.07882
