Should you use LLMs to simulate opinions? Quality checks for early-stage deliberation
The array of emergent capabilities of large language models (LLMs) has sparked interest in assessing their ability to simulate human opinions in a variety of contexts, potentially serving as surrogates for human subjects in opinion surveys. However, previous evaluations of this capability have depended heavily on costly, domain-specific human survey data, and mixed empirical results about LLM effectiveness create uncertainty for managers about whether investing in this technology is justified in early-stage research. To address these challenges, we introduce a series of quality checks to support early-stage deliberation about the viability of using LLMs for simulating human opinions. These checks emphasize logical constraints, model stability, and alignment with stakeholder expectations of model outputs, thereby reducing dependence on human-generated data in the initial stages of evaluation. We demonstrate the usefulness of the proposed quality control tests in the context of AI-assisted content moderation, an application that both advocates and critics of LLMs' capabilities to simulate human opinion see as a desirable potential use case. None of the tested models passed all quality control checks, revealing several failure modes. We conclude by discussing implications of these failure modes and recommend how organizations can use our proposed tests in prompt engineering and in their risk management practices when considering the use of LLMs for opinion simulation. We make our crowdsourced dataset of claims with human and LLM annotations publicly available for future research.
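The abstract names three families of checks (logical constraints, model stability, and alignment with stakeholder expectations) without specifying how they are computed. As one hedged illustration, the sketch below shows what a minimal model-stability check might look like: it queries an annotation function with several paraphrases of the same claim and reports whether the returned labels agree often enough. The function names (`stability_check`, `annotate_claim`), the paraphrase set, and the agreement threshold are hypothetical placeholders for illustration, not the authors' implementation.

```python
from collections import Counter
from typing import Callable, List


def stability_check(annotate: Callable[[str], str],
                    paraphrases: List[str],
                    threshold: float = 0.8) -> bool:
    """Return True if the most common label covers at least `threshold`
    of the paraphrased prompts (a simple agreement-rate criterion)."""
    labels = [annotate(p) for p in paraphrases]
    most_common_count = Counter(labels).most_common(1)[0][1]
    agreement = most_common_count / len(labels)
    return agreement >= threshold


# Hypothetical stand-in for an LLM call; in practice this would query a model.
def annotate_claim(prompt: str) -> str:
    return "misleading" if "overnight" in prompt else "not misleading"


paraphrases = [
    "Is the claim 'X cures Y overnight' misleading?",
    "Would you label the statement 'X cures Y overnight' as misleading?",
    "Does the assertion 'X cures Y overnight' mislead readers?",
]
print(stability_check(annotate_claim, paraphrases))
```

An analogous structure could be used for logical-constraint checks (for example, verifying that a model's judgments do not contradict each other across logically related claims), though the paper's exact test battery is not reproduced here.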
Terrence Neumann, Maria De-Arteaga, Sina Fazelpour
Computing Technology, Computer Technology
Terrence Neumann, Maria De-Arteaga, Sina Fazelpour. Should you use LLMs to simulate opinions? Quality checks for early-stage deliberation [EB/OL]. (2025-04-11) [2025-04-27]. https://arxiv.org/abs/2504.08954.