Psychometric Item Validation Using Virtual Respondents with Trait-Response Mediators
As psychometric surveys are increasingly used to assess the traits of large language models (LLMs), the need for scalable survey item generation suited to LLMs has also grown. A critical challenge here is ensuring the construct validity of generated items, i.e., whether they truly measure the intended trait. Traditionally, this requires costly, large-scale human data collection. To make this process efficient, we present a framework for virtual respondent simulation using LLMs. Our central idea is to account for mediators: factors through which the same trait can give rise to varying responses to a survey item. By simulating respondents with diverse mediators, we identify survey items that robustly measure the intended traits. Experiments on three psychological trait theories (Big5, Schwartz, VIA) show that our mediator generation methods and simulation framework effectively identify high-validity items. LLMs demonstrate the ability to generate plausible mediators from trait definitions and to simulate respondent behavior for item validation. Our problem formulation, metrics, methodology, and dataset open a new direction for cost-effective survey development and a deeper understanding of how LLMs replicate human-like behavior. We will publicly release our dataset and code to support future work.
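To make the framework concrete, the sketch below illustrates the general idea of mediator-aware virtual respondent simulation. It is a minimal illustration, not the authors' method: the `query_llm` stub, the prompt wording, the hand-written mediators, and the validity score (trait-response correlation averaged over mediators) are all assumptions introduced here for exposition.

```python
"""Minimal sketch of mediator-aware virtual respondent simulation.

Assumptions (not from the paper): the prompt wording, the `query_llm`
stub, and the validity score are illustrative placeholders, not the
authors' exact prompts or metrics.
"""
import statistics


def query_llm(prompt: str) -> int:
    """Hypothetical stub for an LLM call returning a 1-5 Likert rating.

    Replace with a real API or local-model call; a fixed neutral answer
    is returned here so the sketch runs offline.
    """
    return 3


def simulate_response(trait: str, level: int, mediator: str, item: str) -> int:
    """Ask the (stubbed) LLM to answer one survey item as a persona."""
    prompt = (
        f"You are a respondent whose {trait} is {level} on a 1-5 scale. "
        f"Context about you: {mediator}\n"
        f"Rate your agreement with this statement on a 1-5 Likert scale: {item}"
    )
    return query_llm(prompt)


def validity_score(trait: str, item: str, mediators: list[str]) -> float:
    """Illustrative validity score: correlation between the assigned
    trait level and the simulated response, averaged over mediators."""
    levels = [1, 2, 3, 4, 5]
    per_mediator = []
    for mediator in mediators:
        responses = [simulate_response(trait, lv, mediator, item) for lv in levels]
        if statistics.pstdev(responses) == 0:
            per_mediator.append(0.0)  # flat responses carry no trait signal
        else:
            per_mediator.append(statistics.correlation(levels, responses))
    return statistics.mean(per_mediator)


if __name__ == "__main__":
    # Hypothetical mediators; the paper generates them with an LLM
    # from trait definitions rather than writing them by hand.
    mediators = [
        "You grew up in a large, talkative family.",
        "You work a night shift and rarely socialize.",
        "You recently moved to a new city.",
    ]
    item = "I feel energized after spending time with a large group of people."
    score = validity_score("extraversion", item, mediators)
    print(f"Estimated validity score for the item: {score:.2f}")
```

An item whose responses track the assigned trait level consistently across diverse mediators would, under this sketch, receive a high score; items whose responses swing with the mediator rather than the trait would not.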
Sungjib Lim, Woojung Song, Eun-Ju Lee, Yohan Jo
Computing Technology, Computer Technology
Sungjib Lim, Woojung Song, Eun-Ju Lee, Yohan Jo. Psychometric Item Validation Using Virtual Respondents with Trait-Response Mediators [EB/OL]. (2025-07-08) [2025-07-17]. https://arxiv.org/abs/2507.05890.