National Preprint Platform

Role-Playing Evaluation for Large Language Models

Source: arXiv

Abstract

Large Language Models (LLMs) demonstrate a notable capacity for adopting personas and engaging in role-playing. However, evaluating this ability presents significant challenges, as human assessments are resource-intensive and automated evaluations can be biased. To address this, we introduce Role-Playing Eval (RPEval), a novel benchmark designed to assess LLM role-playing capabilities across four key dimensions: emotional understanding, decision-making, moral alignment, and in-character consistency. This article details the construction of RPEval and presents baseline evaluations. Our code and dataset are available at https://github.com/yelboudouri/RPEval
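To make the four evaluation dimensions concrete, the sketch below aggregates hypothetical per-sample scores into one mean score per dimension. The dimension names follow the abstract, but the score format, function name, and aggregation scheme are assumptions for illustration, not the actual RPEval API.

```python
# Hypothetical aggregation of per-dimension role-playing scores.
# Dimension names come from the abstract; everything else is assumed.

DIMENSIONS = (
    "emotional_understanding",
    "decision_making",
    "moral_alignment",
    "in_character_consistency",
)

def aggregate_scores(per_sample_scores):
    """Average each dimension's score (0.0-1.0) across evaluated samples."""
    totals = {d: 0.0 for d in DIMENSIONS}
    for sample in per_sample_scores:
        for d in DIMENSIONS:
            totals[d] += sample[d]
    n = len(per_sample_scores)
    return {d: totals[d] / n for d in DIMENSIONS}

if __name__ == "__main__":
    samples = [
        {"emotional_understanding": 1.0, "decision_making": 0.0,
         "moral_alignment": 1.0, "in_character_consistency": 1.0},
        {"emotional_understanding": 0.0, "decision_making": 1.0,
         "moral_alignment": 1.0, "in_character_consistency": 0.0},
    ]
    print(aggregate_scores(samples))
```

A per-dimension breakdown like this makes it possible to compare models on, say, moral alignment independently of in-character consistency, rather than collapsing role-playing quality into a single number.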

Yassine El Boudouri, Walter Nuninger, Julian Alvarez, Yvan Peter

Subject classification: Computing Technology, Computer Technology

Yassine El Boudouri, Walter Nuninger, Julian Alvarez, Yvan Peter. Role-Playing Evaluation for Large Language Models [EB/OL]. (2025-05-19) [2025-06-04]. https://arxiv.org/abs/2505.13157.
