Do Language Model Agents Align with Humans in Rating Visualizations? An Empirical Study
Large language models encode knowledge across various domains and demonstrate the ability to understand visualizations. They may also capture visualization design knowledge and thus potentially help reduce the cost of formative studies. However, it remains an open question whether large language models can predict human feedback on visualizations. To investigate this question, we conducted three studies examining whether large model-based agents can simulate human ratings in visualization tasks. The first study, which replicates a published human-subject study, shows that agents are promising at human-like reasoning and rating; its results guide the design of the subsequent experiments. The second study repeats six human-subject studies on subjective ratings reported in the literature, replacing the human participants with agents. In consultation with five human experts, this study demonstrates that the alignment between agent and human ratings correlates positively with the experts' pre-experiment confidence levels. The third study tests commonly used agent-enhancement techniques, including preprocessing of visual and textual inputs and knowledge injection. The results reveal robustness issues with these techniques and their potential to introduce biases. Together, the three studies indicate that language model-based agents can potentially simulate human ratings in visualization experiments, provided that they are guided by high-confidence hypotheses from expert evaluators. We also demonstrate a usage scenario in which agents swiftly evaluate prototypes, and we discuss insights and future directions for evaluating and improving the alignment of agent ratings with human ratings. We note that such simulations can only complement, not replace, user studies.
Zekai Shao, Yi Shan, Yixuan He, Yuxuan Yao, Junhong Wang, Xiaolong Zhang, Yu Zhang, Siming Chen
Computing Technology; Computer Technology
Zekai Shao, Yi Shan, Yixuan He, Yuxuan Yao, Junhong Wang, Xiaolong Zhang, Yu Zhang, Siming Chen. Do Language Model Agents Align with Humans in Rating Visualizations? An Empirical Study [EB/OL]. (2025-05-10) [2025-06-08]. https://arxiv.org/abs/2505.06702.