Measuring and predicting variation in the difficulty of questions about data visualizations
Understanding what is communicated by data visualizations is a critical component of scientific literacy in the modern era. However, it remains unclear why some tasks involving data visualizations are more difficult than others. Here we administered a composite test composed of five widely used tests of data visualization literacy to a large sample of U.S. adults (N=503 participants). We found that items in the composite test spanned the full range of possible difficulty levels, and that our estimates of item-level difficulty were highly reliable. However, the type of data visualization shown and the type of task involved explained only a modest amount of variation in performance across items, relative to the reliability of the estimates we obtained. These results highlight the need for finer-grained ways of characterizing these items that predict the reliable variation in difficulty measured in this study, and that generalize to other tests of data visualization understanding.
Arnav Verma, Judith E. Fan
Computing Technology, Computer Technology
Arnav Verma, Judith E. Fan. Measuring and predicting variation in the difficulty of questions about data visualizations [EB/OL]. (2025-05-12) [2025-07-03]. https://arxiv.org/abs/2505.08031.