National Preprint Platform

Exploring MLLMs Perception of Network Visualization Principles

Source: arXiv
Abstract

In this paper, we test whether Multimodal Large Language Models (MLLMs) can match human-subject performance in tasks involving the perception of properties in network layouts. Specifically, we replicate a human-subject experiment about perceiving quality (namely stress) in network layouts using GPT-4o and Gemini-2.5. Our experiments show that giving MLLMs exactly the same study information as trained human participants results in a similar performance to human experts and exceeds the performance of untrained non-experts. Additionally, we show that prompt engineering that deviates from the human-subject experiment can lead to better-than-human performance in some settings. Interestingly, like human subjects, the MLLMs seem to rely on visual proxies rather than computing the actual value of stress, indicating some sense or facsimile of perception. Explanations from the models provide descriptions similar to those used by the human participants (e.g., even distribution of nodes and uniform edge lengths).
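The "stress" that participants and models are asked to perceive is the standard graph-drawing stress measure: the weighted sum of squared differences between Euclidean distances in the layout and graph-theoretic distances. A minimal sketch, assuming the common weighting w_ij = d_ij^-2 (the paper may normalize or weight differently):

```python
import itertools
import math

def layout_stress(positions, dist):
    """Raw stress of a 2-D layout: sum over node pairs of
    w_ij * (||x_i - x_j|| - d_ij)^2, with w_ij = d_ij ** -2
    (a common convention; an assumption here, not taken from the paper)."""
    total = 0.0
    for i, j in itertools.combinations(range(len(positions)), 2):
        d = dist[i][j]  # graph-theoretic (shortest-path) distance
        (xi, yi), (xj, yj) = positions[i], positions[j]
        euclid = math.hypot(xi - xj, yi - yj)
        total += (d ** -2) * (euclid - d) ** 2
    return total

# Example: a path graph a-b-c drawn on a line with unit spacing
# matches its graph distances exactly, so its stress is zero.
pos = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
```

Layouts with evenly distributed nodes and uniform edge lengths (the visual proxies the human participants and the MLLMs both report) tend to score low on this measure.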

Jacob Miller, Markus Wallinger, Ludwig Felder, Timo Brand, Henry Förster, Johannes Zink, Chunyang Chen, Stephen Kobourov

Computing Technology, Computer Technology

Jacob Miller, Markus Wallinger, Ludwig Felder, Timo Brand, Henry Förster, Johannes Zink, Chunyang Chen, Stephen Kobourov. Exploring MLLMs Perception of Network Visualization Principles [EB/OL]. (2025-06-17) [2025-06-29]. https://arxiv.org/abs/2506.14611.
