
Challenges for AI in Multimodal STEM Assessments: a Human-AI Comparison

Source: arXiv
Abstract

Generative AI systems have rapidly advanced, with multimodal input capabilities enabling reasoning beyond text-based tasks. In education, these advancements could influence assessment design and question answering, presenting both opportunities and challenges. To investigate these effects, we introduce a high-quality dataset of 201 university-level STEM questions, manually annotated with features such as image type, role, problem complexity, and question format. Our study analyzes how these features affect generative AI performance compared to students. We evaluate four model families with five prompting strategies, comparing results to the average of 546 student responses per question. Although the best model correctly answers 58.5% of the questions on average using majority-vote aggregation, human participants consistently outperform AI on questions involving visual components. Interestingly, human performance remains stable across question features but varies by subject, whereas AI performance is sensitive to both subject matter and question features. Finally, we provide actionable insights for educators, demonstrating how question design can enhance academic integrity by leveraging features that challenge current AI systems without increasing the cognitive burden for students.
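As an illustration of the majority-vote aggregation mentioned in the abstract, here is a minimal Python sketch; the function name, tie-breaking rule, and sample answers are assumptions for illustration, not the paper's implementation.

from collections import Counter

def majority_vote(answers):
    # Hypothetical helper: return the most frequent answer among repeated
    # model samples; on ties, the answer seen first wins, because
    # Counter.most_common preserves insertion order among equal counts.
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

# Example: five sampled responses from one model to one question.
print(majority_vote(["B", "A", "B", "C", "B"]))  # prints "B"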

Aymeric de Chillaz, Anna Sotnikova, Patrick Jermann, Antoine Bosselut

Educational computing technology; Computer technology

Aymeric de Chillaz, Anna Sotnikova, Patrick Jermann, Antoine Bosselut. Challenges for AI in Multimodal STEM Assessments: a Human-AI Comparison [EB/OL]. (2025-07-02) [2025-07-16]. https://arxiv.org/abs/2507.03013.