Enhancing the Learning Experience: Using Vision-Language Models to Generate Questions for Educational Videos
Web-based educational videos offer flexible learning opportunities and are becoming increasingly popular. However, improving user engagement and knowledge retention remains a challenge. Automatically generated questions can activate learners and support their knowledge acquisition. Further, they can help teachers and learners assess their understanding. While large language and vision-language models have been employed in various tasks, their application to question generation for educational videos remains underexplored. In this paper, we investigate the capabilities of current vision-language models for generating learning-oriented questions for educational video content. We assess (1) out-of-the-box models' performance; (2) fine-tuning effects on content-specific question generation; (3) the impact of different video modalities on question quality; and (4) in a qualitative study, question relevance, answerability, and difficulty levels of generated questions. Our findings delineate the capabilities of current vision-language models, highlighting the need for fine-tuning and addressing challenges in question diversity and relevance. We identify requirements for future multimodal datasets and outline promising research directions.
Markos Stamatakis, Ralph Ewerth, Anett Hoppe, Joshua Berger, Christian Wartena
Educational computing technology, Computer technology
Markos Stamatakis, Ralph Ewerth, Anett Hoppe, Joshua Berger, Christian Wartena. Enhancing the Learning Experience: Using Vision-Language Models to Generate Questions for Educational Videos [EB/OL]. (2025-05-03) [2025-06-04]. https://arxiv.org/abs/2505.01790.