
Students' Perceptions to a Large Language Model's Generated Feedback and Scores of Argumentation Essays

Source: arXiv
English Abstract

Students in introductory physics courses often rely on ineffective strategies, focusing on final answers rather than understanding underlying principles. Integrating scientific argumentation into problem-solving fosters critical thinking and links conceptual knowledge with practical application. By enabling learners to articulate their scientific arguments for solving problems, and by providing real-time feedback on students' strategies, we aim to help students develop stronger problem-solving skills. Providing timely, individualized feedback to students in large-enrollment physics courses remains a challenge. Recent advances in Artificial Intelligence (AI) offer promising solutions. This study investigates the potential of AI-generated feedback on students' written scientific arguments in an introductory physics class. Using OpenAI's GPT-4o, we provided delayed feedback on students' written scientific arguments and surveyed them about the perceived usefulness and accuracy of this feedback. Our findings offer insights into the viability of implementing real-time AI feedback to enhance students' problem-solving and metacognitive skills in large-enrollment classrooms.
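The abstract mentions using GPT-4o to generate feedback on students' written arguments. As a rough illustration of how such a pipeline could be wired up, the sketch below calls the OpenAI chat completions API with a grading-style system prompt. The prompt wording, rubric, and function name are assumptions for illustration only and are not taken from the paper's instrument.

```python
# Illustrative sketch only: the prompt wording, rubric, and function name are
# assumptions, not the feedback instrument described in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def feedback_on_argument(essay: str) -> str:
    """Ask GPT-4o for formative feedback on a student's written scientific argument."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a physics instructor. Evaluate the student's scientific "
                    "argument for a problem: comment on the claim, the evidence, and "
                    "the reasoning linking them to physics principles, then give a "
                    "score from 1 to 5."
                ),
            },
            {"role": "user", "content": essay},
        ],
        temperature=0.2,  # keep feedback relatively consistent across students
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = (
        "The block slides down because the net force along the incline is "
        "mg*sin(theta) minus friction, so by Newton's second law it accelerates "
        "down the slope."
    )
    print(feedback_on_argument(sample))
```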

Winter Allen, Anand Shanker, N. Sanjay Rebello

Educational computing technology; computer technology

Winter Allen, Anand Shanker, N. Sanjay Rebello. Students' Perceptions to a Large Language Model's Generated Feedback and Scores of Argumentation Essays [EB/OL]. (2025-08-20) [2025-09-02]. https://arxiv.org/abs/2508.14759.
