Toward Automated Qualitative Analysis: Leveraging Large Language Models for Tutoring Dialogue Evaluation
Our study introduces an automated system leveraging large language models (LLMs) to assess the effectiveness of five key tutoring strategies: 1. giving effective praise, 2. reacting to errors, 3. determining what students know, 4. helping students manage inequity, and 5. responding to negative self-talk. Using a public dataset from the Teacher-Student Chatroom Corpus, our system classifies each use of a tutoring strategy as either desired or undesired. The system employs GPT-3.5 with few-shot prompting to assess the use of these strategies and analyze tutoring dialogues. The results show that, across the five tutoring strategies, True Negative Rates (TNR) range from 0.655 to 0.738 and Recall ranges from 0.327 to 0.432, indicating that the model is effective at excluding incorrect classifications but struggles to consistently identify the correct strategy. The strategy "helping students manage inequity" showed the highest performance, with a TNR of 0.738 and a Recall of 0.432. The study highlights the potential of LLMs for tutoring strategy analysis and outlines directions for future improvement, including incorporating more advanced models to provide more nuanced feedback.
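The reported metrics can be reproduced directly from a binary confusion matrix. A minimal sketch (the function name and label convention are illustrative, not taken from the paper; 1 marks a strategy use classified as desired, 0 as undesired):

```python
def tnr_and_recall(y_true, y_pred):
    """Compute True Negative Rate and Recall for binary labels (1 = desired use)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    recall = tp / (tp + fn) if (tp + fn) else 0.0  # share of desired uses found
    tnr = tn / (tn + fp) if (tn + fp) else 0.0     # share of undesired uses excluded
    return tnr, recall
```

A high TNR with low Recall, as reported here, means the classifier rarely mislabels undesired strategy uses but misses many desired ones.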
Megan Gu, Chloe Qianhui Zhao, Claire Liu, Nikhil Patel, Jahnvi Shah, Jionghao Lin, Kenneth R. Koedinger
Computing technology; computer technology
Megan Gu, Chloe Qianhui Zhao, Claire Liu, Nikhil Patel, Jahnvi Shah, Jionghao Lin, Kenneth R. Koedinger. Toward Automated Qualitative Analysis: Leveraging Large Language Models for Tutoring Dialogue Evaluation [EB/OL]. (2025-04-03) [2025-05-23]. https://arxiv.org/abs/2504.13882.