Handling Symbolic Language in Student Texts: A Comparative Study of NLP Embedding Models
Recent advancements in Natural Language Processing (NLP) have facilitated the analysis of student-generated language products in learning analytics (LA), particularly through the use of NLP embedding models. Yet when it comes to science-related language, symbolic expressions such as equations and formulas introduce challenges that current embedding models struggle to address. Existing studies and applications often either overlook these challenges or remove symbolic expressions altogether, potentially leading to biased findings and diminished performance of LA applications. This study therefore explores how contemporary embedding models differ in their ability to process and interpret science-related symbolic expressions. To this end, various embedding models are evaluated on physics-specific symbolic expressions drawn from authentic student responses, with performance assessed via two approaches: similarity-based analyses and integration into a machine learning pipeline. Our findings reveal significant differences in model performance, with OpenAI's GPT-text-embedding-3-large outperforming all other examined models, though its advantage was moderate rather than decisive. Beyond performance, additional factors such as cost, regulatory compliance, and model transparency are discussed as key considerations for model selection. Overall, this study underscores the importance for LA researchers and practitioners of carefully selecting NLP embedding models when working with science-related language products that contain symbolic expressions.
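To make the first evaluation approach concrete, the sketch below shows what a similarity-based analysis of embedded symbolic expressions can look like. It is a minimal illustration, not the paper's actual pipeline: it assumes OpenAI's official Python client, uses the API model id text-embedding-3-large (the model the abstract refers to as GPT-text-embedding-3-large), and embeds made-up physics expressions rather than items from the authors' student-response dataset.

```python
import numpy as np
from openai import OpenAI  # assumes the official openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed(texts: list[str], model: str = "text-embedding-3-large") -> np.ndarray:
    """Return one embedding vector per input string."""
    response = client.embeddings.create(model=model, input=texts)
    return np.array([item.embedding for item in response.data])


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Illustrative expressions (not from the paper's dataset): two notational
# variants of kinetic energy, plus an unrelated formula as a contrast case.
expressions = ["E_kin = 1/2 * m * v^2", "Ekin = 0.5·m·v²", "F = m * a"]
vectors = embed(expressions)

# A model that handles symbolic language well should score the two kinetic-
# energy variants as more similar than either is to the unrelated formula.
print("same concept, different notation:", cosine_similarity(vectors[0], vectors[1]))
print("different concepts:              ", cosine_similarity(vectors[0], vectors[2]))
```

The paper's second evaluation approach, integration into a machine learning pipeline, would use the same kind of vectors as input features to a downstream model instead of comparing them directly.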
Tom Bleckmann, Paul Tschisgale
Subjects: Natural science research methods; Information science, information technology
Tom Bleckmann, Paul Tschisgale. Handling Symbolic Language in Student Texts: A Comparative Study of NLP Embedding Models [EB/OL]. (2025-05-23) [cited 2025-06-07]. https://arxiv.org/abs/2505.17950.