
The Failure of Plagiarism Detection in Competitive Programming


Source: arXiv
Abstract

Plagiarism in programming courses remains a persistent challenge, especially in competitive programming contexts where assignments often have unique, known solutions. This paper examines why traditional code plagiarism detection methods frequently fail in these environments and explores the implications of emerging factors such as generative AI (genAI). Drawing on the author's experience teaching a Competitive Programming 1 (CP1) course over seven semesters at Purdue University (with approximately 100 students each term) and completely redesigning the CP1/2/3 course sequence, we provide an academically grounded analysis. We review literature on code plagiarism in computer science education, survey current detection tools (Moss, Kattis, etc.) and methods (manual review, code-authorship interviews), and analyze their strengths and limitations. Experience-based observations are presented to illustrate real-world detection failures and successes. We find that widely used automated similarity checkers can be thwarted by simple code transformations or novel AI-generated code, while human-centric approaches like oral interviews, though effective, are labor-intensive. The paper concludes with opinions and preliminary recommendations for improving academic integrity in programming courses, advocating for a multi-faceted approach that combines improved detection algorithms, mastery-based learning techniques, and authentic assessment practices to better ensure code originality.
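The abstract's claim that simple code transformations can thwart automated similarity checkers can be illustrated with a toy sketch. This example is hypothetical and not taken from the paper: two behaviorally identical solutions to a small task, where the second has been superficially restructured. Token-based checkers such as Moss normalize identifier names, so renaming alone is usually caught, but rewriting control flow and inverting conditions changes the token stream enough to lower reported similarity while leaving the logic unchanged.

```python
# Hypothetical illustration: two functionally identical solutions to a toy
# "sum of even numbers" task, written so a token-based similarity checker
# would see them as structurally different.

def sum_evens_original(nums):
    # Straightforward accumulator loop, as a student might first write it.
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += n
    return total

def sum_evens_disguised(data):
    # Same logic after superficial transformation: identifiers renamed,
    # the loop collapsed into a generator expression, and the parity test
    # inverted to a bitwise check. Behavior is unchanged.
    return sum(x for x in data if not x & 1)

# Both functions agree on every input.
print(sum_evens_original([1, 2, 3, 4, 5, 6]))   # 12
print(sum_evens_disguised([1, 2, 3, 4, 5, 6]))  # 12
```

Transformations of this kind (loop restructuring, condition inversion, statement reordering) preserve the program's behavior exactly, which is why the paper argues that purely automated similarity checking needs to be supplemented with human-centric approaches such as code-authorship interviews.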

Ethan Dickey

Subjects: dissemination of educational information; knowledge dissemination

Ethan Dickey. The Failure of Plagiarism Detection in Competitive Programming [EB/OL]. (2025-05-13) [2025-07-16]. https://arxiv.org/abs/2505.08244.
