
A Human Centric Requirements Engineering Framework for Assessing Github Copilot Output

Source: arXiv

Abstract

The rapid adoption of Artificial Intelligence (AI) programming assistants such as GitHub Copilot introduces new challenges in how these software tools address human needs. Many existing evaluation frameworks address technical aspects such as code correctness and efficiency, but they often overlook crucial human factors that determine how successfully AI assistants integrate into software development workflows. In this study, I analyzed GitHub Copilot's interaction with users through its chat interface, measured Copilot's ability to adapt explanations and code generation to user expertise levels, and assessed its effectiveness in facilitating collaborative programming experiences. I established a human-centered requirements framework with clear metrics to evaluate these qualities in GitHub Copilot chat. I discuss the test results and their implications for future analysis of human requirements in automated programming.
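To make the abstract's idea of "a human-centered requirements framework with clear metrics" concrete, the following Python sketch shows one way such a rubric could be encoded and aggregated. It is an illustration only, not the author's actual instrument: the criterion names, weights, and the 1-5 rating scale are assumptions drawn loosely from the themes the abstract mentions (expertise-adapted explanations, code generation quality, collaborative interaction).

```python
# Minimal sketch of a weighted rubric for scoring one Copilot chat interaction
# against human-centered requirements. All criteria, weights, and the 1-5
# scale are hypothetical; they are not taken from the paper.
from dataclasses import dataclass, field


@dataclass
class Criterion:
    """One human-centered requirement with a weight and a human-assigned rating."""
    name: str
    weight: float
    score: int = 0  # evaluator rating, 1 (poor) to 5 (excellent)


@dataclass
class EvaluationSession:
    """Ratings for a single chat interaction across all criteria."""
    task: str
    criteria: list[Criterion] = field(default_factory=list)

    def weighted_score(self) -> float:
        """Return the weight-normalized aggregate score in [0, 1]."""
        total_weight = sum(c.weight for c in self.criteria)
        if total_weight == 0:
            return 0.0
        return sum(c.weight * c.score for c in self.criteria) / (5 * total_weight)


# Example usage with hypothetical criteria and ratings:
session = EvaluationSession(
    task="Explain and refactor a sorting routine for a novice user",
    criteria=[
        Criterion("Explanation adapted to user expertise", weight=0.40, score=4),
        Criterion("Generated code matches stated intent", weight=0.35, score=5),
        Criterion("Supports collaborative back-and-forth", weight=0.25, score=3),
    ],
)
print(f"Weighted human-centered score: {session.weighted_score():.2f}")
```

A weighted, normalized score like this makes sessions comparable across tasks while still letting evaluators emphasize whichever human factors matter most in a given workflow.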

Soroush Heydari

Subject: Computing Technology; Computer Technology

Soroush Heydari. A Human Centric Requirements Engineering Framework for Assessing Github Copilot Output [EB/OL]. (2025-08-05) [2025-08-17]. https://arxiv.org/abs/2508.03922.
