EvalAssist: A Human-Centered Tool for LLM-as-a-Judge
With the broad availability of large language models and their ability to generate widely varying outputs under different prompts and configurations, determining the best output for a given task requires an intensive evaluation process: machine learning practitioners must decide how to assess the outputs and then carefully carry out that assessment, a process that is both time-consuming and costly. As practitioners work with a growing number of models, they must evaluate outputs to determine which model and prompt perform best for a given task. LLMs are increasingly used as evaluators to filter training data, assess model performance, detect harms and risks, or assist human evaluators with detailed assessments. We present EvalAssist, a framework that simplifies the LLM-as-a-judge workflow. The system provides an online criteria development environment where users can interactively build, test, and share custom evaluation criteria in a structured and portable format. We support a set of LLM-based evaluation pipelines that leverage off-the-shelf LLMs and use a prompt-chaining approach we developed and contributed to the UNITXT open-source library. Additionally, our system includes specially trained evaluators to detect harms and risks in LLM outputs. We have deployed the system internally in our organization, where it has several hundred users.
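The workflow described in the abstract (user-authored criteria evaluated by off-the-shelf LLMs through a prompt-chaining judge) can be illustrated with a short, hypothetical sketch. The code below assumes a generic two-step chain, a free-form assessment followed by a constrained verdict; the Criterion dataclass, JudgeModel interface, and StubJudge are illustrative assumptions and are not the EvalAssist or UNITXT APIs, nor the specific chain contributed to UNITXT.

```python
# Minimal sketch of an LLM-as-a-judge evaluation with a user-authored
# criterion and a two-step prompt chain (free-form assessment, then a
# constrained verdict). All names here (Criterion, JudgeModel, StubJudge)
# are illustrative assumptions, not the EvalAssist or UNITXT APIs.
from dataclasses import dataclass
from typing import Dict, Protocol


@dataclass
class Criterion:
    """A portable, user-defined evaluation criterion."""
    name: str
    description: str
    options: Dict[str, str]  # verdict label -> meaning of that label


class JudgeModel(Protocol):
    def generate(self, prompt: str) -> str: ...


class StubJudge:
    """Placeholder judge; in practice this would wrap an off-the-shelf LLM."""
    def generate(self, prompt: str) -> str:
        return "Yes"


def judge(model: JudgeModel, criterion: Criterion, task: str, output: str) -> str:
    """Evaluate one model output against one criterion via prompt chaining."""
    # Step 1: elicit a free-form assessment of the output.
    assessment = model.generate(
        f"Task: {task}\nOutput: {output}\n"
        f"Assess this output with respect to '{criterion.name}': "
        f"{criterion.description}"
    )
    # Step 2: feed the assessment back and ask for a constrained verdict.
    labels = ", ".join(criterion.options)
    verdict = model.generate(
        f"Assessment: {assessment}\n"
        f"Based only on this assessment, answer with exactly one of: {labels}."
    )
    return verdict.strip()


if __name__ == "__main__":
    # Example criterion a user might author and share in the criteria editor.
    conciseness = Criterion(
        name="conciseness",
        description="The response conveys the required information without filler.",
        options={"Yes": "The response is concise.", "No": "The response is verbose."},
    )
    print(judge(StubJudge(), conciseness, "Summarize the report.", "The report says X."))
```

Separating the assessment from the verdict keeps the judge's reasoning inspectable while constraining the final label to the options defined in the criterion; the paper's actual pipelines may structure the chain differently.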
Zahra Ashktorab, Elizabeth M. Daly, Erik Miehling, Werner Geyer, Martin Santillan Cooper, Tejaswini Pedapati, Michael Desmond, Qian Pan, Hyo Jin Do
Computing Technology, Computer Technology
Zahra Ashktorab, Elizabeth M. Daly, Erik Miehling, Werner Geyer, Martin Santillan Cooper, Tejaswini Pedapati, Michael Desmond, Qian Pan, Hyo Jin Do. EvalAssist: A Human-Centered Tool for LLM-as-a-Judge [EB/OL]. (2025-07-02) [2025-07-21]. https://arxiv.org/abs/2507.02186.