
Instruction-Following Evaluation for Large Language Models

Source: arXiv

Abstract

One core capability of Large Language Models (LLMs) is to follow natural language instructions. However, the evaluation of such abilities is not standardized: Human evaluations are expensive, slow, and not objectively reproducible, while LLM-based auto-evaluation is potentially biased or limited by the ability of the evaluator LLM. To overcome these issues, we introduce Instruction-Following Eval (IFEval) for large language models. IFEval is a straightforward and easy-to-reproduce evaluation benchmark. It focuses on a set of "verifiable instructions" such as "write in more than 400 words" and "mention the keyword of AI at least 3 times". We identified 25 types of those verifiable instructions and constructed around 500 prompts, with each prompt containing one or more verifiable instructions. We show evaluation results of two widely available LLMs on the market. Our code and data can be found at https://github.com/google-research/google-research/tree/master/instruction_following_eval
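
As a rough illustration of how such "verifiable instructions" can be checked programmatically, the sketch below verifies the two instruction types quoted in the abstract (a minimal example under our own assumptions; the function names and matching rules here are hypothetical and not taken from the released code):

import re

def check_min_word_count(response: str, min_words: int = 400) -> bool:
    # Verify a "write in more than 400 words" style instruction.
    return len(response.split()) > min_words

def check_keyword_frequency(response: str, keyword: str = "AI", min_count: int = 3) -> bool:
    # Verify a "mention the keyword of AI at least 3 times" style instruction,
    # counting whole-word, case-insensitive occurrences.
    occurrences = re.findall(rf"\b{re.escape(keyword)}\b", response, flags=re.IGNORECASE)
    return len(occurrences) >= min_count

# A prompt may bundle several verifiable instructions; a response is counted
# as following the prompt only if every attached check passes.
response = "AI systems ... (model output here) ... AI ... AI"
checks = [check_min_word_count, check_keyword_frequency]
print(all(check(response) for check in checks))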

Swaroop Mishra, Tianjian Lu, Denny Zhou, Yi Luan, Sujoy Basu, Le Hou, Siddhartha Brahma, Jeffrey Zhou

Subject: Computing Technology; Computer Technology

Swaroop Mishra, Tianjian Lu, Denny Zhou, Yi Luan, Sujoy Basu, Le Hou, Siddhartha Brahma, Jeffrey Zhou. Instruction-Following Evaluation for Large Language Models [EB/OL]. (2023-11-14) [2025-08-02]. https://arxiv.org/abs/2311.07911
