LIFBench: Evaluating the Instruction Following Performance and Stability of Large Language Models in Long-Context Scenarios
As Large Language Models (LLMs) advance in natural language processing (NLP), their ability to follow instructions stably over long-context inputs has become critical for real-world applications. However, existing benchmarks seldom focus on instruction following in long-context scenarios or on stability across varied inputs. To bridge this gap, we introduce LIFBench, a scalable dataset designed to evaluate LLMs' instruction-following capability and stability in long contexts. LIFBench comprises three long-context scenarios and eleven diverse tasks, featuring 2,766 instructions generated through an automated expansion method across three dimensions: length, expression, and variables. For evaluation, we propose LIFEval, a rubric-based assessment method that enables precise, automated scoring of complex LLM responses without relying on LLM-assisted assessment or human judgment. This method allows for a comprehensive analysis of model performance and stability from multiple perspectives. We conduct detailed experiments on 20 prominent LLMs across six length intervals. Our work contributes LIFBench and LIFEval as robust tools for assessing LLM performance in complex, long-context settings, offering valuable insights to guide future advancements in LLM development.
Minhao Wang, He Yan, Yichen Liu, Xiaoming Shi, Xiangju Lu, Xiaodong Wu, Junmin Zhu, Wei Zhang
Computing Technology; Computer Technology
Minhao Wang, He Yan, Yichen Liu, Xiaoming Shi, Xiangju Lu, Xiaodong Wu, Junmin Zhu, Wei Zhang. LIFBench: Evaluating the Instruction Following Performance and Stability of Large Language Models in Long-Context Scenarios [EB/OL]. (2025-07-23) [2025-08-09]. https://arxiv.org/abs/2411.07037.