DEEPQUESTION: Systematic Generation of Real-World Challenges for Evaluating LLMs Performance
LLMs often excel on standard benchmarks but falter on real-world tasks. We introduce DeepQuestion, a scalable automated framework that augments existing datasets based on Bloom's taxonomy and creates novel questions that trace original solution paths to probe evaluative and creative skills. Extensive experiments across ten open-source and proprietary models, covering both general-purpose and reasoning LLMs, reveal substantial performance drops (up to 70% accuracy loss) on higher-order tasks, underscoring persistent gaps in deep reasoning. Our work highlights the need for cognitively diverse benchmarks to advance LLM progress. DeepQuestion and the related datasets will be released upon acceptance of the paper.
Ali Khoramfar, Ali Ramezani, Mohammad Mahdi Mohajeri, Mohammad Javad Dousti, Majid Nili Ahmadabadi, Heshaam Faili
Computing Technology, Computer Technology
Ali Khoramfar, Ali Ramezani, Mohammad Mahdi Mohajeri, Mohammad Javad Dousti, Majid Nili Ahmadabadi, Heshaam Faili. DEEPQUESTION: Systematic Generation of Real-World Challenges for Evaluating LLMs Performance [EB/OL]. (2025-05-30) [2025-07-19]. https://arxiv.org/abs/2505.24532.