国家预印本平台

How Is LLM Reasoning Distracted by Irrelevant Context? An Analysis Using a Controlled Benchmark

Source: arXiv

Abstract

We introduce Grade School Math with Distracting Context (GSM-DC), a synthetic benchmark for evaluating the reasoning robustness of Large Language Models (LLMs) against systematically controlled irrelevant context (IC). GSM-DC constructs symbolic reasoning graphs with precisely injected distractors, enabling rigorous, reproducible evaluation. Our experiments demonstrate that LLMs are highly sensitive to IC, which degrades both reasoning-path selection and arithmetic accuracy. Additionally, training models with strong distractors improves performance in both in-distribution and out-of-distribution scenarios. We further propose a stepwise tree search guided by a process reward model, which notably enhances robustness under out-of-distribution conditions.
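The core idea of controlled distractor injection can be sketched as follows. This is a minimal illustration under assumed representations, not the paper's actual implementation: `build_chain`, `inject_distractors`, and `solve` are hypothetical names, and a "reasoning graph" is reduced here to a linear chain of arithmetic steps. The key invariant the sketch demonstrates is that injected distractors sit off the solution path, so the gold answer is unchanged while the surface context grows.

```python
import random

def build_chain(length, seed=0):
    """Build a linear symbolic reasoning chain: x1 = c1; x_{i+1} = x_i + c_{i+1}."""
    rng = random.Random(seed)
    steps = [("x1", None, rng.randint(1, 9))]          # (variable, dependency, constant)
    for i in range(2, length + 1):
        steps.append((f"x{i}", f"x{i-1}", rng.randint(1, 9)))
    return steps

def inject_distractors(steps, k, seed=0):
    """Insert k distractor steps defining fresh variables that are off the solution path."""
    rng = random.Random(seed)
    out = list(steps)
    for j in range(1, k + 1):
        pos = rng.randrange(1, len(out) + 1)           # never before the chain's root
        out.insert(pos, (f"d{j}", None, rng.randint(1, 9)))
    return out

def solve(steps, target):
    """Evaluate only the steps reachable from the target variable."""
    defs = {var: (dep, c) for var, dep, c in steps}
    def value(var):
        dep, c = defs[var]
        return c if dep is None else value(dep) + c
    return value(target)

chain = build_chain(5)
noisy = inject_distractors(chain, k=3)
# Distractors lengthen the context but never change the gold answer.
assert solve(noisy, "x5") == solve(chain, "x5")
```

Because the number and placement of distractors are parameters of the generator, difficulty can be varied systematically while the correct answer stays fixed, which is what makes the evaluation controlled and reproducible.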

Minglai Yang, Ethan Huang, Liang Zhang, Mihai Surdeanu, William Wang, Liangming Pan

Subjects: Computing technology, computer technology

Minglai Yang, Ethan Huang, Liang Zhang, Mihai Surdeanu, William Wang, Liangming Pan. How Is LLM Reasoning Distracted by Irrelevant Context? An Analysis Using a Controlled Benchmark [EB/OL]. (2025-05-24) [2025-06-22]. https://arxiv.org/abs/2505.18761.
