
Understanding Chain-of-Thought in LLMs through Information Theory

Source: arXiv
Abstract

Large Language Models (LLMs) have shown impressive performance on complex reasoning tasks through the use of Chain-of-Thought (CoT) reasoning, which allows models to break down problems into manageable sub-tasks. However, existing CoT evaluation techniques either require annotated CoT data or fall short in accurately assessing intermediate reasoning steps, leading to high rates of false positives. In this paper, we formalize CoT reasoning in LLMs through an information-theoretic lens. Specifically, our framework quantifies the 'information gain' at each reasoning step, enabling the identification of failure modes in LLMs without the need for expensive annotated datasets. We demonstrate the efficacy of our approach through extensive experiments on toy arithmetic, GSM8K, and PRM800K datasets, where it significantly outperforms existing outcome-based methods by providing more accurate insights into model performance on individual subtasks.
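To make the idea concrete, a minimal sketch of a per-step information-gain estimator follows. This is an illustrative assumption, not the paper's exact procedure: it estimates the gain of step t as the increase in the model's conditional log-likelihood of the correct answer once that step is added to the context. The lookup-table "model" and all names (`answer_log_prob`, `stepwise_information_gain`) are hypothetical stand-ins for a real LLM's scoring function.

```python
import math

def answer_log_prob(steps, answer, table):
    """Stand-in for a model's log p(answer | reasoning steps).
    `table` maps (tuple of steps, answer) -> probability (toy lookup)."""
    return math.log(table[(tuple(steps), answer)])

def stepwise_information_gain(steps, answer, table):
    """Estimate per-step information gain as the change in
    log p(answer | steps_<=t) when step t is appended."""
    gains = []
    prev = answer_log_prob([], answer, table)
    for t in range(1, len(steps) + 1):
        cur = answer_log_prob(steps[:t], answer, table)
        gains.append(cur - prev)
        prev = cur
    return gains

# Toy example: step 1 is informative about the answer, step 2 adds
# nothing -- the kind of uninformative step such a framework can surface.
probs = {
    ((), "7"): 0.10,
    (("3+4=7",), "7"): 0.80,
    (("3+4=7", "irrelevant"), "7"): 0.80,
}
gains = stepwise_information_gain(["3+4=7", "irrelevant"], "7", probs)
```

Under this toy distribution the first step yields a large positive gain while the second yields none, which is how a failure mode on an individual sub-task would show up without any step-level annotations.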

Jean-Francois Ton, Yang Liu, Muhammad Faaiz Taufiq

Subject: Computing and Computer Technology

Jean-Francois Ton, Yang Liu, Muhammad Faaiz Taufiq. Understanding Chain-of-Thought in LLMs through Information Theory [EB/OL]. (2025-07-10) [2025-07-18]. https://arxiv.org/abs/2411.11984.
