国家预印本平台

Large Language Models in Numberland: A Quick Test of Their Numerical Reasoning Abilities

Source: arXiv

Abstract

An essential element of human mathematical reasoning is our number sense -- an abstract understanding of numbers and their relationships -- which allows us to solve problems involving vast number spaces using limited computational resources. Mathematical reasoning of Large Language Models (LLMs) is often tested on high-level problems (such as Olympiad challenges, geometry, word problems, and puzzles), but their low-level number sense remains less explored. We introduce "Numberland," a 100-problem test to evaluate the numerical reasoning abilities of LLM-based agents. The tasks -- basic operations, advanced calculations (e.g., exponentiation, complex numbers), prime number checks, and the 24 game -- aim to test elementary skills and their integration in solving complex and uncertain problems. We evaluated five LLM-based agents: OpenAI's o1 and o1-mini, Google Gemini, Microsoft Copilot, and Anthropic Claude. They scored 74-95% on the first three tasks, which allow deterministic steps to solutions. In the 24 game, which requires trial-and-error search, performance dropped to 10-73%. We tested the top 24-game solver (o1, with 73% accuracy) on 25 harder problems, and its score fell to 27%, confirming search as a bottleneck. These results, along with the types of mistakes, suggest a fragile number sense in LLMs, which is somewhat surprising given their prowess on challenging benchmarks. The limits of LLM numerical reasoning highlight the value of simple, targeted tests for evaluating and explaining LLM math skills to ensure safe use.
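The 24 game mentioned in the abstract asks for an arithmetic expression over four given numbers that evaluates to 24. The trial-and-error search the authors identify as a bottleneck can be made concrete with a minimal brute-force solver; this sketch is an illustration of the task, not code from the paper (the function name `solve_24` is our own):

```python
def solve_24(nums, target=24, eps=1e-6):
    """Brute-force search for an expression over the given numbers that
    evaluates to the target: try every ordered pair, every operator,
    and recurse on the reduced list (this covers all parenthesizations)."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        # Guard against division by (near-)zero.
        "/": lambda a, b: a / b if abs(b) > eps else None,
    }

    def combine(vals, exprs):
        # Base case: one value left -- check it against the target.
        if len(vals) == 1:
            return exprs[0] if abs(vals[0] - target) < eps else None
        n = len(vals)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                rest_v = [vals[k] for k in range(n) if k not in (i, j)]
                rest_e = [exprs[k] for k in range(n) if k not in (i, j)]
                for sym, fn in ops.items():
                    r = fn(vals[i], vals[j])
                    if r is None:
                        continue
                    found = combine(rest_v + [r],
                                    rest_e + [f"({exprs[i]}{sym}{exprs[j]})"])
                    if found:
                        return found
        return None

    return combine([float(x) for x in nums], [str(x) for x in nums])
```

Even this tiny search must track fractional intermediates, e.g. `solve_24([3, 3, 8, 8])` only succeeds via 8 / (3 - 8/3) = 24 -- the kind of multi-step, uncertain exploration where the evaluated agents' accuracy dropped.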

Roussel Rahman

Mathematics

Roussel Rahman. Large Language Models in Numberland: A Quick Test of Their Numerical Reasoning Abilities [EB/OL]. (2025-03-31) [2025-05-05]. https://arxiv.org/abs/2504.00226.