
Why LLMs Cannot Think and How to Fix It

Source: arXiv

Abstract

This paper argues that current state-of-the-art Large Language Models (LLMs) are fundamentally incapable of making decisions or developing "thoughts" within the feature space due to their architectural constraints. We establish a definition of "thought" that encompasses traditional understandings of the term and adapt it for application to LLMs. We demonstrate that the architectural design and language-modeling training methodology of contemporary LLMs inherently preclude them from engaging in genuine thought processes. Our primary focus is on this theoretical realization rather than on practical insights derived from experimental data. Finally, we propose solutions that enable thought processes within the feature space and discuss the broader implications of these architectural modifications.

Marius Jahrens, Thomas Martinetz

Subject: Computing Technology, Computer Technology

Marius Jahrens, Thomas Martinetz. Why LLMs Cannot Think and How to Fix It [EB/OL]. (2025-03-12) [2025-04-27]. https://arxiv.org/abs/2503.09211
