Why Cannot Large Language Models Ever Make True Correct Reasoning?
Recently, with the progress in applying AIGC tools based on large language models (LLMs), led by ChatGPT, many AI experts and even more non-professionals have been trumpeting the "reasoning ability" of LLMs. The present author considers that the so-called "reasoning ability" of LLMs is merely an illusion held by people with vague concepts. In fact, LLMs can never have true reasoning ability. This paper intends to explain that, because of the essential limitations of their working principle, LLMs can never have the ability of true correct reasoning.
Jingde Cheng
Computing Technology, Computer Technology
Jingde Cheng. Why Cannot Large Language Models Ever Make True Correct Reasoning? [EB/OL]. (2025-08-16) [2025-08-24]. https://arxiv.org/abs/2508.10265.