
Can structural correspondences ground real world representational content in Large Language Models?


Source: arXiv
English Abstract

Large Language Models (LLMs) such as GPT-4 produce compelling responses to a wide range of prompts. But their representational capacities are uncertain. Many LLMs have no direct contact with extra-linguistic reality: their inputs, outputs and training data consist solely of text, raising the questions (1) can LLMs represent anything and (2) if so, what? In this paper, I explore what it would take to answer these questions according to a structural-correspondence-based account of representation, and make an initial survey of the relevant evidence. I argue that the mere existence of structural correspondences between LLMs and worldly entities is insufficient to ground representation of those entities. However, if these structural correspondences play an appropriate role - they are exploited in a way that explains successful task performance - then they could ground real world contents. This requires overcoming a challenge: the text-boundedness of LLMs appears, on the face of it, to prevent them from engaging in the right sorts of tasks.

Iwan Williams

Subject: Computing Technology; Computer Technology

Iwan Williams. Can structural correspondences ground real world representational content in Large Language Models? [EB/OL]. (2025-06-19) [2025-07-21]. https://arxiv.org/abs/2506.16370.
