
LLMs model how humans induce logically structured rules


Source: arXiv
Abstract

A central goal of cognitive science is to provide a computationally explicit account of both the structure of the mind and its development: what are the primitive representational building blocks of cognition, what are the rules via which those primitives combine, and where do these primitives and rules come from in the first place? A long-standing debate concerns the adequacy of artificial neural networks as computational models that can answer these questions, in particular in domains related to abstract cognitive function, such as language and logic. This paper argues that recent advances in neural networks -- specifically, the advent of large language models (LLMs) -- represent an important shift in this debate. We test a variety of LLMs on an existing experimental paradigm used for studying the induction of rules formulated over logical concepts. Across four experiments, we find converging empirical evidence that LLMs provide at least as good a fit to human behavior as models that implement a Bayesian probabilistic language of thought (pLoT), which have been the best computational models of human behavior on the same task. Moreover, we show that the LLMs make qualitatively different predictions about the nature of the rules that are inferred and deployed in order to complete the task, indicating that the LLMs are unlikely to be a mere implementation of the pLoT solution. Based on these results, we argue that LLMs may instantiate a novel theoretical account of the primitive representations and computations necessary to explain human logical concepts, with which future work in cognitive science should engage.

Alyssa Loo, Ellie Pavlick, Roman Feiman

Subjects: Computing and Computer Technology; Linguistics

Alyssa Loo, Ellie Pavlick, Roman Feiman. LLMs model how humans induce logically structured rules [EB/OL]. (2025-07-05) [2025-07-16]. https://arxiv.org/abs/2507.03876.
