
Theory of Mind in Large Language Models: Assessment and Enhancement

Source: arXiv

Abstract

Theory of Mind (ToM), the ability to infer and reason about others' mental states, is fundamental to human social intelligence. As Large Language Models (LLMs) become increasingly integrated into daily life, it is crucial to assess and enhance their capacity to interpret and respond to human mental states. In this paper, we review LLMs' ToM capabilities by examining both evaluation benchmarks and the strategies designed to improve them. We focus on widely adopted story-based benchmarks and provide an in-depth analysis of methods aimed at enhancing ToM in LLMs. Furthermore, we outline promising future research directions informed by recent benchmarks and state-of-the-art approaches. Our survey serves as a valuable resource for researchers interested in advancing LLMs' ToM capabilities.
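To make the story-based evaluation setting concrete, the sketch below runs a single false-belief (Sally-Anne style) probe and scores it by exact match. The story text, the query_model stub, and the one-word scoring rule are illustrative assumptions for this sketch, not the protocol of any specific benchmark covered in the survey.

# Minimal sketch of a story-based ToM (false-belief) evaluation.
# The item, the query_model stub, and the scoring rule are illustrative
# assumptions, not taken from any particular benchmark.

from dataclasses import dataclass

@dataclass
class ToMItem:
    story: str      # short narrative that sets up a false belief
    question: str   # probe about a character's mental state
    answer: str     # gold answer (the character's belief, not reality)

ITEMS = [
    ToMItem(
        story=("Sally puts her marble in the basket and leaves the room. "
               "While she is away, Anne moves the marble to the box."),
        question="Where will Sally look for her marble first?",
        answer="basket",
    ),
]

def query_model(prompt: str) -> str:
    """Hypothetical stub: replace with a call to the LLM under evaluation."""
    return "basket"

def evaluate(items: list[ToMItem]) -> float:
    """Return exact-match accuracy on the false-belief probes."""
    correct = 0
    for item in items:
        prompt = f"{item.story}\nQuestion: {item.question}\nAnswer in one word."
        prediction = query_model(prompt).strip().lower()
        correct += int(item.answer in prediction)
    return correct / len(items)

if __name__ == "__main__":
    print(f"False-belief accuracy: {evaluate(ITEMS):.2%}")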

Subject: Computing Technology, Computer Technology

Theory of Mind in Large Language Models: Assessment and Enhancement [EB/OL]. (2025-04-26) [2025-05-17]. https://arxiv.org/abs/2505.00026.
