
SWI: Speaking with Intent in Large Language Models

Source: arXiv
Abstract

Intent, typically clearly formulated and planned, functions as a cognitive framework for communication and problem-solving. This paper introduces the concept of Speaking with Intent (SWI) in large language models (LLMs), where the explicitly generated intent encapsulates the model's underlying intention and provides high-level planning to guide subsequent analysis and action. By emulating deliberate and purposeful thoughts in the human mind, SWI is hypothesized to enhance the reasoning capabilities and generation quality of LLMs. Extensive experiments on text summarization, multi-task question answering, and mathematical reasoning benchmarks consistently demonstrate the effectiveness and generalizability of Speaking with Intent over direct generation without explicit intent. Further analysis corroborates the generalizability of SWI under different experimental settings. Moreover, human evaluations verify the coherence, effectiveness, and interpretability of the intent produced by SWI. The promising results in enhancing LLMs with explicit intents pave a new avenue for boosting LLMs' generation and reasoning abilities with cognitive notions.
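To make the idea concrete, below is a minimal sketch of how a Speaking-with-Intent style prompt might be constructed, based only on the abstract's description: the model is first asked to state its intent and a high-level plan, which then guides the subsequent answer. The instruction wording, output format, and the `call_llm` backend are assumptions for illustration, not the authors' actual prompts or implementation.

```python
# Sketch of an SWI-style prompt versus direct generation without explicit intent.
# The exact prompt phrasing is hypothetical; only the overall structure
# (intent first, then answer) follows the abstract's description.

def build_swi_prompt(task_input: str) -> str:
    """Ask the model to state its intent and plan before answering."""
    return (
        "Before answering, explicitly state your intent: what you aim to do "
        "and a high-level plan that will guide your analysis.\n"
        "Then carry out the plan and give the final answer.\n\n"
        f"Task:\n{task_input}\n\n"
        "Intent:"
    )

def build_direct_prompt(task_input: str) -> str:
    """Baseline: direct generation with no explicit intent."""
    return f"Task:\n{task_input}\n\nAnswer:"

if __name__ == "__main__":
    question = "Summarize the key findings of the given article."
    print(build_swi_prompt(question))
    # response = call_llm(build_swi_prompt(question))  # hypothetical LLM backend
```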

Yuwei Yin, EunJeong Hwang, Giuseppe Carenini

Subject: Computing Technology, Computer Technology

Yuwei Yin, EunJeong Hwang, Giuseppe Carenini. SWI: Speaking with Intent in Large Language Models [EB/OL]. (2025-07-19) [2025-08-16]. https://arxiv.org/abs/2503.21544.
