
Comparing human and LLM politeness strategies in free production


Source: arXiv
Abstract

Polite speech poses a fundamental alignment challenge for large language models (LLMs). Humans deploy a rich repertoire of linguistic strategies to balance informational and social goals -- from positive approaches that build rapport (compliments, expressions of interest) to negative strategies that minimize imposition (hedging, indirectness). We investigate whether LLMs employ a similarly context-sensitive repertoire by comparing human and LLM responses in both constrained and open-ended production tasks. We find that larger models ($\ge$70B parameters) successfully replicate key preferences from the computational pragmatics literature, and human evaluators surprisingly prefer LLM-generated responses in open-ended contexts. However, further linguistic analyses reveal that models disproportionately rely on negative politeness strategies even in positive contexts, potentially leading to misinterpretations. While modern LLMs demonstrate an impressive handle on politeness strategies, these subtle differences raise important questions about pragmatic alignment in AI systems.

Haoran Zhao, Robert D. Hawkins

Linguistics

Haoran Zhao, Robert D. Hawkins. Comparing human and LLM politeness strategies in free production [EB/OL]. (2025-06-11) [2025-07-19]. https://arxiv.org/abs/2506.09391.
