Mind the Quote: Enabling Quotation-Aware Dialogue in LLMs via Plug-and-Play Modules
Human-AI conversation frequently relies on quoting earlier text ("check it with the formula I just highlighted"), yet today's large language models (LLMs) lack an explicit mechanism for locating and exploiting such spans. We formalise the challenge as span-conditioned generation, decomposing each turn into the dialogue history, a set of token-offset quotation spans, and an intent utterance. Building on this abstraction, we introduce a quotation-centric data pipeline that automatically synthesises task-specific dialogues, verifies answer correctness through multi-stage consistency checks, and yields both a heterogeneous training corpus and the first benchmark covering five representative scenarios. To meet the benchmark's zero-overhead and parameter-efficiency requirements, we propose QuAda, a lightweight training-based method that attaches two bottleneck projections to every attention head, dynamically amplifying or suppressing attention to quoted spans at inference time while leaving the prompt unchanged and updating under 2.8% of backbone weights. Experiments across models show that QuAda is suitable for all scenarios and generalises to unseen topics, offering an effective, plug-and-play solution for quotation-aware dialogue.
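The abstract describes QuAda's mechanism only at a high level. As a concrete illustration, below is a minimal PyTorch sketch of one plausible reading of "two bottleneck projections attached to every attention head" that amplify or suppress attention to quoted spans. All names here (QuotationBottleneck, quote_mask, bottleneck_dim) are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class QuotationBottleneck(nn.Module):
    """Hypothetical per-head adapter: two low-rank (bottleneck) projections
    compute a query-dependent gate that rescales pre-softmax attention
    logits over quoted-span positions."""

    def __init__(self, head_dim: int, bottleneck_dim: int = 8):
        super().__init__()
        # The "two bottleneck projections": a down- and an up-projection.
        self.down = nn.Linear(head_dim, bottleneck_dim, bias=False)
        self.up = nn.Linear(bottleneck_dim, 1, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op

    def forward(self, query: torch.Tensor, attn_logits: torch.Tensor,
                quote_mask: torch.Tensor) -> torch.Tensor:
        # query:       (B, S, head_dim)  per-head query states
        # attn_logits: (B, S, S)         pre-softmax attention scores
        # quote_mask:  (B, S)            1.0 at token offsets inside a quoted span
        gate = self.up(torch.tanh(self.down(query)))   # (B, S, 1)
        bias = gate * quote_mask.unsqueeze(1)          # (B, S, S): bias only on quoted keys
        return attn_logits + bias                      # amplify (>0) or suppress (<0)

# Toy usage with random tensors:
B, S, D = 2, 16, 64
adapter = QuotationBottleneck(head_dim=D)
q = torch.randn(B, S, D)
logits = torch.randn(B, S, S)
mask = torch.zeros(B, S)
mask[:, 3:7] = 1.0                 # suppose tokens 3-6 are the quoted span
out = adapter(q, logits, mask)     # same shape as logits
```

Zero-initialising the up-projection makes the module behave as an identity before training, which is consistent with the abstract's plug-and-play claim that the prompt is left unchanged and only a small fraction of weights is updated.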
Yueqi Zhang, Peiwen Yuan, Shaoxiong Feng, Yiwei Li, Xinglin Wang, Jiayi Shi, Chuyi Tan, Boyuan Pan, Yao Hu, Kan Li
Computational linguistics; computer science
Yueqi Zhang, Peiwen Yuan, Shaoxiong Feng, Yiwei Li, Xinglin Wang, Jiayi Shi, Chuyi Tan, Boyuan Pan, Yao Hu, Kan Li. Mind the Quote: Enabling Quotation-Aware Dialogue in LLMs via Plug-and-Play Modules [EB/OL]. (2025-05-30) [2025-07-16]. https://arxiv.org/abs/2505.24292.