
Emotional RAG LLMs: Reading Comprehension for the Open Internet

Source: arXiv

Abstract

Queries to large language models (LLMs) can be divided into two parts: the instruction/question and the accompanying context. The context for retrieval-augmented generation (RAG) systems in most benchmarks comes from Wikipedia-like texts written in a neutral and factual tone. However, real-world RAG applications often retrieve internet-based text with diverse tones and linguistic styles, posing challenges for downstream tasks. This paper introduces (a) a dataset that transforms RAG-retrieved passages into emotionally inflected and sarcastic text, (b) an emotion translation model for adapting text to different tones, and (c) a prompt-based method to improve LLMs' pragmatic interpretation of retrieved text.
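The abstract does not give the paper's actual prompt wording or pipeline, but the prompt-based idea can be illustrated with a minimal sketch: wrap the retrieved (possibly sarcastic or emotionally inflected) passages with an instruction asking the model to recover the neutral factual content before answering. The function names and instruction text below are hypothetical placeholders, and the LLM is left as a generic text-in/text-out callable rather than any specific API.

```python
# Illustrative sketch only: the prompt wording and helper names are assumptions,
# not the authors' method as published.
from typing import Callable, List


def build_tone_aware_prompt(question: str, passages: List[str]) -> str:
    """Wrap retrieved passages with an instruction to read past their tone.

    Internet-retrieved text may be sarcastic or emotionally charged, so the
    prompt asks the model to restate the literal facts before answering.
    """
    context = "\n\n".join(f"[Passage {i + 1}]\n{p}" for i, p in enumerate(passages))
    return (
        "The passages below may be written in a sarcastic, angry, or otherwise "
        "emotionally charged style. First restate their factual content in a "
        "neutral tone, then answer the question using only those facts.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )


def answer(question: str, passages: List[str], llm: Callable[[str], str]) -> str:
    """Run the tone-aware prompt through any text-in/text-out LLM callable."""
    return llm(build_tone_aware_prompt(question, passages))


if __name__ == "__main__":
    # Stand-in LLM that simply echoes the prompt; swap in a real model call.
    demo = answer(
        "When was the bridge completed?",
        ["Oh sure, the 'engineering marvel' finally opened in 1937. Bravo."],
        llm=lambda prompt: prompt,
    )
    print(demo[:200])
```

The key design point this sketch captures is that the pragmatic-interpretation step is carried entirely by the prompt, so it can be applied to any retriever and any instruction-following LLM without fine-tuning.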

Benjamin Reichman, Adar Avsian, Kartik Talamadupula, Toshish Jawale, Larry Heck

Subjects: Computing technology; computer technology

Benjamin Reichman, Adar Avsian, Kartik Talamadupula, Toshish Jawale, Larry Heck. Emotional RAG LLMs: Reading Comprehension for the Open Internet [EB/OL]. (2025-06-29) [2025-07-16]. https://arxiv.org/abs/2408.11189.
