
Exploring Explanations Improves the Robustness of In-Context Learning

Source: arXiv
Abstract

In-context learning (ICL) has emerged as a successful paradigm for leveraging large language models (LLMs). However, it often struggles to generalize beyond the distribution of the provided demonstrations. A recent advancement in enhancing robustness is ICL with explanations (X-ICL), which improves prediction reliability by guiding LLMs to understand and articulate the reasoning behind correct labels. Building on this approach, we introduce an advanced framework that extends X-ICL by systematically exploring explanations for all possible labels (X$^2$-ICL), thereby enabling more comprehensive and robust decision-making. Experimental results on multiple natural language understanding datasets validate the effectiveness of X$^2$-ICL, demonstrating significantly improved robustness to out-of-distribution data compared to existing ICL approaches.
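To make the idea concrete, the following Python sketch shows how an X$^2$-ICL-style loop might look: the model is prompted for an explanation of every candidate label, and the final decision is made only after all explanations have been explored. This is a hypothetical illustration, not the authors' implementation; the llm_generate and llm_logprob helpers, the label set, and the scoring prompt are all assumed placeholders for a real LLM API.

from typing import List, Tuple

# Candidate label set; an NLI-style task is assumed here for illustration.
LABELS: List[str] = ["entailment", "neutral", "contradiction"]

def llm_generate(prompt: str) -> str:
    """Placeholder for a real LLM completion call (an assumption, not the paper's API)."""
    raise NotImplementedError("plug in an actual LLM client here")

def llm_logprob(text: str) -> float:
    """Placeholder plausibility score, e.g. the model's log-probability of the text."""
    raise NotImplementedError("plug in an actual LLM client here")

def x2_icl_predict(demonstrations: str, test_input: str) -> str:
    """Explore an explanation for every candidate label, then pick the
    label whose explanation the model rates as most plausible."""
    scored: List[Tuple[float, str]] = []
    for label in LABELS:
        # Ask the model to articulate why this label could be correct.
        prompt = (
            f"{demonstrations}\n"
            f"Input: {test_input}\n"
            f"Explain why the label could be '{label}':"
        )
        explanation = llm_generate(prompt)
        # Score the explanation-label pair, e.g. by how strongly the
        # model affirms the explanation's soundness.
        score = llm_logprob(f"{prompt} {explanation}\nIs this explanation sound? Yes")
        scored.append((score, label))
    # Decide only after explanations for all labels have been explored.
    return max(scored)[1]

In contrast to standard X-ICL, which reasons only toward a single predicted label, the loop above commits to a decision only after weighing an explanation for each alternative, which is the mechanism the abstract credits for the improved out-of-distribution robustness.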

Ukyo Honda, Tatsushi Oka

Linguistics

Ukyo Honda, Tatsushi Oka. Exploring Explanations Improves the Robustness of In-Context Learning [EB/OL]. (2025-06-02) [2025-06-30]. https://arxiv.org/abs/2506.02378.
