Scalable In-Context Learning on Tabular Data via Retrieval-Augmented Large Language Models

Source: arXiv
English Abstract

Recent studies have shown that large language models (LLMs), when customized with post-training on tabular data, can acquire general tabular in-context learning (TabICL) capabilities. These models are able to transfer effectively across diverse data schemas and different task domains. However, existing LLM-based TabICL approaches are constrained to few-shot scenarios due to the sequence length limitations of LLMs, as tabular instances represented in plain text consume substantial tokens. To address this limitation and enable scalable TabICL for any data size, we propose retrieval-augmented LLMs tailored to tabular data. Our approach incorporates a customized retrieval module, combined with retrieval-guided instruction-tuning for LLMs. This enables LLMs to effectively leverage larger datasets, achieving significantly improved performance across 69 widely recognized datasets and demonstrating promising scaling behavior. Extensive comparisons with state-of-the-art tabular models reveal that, while LLM-based TabICL still lags behind well-tuned numeric models in overall performance, it uncovers powerful algorithms under limited contexts, enhances ensemble diversity, and excels on specific datasets. These unique properties underscore the potential of language as a universal and accessible interface for scalable tabular data learning.
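The abstract describes a retrieval module that selects relevant training instances and serializes them as in-context demonstrations for an instruction-tuned LLM. The sketch below illustrates that idea only in outline, assuming a generic nearest-neighbor retriever and a simple text template; the paper's customized retrieval module, retrieval-guided instruction tuning, and actual serialization format are not reproduced here, and the column names and data are illustrative.

```python
# A minimal sketch of retrieval-augmented tabular in-context learning (not the
# authors' implementation): retrieve the nearest training rows for a query row
# and serialize them as plain-text demonstrations for an LLM prompt.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Toy training table: two numeric features and a binary label (illustrative).
X_train = np.array([[25, 50_000], [47, 82_000], [33, 61_000], [58, 95_000]], dtype=float)
y_train = np.array(["no", "yes", "no", "yes"])
columns = ["age", "income"]

# Retrieval module: k-nearest neighbors in a normalized feature space stands in
# for the paper's customized retriever.
scaler = StandardScaler().fit(X_train)
retriever = NearestNeighbors(n_neighbors=3).fit(scaler.transform(X_train))

def serialize_row(row, label=None):
    """Render one tabular instance as plain text for the LLM context."""
    cells = ", ".join(f"{c} = {v:g}" for c, v in zip(columns, row))
    return cells if label is None else f"{cells} -> label = {label}"

def build_prompt(query_row):
    """Assemble a prompt whose demonstrations are the retrieved neighbors."""
    _, idx = retriever.kneighbors(scaler.transform([query_row]))
    shots = "\n".join(serialize_row(X_train[i], y_train[i]) for i in idx[0])
    return (
        "Predict the label for the last row given the examples.\n"
        f"{shots}\n"
        f"{serialize_row(query_row)} -> label ="
    )

prompt = build_prompt([40, 70_000])
print(prompt)  # This prompt would be passed to an instruction-tuned LLM.
```

Because only retrieved neighbors enter the context, the prompt length stays bounded regardless of the size of the full training table, which is the property that lets this style of TabICL scale beyond few-shot settings.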

Xumeng Wen, Shun Zheng, Zhen Xu, Jiang Bian, Yiming Sun

Computing Technology; Computer Technology

Xumeng Wen, Shun Zheng, Zhen Xu, Jiang Bian, Yiming Sun. Scalable In-Context Learning on Tabular Data via Retrieval-Augmented Large Language Models [EB/OL]. (2025-02-05) [2025-08-02]. https://arxiv.org/abs/2502.03147
