Towards Fair In-Context Learning with Tabular Foundation Models

Source: arXiv
Abstract

Tabular foundation models have exhibited strong in-context learning (ICL) capabilities on structured data, allowing them to make accurate predictions on test sets without parameter updates, using training examples as context. This emerging approach positions itself as a competitive alternative to traditional gradient-boosted tree methods. However, while biases in conventional machine learning models are well documented, it remains unclear how these biases manifest in tabular ICL. This paper investigates the fairness implications of tabular ICL and explores three preprocessing strategies--correlation removal, group-balanced demonstration selection, and uncertainty-based demonstration selection--to address bias. Comprehensive experiments indicate that uncertainty-based demonstration selection consistently enhances the group fairness of in-context predictions. The source code for reproducing the results of this work can be found at https://github.com/patrikken/Fair-TabICL.
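To make the demonstration-selection idea concrete, below is a minimal Python sketch of an uncertainty-based selection step for tabular ICL. The function name select_demonstrations_by_uncertainty, the logistic-regression proxy model, and the entropy scoring rule are illustrative assumptions for this sketch only; they are not the paper's method or the Fair-TabICL repository's actual API.

import numpy as np
from sklearn.linear_model import LogisticRegression

def select_demonstrations_by_uncertainty(X_pool, y_pool, n_demos, seed=0):
    """Pick the n_demos candidate examples whose labels a simple proxy
    model is least certain about, to use as in-context demonstrations."""
    proxy = LogisticRegression(max_iter=1000, random_state=seed)
    proxy.fit(X_pool, y_pool)
    probs = proxy.predict_proba(X_pool)
    # Predictive entropy: higher means the proxy is less certain.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    idx = np.argsort(entropy)[-n_demos:]  # keep the most uncertain examples
    return X_pool[idx], y_pool[idx]

# Usage on synthetic data: the selected pairs would then be passed as the
# context set of a tabular ICL model.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
X_ctx, y_ctx = select_demonstrations_by_uncertainty(X, y, n_demos=64)
print(X_ctx.shape, y_ctx.shape)  # (64, 8) (64,)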

Patrik Kenfack, Samira Ebrahimi Kahou, Ulrich Aïvodji

Computing Technology, Computer Technology

Patrik Kenfack, Samira Ebrahimi Kahou, Ulrich Aïvodji. Towards Fair In-Context Learning with Tabular Foundation Models [EB/OL]. (2025-05-14) [2025-06-04]. https://arxiv.org/abs/2505.09503.
