
Benchmarking Transferability: A Framework for Fair and Robust Evaluation

Source: arXiv

Abstract

Transferability scores aim to quantify how well a model trained on one domain generalizes to a target domain. Despite the many methods proposed for measuring transferability, their reliability and practical usefulness remain inconclusive, often because of differing experimental setups, datasets, and assumptions. In this paper, we introduce a comprehensive benchmarking framework designed to systematically evaluate transferability scores across diverse settings. Through extensive experiments, we observe variations in how different metrics perform under various scenarios, suggesting that current evaluation practices may not fully capture each method's strengths and limitations. Our findings underscore the value of standardized assessment protocols, paving the way for more reliable transferability measures and better-informed model selection in cross-domain applications. Additionally, our proposed metric achieves a 3.5% improvement in the head-training fine-tuning experimental setup. Our code is available in this repository: https://github.com/alizkzm/pert_robust_platform.
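
To make the evaluation protocol concrete, below is a minimal sketch of how a transferability score is typically benchmarked: a metric is computed for each candidate source model, then rank-correlated with ground-truth fine-tuning accuracy. The sketch uses LEEP (Nguyen et al., ICML 2020) as an illustrative metric, not the paper's proposed one, and the five candidate models and their accuracies are synthetic placeholders.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

def leep_score(probs, labels):
    """LEEP transferability estimate (illustrative metric, not the paper's).

    probs  : (n, Z) source-model class probabilities on target inputs
    labels : (n,)   target labels in {0, ..., Y-1}
    """
    n = probs.shape[0]
    num_target = labels.max() + 1
    # Empirical joint P(y, z): average source probability mass per target class
    joint = np.stack([probs[labels == y].sum(axis=0)
                      for y in range(num_target)]) / n
    cond = joint / joint.sum(axis=0, keepdims=True)  # P(y | z), shape (Y, Z)
    # Log-likelihood of the empirical predictor: sum_z P(y_i | z) * theta(x_i)_z
    pred = probs @ cond.T                            # (n, Y)
    return float(np.log(pred[np.arange(n), labels]).mean())

# Toy benchmark: rank-correlate metric scores with fine-tuning accuracy.
# Five hypothetical source models, each producing probabilities over
# Z = 10 source classes for n = 200 target samples with Y = 4 classes.
n, Z, Y = 200, 10, 4
labels = rng.integers(0, Y, size=n)
models = [rng.dirichlet(np.ones(Z), size=n) for _ in range(5)]
scores = [leep_score(p, labels) for p in models]
finetune_acc = rng.uniform(0.5, 0.9, size=5)  # placeholder ground truth
tau, _ = kendalltau(scores, finetune_acc)
print(f"Kendall tau between LEEP scores and accuracies: {tau:.2f}")
```

A high rank correlation means the metric orders candidate models the same way actual fine-tuning would, which is the property a fair benchmark should test under controlled, shared setups.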

Alireza Kazemi, Helia Rezvani, Mahsa Baktashmotlagh

Computing Technology, Computer Technology

Alireza Kazemi, Helia Rezvani, Mahsa Baktashmotlagh. Benchmarking Transferability: A Framework for Fair and Robust Evaluation [EB/OL]. (2025-04-28) [2025-05-28]. https://arxiv.org/abs/2504.20121.
