Frozen Layers: Memory-efficient Many-fidelity Hyperparameter Optimization

Source: arXiv

Abstract

As model sizes grow, finding efficient and cost-effective hyperparameter optimization (HPO) methods becomes increasingly crucial for deep learning pipelines. While multi-fidelity HPO (MF-HPO) trades off the computational resources required for DL training against lower-fidelity estimations, existing fidelity sources often fail under tighter compute and memory constraints. We propose a novel fidelity source: the number of layers that are trained or frozen during training. For deep networks, this approach offers significant compute and memory savings while preserving rank correlations between hyperparameters at low fidelities compared to full model training. We demonstrate this in our empirical evaluation across ResNets and Transformers, and additionally analyze the utility of frozen layers for using GPU resources as a fidelity in HPO and for MF-HPO combined with other fidelity sources. This contribution opens new applications for MF-HPO with hardware resources as a fidelity and creates opportunities for improved algorithms navigating joint fidelity spaces.
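
To make the fidelity source concrete, the sketch below shows one way the number of frozen layers can act as a fidelity knob in PyTorch: gradients are disabled for the first k top-level blocks, so backpropagation, saved activations, and optimizer state are only needed for the trainable suffix. This is an illustrative assumption, not the authors' implementation; the helper name `freeze_first_k_layers`, the ResNet-18 example, and the optimizer settings are chosen purely for demonstration.

```python
import torch
import torchvision


def freeze_first_k_layers(model: torch.nn.Module, k: int) -> torch.nn.Module:
    """Disable gradients for the first k top-level child modules of `model`.

    Frozen layers are skipped by backpropagation and carry no optimizer
    state, which is where the compute and memory savings come from.
    """
    for child in list(model.children())[:k]:
        for param in child.parameters():
            param.requires_grad = False
    return model


# Fidelity knob: more frozen layers -> cheaper (lower-fidelity) evaluation
# of a hyperparameter configuration.
num_frozen = 6  # assumed value for illustration
model = freeze_first_k_layers(torchvision.models.resnet18(), num_frozen)

# Only trainable parameters are handed to the optimizer, so its state
# (e.g. Adam moments) shrinks together with the fidelity.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=3e-4, weight_decay=1e-2)
```

An MF-HPO scheduler such as Hyperband could then allocate configurations over `num_frozen` in the same way it would over training epochs or dataset subsets.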

Timur Carstensen, Neeratyoy Mallik, Frank Hutter, Martin Rapp

Computing Technology; Computer Technology

Timur Carstensen, Neeratyoy Mallik, Frank Hutter, Martin Rapp. Frozen Layers: Memory-efficient Many-fidelity Hyperparameter Optimization [EB/OL]. (2025-04-14) [2025-06-24]. https://arxiv.org/abs/2504.10735
