Optuna vs Code Llama: Are LLMs a New Paradigm for Hyperparameter Tuning?
Optimal hyperparameter selection is critical for maximizing neural network performance, especially as models grow in complexity. This work investigates the viability of large language models (LLMs) for hyperparameter optimization by fine-tuning Code Llama with parameter-efficient Low-Rank Adaptation (LoRA). The adapted LLM generates accurate and efficient hyperparameter recommendations tailored to diverse neural network architectures. Unlike traditional approaches such as Optuna, which rely on computationally intensive trial-and-error search, our method achieves competitive or superior results in terms of Root Mean Square Error (RMSE) while significantly reducing computational overhead. Our findings show that LLM-based optimization not only matches state-of-the-art techniques such as Tree-structured Parzen Estimators (TPE) but also substantially accelerates the tuning process. This positions LLMs as a promising alternative for rapid experimentation, particularly in resource-constrained settings such as edge devices and mobile platforms, where computational efficiency is essential. Beyond efficiency, the method delivers consistent performance across diverse tasks, underscoring its robustness and generalizability. All generated hyperparameters are included in the LEMUR Neural Network (NN) Dataset, which is publicly available and serves as an open-source benchmark for hyperparameter optimization research.
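To make the two approaches contrasted in the abstract concrete, here is a minimal sketch of each. First, the parameter-efficient fine-tuning step using the Hugging Face PEFT library: the rank, alpha, dropout, and target modules below are illustrative assumptions, not the paper's reported configuration.

```python
# Hypothetical LoRA setup for Code Llama via Hugging Face PEFT.
# All hyperparameters here are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
lora_config = LoraConfig(
    r=16,                                 # assumed low-rank dimension
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```

Second, the Optuna baseline: a TPE-driven search that minimizes validation RMSE. The search space and the small regressor below are placeholders standing in for whatever network is being tuned, not the paper's actual setup.

```python
# Minimal Optuna TPE baseline minimizing validation RMSE.
# Search space and model are illustrative placeholders.
import optuna
from optuna.samplers import TPESampler
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

def objective(trial):
    # Sample two typical hyperparameters; a real search would cover more.
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    momentum = trial.suggest_float("momentum", 0.5, 0.99)
    model = MLPRegressor(learning_rate_init=lr, momentum=momentum,
                         solver="sgd", max_iter=200, random_state=0)
    model.fit(X_train, y_train)
    pred = model.predict(X_val)
    return mean_squared_error(y_val, pred) ** 0.5  # RMSE, the paper's metric

study = optuna.create_study(direction="minimize", sampler=TPESampler(seed=0))
study.optimize(objective, n_trials=30)  # each trial is a full training run
print(study.best_params, study.best_value)
```

The efficiency argument of the abstract follows from the structure of the second sketch: TPE must execute dozens of full training runs to explore the search space, whereas the fine-tuned LLM emits a hyperparameter recommendation in a single forward pass.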
Roman Kochnev, Arash Torabi Goodarzi, Zofia Antonina Bentyn, Dmitry Ignatov, Radu Timofte
Computing Technology; Computer Technology
Roman Kochnev, Arash Torabi Goodarzi, Zofia Antonina Bentyn, Dmitry Ignatov, Radu Timofte. Optuna vs Code Llama: Are LLMs a New Paradigm for Hyperparameter Tuning? [EB/OL]. (2025-04-08) [2025-04-30]. https://arxiv.org/abs/2504.06006.