On Minimax Estimation of Parameters in Softmax-Contaminated Mixture of Experts

Source: arXiv
Abstract

The softmax-contaminated mixture of experts (MoE) model is deployed when a large-scale pre-trained model, which plays the role of a fixed expert, is fine-tuned for downstream tasks by including a new contamination part, or prompt, functioning as a new, trainable expert. Despite its popularity and relevance, the theoretical properties of the softmax-contaminated MoE have remained unexplored in the literature. In this paper, we study the convergence rates of the maximum likelihood estimator of the gating and prompt parameters in order to gain insight into the statistical properties and potential challenges of fine-tuning with a new prompt. We find that the estimability of these parameters is compromised when the prompt acquires knowledge overlapping with that of the pre-trained model, in a sense that we make precise by formulating a novel analytic notion of distinguishability. Under distinguishability of the pre-trained and prompt models, we derive minimax optimal estimation rates for all the gating and prompt parameters. By contrast, when the distinguishability condition is violated, these estimation rates become significantly slower due to their dependence on the rate at which the prompt converges to the pre-trained model. Finally, we empirically corroborate our theoretical findings through several numerical experiments.
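To fix ideas, the following is a minimal sketch of a softmax-contaminated MoE likelihood consistent with the description above; the logistic gate form and the symbols $\bar{f}$, $f$, $\beta_0$, $\beta_1$, $\eta$ are illustrative assumptions rather than the paper's exact notation:

\[
p(y \mid x) \;=\; \frac{1}{1 + \exp(\beta_1^{\top} x + \beta_0)}\,\bar{f}(y \mid x)
\;+\; \frac{\exp(\beta_1^{\top} x + \beta_0)}{1 + \exp(\beta_1^{\top} x + \beta_0)}\,f(y \mid x;\, \eta),
\]

where $\bar{f}$ is the frozen pre-trained expert, $f(\cdot \mid \cdot;\, \eta)$ is the trainable prompt expert, and the gating parameters $(\beta_0, \beta_1)$ together with the prompt parameter $\eta$ are estimated by maximum likelihood. Informally, distinguishability requires that $f(\cdot \mid \cdot;\, \eta)$ not collapse onto $\bar{f}$; when the two experts coincide, the gate and prompt parameters become harder to identify, which is what slows the estimation rates.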

Fanqi Yan, Huy Nguyen, Dung Le, Pedram Akbarian, Nhat Ho, Alessandro Rinaldo

Subjects: Computing Technology, Computer Technology

Fanqi Yan, Huy Nguyen, Dung Le, Pedram Akbarian, Nhat Ho, Alessandro Rinaldo. On Minimax Estimation of Parameters in Softmax-Contaminated Mixture of Experts [EB/OL]. (2025-05-23) [2025-06-10]. https://arxiv.org/abs/2505.18455.
