
$\mu$-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts

Source: arXiv
Abstract

To tackle the huge computational demand of large foundation models, activation-aware compression techniques that require no retraining have been introduced. However, because these techniques rely on calibration data, domain shift may arise for unknown downstream tasks. With a computationally efficient calibration, activation-aware pruning can instead be executed adaptively for every prompt, while still reducing complexity at inference. We formulate this as a mixture of micro-experts, called $\mu$-MoE. Several experiments demonstrate that $\mu$-MoE can dynamically adapt to task/prompt-dependent structured sparsity on the fly.
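To illustrate the idea of per-prompt, activation-aware structured pruning described above, the following is a minimal, hypothetical sketch (not the authors' implementation): each hidden unit of an MLP layer is treated as a "micro-expert", a cheap calibration pass scores units by their activation magnitude on the prompt, and only the top-scoring units are kept for the rest of inference. All names (MicroExpertMLP, calibrate, keep_ratio) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MicroExpertMLP(nn.Module):
    """Sketch of an MLP layer whose hidden units act as micro-experts
    that are selected per prompt at test time (assumed design)."""

    def __init__(self, d_model: int, d_hidden: int, keep_ratio: float = 0.5):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        self.keep_ratio = keep_ratio
        # Binary mask over hidden units; all units active before calibration.
        self.register_buffer("mask", torch.ones(d_hidden))

    @torch.no_grad()
    def calibrate(self, prompt_hidden: torch.Tensor) -> None:
        # Cheap per-prompt calibration: score each hidden unit by its mean
        # activation magnitude over the prompt tokens, then keep the top ones.
        scores = self.up(prompt_hidden).abs().mean(dim=(0, 1))  # (d_hidden,)
        k = max(1, int(self.keep_ratio * scores.numel()))
        keep = torch.topk(scores, k).indices
        self.mask.zero_()
        self.mask[keep] = 1.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only the selected micro-experts contribute to the output.
        h = torch.relu(self.up(x)) * self.mask
        return self.down(h)

# Usage: calibrate once on the prompt, then run inference with the pruned layer.
layer = MicroExpertMLP(d_model=64, d_hidden=256, keep_ratio=0.25)
prompt = torch.randn(1, 16, 64)   # (batch, prompt_len, d_model)
layer.calibrate(prompt)
out = layer(prompt)
print(out.shape)  # torch.Size([1, 16, 64])
```

In a real deployment the mask would translate into actual structured sparsity (skipping the pruned rows/columns) rather than a multiplicative mask, which is what yields the reduced inference complexity claimed in the abstract.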

Toshiaki Koike-Akino, Jing Liu, Ye Wang

Computing technology; computer technology

Toshiaki Koike-Akino, Jing Liu, Ye Wang. $\mu$-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts [EB/OL]. (2025-05-23) [2025-06-10]. https://arxiv.org/abs/2505.18451.
