Source: arXiv

Addressing Representation Collapse in Vector Quantized Models with One Linear Layer

Yongxin Zhu, Bocheng Li, Linli Xu, Yifei Xin, Zhihua Xia

Computing Technology; Computer Technology

Yongxin Zhu, Bocheng Li, Linli Xu, Yifei Xin, Zhihua Xia. Addressing Representation Collapse in Vector Quantized Models with One Linear Layer [EB/OL]. (2025-10-03) [2025-10-10]. https://arxiv.org/abs/2411.02038.

Vector Quantization (VQ) is essential for discretizing continuous representations in unsupervised learning, but it suffers from representation collapse, which causes low codebook utilization and limits scalability. Existing solutions often rely on complex optimization techniques or reduce the latent dimensionality, which compromises model capacity and fails to fully solve the problem. We identify the root cause as disjoint codebook optimization, in which only a few code vectors are updated via gradient descent. To fix this, we propose SimVQ, which reparameterizes the code vectors through a learnable linear transformation layer over a latent basis, optimizing the entire linear space rather than the nearest individual code vectors. Although composing the frozen basis with one linear layer is mathematically equivalent to a single linear map, this simple reparameterization effectively prevents collapse. Extensive experiments on image and audio tasks demonstrate that SimVQ improves codebook usage, is easy to implement, and generalizes well across modalities and architectures. The code is available at https://github.com/youngsheen/SimVQ.
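To make the reparameterization concrete, below is a minimal PyTorch sketch of the idea described in the abstract. The class name SimVQSketch and the field names basis and proj are illustrative, not the authors' API; the reference implementation at https://github.com/youngsheen/SimVQ may differ in details. The codebook is the image of a frozen random basis under one trainable linear layer, so the gradient from any selected code flows into the layer's weights and thereby moves every code vector at once.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimVQSketch(nn.Module):
    def __init__(self, num_codes: int, dim: int):
        super().__init__()
        # Frozen latent basis: individual code vectors are never trained directly.
        self.register_buffer("basis", torch.randn(num_codes, dim))
        # The one linear layer: every effective code vector is a row of
        # proj(basis), so a gradient step on its weights moves the whole codebook.
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) continuous encoder outputs.
        codebook = self.proj(self.basis)        # (num_codes, dim)
        dists = torch.cdist(z, codebook)        # pairwise L2 distances
        indices = dists.argmin(dim=-1)          # nearest-code assignment
        z_q = codebook[indices]                 # quantized vectors
        # Standard VQ-VAE commitment and codebook terms; the codebook term
        # back-propagates through self.proj, so unselected codes move as well.
        vq_loss = F.mse_loss(z, z_q.detach()) + F.mse_loss(z_q, z.detach())
        # Straight-through estimator for the reconstruction path.
        z_q = z + (z_q - z).detach()
        return z_q, indices, vq_loss

# Illustrative usage with made-up sizes:
# vq = SimVQSketch(num_codes=8192, dim=128)
# z_q, indices, vq_loss = vq(torch.randn(16, 128))

Contrast this with vanilla VQ, where the codebook itself is the parameter: there, only the rows selected as nearest neighbors receive gradients, which is exactly the disjoint optimization the abstract identifies as the root cause of collapse.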