Enhancing Interpretability of Sparse Latent Representations with Class Information

Source: arXiv
English Abstract

Variational Autoencoders (VAEs) are powerful generative models for learning latent representations. Standard VAEs generate dispersed and unstructured latent spaces by utilizing all dimensions, which limits their interpretability, especially in high-dimensional spaces. To address this challenge, Variational Sparse Coding (VSC) introduces a spike-and-slab prior distribution, resulting in sparse latent representations for each input. These sparse representations, characterized by a limited number of active dimensions, are inherently more interpretable. Despite this advantage, VSC falls short in providing structured interpretations across samples within the same class. Intuitively, samples from the same class are expected to share similar attributes while allowing for variations in those attributes. This expectation should manifest as consistent patterns of active dimensions in their latent representations, but VSC does not enforce such consistency. In this paper, we propose a novel approach to enhance the latent space interpretability by ensuring that the active dimensions in the latent space are consistent across samples within the same class. To achieve this, we introduce a new loss function that encourages samples from the same class to share similar active dimensions. This alignment creates a more structured and interpretable latent space, where each shared dimension corresponds to a high-level concept, or "factor." Unlike existing disentanglement-based methods that primarily focus on global factors shared across all classes, our method captures both global and class-specific factors, thereby enhancing the utility and interpretability of latent representations.
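
The abstract's key ingredient is a penalty that pushes samples of the same class toward a shared set of active latent dimensions. The paper's exact formulation is not given in this record, so the following PyTorch snippet is only a minimal sketch of one plausible way such a class-consistency term could look; the function name class_consistency_loss and the tensor spike_probs (the encoder's per-dimension activation probabilities under the spike-and-slab posterior) are illustrative assumptions, not the authors' code.

import torch

def class_consistency_loss(spike_probs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # spike_probs: (batch, latent_dim) probabilities that each latent dimension
    #              is active (the spike-and-slab selection variables inferred by the encoder).
    # labels:      (batch,) integer class labels.
    # Returns the mean squared deviation of each sample's activation pattern from its
    # class-average pattern, so that same-class samples are encouraged to activate
    # the same dimensions. This is an illustrative formulation, not the paper's loss.
    classes = labels.unique()
    loss = spike_probs.new_zeros(())
    for c in classes:
        mask = labels == c
        class_pattern = spike_probs[mask].mean(dim=0, keepdim=True)  # per-class prototype pattern
        loss = loss + ((spike_probs[mask] - class_pattern) ** 2).mean()
    return loss / classes.numel()

In practice, a term of this kind would presumably be added, with a tunable weight, to the usual VSC objective (reconstruction loss plus the KL term for the spike-and-slab prior).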

Farshad Sangari Abiz, Reshad Hosseini, Babak N. Araabi

Computing Technology; Computer Technology

Farshad Sangari Abiz, Reshad Hosseini, Babak N. Araabi. Enhancing Interpretability of Sparse Latent Representations with Class Information [EB/OL]. (2025-05-20) [2025-06-15]. https://arxiv.org/abs/2505.14476.
