
Variational Learning of Disentangled Representations

Source: arXiv
Abstract

Disentangled representations enable models to separate factors of variation that are shared across experimental conditions from those that are condition-specific. This separation is essential in domains such as biomedical data analysis, where generalization to new treatments, patients, or species depends on isolating stable biological signals from context-dependent effects. While extensions of the variational autoencoder (VAE) framework have been proposed to address this problem, they frequently suffer from leakage between latent representations, limiting their ability to generalize to unseen conditions. Here, we introduce DISCoVeR, a new variational framework that explicitly separates condition-invariant and condition-specific factors. DISCoVeR integrates three key components: (i) a dual-latent architecture that models shared and specific factors separately; (ii) two parallel reconstructions that ensure both representations remain informative; and (iii) a novel max-min objective that encourages clean separation without relying on handcrafted priors, while making only minimal assumptions. Theoretically, we show that this objective maximizes data likelihood while promoting disentanglement, and that it admits a unique equilibrium. Empirically, we demonstrate that DISCoVeR achieves improved disentanglement on synthetic datasets, natural images, and single-cell RNA-seq data. Together, these results establish DISCoVeR as a principled approach for learning disentangled representations in multi-condition settings.
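The abstract only names DISCoVeR's three components, so the following is a minimal PyTorch sketch of the dual-latent, parallel-reconstruction design it describes, not the authors' implementation. Layer sizes, the standard Gaussian priors, and the MSE reconstruction terms are illustrative assumptions, and the paper's novel max-min separation objective is omitted because the abstract does not specify its form.

```python
# Minimal sketch of a dual-latent VAE with two parallel reconstructions,
# loosely following the abstract's description of DISCoVeR. All names,
# sizes, and loss weights are illustrative; the paper's max-min
# separation objective is NOT implemented here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualLatentVAE(nn.Module):
    def __init__(self, x_dim=100, z_shared=8, z_spec=8, hidden=64):
        super().__init__()
        # Two encoders: condition-invariant (shared) and condition-specific
        # factors. A real implementation would likely also condition the
        # specific branch on the experimental condition.
        self.enc_shared = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * z_shared))
        self.enc_spec = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * z_spec))
        # Two parallel decoders: one sees only the shared latent, one sees both,
        # so each representation must remain informative on its own.
        self.dec_shared = nn.Sequential(
            nn.Linear(z_shared, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))
        self.dec_joint = nn.Sequential(
            nn.Linear(z_shared + z_spec, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)  # encoder outputs mean and log-variance
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return z, mu, logvar

    def forward(self, x):
        zs, mu_s, lv_s = self.sample(self.enc_shared(x))
        zc, mu_c, lv_c = self.sample(self.enc_spec(x))
        x_shared = self.dec_shared(zs)                         # reconstruction from shared only
        x_joint = self.dec_joint(torch.cat([zs, zc], dim=-1))  # reconstruction from both
        return x_shared, x_joint, (mu_s, lv_s), (mu_c, lv_c)

def kl_std_normal(mu, logvar):
    # KL divergence between N(mu, sigma^2) and a standard normal prior.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()

def elbo_loss(model, x, beta=1.0):
    x_shared, x_joint, (mu_s, lv_s), (mu_c, lv_c) = model(x)
    recon = F.mse_loss(x_shared, x) + F.mse_loss(x_joint, x)
    return recon + beta * (kl_std_normal(mu_s, lv_s) + kl_std_normal(mu_c, lv_c))
```

Under these assumptions, the shared-only decoder forces the condition-invariant latent to carry enough signal to reconstruct the data by itself, which is one plausible way to keep both representations informative, as component (ii) requires.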

Yuli Slavutsky, Ozgur Beker, David Blei, Bianca Dumitrascu

Subject areas: biological sciences research methods; biological sciences research techniques

Yuli Slavutsky, Ozgur Beker, David Blei, Bianca Dumitrascu. Variational Learning of Disentangled Representations [EB/OL]. (2025-06-20) [2025-07-23]. https://arxiv.org/abs/2506.17182.
