
Neighbour-Driven Gaussian Process Variational Autoencoders for Scalable Structured Latent Modelling


Source: arXiv

Abstract

Gaussian Process (GP) Variational Autoencoders (VAEs) extend standard VAEs by replacing the fully factorised Gaussian prior with a GP prior, thereby capturing richer correlations among latent variables. However, performing exact GP inference in large-scale GPVAEs is computationally prohibitive, often forcing existing approaches to rely on restrictive kernel assumptions or large sets of inducing points. In this work, we propose a neighbour-driven approximation strategy that exploits local adjacencies in the latent space to achieve scalable GPVAE inference. By confining computations to the nearest neighbours of each data point, our method preserves essential latent dependencies, allowing more flexible kernel choices and mitigating the need for numerous inducing points. Through extensive experiments on tasks including representation learning, data imputation, and conditional generation, we demonstrate that our approach outperforms other GPVAE variants in both predictive performance and computational efficiency.

Xinxing Shi, Xiaoyu Jiang, Mauricio A. Álvarez

Subject: Computing Technology, Computer Technology

Xinxing Shi, Xiaoyu Jiang, Mauricio A. Álvarez. Neighbour-Driven Gaussian Process Variational Autoencoders for Scalable Structured Latent Modelling [EB/OL]. (2025-05-22) [2025-06-05]. https://arxiv.org/abs/2505.16481.
