Information-theoretic Generalization Analysis for VQ-VAEs: A Role of Latent Variables
Latent variables (LVs) play a crucial role in encoder-decoder models by enabling effective data compression, prediction, and generation. Although their theoretical properties, such as generalization, have been extensively studied in supervised learning, similar analyses for unsupervised models such as variational autoencoders (VAEs) remain underexplored. In this work, we extend information-theoretic generalization analysis to vector-quantized (VQ) VAEs with discrete latent spaces, introducing a novel data-dependent prior to rigorously analyze the relationship among LVs, generalization, and data generation. We derive a novel generalization error bound on the reconstruction loss of VQ-VAEs that depends solely on the complexity of the LVs and the encoder, independent of the decoder. Additionally, we provide an upper bound on the 2-Wasserstein distance between the distributions of the true data and the generated data, explaining how regularization of the LVs contributes to data generation performance.
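As background only (this is the classical starting point of information-theoretic generalization analysis, not the paper's own bound): for an algorithm that learns parameters W from a training sample S of n i.i.d. points under a σ-sub-Gaussian loss, Xu and Raginsky (2017) showed that the expected generalization gap satisfies

\[
\bigl|\mathbb{E}\bigl[\mathcal{L}(W) - \widehat{\mathcal{L}}_n(W)\bigr]\bigr| \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W;S)}{n}},
\]

where \(\mathcal{L}\) is the population risk, \(\widehat{\mathcal{L}}_n\) the empirical risk, and \(I(W;S)\) the mutual information between the learned parameters and the training data; the notation here is introduced for illustration. Per the abstract, the paper's VQ-VAE bound is of this flavor, but its complexity term depends only on the discrete latent variables and the encoder, so the decoder does not enter the bound.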
Futoshi Futami, Masahiro Fujisawa
Computing Technology, Computer Technology
Futoshi Futami, Masahiro Fujisawa. Information-theoretic Generalization Analysis for VQ-VAEs: A Role of Latent Variables [EB/OL]. (2025-05-25) [2025-06-14]. https://arxiv.org/abs/2505.19470.