Galaxy Imaging with Generative Models: Insights from a Two-Models Framework
Generative models have recently revolutionized image generation across diverse domains, including galaxy image synthesis. This study investigates the statistical learning and consistency of three generative models: light-weight-gan (a GAN-based model), Glow (a Normalizing Flow-based model), and a diffusion model built on a U-Net denoiser, all trained on non-overlapping subsets of the SDSS dataset of 64×64 grayscale images. While all models produce visually realistic images with well-preserved distributions of morphological variables, we focus on their ability to learn and generalize the underlying data distribution. The diffusion model exhibits a transition from memorization to generalization as the dataset size increases, confirming previous findings: smaller training sets lead to overfitting, while larger ones enable the generation of novel samples, consistent with the theoretical analysis of the denoising process. For the flow-based model, we propose an "inversion test" that leverages its bijective nature. The GAN-based model achieves comparable morphological consistency but lacks bijectivity, so we instead introduce a "discriminator test", which indicates successful learning for larger datasets but lower confidence with smaller ones. Across all models, dataset sizes below O(100,000) remain challenging for learning. Throughout our experiments, the "two-models" framework enables robust evaluations, highlighting both the potential and the limitations of these models. These findings provide valuable insights into statistical learning in generative modeling, with applications extending beyond galaxy image generation.
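The "inversion test" mentioned in the abstract relies on the flow being bijective: a trained Glow model maps any image to a latent vector, and for held-out real images those latents should follow the model's Gaussian prior. The sketch below illustrates this idea only; the `flow` object and its `inverse` method are hypothetical placeholders for a trained normalizing flow, not the paper's actual code, and the statistical check (a one-sample KS test on pooled latent coordinates) is one possible choice among several.

```python
import torch
from scipy.stats import kstest

@torch.no_grad()
def inversion_test(flow, images: torch.Tensor) -> float:
    """Map held-out images to latent space via the flow's exact inverse
    and compare the pooled latents to the standard-normal prior."""
    z = flow.inverse(images)          # hypothetical inverse map x -> z (uses bijectivity)
    z = z.reshape(-1).cpu().numpy()   # pool all latent coordinates into one 1-D sample
    statistic, p_value = kstest(z, "norm")
    return p_value                    # high p-value: latents consistent with N(0, 1)
```

Under this reading, a flow that has learned the underlying distribution sends unseen real images to prior-like latents, whereas a poorly generalizing model does not.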
Jean-Eric Campagne
Astronomy
Jean-Eric Campagne. Galaxy Imaging with Generative Models: Insights from a Two-Models Framework [EB/OL]. (2025-03-29) [2025-04-28]. https://arxiv.org/abs/2503.23127.