Category-based Galaxy Image Generation via Diffusion Models
Conventional galaxy generation methods rely on semi-analytical models and hydrodynamic simulations, which are highly dependent on physical assumptions and parameter tuning. In contrast, data-driven generative models require no pre-determined explicit physical parameters; instead, they learn them efficiently from observational data, making them attractive alternatives for galaxy generation. Among such models, diffusion models outperform Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) in both quality and diversity. Incorporating physical prior knowledge into these models can further enhance their capabilities. In this work, we present GalCatDiff, the first framework in astronomy to leverage both galaxy image features and astrophysical properties in the network design of diffusion models. GalCatDiff incorporates an enhanced U-Net and a novel block termed Astro-RAB (Residual Attention Block), which dynamically combines attention mechanisms with convolution operations to ensure global consistency and local feature fidelity. Moreover, GalCatDiff uses category embeddings for class-specific galaxy generation, avoiding the high computational cost of training a separate model for each category. Our experimental results demonstrate that GalCatDiff significantly outperforms existing methods in the consistency of sample color and size distributions, and the generated galaxies are both visually realistic and physically consistent. This framework will enhance the reliability of galaxy simulations and can potentially serve as a data augmenter to support the development of future galaxy classification algorithms.
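The abstract names two architectural mechanisms: an Astro-RAB block that combines convolution (local feature fidelity) with attention (global consistency), and category embeddings that let one denoising network generate class-specific galaxies. Below is a minimal PyTorch-style sketch of how such components could be wired together; all names (AstroRAB, CategoryConditionedUNetStub, num_classes, emb_dim) and the additive conditioning scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch only; the paper's real architecture may differ.
import torch
import torch.nn as nn


class AstroRAB(nn.Module):
    """Residual attention block sketch: a convolutional path for local
    feature fidelity plus a self-attention path for global consistency."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.GroupNorm(8, channels),
            nn.SiLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.norm = nn.GroupNorm(8, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local path: residual convolution.
        h = x + self.conv(x)
        # Global path: self-attention over flattened spatial positions.
        b, c, height, width = h.shape
        tokens = self.norm(h).flatten(2).transpose(1, 2)  # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        return h + attn_out.transpose(1, 2).reshape(b, c, height, width)


class CategoryConditionedUNetStub(nn.Module):
    """Skeleton showing how a learned category embedding can condition a
    single denoising network instead of training one model per class."""

    def __init__(self, num_classes: int, emb_dim: int, channels: int):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, emb_dim)
        self.proj = nn.Linear(emb_dim, channels)
        self.block = AstroRAB(channels)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Inject the class embedding as a per-channel bias; one common
        # conditioning scheme, assumed here for illustration.
        cond = self.proj(self.class_emb(y))[:, :, None, None]
        return self.block(x + cond)


# Usage example with made-up shapes (5 galaxy categories, 64 channels):
net = CategoryConditionedUNetStub(num_classes=5, emb_dim=32, channels=64)
x = torch.randn(2, 64, 32, 32)  # batch of noisy feature maps
y = torch.tensor([0, 3])        # galaxy category labels
out = net(x, y)                 # -> (2, 64, 32, 32)
```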
Xingzhong Fan, Hongming Tang, Yue Zeng, M. B. N. Kouwenhoven, Guangquan Zeng
Computational techniques in astronomy; computer technology
Xingzhong Fan, Hongming Tang, Yue Zeng, M. B. N. Kouwenhoven, Guangquan Zeng. Category-based Galaxy Image Generation via Diffusion Models [EB/OL]. (2025-06-19) [2025-07-03]. https://arxiv.org/abs/2506.16255.