
Sample Complexity of Diffusion Model Training Without Empirical Risk Minimizer Access

Source: arXiv
Abstract

Diffusion models have demonstrated state-of-the-art performance across vision, language, and scientific domains. Despite their empirical success, prior theoretical analyses of sample complexity either suffer from poor scaling with the input data dimension or rely on unrealistic assumptions such as access to exact empirical risk minimizers. In this work, we provide a principled analysis of score estimation, establishing a sample complexity bound of $\widetilde{\mathcal{O}}(\epsilon^{-6})$. Our approach leverages a structured decomposition of the score estimation error into statistical, approximation, and optimization errors, enabling us to eliminate the exponential dependence on neural network parameters that arises in prior analyses. This is the first result that achieves sample complexity bounds without assuming access to the empirical risk minimizer of the score estimation loss.
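
As an illustrative sketch of the decomposition mentioned in the abstract (the notation here is assumed for exposition; the paper's precise statement, norms, and constants may differ), writing $\hat{s}_\theta$ for the learned score and $\nabla \log p_t$ for the true score of the noised data distribution, the error splits schematically as

$\mathbb{E}\big[\|\hat{s}_\theta(x,t) - \nabla \log p_t(x)\|^2\big] \;\lesssim\; \varepsilon_{\mathrm{stat}} + \varepsilon_{\mathrm{approx}} + \varepsilon_{\mathrm{opt}},$

where the three terms capture the statistical, approximation, and optimization errors, respectively. Bounding each term separately, rather than assuming the optimization error vanishes at an exact empirical risk minimizer, is what yields the stated $\widetilde{\mathcal{O}}(\epsilon^{-6})$ sample complexity.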

Mudit Gaur, Prashant Trivedi, Sasidhar Kunapuli, Amrit Singh Bedi, Vaneet Aggarwal

Subject: Computing Technology, Computer Technology

Mudit Gaur, Prashant Trivedi, Sasidhar Kunapuli, Amrit Singh Bedi, Vaneet Aggarwal. Sample Complexity of Diffusion Model Training Without Empirical Risk Minimizer Access [EB/OL]. (2025-05-23) [2025-06-06]. https://arxiv.org/abs/2505.18344.
