Deep learning from strongly mixing observations: Sparse-penalized regularization and minimax optimality

Source: arXiv
Abstract

The explicit regularization and optimality of deep neural network estimators from independent data have seen considerable progress recently. The study of such properties on dependent data remains a challenge. In this paper, we carry out deep learning from strongly mixing observations, and deal with the squared loss and a broad class of loss functions. We consider sparse-penalized regularization for the deep neural network predictor. For a general framework that includes regression estimation, classification, time series prediction, $\cdots$, an oracle inequality for the expected excess risk is established, and a bound on the class of Hölder smooth functions is provided. For nonparametric regression from strongly mixing data with sub-exponential errors, we provide an oracle inequality for the $L_2$ error and investigate an upper bound of this error on a class of Hölder composition functions. For the specific case of nonparametric autoregression with Gaussian and Laplace errors, a lower bound of the $L_2$ error on this Hölder composition class is established. Up to a logarithmic factor, this lower bound matches the upper bound; thus, the deep neural network estimator attains the minimax optimal rate.
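As a minimal sketch of the estimator this abstract describes (the notation and the exact form of the penalty below are assumptions based on standard sparse-penalized deep learning formulations, not quoted from the paper): given strongly mixing observations $(X_1, Y_1), \dots, (X_n, Y_n)$, a loss function $\ell$, and a class $\mathcal{H}_n$ of deep neural networks with weight vector $\theta(h)$, a sparse-penalized estimator takes the form

$$\widehat{h}_n \in \operatorname*{argmin}_{h \in \mathcal{H}_n} \left[ \frac{1}{n} \sum_{i=1}^{n} \ell\big(h(X_i), Y_i\big) + \lambda_n \|\theta(h)\|_{\mathrm{clip},\, \tau_n} \right], \qquad \|\theta\|_{\mathrm{clip},\, \tau} = \sum_{j} \min\!\left( \frac{|\theta_j|}{\tau},\, 1 \right),$$

where $\lambda_n, \tau_n > 0$ are tuning parameters and the clipped $L_1$ norm approximates the number of nonzero weights, so the penalty encourages sparse networks. An oracle inequality of the type stated in the abstract then bounds the expected excess risk of $\widehat{h}_n$ by the best penalized trade-off of approximation error over $\mathcal{H}_n$ plus a remainder term that reflects the mixing rate of the observations.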

William Kengne, Modou Wade

Computing technology; computer technology

William Kengne, Modou Wade. Deep learning from strongly mixing observations: Sparse-penalized regularization and minimax optimality [EB/OL]. (2025-07-08) [2025-07-22]. https://arxiv.org/abs/2406.08321.