Hypothesis Spaces for Deep Learning
This paper introduces a hypothesis space for deep learning based on deep neural networks (DNNs). Treating a DNN as a function of two variables, the input variable and the parameter variable, we consider the set of DNNs whose parameter variable ranges over a space of weight matrices and biases determined by a prescribed depth and layer widths. To construct a Banach space of functions of the input variable, we take the weak* closure of the linear span of this DNN set. We prove that the resulting Banach space is a reproducing kernel Banach space (RKBS) and explicitly construct its reproducing kernel. Furthermore, we investigate two learning models, regularized learning and the minimum norm interpolation (MNI) problem, within this RKBS framework by establishing representer theorems. These theorems show that the solutions of both learning problems can be expressed as finite sums of kernel expansions determined by the training data.
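To make the two-variable view concrete, the following is a minimal NumPy sketch, not the paper's actual construction: all names, the chosen widths, and the placeholder kernel are illustrative assumptions. It evaluates a fully connected ReLU network of prescribed depth and layer widths as a function of the input x and the parameter list params, and shows the finite kernel-expansion form that the representer theorems give for solutions.

```python
import numpy as np

def dnn(x, params):
    """Evaluate a fully connected ReLU network as a function of two
    variables: the input x and the parameter list params, a list of
    (weight matrix, bias vector) pairs. Depth and layer widths are
    fixed by the shapes in params, mirroring the prescribed-architecture
    parameter space described in the abstract."""
    a = x
    for i, (W, b) in enumerate(params):
        a = W @ a + b
        if i < len(params) - 1:   # ReLU on hidden layers only
            a = np.maximum(a, 0.0)
    return a

# A fixed depth and layer widths determine the parameter space
# (widths here are an arbitrary illustrative choice).
rng = np.random.default_rng(0)
widths = [3, 8, 8, 1]  # input dim 3, two hidden layers, scalar output
params = [(rng.standard_normal((m, n)), rng.standard_normal(m))
          for n, m in zip(widths[:-1], widths[1:])]

x = rng.standard_normal(3)
print(dnn(x, params))  # one element of the DNN set, evaluated at x

def kernel_expansion(x, centers, coeffs, K):
    """Per the representer theorems, solutions take the form
    f(x) = sum_i c_i * K(x, x_i) over training inputs x_i, where K is
    the reproducing kernel of the RKBS. K is passed in here as a
    placeholder; the paper constructs the actual kernel explicitly."""
    return sum(c * K(x, xi) for c, xi in zip(coeffs, centers))
```

The point of the sketch is only the functional form: each element of the hypothesis space is reached by varying params for a fixed architecture, and learned solutions collapse to finitely many kernel terms indexed by the training data.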
Rui Wang, Yuesheng Xu, Mingsong Yan
Computing Technology; Computer Technology
Rui Wang, Yuesheng Xu, Mingsong Yan. Hypothesis Spaces for Deep Learning [EB/OL]. (2025-08-14) [2025-08-24]. https://arxiv.org/abs/2403.03353.