Deep Learning Optimization Using Self-Adaptive Weighted Auxiliary Variables

Source: arXiv
Abstract

In this paper, we develop a new optimization framework for the least squares learning problem via fully connected neural networks or physics-informed neural networks. Gradient descent sometimes behaves inefficiently in deep learning because of the high non-convexity of loss functions and the vanishing gradient issue. Our idea is to introduce auxiliary variables that separate the layers of the deep neural network and to reformulate the loss function so that it is easier to optimize. We design self-adaptive weights to preserve the consistency between the reformulated loss and the original mean squared loss, which guarantees that optimizing the new loss helps optimize the original problem. Numerical experiments verify this consistency and show the effectiveness and robustness of our models over gradient descent.
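The abstract describes the method only at a high level. As a rough illustration, the sketch below shows one way an auxiliary-variable reformulation of a least squares learning problem can look in PyTorch: each hidden-layer output is promoted to a trainable variable, and per-layer consistency residuals replace the deep composition in the loss. The toy problem, network sizes, tanh activation, and in particular the inverse-magnitude weighting rule are illustrative assumptions, not the paper's actual self-adaptive scheme; see the preprint for the exact formulation.

import torch

torch.manual_seed(0)

# Toy least squares problem: fit y = sin(3x) with a two-hidden-layer network.
x = torch.linspace(-1.0, 1.0, 128).unsqueeze(1)
y = torch.sin(3.0 * x)

d = 32
act = torch.tanh

# Network weights.
W1 = torch.nn.Parameter(torch.randn(1, d))
b1 = torch.nn.Parameter(torch.zeros(d))
W2 = torch.nn.Parameter(torch.randn(d, d) / d ** 0.5)
b2 = torch.nn.Parameter(torch.zeros(d))
W3 = torch.nn.Parameter(torch.randn(d, 1) / d ** 0.5)
b3 = torch.nn.Parameter(torch.zeros(1))

# Auxiliary variables: one per hidden layer, initialized at the current
# forward pass, so the layers are decoupled in the reformulated loss.
with torch.no_grad():
    u1_init = act(x @ W1 + b1)
    u2_init = act(u1_init @ W2 + b2)
u1 = torch.nn.Parameter(u1_init)
u2 = torch.nn.Parameter(u2_init)

opt = torch.optim.Adam([W1, b1, W2, b2, W3, b3, u1, u2], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    r1 = u1 - act(x @ W1 + b1)   # layer-1 consistency residual
    r2 = u2 - act(u1 @ W2 + b2)  # layer-2 consistency residual
    r3 = (u2 @ W3 + b3) - y      # data-fitting residual
    # One plausible "self-adaptive" weighting (an assumption here): balance
    # the terms by normalizing each consistency residual with its current
    # magnitude, detached so the weights act as constants at each step.
    w1 = 1.0 / (r1.pow(2).mean().detach() + 1e-8)
    w2 = 1.0 / (r2.pow(2).mean().detach() + 1e-8)
    loss = w1 * r1.pow(2).mean() + w2 * r2.pow(2).mean() + r3.pow(2).mean()
    loss.backward()
    opt.step()

# Check consistency: evaluate the original mean squared loss by running the
# learned weights through the ordinary (composed) forward pass.
with torch.no_grad():
    pred = act(act(x @ W1 + b1) @ W2 + b2) @ W3 + b3
    print("original mean squared loss:", (pred - y).pow(2).mean().item())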

Yaru Liu, Yiqi Gu, Michael K. Ng

Subject: Computing technology, computer technology

Yaru Liu, Yiqi Gu, Michael K. Ng. Deep Learning Optimization Using Self-Adaptive Weighted Auxiliary Variables [EB/OL]. (2025-04-30) [2025-06-09]. https://arxiv.org/abs/2504.21501
