Efficient Finite Initialization with Partial Norms for Tensorized Neural Networks and Tensor Networks Algorithms
We present two algorithms for initializing the layers of tensorized neural networks and general tensor network algorithms using partial computations of their Frobenius norms or linear entrywise norms, depending on the type of tensor network involved. The core of the method is to compute the norms of subnetworks of the tensor network iteratively and to normalize by the last finite values of the partial norms before they diverge or vanish. In addition, the method benefits from the reuse of intermediate calculations. We apply it to Matrix Product State/Tensor Train (MPS/TT) and Matrix Product Operator/Tensor Train Matrix (MPO/TT-M) layers and study its scaling with the number of nodes, the bond dimension, and the physical dimension. All code is publicly available.
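To make the iterative idea concrete, below is a minimal sketch in NumPy of partial-norm normalization for an MPS/TT layer. It assumes the layer is stored as a list of rank-3 arrays with shape (left bond, physical, right bond); the function name, the environment-based norm computation, and the exact placement of the rescaling are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

def partial_norm_normalize(tensors):
    """Rescale MPS/TT tensors in place using iteratively computed partial
    Frobenius norms, so the norm of the full network stays finite.

    Sketch only: `tensors` is a list of arrays of shape (Dl, p, Dr),
    with Dl = 1 for the first tensor and Dr = 1 for the last.
    """
    env = np.ones((1, 1))  # left environment, shape (bond, bond)
    for k, A in enumerate(tensors):
        # Contract the environment with A and its conjugate:
        # env[a, b] * A[a, p, r] * conj(A)[b, p, s] -> new env[r, s].
        env = np.einsum('ab,apr,bps->rs', env, A, np.conj(A))
        # Partial Frobenius norm of the subnetwork contracted so far.
        norm_k = np.sqrt(np.trace(env).real)
        if norm_k > 0 and np.isfinite(norm_k):
            tensors[k] = A / norm_k   # normalize by the finite partial norm
            env = env / norm_k**2     # keep the reused environment consistent
    return tensors

# Hypothetical usage: 10 nodes, bond dimension 8, physical dimension 4.
rng = np.random.default_rng(0)
dims = [1] + [8] * 9 + [1]
mps = [rng.standard_normal((dims[i], 4, dims[i + 1])) for i in range(10)]
mps = partial_norm_normalize(mps)
```

Note that the environment tensor `env` is carried from one step to the next, which is one way to realize the reuse of intermediate calculations mentioned in the abstract: each partial norm costs only one additional node contraction.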
Aitor Moreno Fdez. de Leceta, Iñigo Perez Delgado, Alejandro Mata Ali, Marina Ristol Roura
Computing technology, computer technology
Aitor Moreno Fdez. de Leceta, Iñigo Perez Delgado, Alejandro Mata Ali, Marina Ristol Roura. Efficient Finite Initialization with Partial Norms for Tensorized Neural Networks and Tensor Networks Algorithms [EB/OL]. (2025-07-04) [2025-07-21]. https://arxiv.org/abs/2309.06577.