Pseudo-likelihood produces associative memories able to generalize, even for asymmetric couplings
Energy-based probabilistic models learned by maximizing the likelihood of the data are limited by the intractability of the partition function. A widely used workaround is to maximize the pseudo-likelihood, which replaces the global normalization with tractable local normalizations. Here we show that, in the zero-temperature limit, a network trained to maximize pseudo-likelihood naturally implements an associative memory: if the training set is small, patterns become fixed-point attractors whose basins of attraction exceed those of any classical Hopfield rule. We quantitatively explain this effect for uncorrelated random patterns. Moreover, we show that, for different structured datasets coming from computer science (random feature model, MNIST), physics (spin glasses) and biology (proteins), as the number of training examples increases the learned network goes beyond memorization, developing meaningful attractors with non-trivial correlations with test examples, thus showing the ability to generalize. Our results therefore reveal that pseudo-likelihood works both as an efficient inference tool and as a principled mechanism for memory and generalization.
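The mechanism described in the abstract can be illustrated with a minimal sketch (not the paper's actual code; sizes, learning rate, and iteration count are illustrative assumptions): for an Ising-type model, pseudo-likelihood maximization trains each spin's local conditional P(s_i | s_{-i}) = sigmoid(2 s_i h_i), with local field h_i = Σ_j J_ij s_j, and imposes no symmetry on the couplings J. At zero temperature, retrieval is the deterministic update s → sign(J s), and stored patterns should become fixed points.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 20, 3  # spins and stored patterns (illustrative small sizes)
patterns = rng.choice([-1.0, 1.0], size=(P, N))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gradient ascent on the pseudo-likelihood
#   L(J) = sum_{p,i} log sigmoid(2 * s_i * h_i),  h_i = sum_j J_ij s_j.
# Note: J is updated without any symmetry constraint, so the learned
# couplings are generically asymmetric.
J = np.zeros((N, N))
lr = 0.1
for _ in range(500):
    H = patterns @ J.T                      # H[p, i] = local field on spin i
    # dL/dJ_ij = sum_p 2 * sigmoid(-2 s_i h_i) * s_i * s_j
    G = (2.0 * sigmoid(-2.0 * patterns * H) * patterns).T @ patterns / P
    J += lr * G
    np.fill_diagonal(J, 0.0)                # no self-couplings

# Zero-temperature check: each pattern is a fixed point of s -> sign(J s).
stable = all(np.array_equal(np.sign(J @ p), p) for p in patterns)
print("all patterns are fixed points:", stable)
```

With a training set this small relative to N, the fixed-point condition holds for every stored pattern, matching the associative-memory behavior the abstract describes; studying basin sizes would require perturbing the patterns and iterating the update to convergence.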
Francesco D'Amico, Dario Bocchi, Luca Maria Del Bono, Saverio Rossi, Matteo Negri
Computing technology; Computer technology
Francesco D'Amico, Dario Bocchi, Luca Maria Del Bono, Saverio Rossi, Matteo Negri. Pseudo-likelihood produces associative memories able to generalize, even for asymmetric couplings [EB/OL]. (2025-07-07) [2025-08-02]. https://arxiv.org/abs/2507.05147.