Causal pieces: analysing and improving spiking neural networks piece by piece
We introduce a novel concept for spiking neural networks (SNNs) derived from the idea of "linear pieces" used to analyse the expressiveness and trainability of artificial neural networks (ANNs). We prove that the input domain of an SNN decomposes into distinct causal regions where its output spike times are locally Lipschitz continuous with respect to the input spike times and network parameters. The number of such regions - which we call "causal pieces" - is a measure of the approximation capabilities of SNNs. In particular, we demonstrate in simulation that parameter initialisations which yield a high number of causal pieces on the training set strongly correlate with SNN training success. Moreover, we find that feedforward SNNs with purely positive weights exhibit a surprisingly high number of causal pieces, allowing them to achieve competitive performance levels on benchmark tasks. We believe that causal pieces are not only a powerful and principled tool for improving SNNs, but might also open up new ways of comparing SNNs and ANNs in the future.
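To make the idea of a "causal piece" concrete, below is a minimal sketch (not the paper's exact construction) using a non-leaky integrate-and-fire neuron with step synaptic currents. Within a fixed causal set - the set of input spikes arriving before the output spike - the output spike time is an affine, hence locally Lipschitz, function of the input spike times; counting the distinct causal sets reached by a dataset gives a simple proxy for the number of causal pieces. All function names and the counting heuristic here are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a single non-leaky IF neuron with step synaptic currents.
# Within a fixed causal set C, the membrane potential is
#   V(t) = sum_{i in C} w_i * (t - t_i),
# so the threshold crossing t_out = (theta + sum_i w_i t_i) / (sum_i w_i)
# is affine in the input spike times t_i (a single "causal piece").
import numpy as np

def first_spike_time(weights, in_times, threshold=1.0):
    """Return (t_out, causal_set) of the neuron, or (inf, empty set) if silent."""
    order = np.argsort(in_times)
    w_sum, wt_sum = 0.0, 0.0
    for k, idx in enumerate(order):
        w_sum += weights[idx]
        wt_sum += weights[idx] * in_times[idx]
        if w_sum <= 0:
            continue  # potential not rising in this interval, no crossing possible
        t_out = (threshold + wt_sum) / w_sum
        next_t = in_times[order[k + 1]] if k + 1 < len(order) else np.inf
        # Accept the crossing only if it happens before the next input spike.
        if in_times[idx] <= t_out <= next_t:
            return t_out, frozenset(int(i) for i in order[:k + 1])
    return np.inf, frozenset()

def count_causal_pieces(weights, dataset, threshold=1.0):
    """Crude proxy: number of distinct causal sets visited by the dataset."""
    return len({first_spike_time(weights, x, threshold)[1] for x in dataset})

# Example: purely positive weights and random input spike times.
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, size=5)            # purely positive weights
data = rng.uniform(0.0, 1.0, size=(200, 5))  # 200 samples of 5 input spike times
print("distinct causal pieces on the data:", count_causal_pieces(w, data))
```

In this simplified setting, changing the causal set changes the affine map from input spike times to the output spike time, which mirrors how linear pieces partition the input domain of a ReLU network.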
Dominik Dold, Philipp Christian Petersen
Computing technology, computer technology
Dominik Dold, Philipp Christian Petersen. Causal pieces: analysing and improving spiking neural networks piece by piece [EB/OL]. (2025-04-18) [2025-05-25]. https://arxiv.org/abs/2504.14015.