National Preprint Platform

Distributed Representations Enable Robust Multi-Timescale Symbolic Computation in Neuromorphic Hardware


Source: arXiv
Abstract (English)

Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs, by exploiting the properties of high-dimensional distributed representations. We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms, which are each formed by the vector binding of an input and heteroassociative outer-products between states. Our approach is validated through simulations with highly nonideal weights; an experimental closed-loop memristive hardware setup; and on Loihi 2, where it scales seamlessly to large state machines. This work introduces a scalable approach to embed robust symbolic computation through recurrent dynamics into neuromorphic hardware, without requiring parameter fine-tuning or significant platform-specific optimisation. Moreover, it demonstrates that distributed symbolic representations serve as a highly capable representation-invariant language for cognitive algorithms in neuromorphic hardware.
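The abstract's construction — a symmetric autoassociative weight matrix superimposed with asymmetric, input-bound transition terms — can be illustrated with a minimal non-spiking NumPy sketch. This is a hedged toy model, not the paper's implementation: it assumes random bipolar hypervectors, Hadamard (elementwise) binding, a hypothetical dimension `N = 2000`, and an invented three-state machine; the gain of 2 on the input-gated term is likewise an illustrative choice to let transitions override the attractor.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000  # hypervector dimension (hypothetical choice)

# Random bipolar codebooks for states and input symbols.
states = {s: rng.choice([-1, 1], N) for s in ["A", "B", "C"]}
inputs = {a: rng.choice([-1, 1], N) for a in ["x", "y"]}

# Symmetric autoassociative term: makes each state vector a fixed point.
W_auto = sum(np.outer(p, p) for p in states.values()) / N

# Asymmetric transition terms, each bound (Hadamard product) with an
# input vector so it is inert until that input unbinds it.
# Toy transition table: (source, input) -> target.
fsm = {("A", "x"): "B", ("B", "x"): "C", ("C", "y"): "A"}
W_trans = sum(np.outer(inputs[a] * states[t], states[s])
              for (s, a), t in fsm.items()) / N

def step(x, a=None):
    """One recurrent update; the input vector a unbinds its transition."""
    drive = W_auto @ x
    if a is not None:
        drive = drive + 2.0 * a * (W_trans @ x)  # gain 2: illustrative
    return np.where(drive >= 0, 1, -1)

x = states["A"].copy()
x = step(x)               # no input: state A is a stable attractor
x = step(x, inputs["x"])  # input x drives the transition A -> B
print(np.dot(x, states["B"]) / N)  # similarity to B, close to 1
```

Because binding with a bipolar vector is its own inverse (`a * a = 1` elementwise), applying the input on the recurrent drive recovers the hidden target state from `W_trans`, while without input the symmetric term holds the current attractor — a simplified analogue of the multi-timescale behaviour described above.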

Alpha Renner, Martin Ziegler, Giacomo Indiveri, Emre Neftci, Madison Cotteret, Elisabetta Chicca, Junren Chen, Hugh Greatorex, Huaqiang Wu

DOI: 10.1088/2634-4386/ada851

Subjects: Computing and computer technology; Applied electronics; Microelectronics and integrated circuits

Alpha Renner, Martin Ziegler, Giacomo Indiveri, Emre Neftci, Madison Cotteret, Elisabetta Chicca, Junren Chen, Hugh Greatorex, Huaqiang Wu. Distributed Representations Enable Robust Multi-Timescale Symbolic Computation in Neuromorphic Hardware [EB/OL]. (2024-05-02) [2025-08-03]. https://arxiv.org/abs/2405.01305.
