Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions
Recent studies of the computational power of recurrent neural networks (RNNs) reveal a hierarchy of RNN architectures, given real-time and finite-precision assumptions. Here we study auto-regressive Transformers with linearised attention, a.k.a. linear Transformers (LTs) or Fast Weight Programmers (FWPs). LTs are special in the sense that they are equivalent to RNN-like sequence processors with a fixed-size state, while they can also be expressed as the now-popular self-attention networks. We show that many well-known results for the standard Transformer directly transfer to LTs/FWPs. Our formal language recognition experiments demonstrate how recently proposed FWP extensions such as recurrent FWPs and self-referential weight matrices successfully overcome certain limitations of the LT, e.g., allowing for generalisation on the parity problem. Our code is public.
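The abstract's central observation, that a linear Transformer is simultaneously an RNN-like processor with a fixed-size state and a self-attention network, can be illustrated with a minimal sketch. The snippet below (an assumption for illustration, omitting the feature maps and normalisation used in practical LTs/FWPs) computes unnormalised linear attention two equivalent ways: recurrently via a fast weight matrix, and in parallel as causally masked attention.

```python
import numpy as np

def linear_attention_recurrent(queries, keys, values):
    """RNN-like view: maintain only a fixed-size fast weight matrix W,
    regardless of sequence length (the Fast Weight Programmer view)."""
    d_k, d_v = keys.shape[1], values.shape[1]
    W = np.zeros((d_v, d_k))            # fixed-size state
    outputs = []
    for q, k, v in zip(queries, keys, values):
        W = W + np.outer(v, k)          # "program" the fast weights with (k, v)
        outputs.append(W @ q)           # query the fast weight matrix
    return np.array(outputs)

def linear_attention_parallel(queries, keys, values):
    """Attention view of the same computation:
    out_t = sum_{s <= t} (q_t . k_s) v_s, i.e. causal attention
    with an identity feature map and no softmax."""
    scores = np.tril(queries @ keys.T)  # causal mask
    return scores @ values
```

Both functions return identical outputs for the same inputs, which is exactly the duality the paper exploits when transferring results for standard Transformers to LTs/FWPs.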
Róbert Csordás, Jürgen Schmidhuber, Kazuki Irie
Computing Technology, Computer Technology
Róbert Csordás, Jürgen Schmidhuber, Kazuki Irie. Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions [EB/OL]. (2023-10-24) [2025-06-05]. https://arxiv.org/abs/2310.16076.