
Selection Mechanisms for Sequence Modeling using Linear State Space Models

Source: arXiv
Abstract

Recent advancements in language modeling have been driven by architectures such as Transformers and, more recently, by Selective State Space Models (SSMs). In this paper, we introduce an alternative selection mechanism inspired by control theory. Specifically, we propose a novel residual generator for selection, drawing an analogy to fault detection strategies in Linear Time-Invariant (LTI) systems. Unlike Mamba, which relies on Linear Time-Varying (LTV) systems, our approach combines multiple LTI systems, preserving their beneficial properties during training while achieving comparable selectivity. To evaluate the effectiveness of the proposed architecture, we test its performance on synthetic tasks. While these tasks are not inherently critical, they serve as benchmarks for the selectivity properties of different core architectures. This work highlights the potential of integrating theoretical insights with experimental advancements, offering a complementary perspective to deep learning innovations at the intersection of control theory and machine learning.
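To make the objects in the abstract concrete, the following is a minimal sketch of a discrete-time LTI state space recurrence, the building block the paper contrasts with Mamba's input-dependent (LTV) updates. This is not the paper's residual-generator mechanism; the function name and all matrices below are illustrative choices, not taken from the paper.

```python
import numpy as np

def lti_ssm_scan(u, A, B, C):
    """Scan a discrete-time LTI state space model over a scalar input sequence.

    x_{t+1} = A x_t + B u_t,   y_t = C x_{t+1}
    A, B, C are fixed for all t (the LTI case); in an LTV/selective model
    such as Mamba they would instead be functions of the input u_t.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B * u_t   # state update with constant matrices
        ys.append(C @ x)      # linear readout of the state
    return np.array(ys)

# Illustrative stable 2-state system filtering a constant input.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([1.0, 0.5])
C = np.array([1.0, -1.0])
y = lti_ssm_scan(np.ones(5), A, B, C)
```

Because A, B, and C are constant, the map from input to output is a fixed linear filter; the paper's proposal, as the abstract describes it, obtains selectivity by combining several such LTI systems rather than by making the matrices input-dependent.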

Umberto Casti, Sandro Zampieri, Fabio Pasqualetti

Category: Foundational theory of automatic control

Umberto Casti, Sandro Zampieri, Fabio Pasqualetti. Selection Mechanisms for Sequence Modeling using Linear State Space Models [EB/OL]. (2025-05-23) [2025-06-18]. https://arxiv.org/abs/2505.17932.
