Attention Mechanisms in Dynamical Systems: A Case Study with Predator-Prey Models

Source: arXiv
Abstract

Attention mechanisms are widely used in artificial intelligence to enhance performance and interpretability. In this paper, we investigate their utility in modeling classical dynamical systems -- specifically, a noisy predator-prey (Lotka-Volterra) system. We train a simple linear attention model on perturbed time-series data to reconstruct system trajectories. Remarkably, the learned attention weights align with the geometric structure of the Lyapunov function: high attention corresponds to flat regions (where perturbations have small effect), and low attention aligns with steep regions (where perturbations have large effect). We further demonstrate that attention-based weighting can serve as a proxy for sensitivity analysis, capturing key phase-space properties without explicit knowledge of the system equations. These results suggest a novel use of AI-derived attention for interpretable, data-driven analysis and control of nonlinear systems. For example, our framework could support future work in biological modeling of circadian rhythms and in interpretable machine learning for dynamical environments.
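The abstract only sketches the pipeline: simulate a noisy Lotka-Volterra system, train a small attention model to reconstruct the clean trajectories, and then inspect the learned attention weights. Below is a minimal illustrative sketch of that idea, not the paper's code. The rate constants, noise level, window size, and the single-head dot-product attention module are assumptions made here for illustration; the paper's exact linear-attention parameterization and training details may differ.

# Minimal sketch (not the paper's code): simulate a noisy Lotka-Volterra
# system, train a small attention model to reconstruct clean trajectories,
# and keep the attention weights for later inspection.
import numpy as np
import torch
import torch.nn as nn

# Classical Lotka-Volterra dynamics, integrated with simple Euler steps.
# Rate constants, step size, and initial condition are illustrative assumptions.
alpha, beta, delta, gamma = 1.0, 0.5, 0.5, 1.0
dt, steps = 0.01, 2000
traj = np.empty((steps, 2))
x, y = 2.0, 1.0
for t in range(steps):
    traj[t] = (x, y)
    dx = alpha * x - beta * x * y          # prey growth minus predation
    dy = delta * x * y - gamma * y         # predation gain minus predator death
    x, y = x + dt * dx, y + dt * dy

clean = torch.tensor(traj, dtype=torch.float32)
noisy = clean + 0.05 * torch.randn_like(clean)   # perturbed observations

class SimpleAttention(nn.Module):
    # Single-head dot-product attention with linear Q/K/V projections.
    def __init__(self, dim, d_k=8):
        super().__init__()
        self.q = nn.Linear(dim, d_k)
        self.k = nn.Linear(dim, d_k)
        self.v = nn.Linear(dim, dim)

    def forward(self, seq):                       # seq: (window, dim)
        q, k, v = self.q(seq), self.k(seq), self.v(seq)
        scores = q @ k.T / k.shape[-1] ** 0.5     # (window, window)
        weights = scores.softmax(dim=-1)          # attention weights
        return weights @ v, weights

model = SimpleAttention(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
window = 50

# Train on random windows of the noisy series to reconstruct the clean one.
for epoch in range(200):
    i = np.random.randint(0, steps - window)
    recon, w = model(noisy[i:i + window])
    loss = ((recon - clean[i:i + window]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The per-timestep attention mass w.sum(0) can then be compared with the
# geometry of the classical Lotka-Volterra conserved quantity
# V(x, y) = delta*x - gamma*ln(x) + beta*y - alpha*ln(y).

The comparison in the final comment mirrors the abstract's Lyapunov-alignment claim: under this toy setup, flat regions of V would be expected to receive more attention than steep ones.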

David Balaban

Subjects: Computing and Computer Technology; Automation Technology and Equipment; Biological Science Theory and Methods

David Balaban. Attention Mechanisms in Dynamical Systems: A Case Study with Predator-Prey Models [EB/OL]. (2025-05-10) [2025-06-27]. https://arxiv.org/abs/2505.06503.
