
Interpretable Neural System Dynamics: Combining Deep Learning with System Dynamics Modeling to Support Critical Applications


Source: arXiv
English Abstract

The objective of this proposal is to bridge the gap between Deep Learning (DL) and System Dynamics (SD) by developing an interpretable neural system dynamics framework. While DL excels at learning complex models and making accurate predictions, it lacks interpretability and causal reliability. Traditional SD approaches, on the other hand, provide transparency and causal insights but are limited in scalability and require extensive domain knowledge. To overcome these limitations, this project introduces a Neural System Dynamics pipeline, integrating Concept-Based Interpretability, Mechanistic Interpretability, and Causal Machine Learning. This framework combines the predictive power of DL with the interpretability of traditional SD models, resulting in both causal reliability and scalability. The efficacy of the proposed pipeline will be validated through real-world applications of the EU-funded AutoMoTIF project, which is focused on autonomous multimodal transportation systems. The long-term goal is to collect actionable insights that support the integration of explainability and safety in autonomous systems.
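The abstract describes replacing hand-specified System Dynamics equations with learned components while keeping the stock-and-flow structure that makes SD interpretable. As a minimal illustrative sketch (not the paper's actual architecture), one can Euler-integrate a stock-and-flow system whose flow function is a small learnable module; `neural_flow`, `simulate`, and the weight shapes below are all hypothetical names chosen for this example:

```python
import numpy as np

# Classic SD: stocks change over time via flows. Here the flow function is a
# tiny learnable surrogate (one tanh layer) instead of a hand-written equation.
# This is a hedged sketch of the general idea, not the AutoMoTIF pipeline.

def neural_flow(stocks, W, b):
    """One-layer stand-in for a trained flow function: tanh(W @ stocks + b)."""
    return np.tanh(W @ stocks + b)

def simulate(stocks0, W, b, dt=0.1, steps=100):
    """Euler-integrate the stock-and-flow system d(stocks)/dt = flow(stocks)."""
    stocks = np.array(stocks0, dtype=float)
    trajectory = [stocks.copy()]
    for _ in range(steps):
        stocks = stocks + dt * neural_flow(stocks, W, b)
        trajectory.append(stocks.copy())
    return np.array(trajectory)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 2))  # stands in for trained weights
b = np.zeros(2)
traj = simulate([1.0, 0.5], W, b)
print(traj.shape)  # (101, 2): initial state plus 100 Euler steps
```

Because the stock/flow decomposition is preserved, each learned flow can still be inspected or constrained — which is where the concept-based and mechanistic interpretability tools mentioned above would attach.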

Riccardo D'Elia

Subject categories: computing and computer technology; automation technology and automation equipment

Riccardo D'Elia. Interpretable Neural System Dynamics: Combining Deep Learning with System Dynamics Modeling to Support Critical Applications [EB/OL]. (2025-05-20) [2025-07-16]. https://arxiv.org/abs/2505.14428.
