National Preprint Platform

Accelerated Integration of Stiff Reactive Systems Using Gradient-Informed Autoencoder and Neural Ordinary Differential Equation

Source: arXiv
Abstract

A combined autoencoder (AE) and neural ordinary differential equation (NODE) framework is used as a data-driven reduced-order model for the time integration of a stiff reacting system. In this study, a new loss term based on the gradient of the latent variables is proposed, and its impact on model performance is analyzed in terms of robustness, accuracy, and computational efficiency. A data set was generated with the chemical kinetics solver Cantera for the ignition of hydrogen-air and ammonia/hydrogen-air mixtures in homogeneous constant-pressure reactors over a range of initial temperatures and equivalence ratios. The AE-NODE network was trained on this data set using two different loss functions, one based on the latent variable mapping and one based on the latent gradient. The results show that the model trained with the latent gradient loss significantly improves predictions at conditions outside the range of the training data. The study demonstrates the importance of incorporating time derivatives in the loss function. With proper design of the latent space and training method, the AE+NODE architecture is found to predict the reaction dynamics at high fidelity and at substantially reduced computational cost, owing to the reduction of both dimensionality and temporal stiffness.
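The latent-gradient loss described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it uses linear maps as stand-ins for the trained encoder, decoder, and NODE right-hand side, a synthetic trajectory in place of a Cantera ignition solution, and an assumed weighting factor `lam`. It only shows how a finite-difference estimate of dz/dt can be penalized against the NODE right-hand side alongside the usual reconstruction term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: full thermochemical state (species + temperature) vs. latent space.
n_full, n_latent, n_steps = 10, 3, 50
dt = 1e-6  # illustrative time step of the stored trajectory

# Linear stand-ins for the trained networks (illustration only).
W_enc = rng.standard_normal((n_latent, n_full)) * 0.1
W_dec = rng.standard_normal((n_full, n_latent)) * 0.1
W_node = rng.standard_normal((n_latent, n_latent)) * 0.1

def encode(x):
    return x @ W_enc.T

def decode(z):
    return z @ W_dec.T

def node_rhs(z):
    # Stand-in for the learned latent dynamics f_theta(z).
    return z @ W_node.T

# Synthetic trajectory standing in for a Cantera-generated ignition solution.
X = rng.standard_normal((n_steps, n_full))

Z = encode(X)
X_rec = decode(Z)

# Standard AE reconstruction loss.
loss_rec = np.mean((X - X_rec) ** 2)

# Latent-gradient loss: the finite-difference dz/dt along the encoded
# trajectory should match the NODE right-hand side.
dZ_dt = (Z[1:] - Z[:-1]) / dt
loss_grad = np.mean((dZ_dt - node_rhs(Z[:-1])) ** 2)

lam = 1e-3  # relative weighting of the gradient term (an assumption)
total_loss = loss_rec + lam * loss_grad
```

In a real training loop the two terms would be minimized jointly over network parameters; the sketch only evaluates them once to show where the latent time derivative enters the objective.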

Mert Yakup Baykan, Vijayamanikandan Vijayarangan, Dong-hyuk Shin, Hong G. Im

Subjects: Computational chemistry; computer technology

Mert Yakup Baykan, Vijayamanikandan Vijayarangan, Dong-hyuk Shin, Hong G. Im. Accelerated Integration of Stiff Reactive Systems Using Gradient-Informed Autoencoder and Neural Ordinary Differential Equation [EB/OL]. (2025-05-03) [2025-06-24]. https://arxiv.org/abs/2505.01957
