
Large Language Model Driven Development of Turbulence Models

Source: arXiv
Abstract

Artificial intelligence (AI) has achieved human-level performance in specialized tasks such as Go, image recognition, and protein folding, raising the prospect of an AI singularity, where machines not only match but surpass human reasoning. Here, we demonstrate a step toward this vision in the context of turbulence modeling. By treating a large language model (LLM), DeepSeek-R1, as an equal partner, we establish a closed-loop, iterative workflow in which the LLM proposes, refines, and reasons about near-wall turbulence models under adverse pressure gradients (APGs), system rotation, and surface roughness. Through multiple rounds of interaction involving long-chain reasoning and a priori and a posteriori evaluations, the LLM generates models that not only rediscover established strategies but also synthesize new ones that outperform baseline wall models. Specifically, it recommends incorporating a material derivative to capture history effects in APG flows, modifying the law of the wall to account for system rotation, and developing rough-wall models informed by surface statistics. In contrast to conventional data-driven turbulence modeling, which is often characterized by human-designed, black-box architectures, the models developed here are physically interpretable and grounded in clear reasoning.
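For context only, the abstract refers to two standard relations: the classical law of the wall, which the LLM-generated models modify for rotation and pressure-gradient effects, and the material derivative used to encode history effects. The expressions below are a minimal sketch of these textbook baselines, not the corrected forms proposed in the paper; the quoted constants are typical equilibrium values and are an assumption here, since the abstract does not state which values the models use.

u^+ = \frac{1}{\kappa}\,\ln y^+ + B, \qquad \kappa \approx 0.41, \quad B \approx 5.2

\frac{D(\cdot)}{Dt} = \frac{\partial(\cdot)}{\partial t} + u_j\,\frac{\partial(\cdot)}{\partial x_j}

Here u^+ = u/u_\tau and y^+ = y u_\tau/\nu are the velocity and wall distance in viscous (wall) units, u_\tau is the friction velocity, and \nu is the kinematic viscosity; the second relation is the standard material derivative following a fluid particle.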

Zhongxin Yang, Yuanwei Bin, Yipeng Shi, Xiang I. A. Yang

Computational Mechanics; Computer Technology

Zhongxin Yang, Yuanwei Bin, Yipeng Shi, Xiang I. A. Yang. Large Language Model Driven Development of Turbulence Models [EB/OL]. (2025-05-03) [2025-07-01]. https://arxiv.org/abs/2505.01681.
