National Preprint Platform

Fine-tuning a Large Language Model for Automating Computational Fluid Dynamics Simulations

Source: arXiv

Abstract

Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. By fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28716 natural language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, we enable direct translation from natural language descriptions to executable CFD setups. A multi-agent framework orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models like Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
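To illustrate the kind of data the abstract describes, the sketch below shows a hypothetical NL2FOAM-style training record pairing a natural-language simulation request with a chain-of-thought annotation and target OpenFOAM configuration files. The field names, JSONL layout, and example case are assumptions for illustration; the actual dataset schema is not specified in the abstract.

```python
import json

# Hypothetical NL2FOAM-style record (schema assumed, not from the paper).
record = {
    # Natural-language description of the simulation the user wants.
    "instruction": "Simulate steady laminar flow in a 2D lid-driven cavity "
                   "at Re = 100.",
    # Chain-of-thought annotation: intermediate reasoning about solver,
    # boundary conditions, and numerics.
    "cot": "Incompressible laminar flow -> icoFoam; moving wall on the top "
           "boundary; no-slip on the remaining walls; uniform internal field.",
    # Target output: the executable OpenFOAM configuration files.
    "output": {
        "system/controlDict": "application icoFoam;\nendTime 10;\ndeltaT 0.005;",
        "0/U": "boundaryField { movingWall { type fixedValue; "
               "value uniform (1 0 0); } }",
    },
}

# One record per line (JSONL), a common storage format for such datasets.
line = json.dumps(record)
print(sorted(record.keys()))
```

A fine-tuned model trained on pairs of this shape maps the `instruction` text to the `output` files, with the `cot` field supervising the intermediate reasoning.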

Zhehao Dong, Zhen Lu, Yue Yang

DOI: 10.1016/j.taml.2025.100594

Subject: Automation Technology; Automation Equipment

Zhehao Dong, Zhen Lu, Yue Yang. Fine-tuning a Large Language Model for Automating Computational Fluid Dynamics Simulations [EB/OL]. (2025-04-13) [2025-06-07]. https://arxiv.org/abs/2504.09602.
