National Preprint Platform

Step-Opt: Boosting Optimization Modeling in LLMs through Iterative Data Synthesis and Structured Validation

Source: arXiv

English Abstract

Large Language Models (LLMs) have revolutionized various domains but encounter substantial challenges in tackling optimization modeling tasks for Operations Research (OR), particularly when dealing with complex problems. In this work, we propose Step-Opt-Instruct, a framework that augments existing datasets and generates high-quality fine-tuning data tailored to optimization modeling. Step-Opt-Instruct employs iterative problem generation to systematically increase problem complexity and stepwise validation to rigorously verify data, preventing error propagation and ensuring the quality of the generated dataset. Leveraging this framework, we fine-tune open-source LLMs, including LLaMA-3-8B and Mistral-7B, to develop Step-Opt, a model that achieves state-of-the-art performance on benchmarks such as NL4OPT, MAMO, and IndustryOR. Extensive experiments demonstrate the superior performance of Step-Opt, especially in addressing complex OR tasks, with a notable 17.01% improvement in micro-average accuracy on difficult problems. These findings highlight the effectiveness of combining structured validation with gradual problem refinement to advance the automation of decision-making processes using LLMs. The code and dataset are available at https://github.com/samwu-learn/Step.
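The iterative-generation-with-stepwise-validation loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch only, assuming the framework's overall shape; the function names (`make_harder`, `validate_step`) and the toy complexity rule are hypothetical stand-ins for the LLM-driven rewriting and verification steps the paper actually uses.

```python
# Hypothetical sketch of the Step-Opt-Instruct loop: starting from a seed
# problem, iteratively increase its complexity and validate each step so
# that an invalid intermediate problem never propagates into the dataset.
# All names below are illustrative assumptions, not the authors' API.

def make_harder(problem: str) -> str:
    # Placeholder: the real framework would have an LLM rewrite the
    # problem to add variables/constraints; here we append a marker.
    return problem + " +constraint"

def validate_step(problem: str) -> bool:
    # Placeholder stepwise check: the real framework verifies that the
    # generated problem and its model/solution remain consistent.
    return problem.count("+constraint") <= 3

def step_opt_instruct(seed: str, max_rounds: int = 5) -> list[str]:
    """Return a chain of progressively harder, validated problems."""
    dataset = [seed]
    current = seed
    for _ in range(max_rounds):
        candidate = make_harder(current)
        if not validate_step(candidate):
            break  # reject the invalid step, stopping error propagation
        dataset.append(candidate)
        current = candidate
    return dataset

print(step_opt_instruct("maximize profit subject to capacity"))
```

Each accepted problem becomes the seed for the next, harder round, so the resulting dataset spans a controlled range of difficulty levels.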

Yang Wu, Yifan Zhang, Yurong Wu, Yuran Wang, Junkai Zhang, Jian Cheng

Computing Technology, Computer Technology

Yang Wu, Yifan Zhang, Yurong Wu, Yuran Wang, Junkai Zhang, Jian Cheng. Step-Opt: Boosting Optimization Modeling in LLMs through Iterative Data Synthesis and Structured Validation [EB/OL]. (2025-06-21) [2025-07-16]. https://arxiv.org/abs/2506.17637.
