
Large Language Model-Based Automatic Formulation for Stochastic Optimization Models


Source: arXiv
Abstract

This paper presents the first integrated systematic study of the performance of large language models (LLMs), specifically ChatGPT, in automatically formulating and solving stochastic optimization problems from natural language descriptions. Focusing on three key categories, joint chance-constrained models, individual chance-constrained models, and two-stage stochastic linear programs (SLP-2), we design several prompts that guide ChatGPT through structured tasks using chain-of-thought and modular reasoning. We introduce a novel soft scoring metric that evaluates the structural quality and partial correctness of generated models, addressing the limitations of canonical and execution-based accuracy. Across a diverse set of stochastic problems, GPT-4-Turbo outperforms other models in partial score, variable matching, and objective accuracy, with cot_s_instructions and agentic emerging as the most effective prompting strategies. Our findings reveal that, with well-engineered prompts and multi-agent collaboration, LLMs can effectively facilitate stochastic formulations, paving the way for intelligent, language-driven modeling pipelines in stochastic optimization.
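For readers unfamiliar with the three model classes named above, the following sketch gives their textbook forms (standard notation; these formulas are illustrative and do not appear in the abstract itself). Individual chance constraints bound the violation probability of each constraint separately, joint chance constraints bound the probability that all constraints hold simultaneously, and SLP-2 minimizes first-stage cost plus the expected cost of an optimal second-stage recourse:

```latex
% Individual chance constraints (one reliability level \alpha_i per row):
\mathbb{P}\bigl( a_i(\xi)^\top x \le b_i(\xi) \bigr) \ge 1 - \alpha_i,
\qquad i = 1, \dots, m.

% Joint chance constraint (one reliability level for the whole system):
\mathbb{P}\bigl( A(\xi)\, x \le b(\xi) \bigr) \ge 1 - \alpha.

% Two-stage stochastic linear program (SLP-2) with recourse:
\min_{x \ge 0} \; c^\top x + \mathbb{E}_{\xi}\bigl[ Q(x, \xi) \bigr],
\quad \text{where } Q(x, \xi) = \min_{y \ge 0} \bigl\{ q(\xi)^\top y :
W y \ge h(\xi) - T(\xi)\, x \bigr\}.
```

Here $\xi$ denotes the random data, $x$ the first-stage (here-and-now) decision, and $y$ the second-stage (recourse) decision taken after $\xi$ is observed.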

Amirreza Talebi

Subject areas: mathematical and computational techniques; computer technology

Amirreza Talebi. Large Language Model-Based Automatic Formulation for Stochastic Optimization Models [EB/OL]. (2025-08-24) [2025-09-05]. https://arxiv.org/abs/2508.17200.
