Graph-Reward-SQL: Execution-Free Reinforcement Learning for Text-to-SQL via Graph Matching and Stepwise Reward
Reinforcement learning (RL) has been widely adopted to enhance the performance of large language models (LLMs) on Text-to-SQL tasks. However, existing methods often rely on execution-based or LLM-based Bradley-Terry reward models. The former suffers from high execution latency caused by repeated database calls, whereas the latter imposes substantial GPU memory overhead, both of which significantly hinder the efficiency and scalability of RL pipelines. To this end, we propose a novel Text-to-SQL RL fine-tuning framework named Graph-Reward-SQL, which employs the GMNScore outcome reward model. We leverage SQL graph representations to provide accurate reward signals while significantly reducing inference time and GPU memory usage. Building on this foundation, we further introduce StepRTM, a stepwise reward model that provides intermediate supervision over Common Table Expression (CTE) subqueries. This encourages both functional correctness and structural clarity of SQL. Extensive comparative and ablation experiments on standard benchmarks, including Spider and BIRD, demonstrate that our method consistently outperforms existing reward models.
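The abstract describes an execution-free reward computed over SQL graph representations. The paper's GMNScore uses a learned Graph Matching Network; as a hedged illustration of the general idea only, the sketch below substitutes a hand-built token graph and a simple Jaccard overlap to produce a cheap structural reward in [0, 1] without any database execution. All function names here are hypothetical and not from the paper.

```python
# Illustrative sketch only: Graph-Reward-SQL's GMNScore is a learned
# Graph Matching Network over proper SQL graph representations. Here we
# fake the graph with tokens (nodes) and adjacent-token pairs (edges)
# and score similarity by Jaccard overlap, just to show the shape of an
# execution-free, graph-based outcome reward.

def sql_to_graph(sql: str):
    """Rough stand-in parser: uppercased tokens become nodes and
    consecutive-token pairs become edges. A real system would build the
    graph from a parsed SQL AST instead."""
    tokens = sql.upper().replace(",", " ").split()
    nodes = set(tokens)
    edges = set(zip(tokens, tokens[1:]))
    return nodes, edges

def graph_reward(pred_sql: str, gold_sql: str) -> float:
    """Average of node-set and edge-set Jaccard similarity: a cheap
    proxy reward in [0, 1] requiring no database calls."""
    pn, pe = sql_to_graph(pred_sql)
    gn, ge = sql_to_graph(gold_sql)
    node_sim = len(pn & gn) / len(pn | gn) if pn | gn else 1.0
    edge_sim = len(pe & ge) / len(pe | ge) if pe | ge else 1.0
    return 0.5 * (node_sim + edge_sim)

# Identical queries yield the maximum reward of 1.0; structurally
# divergent queries score lower.
reward = graph_reward(
    "SELECT name FROM users WHERE age > 30",
    "SELECT name FROM users WHERE age > 30",
)
```

Because the reward is computed purely from query structure, it avoids both the execution latency of database calls and the GPU memory cost of an LLM-based reward model, which is the efficiency argument the abstract makes.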
Han Weng, Cui Longjie, Yang Sun, Xing Chen, Puzhen Wu, Yi Zhan, Boyi Liu, Yuanfeng Song, Dun Zeng, Yingxiang Yang, Qianru Zhang, Xiaoming Yin, Dong Huang
Computing Technology; Computer Technology
Han Weng, Cui Longjie, Yang Sun, Xing Chen, Puzhen Wu, Yi Zhan, Boyi Liu, Yuanfeng Song, Dun Zeng, Yingxiang Yang, Qianru Zhang, Xiaoming Yin, Dong Huang. Graph-Reward-SQL: Execution-Free Reinforcement Learning for Text-to-SQL via Graph Matching and Stepwise Reward [EB/OL]. (2025-06-27) [2025-08-02]. https://arxiv.org/abs/2505.12380.