Tool-integrated Reinforcement Learning for Repo Deep Search
Issue localization, the process of identifying code locations that need modification to resolve software issues, is a critical yet challenging task in software development. The semantic gap between natural language issue descriptions and faulty code requires complex multi-hop reasoning through code dependencies. Existing LLM-based agents attempt to address this by integrating repository retrieval tools. However, this transforms issue localization into a demanding task we call Repo Deep Search, which requires the LLM to effectively utilize various repository retrieval tools throughout a multi-step reasoning and navigation process. To tackle this challenge, we present ToolTrain, a two-stage tool-integrated training framework combining rejection-sampled supervised fine-tuning and tool-integrated reinforcement learning to enhance LLMs' ability to use retrieval tools for issue localization. Experimental results show that ToolTrain-trained models achieve state-of-the-art performance, with our 32B model even surpassing Claude-3.7 on function-level localization. The results also show that improved localization performance translates to better end-to-end issue resolution performance. This further demonstrates that training for issue localization is a viable and effective strategy for improving automated software development.
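The abstract describes ToolTrain's first stage only at a high level: rejection-sampled supervised fine-tuning, where candidate agent trajectories are generated and only those reaching the correct localization are kept as training data. A minimal sketch of that filtering step, with a deterministic stand-in for the LLM agent (the issue set, `sample_trajectory`, and the matching rule are all hypothetical, not from the paper):

```python
# Hypothetical issue set: each issue has a ground-truth faulty function.
ISSUES = [{"id": i, "target": f"func_{i}"} for i in range(5)]

def sample_trajectory(issue, attempt):
    """Deterministic stand-in for one LLM agent rollout that uses
    repository retrieval tools and ends with a localized function."""
    predicted = f"func_{(issue['id'] + attempt) % 5}"
    return {"issue": issue["id"], "trace": ["search", "read"], "predicted": predicted}

def rejection_sample(issues, k=8):
    """Keep only trajectories whose final localization matches the
    ground truth -- the rejection-sampling filter applied before SFT."""
    kept = []
    for issue in issues:
        for attempt in range(k):
            traj = sample_trajectory(issue, attempt)
            if traj["predicted"] == issue["target"]:
                kept.append(traj)
    return kept

data = rejection_sample(ISSUES)
print(len(data), "accepted trajectories for supervised fine-tuning")
```

The kept trajectories would then serve as demonstrations for supervised fine-tuning, before the tool-integrated RL stage refines the policy further.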
Zexiong Ma, Chao Peng, Qunhong Zeng, Pengfei Gao, Yanzhen Zou, Bing Xie
Computing Technology; Computer Technology
Zexiong Ma, Chao Peng, Qunhong Zeng, Pengfei Gao, Yanzhen Zou, Bing Xie. Tool-integrated Reinforcement Learning for Repo Deep Search [EB/OL]. (2025-08-06) [2025-08-16]. https://arxiv.org/abs/2508.03012.