National Preprint Platform

A New DAPO Algorithm for Stock Trading

Source: arXiv
Abstract

Recent advances in reinforcement learning, such as Dynamic Sampling Policy Optimization (DAPO), show strong performance when paired with large language models (LLMs). Motivated by this success, we ask whether similar gains can be realized in financial trading. We design a trading agent that combines an improved Group Relative Policy Optimization (GRPO) algorithm, augmented with ideas from DAPO, with LLM-based risk and sentiment signals extracted from financial news. On the NASDAQ-100 index (FNSPID dataset), our agent attains a cumulative return of 230.49 percent and an information ratio of 0.37, outperforming the CPPO-DeepSeek baseline. It also cuts training time from about 8 hours to 2.5 hours over 100 epochs while markedly reducing RAM usage. The proposed RL-LLM framework offers a scalable path toward data-efficient trading agents. Code: https://github.com/Ruijian-Zha/FinRL-DAPO-SR/
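The abstract names two algorithmic ingredients, GRPO's group-relative advantage and DAPO's dynamic sampling, without pseudocode. The sketch below illustrates what those two mechanisms typically look like in a generic NumPy setting; it is not the authors' implementation (see the linked repository for that), and the function names and reward setup are hypothetical.

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Group-relative advantage (the core of GRPO): each sampled trajectory's
    # reward is normalized against the mean and std of its own group,
    # removing the need for a learned value baseline.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def keep_informative_groups(groups):
    # DAPO-style dynamic sampling: discard groups whose rewards are
    # (near-)identical, since their group-relative advantages are ~0 and
    # contribute no gradient signal. Skipping such groups is one plausible
    # source of the reduced training time the abstract reports.
    return [g for g in groups if g.std() > 1e-6]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical setup: four groups of eight per-episode trading returns
    # sampled under the current policy, plus one degenerate zero-variance
    # group that dynamic sampling filters out.
    groups = [rng.normal(0.0, 0.02, size=8) for _ in range(4)]
    groups.append(np.zeros(8))
    for g in keep_informative_groups(groups):
        print(grpo_advantages(g).round(3))
```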

Authors: Ruijian Zha, Bojun Liu

DOI: 10.1109/IDS66066.2025.00013

Subjects: Finance; Information Industry Economics

Citation: Ruijian Zha, Bojun Liu. A New DAPO Algorithm for Stock Trading[EB/OL]. (2025-05-09)[2025-06-21]. https://arxiv.org/abs/2505.06408.
