Dueling Deep Reinforcement Learning for Financial Time Series
Reinforcement learning (RL) has emerged as a powerful paradigm for solving decision-making problems in dynamic environments. In this research, we explore the application of Double DQN (DDQN) and Dueling Network Architectures to financial trading tasks using historical S&P 500 index data. Our focus is on training agents capable of optimizing trading strategies while accounting for practical constraints such as transaction costs. The study evaluates model performance across scenarios with and without commissions, highlighting the impact of cost-sensitive environments on reward dynamics. Despite computational limitations and the inherent complexity of financial time series data, the agent successfully learned meaningful trading policies. The findings confirm that RL agents, even when trained on limited datasets, can outperform random strategies by leveraging advanced architectures such as DDQN and Dueling Networks. However, significant challenges persist, particularly sub-optimal policies arising from the complexity of the data source.
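To illustrate the dueling architecture referenced above, the following is a minimal sketch of a dueling Q-network head in PyTorch. It assumes a flat state vector of price features and a small discrete action set (e.g. hold / buy / sell); the layer sizes and state encoding are illustrative assumptions, not the configuration used in the paper.

# Minimal dueling Q-network sketch (illustrative only).
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor over the state (e.g. a window of past returns).
        self.features = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Separate streams: scalar state value V(s) and per-action advantage A(s, a).
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.features(state)
        v = self.value(h)                      # shape (batch, 1)
        a = self.advantage(h)                  # shape (batch, n_actions)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
        return v + a - a.mean(dim=1, keepdim=True)

if __name__ == "__main__":
    net = DuelingQNetwork(state_dim=30, n_actions=3)
    q = net(torch.randn(4, 30))                # batch of 4 hypothetical states
    print(q.shape)                             # torch.Size([4, 3])

In a Double DQN setup, two such networks (online and target) would be maintained, with the online network selecting the greedy action and the target network evaluating it when forming the bootstrap target.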
Bruno Giorgio
Subject areas: finance; financial computing technology; computer technology
Bruno Giorgio. Dueling Deep Reinforcement Learning for Financial Time Series [EB/OL]. (2025-04-15) [2025-05-08]. https://arxiv.org/abs/2504.11601.