
RLBenchNet: The Right Network for the Right Reinforcement Learning Task

Source: arXiv
Abstract

Reinforcement learning (RL) has seen significant advances through the application of various neural network architectures. In this study, we systematically investigate the performance of several neural networks in RL tasks, including Long Short-Term Memory (LSTM), Multi-Layer Perceptron (MLP), Mamba/Mamba-2, Transformer-XL, Gated Transformer-XL, and Gated Recurrent Unit (GRU). Through comprehensive evaluation across continuous control, discrete decision-making, and memory-based environments, we identify architecture-specific strengths and limitations. Our results reveal that: (1) MLPs excel in fully observable continuous control tasks, providing an optimal balance of performance and efficiency; (2) recurrent architectures such as LSTM and GRU offer robust performance in partially observable environments with moderate memory requirements; (3) Mamba models achieve 4.5x higher throughput than LSTM and 3.9x higher than GRU while maintaining comparable performance; and (4) only Transformer-XL, Gated Transformer-XL, and Mamba-2 successfully solve the most challenging memory-intensive tasks, with Mamba-2 requiring 8x less memory than Transformer-XL. These findings enable researchers and practitioners to make more informed architecture choices based on specific task characteristics and computational constraints. Code is available at: https://github.com/SafeRL-Lab/RLBenchNet
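The abstract's four findings amount to a coarse decision rule for picking a backbone. The following minimal Python sketch makes that rule explicit; the function name suggest_architecture and the task descriptors (fully_observable, memory_horizon) are hypothetical illustrations distilled from the abstract, not part of the RLBenchNet codebase.

# Illustrative sketch only: a rule-of-thumb architecture chooser distilled
# from the paper's abstract. Names and labels here are hypothetical.

def suggest_architecture(fully_observable: bool,
                         memory_horizon: str = "none") -> str:
    """Map coarse task traits to a backbone suggested by the study.

    memory_horizon: "none", "moderate", or "long" (assumed labels).
    """
    if fully_observable and memory_horizon == "none":
        # Finding (1): MLPs balance performance and efficiency in fully
        # observable continuous control.
        return "MLP"
    if memory_horizon == "moderate":
        # Findings (2)-(3): LSTM/GRU are robust here; Mamba matches their
        # performance at roughly 4-5x the throughput.
        return "Mamba (or LSTM/GRU)"
    # Finding (4): only Transformer-XL variants and Mamba-2 solved the
    # hardest memory tasks; Mamba-2 used ~8x less memory than Transformer-XL.
    return "Mamba-2 (or Transformer-XL / Gated Transformer-XL)"

if __name__ == "__main__":
    print(suggest_architecture(fully_observable=True))   # -> MLP
    print(suggest_architecture(False, "long"))           # -> Mamba-2 (...)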

Ivan Smirnov, Shangding Gu

Subject: computing technology, computer technology

Ivan Smirnov, Shangding Gu. RLBenchNet: The Right Network for the Right Reinforcement Learning Task [EB/OL]. (2025-05-20) [2025-07-02]. https://arxiv.org/abs/2505.15040
