
Offline reinforcement learning for job-shop scheduling problems

Source: arXiv
English Abstract

Recent advances in deep learning have shown significant potential for solving combinatorial optimization problems in real time. Unlike traditional methods, deep learning can generate high-quality solutions efficiently, which is crucial for applications like routing and scheduling. However, existing approaches such as deep reinforcement learning (RL) and behavioral cloning have notable limitations: deep RL suffers from slow learning, while behavioral cloning relies solely on expert actions, which can lead to generalization issues and neglect of the optimization objective. This paper introduces a novel offline RL method designed for combinatorial optimization problems with complex constraints, where the state is represented as a heterogeneous graph and the action space is variable. Our approach encodes actions in edge attributes and balances expected rewards with the imitation of expert solutions. We demonstrate the effectiveness of this method on job-shop scheduling and flexible job-shop scheduling benchmarks, achieving superior performance compared to state-of-the-art techniques.
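
To make the training objective concrete, the following is a minimal sketch of how a single loss could balance an expected-reward term against imitation of expert solutions, for a policy that scores the feasible edges of a heterogeneous state graph as the abstract describes. This is an illustration only, not the authors' implementation: the policy interface, the names edge_action_logits and offline_rl_loss, the return-to-go weighting of the reward term, and the mixing weight are all assumptions made for exposition.

import torch
import torch.nn.functional as F

def edge_action_logits(policy, graph):
    # Hypothetical interface: the policy maps node and edge features of the
    # heterogeneous graph to one logit per feasible (operation, machine) edge,
    # so the size of the action space varies naturally with the instance.
    return policy(graph.node_features, graph.edge_features)

def offline_rl_loss(policy, batch, imitation_weight=0.5):
    # batch: iterable of (graph, expert_action, return_to_go) transitions
    # taken from an offline dataset of expert schedules (an assumption).
    total = 0.0
    for graph, expert_action, return_to_go in batch:
        logits = edge_action_logits(policy, graph)
        log_probs = F.log_softmax(logits, dim=-1)

        # Imitation term: behavioral cloning toward the expert's chosen edge.
        bc_term = F.nll_loss(log_probs.unsqueeze(0),
                             torch.tensor([expert_action]))

        # Reward term: weight the chosen action's log-probability by its
        # observed return, so higher-return expert actions are reinforced more.
        rl_term = -return_to_go * log_probs[expert_action]

        total = total + imitation_weight * bc_term \
                      + (1.0 - imitation_weight) * rl_term
    return total / len(batch)

In a sketch like this, the mixing weight controls how strictly the learned dispatcher follows the expert versus how directly it pursues the scheduling objective.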

Imanol Echeverria, Maialen Murua, Roberto Santana

Subjects: Basic theory of automation; computing technology and computer technology

Imanol Echeverria, Maialen Murua, Roberto Santana. Offline reinforcement learning for job-shop scheduling problems [EB/OL]. (2024-10-21) [2025-08-02]. https://arxiv.org/abs/2410.15714.
