
Grower-in-the-Loop Interactive Reinforcement Learning for Greenhouse Climate Control

Source: arXiv
Abstract

Climate control is crucial for greenhouse production as it directly affects crop growth and resource use. Reinforcement learning (RL) has received increasing attention in this field, but still faces challenges, including limited training efficiency and high reliance on initial learning conditions. Interactive RL, which combines human (grower) input with the RL agent's learning, offers a potential solution to overcome these challenges. However, interactive RL has not yet been applied to greenhouse climate control and may face challenges related to imperfect inputs. Therefore, this paper explores the feasibility and performance of applying interactive RL with imperfect inputs to greenhouse climate control by: (1) developing three representative interactive RL algorithms tailored for greenhouse climate control (reward shaping, policy shaping and control sharing); (2) analyzing how input characteristics often conflict, and how the trade-offs between them make the grower's inputs difficult to perfect; (3) proposing a neural network-based approach to enhance the robustness of interactive RL agents under limited input availability; (4) conducting a comprehensive evaluation of the three interactive RL algorithms with imperfect inputs in a simulated greenhouse environment. The evaluation shows that interactive RL incorporating imperfect grower inputs has the potential to improve the performance of the RL agent. RL algorithms that influence action selection, such as policy shaping and control sharing, perform better when dealing with imperfect inputs, achieving 8.4% and 6.8% improvements in profit, respectively. In contrast, reward shaping, an algorithm that manipulates the reward function, is sensitive to imperfect inputs and leads to a 9.4% decrease in profit. This highlights the importance of selecting an appropriate mechanism when incorporating imperfect inputs.
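To make the three interaction mechanisms named in the abstract more concrete, the sketch below shows one plausible way a grower's feedback could enter an RL loop. It is a minimal, hypothetical illustration assuming a discrete-action agent with Q-values; it is not the paper's implementation, and all function and parameter names (reward_shaping, policy_shaping, control_sharing, trust, takeover_prob) are invented for this example.

import numpy as np

def reward_shaping(env_reward, grower_feedback, weight=0.5):
    # Reward shaping: the grower's scalar feedback is added to the environment
    # reward, so imperfect feedback directly alters the learning signal.
    return env_reward + weight * grower_feedback

def policy_shaping(q_values, grower_preference, trust=0.7):
    # Policy shaping: the agent's softmax action distribution is blended with
    # a distribution over the actions the grower prefers.
    agent_probs = np.exp(q_values - q_values.max())
    agent_probs /= agent_probs.sum()
    blended = agent_probs * (trust * grower_preference + (1.0 - trust))
    return blended / blended.sum()

def control_sharing(agent_action, grower_action, takeover_prob=0.3, rng=None):
    # Control sharing: with some probability the grower's suggested action
    # overrides the agent's own action during training.
    rng = rng if rng is not None else np.random.default_rng()
    return grower_action if rng.random() < takeover_prob else agent_action

if __name__ == "__main__":
    q = np.array([1.2, 0.4, 0.9])      # hypothetical Q-values for 3 climate setpoints
    pref = np.array([0.0, 1.0, 0.0])   # grower prefers action 1 (e.g. more ventilation)
    print(reward_shaping(env_reward=2.0, grower_feedback=-1.0))
    print(policy_shaping(q, pref))
    print(control_sharing(agent_action=int(q.argmax()), grower_action=1))

In this toy setup, reward shaping changes what the agent optimizes, while policy shaping and control sharing only bias which action is taken, which is consistent with the abstract's observation that action-level mechanisms are less sensitive to imperfect inputs.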

Maxiu Xiao, Jianglin Lan, Jingxing Yu, Eldert van Henten, Congcong Sun

Subject areas: agricultural science and technology development; automation technology and equipment

Maxiu Xiao, Jianglin Lan, Jingxing Yu, Eldert van Henten, Congcong Sun. Grower-in-the-Loop Interactive Reinforcement Learning for Greenhouse Climate Control [EB/OL]. (2025-05-29) [2025-06-14]. https://arxiv.org/abs/2505.23355.
