
Strategies for Using Proximal Policy Optimization in Mobile Puzzle Games

Source: arXiv

Abstract

While traditionally a labour-intensive task, the testing of game content is progressively becoming more automated. Among the many directions in which this automation is taking shape, automatic play-testing is one of the most promising, thanks also to the advancement of many supervised and reinforcement learning (RL) algorithms. However, these types of algorithms, while extremely powerful, often suffer in production environments due to issues with reliability and transparency in their training and usage. In this research work we investigate and evaluate strategies for applying the popular RL method Proximal Policy Optimization (PPO) to a casual mobile puzzle game, with a specific focus on improving its reliability in training and its generalization during game playing. We have implemented and tested a number of different strategies against a real-world mobile puzzle game (Lily's Garden from Tactile Games). We isolated the conditions that lead to a failure in either training or generalization during testing, and we identified a few strategies that ensure a more stable behaviour of the algorithm in this game genre.
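For readers unfamiliar with the method named in the abstract: PPO stabilises policy-gradient training by clipping the probability ratio between the new and old policies, which limits how far a single update can move the policy. The sketch below shows this clipped surrogate loss in PyTorch; the function name, the clip value of 0.2, and the dummy data are illustrative assumptions, not the authors' implementation.

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective (returned as a loss to minimise).

    ratio = pi_new(a|s) / pi_old(a|s); clipping the ratio to
    [1 - eps, 1 + eps] is the mechanism PPO uses to keep each
    policy update close to the data-collecting policy.
    """
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Elementwise minimum of the two surrogates, negated because
    # optimisers minimise while PPO maximises the surrogate.
    return -torch.min(unclipped, clipped).mean()

if __name__ == "__main__":
    # Dummy per-action log-probabilities and advantages for illustration.
    new_lp = torch.tensor([-1.0, -0.5, -2.0])
    old_lp = torch.tensor([-1.1, -0.4, -2.2])
    adv = torch.tensor([0.5, -0.3, 1.2])
    print(ppo_clip_loss(new_lp, old_lp, adv))
```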

Paolo Burelli, Jeppe Theiss Kristensen

DOI: 10.1145/3402942.3402944

Subject areas: Automation technology, automation equipment; Computing technology, computer technology

Paolo Burelli, Jeppe Theiss Kristensen. Strategies for Using Proximal Policy Optimization in Mobile Puzzle Games [EB/OL]. (2020-07-03) [2025-08-02]. https://arxiv.org/abs/2007.01542.
