Model-free reinforcement learning with noisy actions for automated experimental control in optics
Setting up and controlling optical systems is often a challenging and tedious task. The large number of degrees of freedom required to control mirrors, lenses, or the phases of light makes automatic control difficult, especially when the system cannot be adequately modeled due to noise or non-linearities. Here, we show that reinforcement learning (RL) can overcome these challenges when coupling laser light into an optical fiber, using a model-free RL approach that trains directly on the experiment without pre-training on simulations. By utilizing the sample-efficient algorithms Soft Actor-Critic (SAC), Truncated Quantile Critics (TQC), or CrossQ, our agents learn to couple with 90% efficiency. A human expert reaches the same efficiency, but the RL agents are quicker. In particular, the CrossQ agent outperforms the other agents in coupling speed while requiring only half the training time. We demonstrate that direct training on an experiment can replace extensive system modeling. Our result exemplifies RL's potential to tackle problems in optics, paving the way for more complex applications where full noise modeling is not feasible.
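To illustrate the kind of setup the abstract describes, below is a minimal sketch (not the authors' code) of training a model-free, off-policy agent on a fiber-coupling task, assuming a Gymnasium-style environment and the stable-baselines3 implementation of SAC. The hardware interface is replaced here by a hypothetical toy simulator in which coupling efficiency is a Gaussian function of two actuator positions and the applied actions are perturbed by noise; on the real experiment, `step` would instead command mirror actuators and read back a photodiode signal.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class FiberCouplingEnv(gym.Env):
    """Toy stand-in for the optical setup: actions are incremental actuator
    moves, observations are the current positions, reward is coupling efficiency."""

    def __init__(self, max_steps: int = 50, action_noise: float = 0.02):
        super().__init__()
        # Two actuator axes (e.g. one steering mirror); real setups have more.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-5.0, high=5.0, shape=(2,), dtype=np.float32)
        self.max_steps = max_steps
        self.action_noise = action_noise  # models imperfect ("noisy") actuation

    def _efficiency(self) -> float:
        # Coupling efficiency peaks when both actuators sit at the origin.
        return float(np.exp(-np.sum(self.pos ** 2)))

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = self.np_random.uniform(-3.0, 3.0, size=2).astype(np.float32)
        self.steps = 0
        return self.pos.copy(), {}

    def step(self, action):
        noise = self.np_random.normal(0.0, self.action_noise, size=2)
        self.pos = np.clip(self.pos + action + noise, -5.0, 5.0).astype(np.float32)
        self.steps += 1
        reward = self._efficiency()
        truncated = self.steps >= self.max_steps
        return self.pos.copy(), reward, False, truncated, {}


if __name__ == "__main__":
    env = FiberCouplingEnv()
    # SAC is one of the sample-efficient off-policy algorithms named in the abstract;
    # TQC and CrossQ are available in sb3-contrib with the same training interface.
    model = SAC("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=10_000)
```

Swapping `SAC` for `TQC` or `CrossQ` from `sb3_contrib` would leave the rest of this sketch unchanged, which mirrors how the abstract compares the three algorithms on the same task.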
Tobias J. Osborne, Lea Richtmann, Aaron Tranter, Dennis Wilken, Viktoria-S. Schmiesing, Jan Heine, Avishek Anand, Michèle Heurs
Subject areas: automation technology and equipment; computing and computer technology
Tobias J. Osborne, Lea Richtmann, Aaron Tranter, Dennis Wilken, Viktoria-S. Schmiesing, Jan Heine, Avishek Anand, Michèle Heurs. Model-free reinforcement learning with noisy actions for automated experimental control in optics [EB/OL]. (2025-08-17) [2025-09-07]. https://arxiv.org/abs/2405.15421