
Relative Entropy Regularized Reinforcement Learning for Efficient Encrypted Policy Synthesis

Source: arXiv

Abstract

We propose an efficient encrypted policy synthesis method for privacy-preserving model-based reinforcement learning. We first show that the relative-entropy-regularized reinforcement learning (RERL) framework yields a computationally convenient linear, ``min-free'' structure for value iteration, enabling direct and efficient integration of fully homomorphic encryption (FHE) with bootstrapping into policy synthesis. We then analyze convergence and error bounds, characterizing how encrypted policy synthesis propagates encryption-induced errors such as quantization and bootstrapping errors. The theoretical analysis is validated by numerical simulations, and the results demonstrate the effectiveness of the RERL framework in integrating FHE for encrypted policy synthesis.
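
As a rough illustration of the ``min-free'' structure mentioned above, the following plaintext sketch implements the linearized value iteration associated with relative-entropy-regularized (linearly solvable) MDPs: the recursion acts on the desirability function z = exp(-V) using only matrix-vector products and elementwise multiplications, with no min or argmin over actions, which is the property that makes evaluation under FHE with bootstrapping tractable. The transition matrix, cost vector, and horizon below are hypothetical placeholders, and all encryption details (encoding, rescaling, bootstrapping) are omitted; this is a sketch of the general technique, not the paper's exact algorithm.

import numpy as np

def min_free_value_iteration(P, cost, horizon):
    """Finite-horizon recursion z_t = exp(-cost) * (P @ z_{t+1}) on the
    desirability function z = exp(-V); purely multiply-and-add, no min."""
    z = np.exp(-cost)                    # terminal desirability
    for _ in range(horizon):
        z = np.exp(-cost) * (P @ z)      # linear update, FHE-friendly
    return z

def optimal_policy(P, z):
    """Optimal controlled kernel: u*(x'|x) proportional to P(x'|x) z(x')."""
    u = P * z[None, :]
    return u / u.sum(axis=1, keepdims=True)

# Toy 3-state chain with illustrative passive dynamics and state costs.
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
cost = np.array([1.0, 0.5, 0.0])
z = min_free_value_iteration(P, cost, horizon=50)
print("value V = -log z:", -np.log(z))
print("optimal policy:\n", optimal_policy(P, z))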

Jihoon Suh, Yeongjun Jang, Kaoru Teranishi, Takashi Tanaka

DOI: 10.1109/LCSYS.2025.3578573

Subject: Computing Technology; Computer Technology

Jihoon Suh, Yeongjun Jang, Kaoru Teranishi, Takashi Tanaka. Relative Entropy Regularized Reinforcement Learning for Efficient Encrypted Policy Synthesis [EB/OL]. (2025-06-14) [2025-06-28]. https://arxiv.org/abs/2506.12358.
