
Improving Learning to Optimize Using Parameter Symmetries

Source: arXiv
Abstract

We analyze a learning-to-optimize (L2O) algorithm that exploits parameter space symmetry to enhance optimization efficiency. Prior work has shown that jointly learning symmetry transformations and local updates improves meta-optimizer performance. Supporting this, our theoretical analysis demonstrates that even without identifying the optimal group element, the method locally resembles Newton's method. We further provide an example where the algorithm provably learns the correct symmetry transformation during training. To empirically evaluate L2O with teleportation, we introduce a benchmark, analyze its success and failure cases, and show that enhancements like momentum further improve performance. Our results highlight the potential of leveraging neural network parameter space symmetry to advance meta-optimization.
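To make the core idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of parameter-space teleportation in a two-layer linear network, whose loss is invariant under the symmetry (W1, W2) -> (g W1, W2 g^{-1}) for any invertible g. In the paper's L2O setting the group element and the local update are produced by a meta-learner; here g is simply a fixed random invertible matrix, used only to show that teleportation preserves the loss while changing the gradients that a subsequent update sees. All variable names are hypothetical.

```python
# Sketch under stated assumptions: teleportation in f(X) = W2 @ W1 @ X with
# squared-error loss. A random invertible g stands in for the learned group
# element of the L2O-with-teleportation method described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
d_in, h, d_out, n = 4, 6, 3, 32

X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, n))
W1 = rng.standard_normal((h, d_in))
W2 = rng.standard_normal((d_out, h))

def loss(W1, W2):
    R = W2 @ W1 @ X - Y            # residual
    return 0.5 * np.sum(R ** 2)

def grads(W1, W2):
    R = W2 @ W1 @ X - Y
    return W2.T @ R @ X.T, R @ (W1 @ X).T   # dL/dW1, dL/dW2

# Teleport: apply an invertible g (stand-in for a learned symmetry transformation).
g = np.eye(h) + 0.5 * rng.standard_normal((h, h))
W1_t, W2_t = g @ W1, W2 @ np.linalg.inv(g)

print("loss before teleport:", loss(W1, W2))
print("loss after  teleport:", loss(W1_t, W2_t))   # identical up to round-off

# One gradient step from each point: teleportation leaves the loss unchanged
# but alters the gradients, which is the degree of freedom the meta-optimizer
# exploits to accelerate descent.
lr = 1e-3
for name, (A, B) in {"original": (W1, W2), "teleported": (W1_t, W2_t)}.items():
    g1, g2 = grads(A, B)
    print(f"{name}: grad norm = {np.sqrt(np.sum(g1**2) + np.sum(g2**2)):.3f}, "
          f"loss after step = {loss(A - lr * g1, B - lr * g2):.3f}")
```

Replacing the fixed g and the plain gradient step with meta-learned counterparts (plus enhancements such as momentum) is the direction the paper studies; this snippet only demonstrates the underlying symmetry mechanism.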

Bo Zhao, Rose Yu, Guy Zamir, Aryan Dokania

Subjects: Computing Technology, Computer Technology

Bo Zhao, Rose Yu, Guy Zamir, Aryan Dokania. Improving Learning to Optimize Using Parameter Symmetries [EB/OL]. (2025-04-21) [2025-05-07]. https://arxiv.org/abs/2504.15399.
