National Preprint Platform

In situ fine-tuning of in silico trained Optical Neural Networks

Source: arXiv
Abstract

Optical Neural Networks (ONNs) promise significant advantages over traditional electronic neural networks, including ultrafast computation, high bandwidth, and low energy consumption, by leveraging the intrinsic capabilities of photonics. However, training ONNs poses unique challenges, notably the reliance on simplified in silico models whose trained parameters must subsequently be mapped to physical hardware. This process often introduces inaccuracies due to discrepancies between the idealized digital model and the physical ONN implementation, particularly stemming from noise and fabrication imperfections. In this paper, we analyze how noise misspecification during in silico training impacts ONN performance, and we introduce Gradient-Informed Fine-Tuning (GIFT), a lightweight algorithm designed to mitigate this performance degradation. GIFT uses gradient information derived from the noise structure of the ONN to adapt pretrained parameters directly in situ, without requiring expensive retraining or complex experimental setups. GIFT comes with formal conditions under which it improves ONN performance. We also demonstrate the effectiveness of GIFT via simulation on a five-layer feed-forward ONN trained on the MNIST digit classification task. GIFT achieves up to $28\%$ relative accuracy improvement compared to the baseline performance under noise misspecification, without resorting to costly retraining. Overall, GIFT provides a practical solution for bridging the gap between simplified digital models and real-world ONN implementations.
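The noise-misspecification setting the abstract describes can be illustrated with a toy simulation. The sketch below is not the paper's GIFT algorithm: it stands in a single noisy logistic layer for an ONN, models hardware imperfection as additive Gaussian noise on the weights, and uses plain SGD through the noisy forward pass as a generic stand-in for in situ gradient-based fine-tuning. All modeling choices (noise model, layer, learning rate) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: binary classification with a linear ground truth
# (hypothetical setup, not the paper's MNIST experiment).
n_feat, n_samples = 8, 200
X = rng.normal(size=(n_samples, n_feat))
true_w = rng.normal(size=n_feat)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_forward(w, X, sigma, rng):
    # Hardware imperfection modeled as additive Gaussian noise
    # on the weights at inference time (an assumption).
    w_noisy = w + rng.normal(scale=sigma, size=w.shape)
    return sigmoid(X @ w_noisy)

def train(w, X, y, sigma, rng, lr=0.5, steps=300):
    # SGD through the noisy forward pass: gradients are evaluated
    # at the perturbed weights, mimicking gradient information
    # gathered under the deployed noise level.
    for _ in range(steps):
        p = noisy_forward(w, X, sigma, rng)
        grad = X.T @ (p - y) / len(y)
        w = w - lr * grad
    return w

def accuracy(w, X, y, sigma, rng, reps=50):
    # Accuracy averaged over repeated noise draws.
    accs = [((noisy_forward(w, X, sigma, rng) > 0.5) == y).mean()
            for _ in range(reps)]
    return float(np.mean(accs))

# In silico pretraining assumes low noise; the deployed hardware
# turns out to be much noisier (noise misspecification).
sigma_assumed, sigma_actual = 0.01, 0.5
w_insilico = train(np.zeros(n_feat), X, y, sigma_assumed, rng)
acc_before = accuracy(w_insilico, X, y, sigma_actual, rng)

# Lightweight in situ fine-tuning under the actual noise level.
w_tuned = train(w_insilico, X, y, sigma_actual, rng, steps=100)
acc_after = accuracy(w_tuned, X, y, sigma_actual, rng)
print(f"accuracy before fine-tuning: {acc_before:.3f}, after: {acc_after:.3f}")
```

In this toy model, fine-tuning under the true noise tends to grow the weight magnitudes, improving the effective signal-to-noise ratio; the paper's GIFT instead exploits the known noise structure analytically, with formal improvement guarantees.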

Gianluca Kosmella, Ripalta Stabile, Jaron Sanders

Subjects: Optoelectronics; Computing and Computer Technology

Gianluca Kosmella, Ripalta Stabile, Jaron Sanders. In situ fine-tuning of in silico trained Optical Neural Networks [EB/OL]. (2025-06-27) [2025-07-21]. https://arxiv.org/abs/2506.22122.
