
Joint Quantization and Pruning Neural Networks Approach: A Case Study on FSO Receivers


Source: arXiv

Abstract

Towards fast, hardware-efficient, and low-complexity receivers, we propose a compression-aware learning approach and examine it on free-space optical (FSO) receivers for turbulence mitigation. The learning approach jointly quantizes, prunes, and trains a convolutional neural network (CNN). In addition, we propose constraining the CNN weights to power-of-two values, so that the multiplication operations in every layer can be replaced with bit-shifting operations, which have significantly lower computational cost. The compression idea in the proposed approach is that the loss function is updated and both the quantization levels and the pruning limits are optimized in every training epoch. The compressed CNN is examined at two levels of compression (1-bit and 2-bit) over different FSO systems. The numerical results show that, compared to full-precision CNNs, the compression approach incurs a negligible performance loss with 1-bit quantization and no performance loss with 2-bit quantization. In general, whether the DL model is compressed with 1-bit or 2-bit quantization, the proposed IM/DD FSO receivers achieve better bit-error-rate (BER) performance, without the need for channel state information (CSI), than maximum-likelihood (ML) receivers that utilize imperfect CSI.
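As a rough illustration of the power-of-two weight constraint described above, the sketch below maps each nonzero weight to its nearest signed power of two, so a multiply by a weight reduces to a bit shift. The choice of allowed exponents (here the negative range {-2^n_bits, ..., -1}) and the handling of pruned (zero) weights are assumptions for illustration; in the paper the quantization levels are optimized jointly during training.

```python
import numpy as np

def power_of_two_quantize(weights, n_bits=2):
    """Project weights onto signed power-of-two values (illustrative sketch).

    Each nonzero weight w becomes sign(w) * 2^k with k = round(log2(|w|)),
    clipped to an assumed set of 2^n_bits negative exponents. Zero weights
    (e.g. pruned connections) stay zero.
    """
    sign = np.sign(weights)
    magnitude = np.abs(weights)
    nonzero = magnitude > 0  # avoid log2(0) for pruned weights

    exponents = np.zeros_like(weights)
    exponents[nonzero] = np.round(np.log2(magnitude[nonzero]))
    # Assumed level set: exponents restricted to {-2^n_bits, ..., -1}.
    exponents = np.clip(exponents, -(2 ** n_bits), -1)

    return np.where(nonzero, sign * 2.0 ** exponents, 0.0)

w = np.array([0.6, -0.3, 0.0, 0.09])
print(power_of_two_quantize(w, n_bits=2))  # -> [0.5, -0.25, 0.0, 0.125]
```

With power-of-two weights, a fixed-point implementation replaces each multiply-accumulate with a shift-accumulate, which is the source of the hardware savings the abstract refers to.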

Ming Jian, Mohanad Obeed

Optoelectronics; Communications; Wireless Communications

Ming Jian, Mohanad Obeed. Joint Quantization and Pruning Neural Networks Approach: A Case Study on FSO Receivers [EB/OL]. (2025-06-25) [2025-07-25]. https://arxiv.org/abs/2506.20084.
