Golden Partition Zone: Rethinking Neural Network Partitioning Under Inversion Threats in Collaborative Inference
In collaborative inference, intermediate features transmitted from edge devices can be exploited by adversaries to reconstruct original inputs via model inversion attacks (MIA). While existing defenses focus on shallow-layer protection, they often incur significant utility loss. A key open question is how to partition the edge-cloud model to maximize resistance to MIA while minimizing accuracy degradation. We first overturn the common belief that simply increasing model depth resists MIA. Through theoretical analysis, we show that representational transitions in neural networks cause sharp changes in conditional entropy $H(x \mid z)$, with intra-class mean squared radius ($R_c^2$) and feature dimensionality being the critical factors. Experiments on three representative deep vision models show that partitioning at representational-transition or decision-level layers yields over four times higher mean squared error (MSE) than shallow splits, indicating significantly stronger resistance to MIA. Positive label smoothing further enhances robustness by compressing $R_c^2$ and improving generalization. We also validate the resilience of decision-level features under feature- and inversion-model enhancements, and observe that auxiliary data types influence both transition boundaries and reconstruction behavior.
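The sketch below is not from the paper's code; it only illustrates, under assumed feature and image shapes, the two quantities the abstract compares across partition points: the intra-class mean squared radius $R_c^2$ of intermediate features and the MSE between original and inverted inputs. All names here are hypothetical.

```python
import numpy as np

def intra_class_mean_squared_radius(features, labels):
    """Average squared distance of each feature vector to its class centroid (R_c^2)."""
    radii = []
    for c in np.unique(labels):
        class_feats = features[labels == c]          # (n_c, d) features of class c
        centroid = class_feats.mean(axis=0)          # (d,) class mean
        radii.append(((class_feats - centroid) ** 2).sum(axis=1).mean())
    return float(np.mean(radii))

def reconstruction_mse(original, reconstructed):
    """Mean squared error between original inputs and MIA reconstructions."""
    return float(((original - reconstructed) ** 2).mean())

# Toy usage with random stand-ins for split-point features and images.
rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 512))                  # hypothetical intermediate features
labels = rng.integers(0, 10, size=256)
print("R_c^2:", intra_class_mean_squared_radius(feats, labels))

imgs = rng.uniform(size=(8, 3, 32, 32))
recons = imgs + rng.normal(scale=0.1, size=imgs.shape)
print("Reconstruction MSE:", reconstruction_mse(imgs, recons))
```

In this framing, a higher reconstruction MSE at a given split indicates stronger resistance to inversion, while a smaller $R_c^2$ (e.g., after positive label smoothing) corresponds to more compact class-conditional features.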
Rongke Liu, Youwen Zhu
Computing Technology, Computer Technology
Rongke Liu, Youwen Zhu. Golden Partition Zone: Rethinking Neural Network Partitioning Under Inversion Threats in Collaborative Inference [EB/OL]. (2025-06-19) [2025-07-16]. https://arxiv.org/abs/2506.15412.