National Preprint Platform

Set-Based Training for Neural Network Verification

Source: arXiv
Abstract

Neural networks are vulnerable to adversarial attacks, i.e., small input perturbations can significantly affect the outputs of a neural network. Therefore, to ensure the safety of neural networks in safety-critical environments, the robustness of a neural network must be formally verified against input perturbations, e.g., from noisy sensors. To improve the robustness of neural networks and thus simplify their formal verification, we present a novel set-based training procedure in which we compute the set of possible outputs given the set of possible inputs and compute, for the first time, a gradient set, i.e., each possible output has a different gradient. Therefore, we can directly reduce the size of the output enclosure by choosing gradients toward its center. Small output enclosures increase the robustness of a neural network and, at the same time, simplify its formal verification. The latter benefit arises because larger propagated sets increase the conservatism of most verification methods. Our extensive evaluation demonstrates that set-based training produces robust neural networks with competitive performance, which can be verified using fast (polynomial-time) verification algorithms due to the reduced output set.
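The core idea of propagating an input set through a network and measuring the size of the resulting output enclosure can be illustrated with a minimal sketch. The paper's actual method uses more expressive set representations and a set-valued gradient; the code below is only a hedged illustration using axis-aligned interval bounds (a simple box enclosure), with arbitrary toy weights, showing how an enclosure-size term could be computed for a training loss:

```python
import numpy as np

def interval_affine(lower, upper, W, b):
    """Propagate a box [lower, upper] through the affine layer Wx + b.

    Standard interval arithmetic: split W into positive and negative
    parts so each output bound is tight for axis-aligned boxes.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def interval_relu(lower, upper):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy 2-layer network (illustrative weights, not from the paper).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), np.zeros(4)
W2, b2 = rng.standard_normal((3, 4)), np.zeros(3)

# Input set: a box of radius eps around a nominal input x,
# modeling e.g. bounded sensor noise.
x, eps = np.array([0.5, -0.2]), 0.1
l, u = x - eps, x + eps

# Set-based forward pass.
l, u = interval_relu(*interval_affine(l, u, W1, b1))
l, u = interval_affine(l, u, W2, b2)

# A set-based training loss could combine the task loss at the
# center with a penalty on the enclosure size, pulling outputs
# toward the center and shrinking the output set.
enclosure_size = float(np.sum(u - l))
print("output enclosure size:", enclosure_size)
```

Since interval arithmetic is sound, the output of the nominal input (and of every perturbed input in the box) is guaranteed to lie inside the computed enclosure; training with a penalty on `enclosure_size` therefore directly targets the quantity that makes verification conservative.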

Lukas Koller, Tobias Ladner, Matthias Althoff

Computing Technology; Computer Technology

Lukas Koller, Tobias Ladner, Matthias Althoff. Set-Based Training for Neural Network Verification [EB/OL]. (2025-08-05) [2025-08-16]. https://arxiv.org/abs/2401.14961.
