W. Essbai, A. Bombarda, S. Bonfanti, A. Gargantini

A Framework for Including Uncertainty in Robustness Evaluation of Bayesian Neural Network Classifiers

In DeepTest 2024 (2024)

Abstract
Neural networks (NNs) play a crucial role in safety-critical fields, where robustness assurance is required. Bayesian Neural Networks (BNNs) address data uncertainty by providing probabilistic outputs. However, the literature on BNN robustness assessment is still limited, focusing mainly on adversarial examples, which are often impractical in real-world applications. This paper introduces a fresh perspective on BNN classifier robustness, considering natural input variations while accounting for prediction uncertainty. Our approach excludes predictions labeled as "unknown", enabling practitioners to define alteration probabilities, penalize errors beyond a specified threshold, and tolerate varying error levels below it. We present a systematic approach for evaluating the robustness of BNNs, introducing new evaluation metrics that account for prediction uncertainty. We conduct a comparative study using two NNs, a standard MLP and a Bayesian MLP, on the MNIST dataset. Our results show that by leveraging estimated uncertainty, it is possible to enhance the system's robustness.
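As an illustration of the idea described in the abstract, the sketch below shows an uncertainty-aware accuracy metric: predictions whose estimated uncertainty exceeds a threshold are labeled "unknown" and excluded from the evaluation. This is a minimal, hypothetical example, not the paper's actual metric; the function name, the use of a single scalar uncertainty per sample, and the threshold `tau` are all assumptions for illustration.

```python
import numpy as np

def uncertainty_aware_accuracy(mean_probs, uncertainties, labels, tau):
    """Hypothetical metric: samples whose uncertainty exceeds tau are
    treated as 'unknown' and excluded; accuracy is computed on the rest.
    Returns (accuracy, coverage), where coverage is the fraction of
    samples the classifier actually answered."""
    known = uncertainties <= tau
    if not known.any():
        return float("nan"), 0.0  # every prediction was rejected
    preds = mean_probs[known].argmax(axis=1)
    acc = float((preds == labels[known]).mean())
    coverage = float(known.mean())
    return acc, coverage

# Toy data: 4 samples, 3 classes; uncertainties could be, e.g.,
# predictive entropy averaged over BNN weight samples.
mean_probs = np.array([[0.90, 0.05, 0.05],
                       [0.40, 0.35, 0.25],
                       [0.10, 0.80, 0.10],
                       [0.34, 0.33, 0.33]])
uncertainties = np.array([0.10, 0.90, 0.20, 0.95])
labels = np.array([0, 1, 1, 2])

acc, cov = uncertainty_aware_accuracy(mean_probs, uncertainties, labels, tau=0.5)
```

With this toy data, the two high-uncertainty samples are rejected, so accuracy is computed only on the two confident predictions.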
