Efficient training of neural networks with interval uncertainty

Sadeghi, Jonathan C., De Angelis, Marco and Patelli, Edoardo (2018) Efficient training of neural networks with interval uncertainty. In: 8th International Workshop on Reliable Computing (REC 2018), 2018-07-16 - 2018-07-18, Institute for Risk and Uncertainty, University of Liverpool.

Accepted Author Manuscript (PDF, 708kB): Sadeghi_etal_REC2018_Efficient_training_neural_networks_interval_uncertainty.pdf
License: Strathprints license 1.0

Abstract

In this paper we build upon past work on Interval Neural Networks and provide a robust way to train and quantify the uncertainty of Deep Neural Networks. Specifically, we propose a backpropagation algorithm for Neural Networks with constant-width interval predictions. To maintain numerical stability, we propose minimising the maximum of the batch of errors at each step. Our approach can accommodate incertitude in the training data, and therefore adversarial examples from a commonly used attack model can be trivially accounted for. We present preliminary results on a test function example. The reliability of the predictions of these networks is guaranteed by the non-convex Scenario approach to chance-constrained optimisation. A key result is that, by using minibatches of size M, the complexity of our approach scales as O(M N_iter) and does not depend upon the number of training data points, unlike other Interval Predictor Model methods.
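For illustration, the max-of-batch training step described in the abstract can be sketched as below. This is a minimal sketch assuming a PyTorch setup; the network architecture and the names (CentreNet, train, half_width) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CentreNet(nn.Module):
    """Small fully connected network for the interval centre f(x)."""
    def __init__(self, in_dim=1, hidden=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.body(x)

def train(model, loader, epochs=100, lr=1e-3):
    # loader is assumed to yield (x, y) float tensors of shape (M, in_dim)
    # and (M, 1), where M is the minibatch size.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            residuals = (y - model(x)).abs()
            # Minimise the maximum absolute error in the minibatch,
            # rather than the mean, as described in the abstract.
            loss = residuals.max()
            loss.backward()
            opt.step()
    return model

def half_width(model, x, y):
    # Constant half-width h chosen so that the interval
    # [f(x) - h, f(x) + h] covers every training point; the Scenario
    # approach then bounds the probability that a new point falls
    # outside the predicted interval.
    with torch.no_grad():
        return (y - model(x)).abs().max().item()
```

With interval-valued (incertitude-affected) training targets, the pointwise residual above would be replaced by the worst-case distance over each target interval, which is how the approach described in the abstract can account for incertitude in the data.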

ORCID iDs

Sadeghi, Jonathan C.; De Angelis, Marco (ORCID: https://orcid.org/0000-0001-8851-023X); Patelli, Edoardo (ORCID: https://orcid.org/0000-0002-5007-7247)