The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning

Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin*, Qinghua Zhou

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

4 Citations (Scopus)

Abstract

In this work, we assess the theoretical limitations of determining guaranteed stability and accuracy of neural networks in classification tasks. We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk, potentially subject to some weight regularisation. We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks in the above settings is extremely challenging, if at all possible, even when such ideal solutions exist within the given class of neural architectures.
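The abstract refers to the standard training setting of minimising empirical risk with weight regularisation. The following sketch (not taken from the paper; all data and names are hypothetical) illustrates that setting for a linear classifier trained by gradient descent on a regularised logistic loss:

```python
import numpy as np

def empirical_risk(w, X, y, lam):
    """Mean logistic loss on the sample plus an L2 weight penalty."""
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins))) + lam * np.dot(w, w)

def train(X, y, lam=0.01, lr=0.1, steps=500):
    """Minimise the regularised empirical risk by gradient descent."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        margins = y * (X @ w)
        # Gradient of the mean logistic loss plus the L2 penalty term.
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n + 2 * lam * w
        w -= lr * grad
    return w

# Toy linearly separable sample with labels in {-1, +1}.
rng = np.random.default_rng(0)
shift = np.array([[1.5, 1.5]]) * rng.choice([-1, 1], size=(200, 1))
X = rng.normal(size=(200, 2)) + shift
y = np.sign(X[:, 0] + X[:, 1])

w = train(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

The paper's point concerns what can be verified about the solutions such a procedure produces: even when an ideal stable and accurate network exists in the architecture class, certifying that the trained model attains it can be intractable.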

Original language: English
Pages: 530-541
Number of pages: 12
Publication status: E-pub ahead of print - 22 Sept 2023
Event: 32nd International Conference on Artificial Neural Networks, ICANN 2023 - Heraklion, Greece
Duration: 26 Sept 2023 - 29 Sept 2023

Conference

Conference: 32nd International Conference on Artificial Neural Networks, ICANN 2023
Country/Territory: Greece
City: Heraklion
Period: 26/09/2023 - 29/09/2023

Keywords

  • AI robustness
  • AI stability
  • AI verifiability
  • deep learning
