TY - JOUR
T1 - Blessing of dimensionality
T2 - Mathematical foundations of the statistical physics of data
AU - Gorban, A. N.
AU - Tyukin, I. Y.
N1 - Funding Information:
Data accessibility. This article has no additional data. Authors’ contributions. Both authors made substantial contributions to the conception, proof of the theorems, analysis of applications, drafting the article, revising it critically and the final approval of the version to be published. Competing interests. The authors declare that they have no competing interests. Funding. This work was supported by Innovate UK grant nos KTP009890 and KTP010522. I.Y.T. was supported by the Russian Ministry of Education and Science, projects 8.2080.2017/4.6 (assessment and computational support for knowledge transfer algorithms between AI systems) and 2.6553.2017/BCH Basic Part.
Publisher Copyright:
© 2018 The Author(s). Published by the Royal Society. All rights reserved.
PY - 2018/4/28
Y1 - 2018/4/28
N2 - The concentration of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are also all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Nondestructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction.
AB - The concentration of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are also all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Nondestructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction.
KW - Ensemble equivalence
KW - Extreme points
KW - Fisher's discriminant
KW - Linear separability
KW - Measure concentration
UR - http://www.scopus.com/inward/record.url?scp=85045541538&partnerID=8YFLogxK
U2 - 10.1098/rsta.2017.0237
DO - 10.1098/rsta.2017.0237
M3 - Review article
C2 - 29555807
AN - SCOPUS:85045541538
SN - 1364-503X
VL - 376
JO - Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
JF - Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
IS - 2118
M1 - 0237
ER -