Marvels and pitfalls of the Langevin algorithm in noisy high-dimensional inference

Chiara Cammarota, Stefano Sarao, Giulio Biroli, Florent Krzakala, Lenka Zdeborova, Pierfrancesco Urbani

Research output: Contribution to journal › Article › peer-review


Abstract

Gradient-descent-based algorithms and their stochastic versions have widespread applications in machine learning and statistical inference. In this work, we carry out an analytic study of the performance of the algorithm most commonly considered in physics, the Langevin algorithm, in the context of noisy high-dimensional inference. We employ the Langevin algorithm to sample the posterior probability measure for the spiked mixed matrix-tensor model. The typical behavior of this algorithm is described by a system of integrodifferential equations that we call the Langevin state evolution, whose solution we compare with that of the state evolution of approximate message passing (AMP). Our results show that, remarkably, the algorithmic threshold of the Langevin algorithm is suboptimal with respect to the one given by AMP. This phenomenon is due to the residual glassiness present in that region of parameter space. We also present a simple heuristic expression for the transition line, which appears to be in agreement with the numerical results.
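The abstract's central object, sampling a posterior with the Langevin algorithm, can be illustrated with a minimal sketch. The snippet below is not the paper's mixed matrix-tensor setup or its Langevin state evolution; it is a toy discretized Langevin dynamics on a purely matrix (spiked Wigner) channel, with all names and parameter values (`N`, `Delta`, `dt`, `steps`) chosen for illustration only.

```python
import numpy as np

# Toy sketch: discretized Langevin sampling for a spiked-matrix posterior.
# Observations: Y = x* x*^T / sqrt(N) + Gaussian noise of strength Delta.
# Update rule: x <- x + dt * grad log P(Y | x) + sqrt(2 dt) * xi,
# with x kept on the sphere of radius sqrt(N). Illustrative parameters only.

rng = np.random.default_rng(0)
N, Delta, dt, steps = 200, 0.5, 0.01, 2000

x_star = rng.standard_normal(N)
x_star *= np.sqrt(N) / np.linalg.norm(x_star)   # planted signal on the sphere
Y = np.outer(x_star, x_star) / np.sqrt(N) \
    + np.sqrt(Delta) * rng.standard_normal((N, N))
Y = (Y + Y.T) / 2                               # symmetrize the observations

x = rng.standard_normal(N)
x *= np.sqrt(N) / np.linalg.norm(x)             # random initialization

for _ in range(steps):
    grad = Y @ x / (Delta * np.sqrt(N))         # gradient of the log-likelihood
    x = x + dt * grad + np.sqrt(2 * dt) * rng.standard_normal(N)
    x *= np.sqrt(N) / np.linalg.norm(x)         # project back onto the sphere

overlap = abs(x @ x_star) / N                   # correlation with the signal
print(f"overlap with planted signal: {overlap:.3f}")
```

The overlap between the Langevin iterate and the planted signal is the same order parameter that both the Langevin state evolution and the AMP state evolution track in the paper, which is what makes their algorithmic thresholds directly comparable.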

Original language: English
Article number: 011057
Pages (from-to): 011057-1 to 011057-41
Journal: Physical Review X
Volume: 10
Issue number: 1
Early online date: 5 Mar 2020
DOIs
Publication status: Published - Mar 2020
