TY - JOUR
T1 - Multisample Online Learning for Probabilistic Spiking Neural Networks
AU - Jang, Hyeryung
AU - Simeone, Osvaldo
N1 - Funding Information:
This work was supported in part by the European Research Council (ERC) through the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement 725731, in part by Intel Labs through the Intel's Neuromorphic Research Community (INRC) Programme, and in part by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (MSIT) under Grant 2021R1F1A1063288.
Publisher Copyright:
© 2012 IEEE.
PY - 2022/5/1
Y1 - 2022/5/1
N2 - Spiking neural networks (SNNs) capture some of the efficiency of biological brains for inference and learning via the dynamic, online, and event-driven processing of binary time series. Most existing learning algorithms for SNNs are based on deterministic neuronal models, such as leaky integrate-and-fire, and rely on heuristic approximations of backpropagation through time that enforce constraints such as locality. In contrast, probabilistic SNN models can be trained directly via principled online and local update rules that have proven to be particularly effective for resource-constrained systems. This article investigates another advantage of probabilistic SNNs, namely, their capacity to generate independent outputs when queried over the same input. It is shown that the multiple generated output samples can be used during inference to robustify decisions and to quantify uncertainty--a feature that deterministic SNN models cannot provide. Furthermore, they can be leveraged for training in order to obtain more accurate statistical estimates of the log-loss training criterion and its gradient. Specifically, this article introduces an online learning rule based on generalized expectation-maximization (GEM) that follows a three-factor form with global learning signals and is referred to as GEM-SNN. Experimental results on structured output memorization and classification on a standard neuromorphic dataset demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration when increasing the number of samples used for inference and training.
AB - Spiking neural networks (SNNs) capture some of the efficiency of biological brains for inference and learning via the dynamic, online, and event-driven processing of binary time series. Most existing learning algorithms for SNNs are based on deterministic neuronal models, such as leaky integrate-and-fire, and rely on heuristic approximations of backpropagation through time that enforce constraints such as locality. In contrast, probabilistic SNN models can be trained directly via principled online and local update rules that have proven to be particularly effective for resource-constrained systems. This article investigates another advantage of probabilistic SNNs, namely, their capacity to generate independent outputs when queried over the same input. It is shown that the multiple generated output samples can be used during inference to robustify decisions and to quantify uncertainty--a feature that deterministic SNN models cannot provide. Furthermore, they can be leveraged for training in order to obtain more accurate statistical estimates of the log-loss training criterion and its gradient. Specifically, this article introduces an online learning rule based on generalized expectation-maximization (GEM) that follows a three-factor form with global learning signals and is referred to as GEM-SNN. Experimental results on structured output memorization and classification on a standard neuromorphic dataset demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration when increasing the number of samples used for inference and training.
KW - Biological neural networks
KW - Membrane potentials
KW - Neuromorphic computing
KW - Neurons
KW - Probabilistic logic
KW - probabilistic models
KW - spiking neural networks (SNNs)
KW - Task analysis
KW - Training
KW - Uncertainty
KW - variational learning
UR - http://www.scopus.com/inward/record.url?scp=85124105092&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2022.3144296
DO - 10.1109/TNNLS.2022.3144296
M3 - Article
AN - SCOPUS:85124105092
SN - 2162-237X
VL - 33
SP - 2034
EP - 2044
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 5
ER -