Neuronal message passing using Mean-field, Bethe, and Marginal approximations
Contributors
Thomas Parr, Dimitrije Markovic, Stefan J. Kiebel, Karl J. Friston
Abstract
Neuronal computations rely upon local interactions across synapses. For a neuronal network to perform inference, it must integrate information from locally computed messages that are propagated among elements of that network. We review the form of two popular (Bayesian) message passing schemes and consider their plausibility as descriptions of inference in biological networks. These are variational message passing and belief propagation, each of which is derived from a free energy functional that rests on a different approximation (mean-field and Bethe, respectively). We begin with an overview of these schemes and illustrate the form of the messages required to perform inference, using Hidden Markov Models as generative models. Throughout, we use factor graphs to show the form of the generative models and of the messages they entail. We consider how these messages might manifest neuronally and simulate the inferences they perform. While variational message passing offers a simple and neuronally plausible architecture, it falls short of the inferential performance of belief propagation. In contrast, belief propagation allows exact computation of marginal posteriors at the expense of the architectural simplicity of variational message passing. As a compromise between these two extremes, we offer a third approach, marginal message passing, which features a simple architecture while approximating the performance of belief propagation. Finally, we link these formal considerations to accounts of neurological and psychiatric syndromes in terms of aberrant message passing.
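To make the contrast between the two schemes concrete, here is a minimal sketch of both update rules for a small discrete Hidden Markov Model. All parameters (the likelihood matrix `A`, transition matrix `B`, prior, and observation sequence) are hypothetical placeholders, not the paper's simulations. On a chain-structured model, belief propagation reduces to the familiar forward-backward algorithm and yields exact marginal posteriors; the mean-field (variational) loop instead iterates local fixed-point updates over each q(s_t).

```python
import numpy as np

# Hypothetical HMM parameters, for illustration only (not the paper's simulations).
# A[o, s] = P(observation o | state s); B[s_next, s] = P(s_next | s).
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
B = np.array([[0.7, 0.3],
              [0.3, 0.7]])
prior = np.array([0.5, 0.5])   # P(s_1)
obs = [0, 0, 1, 0]             # an example observation sequence


def belief_propagation(A, B, prior, obs):
    """Sum-product (forward-backward) messages on the HMM's chain-structured
    factor graph; returns the exact marginal posteriors P(s_t | o_{1:T})."""
    T, K = len(obs), len(prior)
    fwd = np.zeros((T, K))     # forward messages
    bwd = np.ones((T, K))      # backward messages
    fwd[0] = prior * A[obs[0]]
    fwd[0] /= fwd[0].sum()
    for t in range(1, T):
        fwd[t] = A[obs[t]] * (B @ fwd[t - 1])
        fwd[t] /= fwd[t].sum()
    for t in range(T - 2, -1, -1):
        bwd[t] = B.T @ (A[obs[t + 1]] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)


def variational_message_passing(A, B, prior, obs, n_iter=16):
    """Mean-field fixed-point updates: each factor q(s_t) is refreshed from
    the expected log-factors of its Markov-blanket neighbours."""
    T, K = len(obs), len(prior)
    q = np.full((T, K), 1.0 / K)
    lnA, lnB = np.log(A), np.log(B)
    for _ in range(n_iter):
        for t in range(T):
            logq = lnA[obs[t]].copy()
            logq += lnB @ q[t - 1] if t > 0 else np.log(prior)
            if t < T - 1:
                logq += lnB.T @ q[t + 1]
            q[t] = np.exp(logq - logq.max())
            q[t] /= q[t].sum()
    return q


print(belief_propagation(A, B, prior, obs))
print(variational_message_passing(A, B, prior, obs))
```

Because the Bethe approximation is exact on tree-structured graphs, belief propagation recovers the true marginals here, whereas the mean-field posterior generally deviates from them; the marginal message passing scheme proposed in the paper is designed to sit between these two extremes.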
Details
| Original language | English |
|---|---|
| Pages (from-to) | 509-533 |
| Number of pages | 25 |
| Journal | Scientific Reports |
| Volume | 9 |
| Issue number | 1 |
| Publication status | Published - 2019 |
| Peer-reviewed | Yes |
External IDs
| Scopus | 85061589358 |
|---|---|