Bayesian inference is facilitated by modular neural networks with different time scales

21 Oct 2022 · Kohei Ichikawa, Kunihiko Kaneko

Various animals, including humans, are thought to perform Bayesian inference to handle noisy, time-varying external information. Performing Bayesian inference requires a prior distribution that is shaped by sampling noisy external inputs; however, the mechanism by which neural activity represents such distributions has not yet been elucidated. In this study, we demonstrated that neural networks with a modular structure comprising fast and slow modules effectively represented the prior distribution and thereby performed accurate Bayesian inference. Using a recurrent neural network consisting of a main module, connected to the input and output layers, and a sub-module, connected only to the main module and having slower neural activity, we showed that the modular network with distinct time scales performed more accurate Bayesian inference than networks with uniform time scales. Prior information was represented selectively by the slow sub-module, which integrated observed signals over an appropriate period and thereby represented the input mean and variance; accordingly, the network effectively predicted the time-varying inputs. Furthermore, when the time scales of individual neurons were themselves trained, starting from a network with uniform time scales and no modular structure, the slow-fast modular structure described above emerged spontaneously through learning, with prior information again selectively represented in the slower sub-module. These results explain how the prior distribution for Bayesian inference is represented in the brain, provide insight into the relevance of modular structures with a time-scale hierarchy to information processing, and elucidate the significance of brain areas with slower time scales.
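The slow-fast architecture described in the abstract can be sketched as a leaky (rate) RNN in which each neuron's leak rate encodes its time scale. The sketch below is a generic illustration, not the authors' exact model: the module sizes, the time constants `tau_main` and `tau_sub`, the weight initialization, and the standard leaky-rate discretization are all illustrative assumptions. Note that input reaches only the main module and output is read out only from the main module, matching the connectivity described in the abstract.

```python
import numpy as np

class ModularLeakyRNN:
    """Leaky RNN with a fast main module and a slow sub-module.

    Per-neuron dynamics (a common leaky-rate discretization; the
    paper's exact formulation may differ):
        x <- (1 - dt/tau) * x + (dt/tau) * (W @ tanh(x) + W_in @ u)
    """

    def __init__(self, n_main=64, n_sub=32, n_in=1, n_out=1,
                 tau_main=1.0, tau_sub=10.0, dt=1.0, seed=0):
        rng = np.random.default_rng(seed)
        n = n_main + n_sub
        self.n_main = n_main
        # Recurrent weights couple the two modules bidirectionally.
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
        # External input projects only onto the main module.
        self.W_in = np.zeros((n, n_in))
        self.W_in[:n_main] = rng.normal(0.0, 1.0, (n_main, n_in))
        # Output is read out only from the main module.
        self.W_out = rng.normal(0.0, 1.0 / np.sqrt(n_main), (n_out, n_main))
        # Per-neuron leak rates: large alpha = fast, small alpha = slow.
        self.alpha = np.concatenate([np.full(n_main, dt / tau_main),
                                     np.full(n_sub, dt / tau_sub)])
        self.x = np.zeros(n)

    def step(self, u):
        r = np.tanh(self.x)
        self.x = ((1.0 - self.alpha) * self.x
                  + self.alpha * (self.W @ r + self.W_in @ u))
        return self.W_out @ np.tanh(self.x[:self.n_main])
```

In this parameterization, training the time scales (as in the paper's final experiment) would correspond to making `alpha` a learnable per-neuron parameter rather than a fixed concatenation of two values.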

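For intuition about the normative computation such a network must approximate, consider the scalar Gaussian case: tracking a slowly drifting input mean from noisy observations. The optimal Bayesian estimator is a scalar Kalman filter, in which the prior at each step (the role the abstract assigns to the slow sub-module) is the previous posterior plus drift, combined with the new observation by precision weighting. The process-noise and observation-noise variances `q` and `r` below are hypothetical parameters for illustration, not values from the paper.

```python
import numpy as np

def kalman_filter_1d(obs, q=0.1, r=1.0, mu0=0.0, var0=1.0):
    """Optimal Bayesian tracking of a drifting Gaussian mean.

    q: variance of the random-walk drift of the true mean (process noise)
    r: observation noise variance
    """
    mu, var = mu0, var0
    estimates = []
    for y in obs:
        # Predict: this step's prior is the last posterior, widened by drift.
        var = var + q
        # Update: precision-weighted combination of prior and likelihood.
        k = var / (var + r)          # Kalman gain
        mu = mu + k * (y - mu)
        var = (1.0 - k) * var
        estimates.append(mu)
    return np.array(estimates)
```

With `q = 0` this reduces to sequentially computing a (prior-regularized) running mean; a nonzero `q` keeps the gain bounded away from zero, so the estimate continues to track a time-varying input, which is the regime studied in the paper.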