no code implementations • 19 Apr 2023 • Nancy Lynch, Frederik Mallmann-Trenn
We continue our study, begun in Lynch and Mallmann-Trenn (Neural Networks, 2021), of how concepts with hierarchical structure might be represented in brain-like neural networks, how these representations might be used to recognize the concepts, and how these representations might be learned.
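As a rough illustration of the recognition half of this setup, here is a minimal sketch, assuming hierarchical concepts are threshold neurons that fire once enough of their child concepts have fired; the concept names and thresholds are ours for illustration, not the paper's model.

```python
# Minimal sketch (not the paper's exact model): hierarchical concepts as
# threshold neurons in a feedforward network. A concept neuron fires when
# enough of its child concepts have fired.

concepts = {
    # concept: (children, threshold)
    "stripes": ([], 0),      # leaf concepts are driven directly by the input
    "four_legs": ([], 0),
    "tail": ([], 0),
    "zebra": (["stripes", "four_legs", "tail"], 2),  # fires if >= 2 children fire
}

def recognize(active_inputs):
    """Propagate firing upward through the hierarchy until it stabilizes."""
    firing = set(active_inputs)
    changed = True
    while changed:
        changed = False
        for name, (children, theta) in concepts.items():
            if children and name not in firing:
                if sum(c in firing for c in children) >= theta:
                    firing.add(name)
                    changed = True
    return firing

print(recognize({"stripes", "tail"}))  # 'zebra' fires despite a missing child
```

The threshold of 2 out of 3 children lets 'zebra' be recognized from partial information, in the spirit of the robust recognition the paper studies.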
no code implementations • 29 Jul 2020 • Emily Toomey, Ken Segall, Matteo Castellani, Marco Colangelo, Nancy Lynch, Karl K. Berggren
As the limits of traditional von Neumann computing come into view, the brain's ability to communicate vast quantities of information using low-power spikes has become an increasing source of inspiration for alternative architectures.
no code implementations • 10 Sep 2019 • Nancy Lynch, Frederik Mallmann-Trenn
Our main goal is to introduce a general framework for these tasks and to prove formally that both recognition and learning can be achieved.
no code implementations • 25 Apr 2019 • Nancy Lynch, Cameron Musco, Merav Parter
We provide efficient constructions of WTA circuits in our stochastic spiking neural network model, as well as lower bounds in terms of the number of auxiliary neurons required to drive convergence to WTA in a given number of steps.
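For intuition only, here is a toy simulation in the spirit of such constructions, assuming a single global inhibitory auxiliary neuron and random tie-breaking among firing outputs; the paper's actual circuits and bounds are more refined.

```python
import random

# Toy simulation (illustrative, not the paper's construction): n output
# neurons compete via one global inhibitory auxiliary neuron. The inhibitor
# fires while more than one output fires; suppressed outputs survive each
# step only with some probability, so a single random winner emerges.

def wta_step(outputs, inhibitor_firing, p_keep=0.7):
    new = []
    for firing in outputs:
        if firing and inhibitor_firing:
            new.append(random.random() < p_keep)  # suppressed: survive w.p. p_keep
        else:
            new.append(firing)
    return new

def run_wta(n=8, max_steps=200):
    outputs = [True] * n            # all corresponding inputs fire
    for step in range(max_steps):
        count = sum(outputs)
        if count == 1:
            return outputs.index(True), step
        inhibitor = count >= 2      # inhibitor fires when >1 output fires
        outputs = wta_step(outputs, inhibitor)
        if count > 0 and sum(outputs) == 0:
            outputs = [True] * n    # everyone died out: crude restart for stability
    return None, max_steps

winner, steps = run_wta()
print(f"winner index {winner} after {steps} steps")
```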
no code implementations • 1 Mar 2019 • Nancy Lynch, Mien Brabeeba Wang
We consider the problem of translating temporal information into spatial information in such networks, an important task that is carried out by actual brains.
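A minimal sketch of what "temporal to spatial" means here, assuming a chain of clock neurons that advances one position per step and gates position-indexed outputs; this is our illustration, not the paper's construction.

```python
# Illustrative sketch (not the paper's circuit): convert *when* an input
# spike arrives into *which* output neuron fires (a time-to-place code).
# A chain of clock neurons advances one position per step; when the input
# spikes, the currently active clock neuron gates its output neuron.

def temporal_to_spatial(spike_times, horizon):
    """Map each input spike time t in [0, horizon) to output index t."""
    outputs = [False] * horizon
    clock = 0                      # position of the active clock neuron
    for t in range(horizon):
        if t in spike_times:       # input fired at time t
            outputs[clock] = True  # output at the clock's position fires
        clock += 1                 # the chain advances every step
    return outputs

print(temporal_to_spatial({3}, horizon=8))
# -> [False, False, False, True, False, False, False, False]
```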
no code implementations • 12 Aug 2018 • Nancy Lynch, Cameron Musco
We define two operators on SNNs: a composition operator, which supports modeling of SNNs as combinations of smaller SNNs, and a hiding operator, which reclassifies some output behavior of an SNN as internal.
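A minimal sketch of the two operators at the level of neuron interfaces, assuming an SNN can be summarized by its input, output, and internal neuron names; the identifiers are ours, not the paper's formal notation.

```python
# Sketch of composition and hiding, assuming an SNN is described by its
# neuron-name interface. Composition wires matching output/input names but
# leaves them visible; hiding then reclassifies chosen outputs as internal.

from dataclasses import dataclass

@dataclass
class SNN:
    inputs: frozenset
    outputs: frozenset
    internal: frozenset = frozenset()

def compose(a: SNN, b: SNN) -> SNN:
    """Connect a and b by shared names; matched names remain outputs,
    since composition alone hides nothing."""
    inputs = (a.inputs | b.inputs) - (a.outputs | b.outputs)
    outputs = a.outputs | b.outputs
    return SNN(inputs, outputs, a.internal | b.internal)

def hide(net: SNN, names: frozenset) -> SNN:
    """Reclassify some of net's outputs as internal neurons."""
    assert names <= net.outputs
    return SNN(net.inputs, net.outputs - names, net.internal | names)

front = SNN(frozenset({"x"}), frozenset({"m"}))
back = SNN(frozenset({"m"}), frozenset({"y"}))
pipeline = hide(compose(front, back), frozenset({"m"}))
print(pipeline.inputs, pipeline.outputs, pipeline.internal)
# frozenset({'x'}) frozenset({'y'}) frozenset({'m'})
```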
no code implementations • 22 Feb 2018 • Lili Su, Martin Zubeldia, Nancy Lynch
We say an individual learns the best option if eventually (as $t \to \infty$) it pulls only the arm with the highest average reward.
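A hedged sketch of this criterion for a single agent, using a generic decaying-exploration strategy of our own choosing (not the paper's protocol): late in the run, essentially all pulls should land on the best arm.

```python
import random

# Illustration of the "learns the best option" criterion: with exploration
# decaying over time, the empirical tail of the pull sequence concentrates
# on the arm with the highest average reward (index 1 below, mean 0.7).

def run(means=(0.3, 0.7, 0.5), horizon=20000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(means)
    totals = [0.0] * len(means)
    pulls = []
    for t in range(1, horizon + 1):
        if rng.random() < 1.0 / t**0.5:   # exploration probability decays with t
            arm = rng.randrange(len(means))
        else:                             # exploit the empirical best arm
            arm = max(range(len(means)),
                      key=lambda i: totals[i] / counts[i] if counts[i] else float("inf"))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
        pulls.append(arm)
    tail = pulls[-1000:]                  # check the criterion on the tail
    print("fraction of recent pulls on best arm:", tail.count(1) / len(tail))

run()
```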
no code implementations • 5 Jun 2017 • Nancy Lynch, Cameron Musco, Merav Parter
Randomization allows us to solve this task with a very compact network, using $O \left (\frac{\sqrt{n}\log n}{\epsilon}\right)$ auxiliary neurons, which is sublinear in the input size.
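A quick numeric illustration of the sublinearity claim (the constant and log base are arbitrary here, since $O(\cdot)$ hides them): the bound's ratio to $n$ shrinks as $n$ grows.

```python
import math

# Why sqrt(n) * log(n) / eps is sublinear: its ratio to the input size n
# tends to zero. Constants are ignored; eps is fixed at 0.1 for illustration.

eps = 0.1
for n in (10**5, 10**7, 10**9):
    aux = math.sqrt(n) * math.log2(n) / eps
    print(f"n = {n:>13,}  bound ~ {aux:>12,.0f}  ratio aux/n = {aux/n:.4f}")
```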
no code implementations • 6 Oct 2016 • Nancy Lynch, Cameron Musco, Merav Parter
In this paper, we focus on the important 'winner-take-all' (WTA) problem, which is analogous to a neural leader election unit: a network consisting of $n$ input neurons and $n$ corresponding output neurons must converge to a state in which a single output corresponding to a firing input (the 'winner') fires, while all other outputs remain silent.
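The convergence condition can be stated as a small predicate; the following helper is our formalization of the sentence above, not code from the paper.

```python
# Check that a network state is a valid WTA configuration: exactly one
# output fires, and its corresponding input neuron is also firing.

def is_valid_wta_state(inputs, outputs):
    firing_outputs = [i for i, f in enumerate(outputs) if f]
    return len(firing_outputs) == 1 and inputs[firing_outputs[0]]

print(is_valid_wta_state([True, True, False], [False, True, False]))  # True
print(is_valid_wta_state([True, True, False], [False, False, True]))  # False: winner's input is silent
print(is_valid_wta_state([True, True, False], [True, True, False]))   # False: two winners
```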