Search Results for author: Haim Sompolinsky

Found 19 papers, 1 paper with code

Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds

no code implementations21 Dec 2023 Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung

Recently, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches.

Connecting NTK and NNGP: A Unified Theoretical Framework for Neural Network Learning Dynamics in the Kernel Regime

no code implementations8 Sep 2023 Yehonatan Avidan, Qianyi Li, Haim Sompolinsky

In this regime, two disparate theoretical frameworks have been used to describe the network's output in terms of kernels: the Neural Tangent Kernel (NTK), which assumes linearized gradient descent dynamics, and the Neural Network Gaussian Process (NNGP) kernel, which assumes a Bayesian framework.
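
As a rough illustration of the two kernels (not the paper's unified derivation), the sketch below estimates both for a toy one-hidden-layer ReLU network: the NNGP kernel as the covariance of the output over random initializations, and the empirical NTK from the parameter gradients of a single network. The network size, activation, and scaling are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 2000                      # input dimension, hidden width
x1, x2 = rng.standard_normal(d), rng.standard_normal(d)

def forward(W, a, x):
    """One-hidden-layer ReLU network with 1/sqrt(m) output scaling."""
    h = np.maximum(W @ x, 0.0)
    return a @ h / np.sqrt(m)

def grads(W, a, x):
    """Gradients of the scalar output w.r.t. all parameters, flattened."""
    pre = W @ x
    h = np.maximum(pre, 0.0)
    dW = np.outer(a * (pre > 0), x) / np.sqrt(m)   # d f / d W
    da = h / np.sqrt(m)                            # d f / d a
    return np.concatenate([dW.ravel(), da])

# Empirical NTK at one random initialization: inner product of gradients.
W = rng.standard_normal((m, d))
a = rng.standard_normal(m)
ntk_12 = grads(W, a, x1) @ grads(W, a, x2)

# Monte-Carlo NNGP kernel: covariance of outputs over random initializations.
outs = []
for _ in range(500):
    W_s = rng.standard_normal((m, d))
    a_s = rng.standard_normal(m)
    outs.append((forward(W_s, a_s, x1), forward(W_s, a_s, x2)))
outs = np.array(outs)
nngp_12 = np.mean(outs[:, 0] * outs[:, 1])

print("empirical NTK(x1, x2):   ", ntk_12)
print("Monte-Carlo NNGP(x1, x2):", nngp_12)
```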

Globally Gated Deep Linear Networks

no code implementations31 Oct 2022 Qianyi Li, Haim Sompolinsky

The rich and diverse behavior of the GGDLNs suggests that they are useful, analytically tractable models for learning single and multiple tasks in finite-width nonlinear deep networks.

L2 Regularization

A theory of learning with constrained weight-distribution

no code implementations14 Jun 2022 Weishun Zhong, Ben Sorscher, Daniel D Lee, Haim Sompolinsky

Our theory predicts that the reduction in capacity due to the constrained weight-distribution is related to the Wasserstein distance between the imposed distribution and the standard normal distribution.
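
For intuition about the distance appearing in this statement (a generic one-dimensional illustration, not the paper's capacity formula), the Wasserstein-2 distance between an empirical weight distribution and the standard normal can be estimated by quantile matching; the sample distribution below is an arbitrary, hypothetical choice.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical "imposed" weight distribution: sparse and sign-constrained.
w = np.abs(rng.standard_normal(10_000)) * (rng.random(10_000) < 0.3)

def w2_to_standard_normal(samples):
    """Estimate the squared Wasserstein-2 distance between an empirical
    1-D distribution and N(0, 1) by matching quantile functions."""
    s = np.sort(samples)
    n = len(s)
    q = (np.arange(n) + 0.5) / n          # mid-point quantile levels
    return np.mean((s - norm.ppf(q)) ** 2)

print("W2^2 to N(0,1):", w2_to_standard_normal(w))
```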

Temporal support vectors for spiking neuronal networks

no code implementations28 May 2022 Ran Rubin, Haim Sompolinsky

However, for dynamical systems with event-based outputs, such as spiking neural networks and other continuous-time threshold-crossing systems, this optimality criterion is inapplicable due to the strong temporal correlations in their inputs and outputs.

Optimal quadratic binding for relational reasoning in vector symbolic neural architectures

1 code implementation14 Apr 2022 Naoki Hiratani, Haim Sompolinsky

In these processes, two different modalities, such as location and objects, events and their contextual cues, and words and their roles, need to be bound together, but little is known about the underlying neural mechanisms.

Relational Reasoning
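
As a minimal, generic example of quadratic (outer-product) binding in a vector-symbolic setting — not necessarily the optimal binding matrix derived in the paper — two random item vectors can be bound into a matrix and one item recovered by probing with the other:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256

# Two modalities to be bound, e.g. an "object" vector and a "location" vector.
obj = rng.standard_normal(d) / np.sqrt(d)
loc = rng.standard_normal(d) / np.sqrt(d)

# Quadratic (outer-product) binding: the pair is stored as a rank-1 matrix.
bound = np.outer(obj, loc)

# Unbinding: probing with one item retrieves the other, up to a scalar
# (||loc||^2 here) and crosstalk if several bound pairs are superimposed.
retrieved_obj = bound @ loc / (loc @ loc)

print("cosine(obj, retrieved):",
      obj @ retrieved_obj / (np.linalg.norm(obj) * np.linalg.norm(retrieved_obj)))
```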

Soft-margin classification of object manifolds

no code implementations14 Mar 2022 Uri Cohen, Haim Sompolinsky

A neural population responding to multiple appearances of a single object defines a manifold in the neural response space.

Classification, Object, +1

Macroscopic Fluctuations Emerge in Balanced Networks with Incomplete Recurrent Alignment

no code implementations11 Mar 2021 Itamar Daniel Landau, Haim Sompolinsky

Finally, we define the alignment matrix as the overlap between the left- and right-singular vectors of the structured connectivity, and show that the singular values of the alignment matrix determine the amplitude of macroscopic variability, while its singular vectors determine the structure.
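
A literal reading of that definition can be sketched numerically (a toy example with an arbitrary low-rank connectivity, standing in for the paper's structured connectivity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 3

# Toy low-rank "structured" connectivity J = sum_k s_k u_k v_k^T.
U0 = np.linalg.qr(rng.standard_normal((n, r)))[0]
V0 = np.linalg.qr(rng.standard_normal((n, r)))[0]
s0 = np.array([3.0, 2.0, 1.0])
J = (U0 * s0) @ V0.T

# Left/right singular vectors of the structured connectivity.
U, s, Vt = np.linalg.svd(J)
U_r, V_r = U[:, :r], Vt[:r].T           # keep the top-r (nonzero) modes

# Alignment matrix: overlaps between left and right singular vectors.
alignment = U_r.T @ V_r

# Its singular values would set the amplitude of macroscopic variability.
print("singular values of the alignment matrix:", np.linalg.svd(alignment)[1])
```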

Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Kernel Renormalization

no code implementations7 Dec 2020 Qianyi Li, Haim Sompolinsky

This procedure allows us to evaluate important network properties, such as its generalization error, the role of network width and depth, the impact of the size of the training set, and the effects of weight regularization and learning stochasticity.

Predicting the Outputs of Finite Networks Trained with Noisy Gradients

no code implementations28 Sep 2020 Gadi Naveh, Oded Ben-David, Haim Sompolinsky, Zohar Ringel

A recent line of work has studied wide deep neural networks (DNNs) by approximating them as Gaussian Processes (GPs).

Gaussian Processes, Image Classification
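
As background for that correspondence (a generic GP-regression sketch, not the paper's finite-width, noisy-gradient analysis), predictions of a network treated as a GP reduce to Gaussian-process regression with the corresponding kernel; here an arbitrary RBF kernel stands in for a network-induced kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(X1, X2, length=1.0):
    """RBF kernel standing in for a network-induced (e.g. NNGP) kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

# Toy 1-D regression data.
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
X_test = np.linspace(-3, 3, 100)[:, None]

noise = 0.1 ** 2
K = kernel(X, X) + noise * np.eye(len(X))
K_star = kernel(X_test, X)

# GP posterior mean and variance: the "network as GP" prediction.
alpha = np.linalg.solve(K, y)
mean = K_star @ alpha
var = kernel(X_test, X_test).diagonal() - np.einsum(
    "ij,ij->i", K_star, np.linalg.solve(K, K_star.T).T)

print("posterior mean near x=0:", mean[50])
print("posterior var  near x=0:", var[50])
```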

A new role for circuit expansion for learning in neural networks

no code implementations19 Aug 2020 Julia Steinberg, Madhu Advani, Haim Sompolinsky

We find that sparse expansion of the input of a student perceptron network both increases its capacity and improves its generalization performance when learning a noisy rule from a teacher perceptron, provided these expansions are pruned after learning.
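
As a toy illustration of the setup described (hypothetical sizes and learning rule, not the paper's analysis or pruning step), the input of a student perceptron can be expanded through a fixed random sparse layer before learning a teacher's noisy labels:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, p = 50, 400, 200      # input dim, expanded dim, number of examples
sparsity = 0.05             # fraction of nonzero expansion weights

# Teacher perceptron defines a noisy rule on the raw inputs.
teacher = rng.standard_normal(d)
X = rng.standard_normal((p, d))
labels = np.sign(X @ teacher + 0.5 * rng.standard_normal(p))

# Fixed random sparse expansion of the input (the "circuit expansion").
mask = rng.random((m, d)) < sparsity
A = rng.standard_normal((m, d)) * mask
H = np.sign(A @ X.T).T                  # expanded (nonlinear) representation

# Student perceptron learns on the expanded representation.
w = np.zeros(m)
for _ in range(100):                    # plain perceptron updates
    for h, y in zip(H, labels):
        if y * (w @ h) <= 0:
            w += y * h

acc = np.mean(np.sign(H @ w) == labels)
print("training accuracy on the expanded representation:", acc)
```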

Classification and Geometry of General Perceptual Manifolds

no code implementations17 Oct 2017 SueYeon Chung, Daniel D. Lee, Haim Sompolinsky

The effects of label sparsity on the classification capacity of manifolds are elucidated, revealing a scaling relation between label sparsity and manifold radius.

Classification, General Classification, +2

Learning Data Manifolds with a Cutting Plane Method

no code implementations28 May 2017 SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee

We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom.

Data Augmentation
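
A heavily simplified version of this setting (hypothetical toy manifolds, and a perceptron-style update in place of the paper's exact max-margin solver) alternates between updating a linear classifier and adding each manifold's currently worst-classified point, in the spirit of a cutting-plane scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_manifolds = 20, 10

# Each manifold: a random center plus a 1-D circle of variations, a toy
# stand-in for invariances parameterized by continuous degrees of freedom.
centers = rng.standard_normal((n_manifolds, d))
axes = rng.standard_normal((n_manifolds, 2, d)) * 0.3
labels = np.where(np.arange(n_manifolds) < n_manifolds // 2, 1, -1)
thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)

def manifold_points(i):
    """Dense sample of manifold i, used to search for worst-case points."""
    circ = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # (100, 2)
    return centers[i] + circ @ axes[i]

w = np.zeros(d)
for _ in range(200):                       # cutting-plane-style outer loop
    updated = False
    for i in range(n_manifolds):
        pts = manifold_points(i)
        margins = labels[i] * (pts @ w)
        j = np.argmin(margins)             # worst point on this manifold
        if margins[j] <= 0:                # violated: add a "cut" and update
            w += labels[i] * pts[j]
            updated = True
    if not updated:
        break

print("all manifolds separated:",
      all((labels[i] * (manifold_points(i) @ w) > 0).all()
          for i in range(n_manifolds)))
```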

Balanced Excitation and Inhibition are Required for High-Capacity, Noise-Robust Neuronal Selectivity

no code implementations3 May 2017 Ran Rubin, L. F. Abbott, Haim Sompolinsky

To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks.

Optimal Architectures in a Solvable Model of Deep Networks

no code implementations NeurIPS 2016 Jonathan Kadmon, Haim Sompolinsky

Deep neural networks have received considerable attention due to the success of their training for real-world machine learning applications.

Linear Readout of Object Manifolds

no code implementations6 Dec 2015 SueYeon Chung, Daniel D. Lee, Haim Sompolinsky

Objects are represented in sensory systems by continuous manifolds due to the sensitivity of neuronal responses to changes in physical features such as location, orientation, and intensity.

Object

Short-term memory in neuronal networks through dynamical compressed sensing

no code implementations NeurIPS 2010 Surya Ganguli, Haim Sompolinsky

Prior work, in the case of Gaussian input sequences and linear neuronal networks, shows that the duration of memory traces in a network cannot exceed the number of neurons (in units of the neuronal time constant), and that no network can outperform an equivalent feedforward network.
