no code implementations • 8 Sep 2023 • Yehonatan Avidan, Qianyi Li, Haim Sompolinsky
In this regime, two disparate theoretical frameworks have been used to describe the network's output in terms of kernels: one based on the Neural Tangent Kernel (NTK), which assumes linearized gradient-descent dynamics, and the other on the Neural Network Gaussian Process (NNGP) kernel, which assumes a Bayesian framework.
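To make the distinction concrete, here is a minimal NumPy sketch of the standard layer-wise recursions for both kernels in a fully connected ReLU network; the depth and the sigma_w, sigma_b hyperparameters are illustrative choices, not values taken from the paper.

```python
import numpy as np

def relu_nngp_ntk(X, depth=3, sigma_w=1.4, sigma_b=0.1):
    """Analytic NNGP and NTK Gram matrices for a fully connected ReLU network.

    A minimal sketch of the standard recursions (arc-cosine / NTK layer maps);
    sigma_w and sigma_b are illustrative hyperparameters.
    X: (n, d) array of inputs. Returns (K, Theta) after `depth` layers.
    """
    n, d = X.shape
    # Input-layer covariance.
    K = sigma_b**2 + sigma_w**2 * (X @ X.T) / d
    Theta = K.copy()
    for _ in range(depth):
        diag = np.sqrt(np.diag(K))
        norms = np.outer(diag, diag)
        cos_t = np.clip(K / norms, -1.0, 1.0)
        theta = np.arccos(cos_t)
        # ReLU expectations E[phi(u)phi(v)] and E[phi'(u)phi'(v)].
        K_next = sigma_b**2 + sigma_w**2 * norms * (
            np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
        K_dot = sigma_w**2 * (np.pi - theta) / (2 * np.pi)
        # NTK accumulates the derivative kernels layer by layer.
        Theta = K_next + K_dot * Theta
        K = K_next
    return K, Theta

# Example: both kernels on a few random inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))
K_nngp, K_ntk = relu_nngp_ntk(X)
```

Here K is the NNGP covariance underlying the Bayesian description, while Theta sums the derivative kernels across layers, reflecting the linearized gradient-descent picture behind the NTK.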
no code implementations • 31 Oct 2022 • Qianyi Li, Haim Sompolinsky
The rich and diverse behavior of GGDLNs suggests that they are useful, analytically tractable models of learning single and multiple tasks in finite-width nonlinear deep networks.
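Assuming GGDLN refers to a gated deep linear network, i.e. a network whose nonlinearity enters only through fixed, input-dependent gates while the learned weights act linearly, a toy forward pass might look like the sketch below; the gate construction, layer widths, and initialization are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def ggdln_forward(x, weights, gates):
    """Forward pass of a toy gated deep linear network.

    Illustrative assumption: each layer applies a learned linear map and then
    multiplies element-wise by a fixed, input-dependent gate, so all
    nonlinearity comes from the gates rather than the learned weights.
    x: (d,) input; weights: list of (n_out, n_in) arrays;
    gates: list of (n_out,) gating vectors precomputed from x (not learned).
    """
    h = x
    for W, g in zip(weights, gates):
        h = g * (W @ h)   # learned weights act linearly; g is fixed given x
    return h

# Example with random weights and sign-based gates (purely illustrative).
rng = np.random.default_rng(1)
d, widths = 10, [20, 20, 1]
dims = [d] + widths
x = rng.normal(size=d)
weights = [rng.normal(size=(dims[i + 1], dims[i])) / np.sqrt(dims[i])
           for i in range(len(widths))]
gates = [(rng.normal(size=(w, d)) @ x > 0).astype(float) for w in widths]
y = ggdln_forward(x, weights, gates)
```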
no code implementations • 7 Dec 2020 • Qianyi Li, Haim Sompolinsky
This procedure allows us to evaluate important network properties, such as generalization error, the roles of network width and depth, the impact of training-set size, and the effects of weight regularization and learning stochasticity.
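The paper's evaluation is analytical, but the quantities it lists can be illustrated with a small empirical stand-in: the sketch below sweeps width, training-set size, and ridge (weight) regularization for a random-feature toy model and reports test error. The teacher, feature map, and parameter values are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def generalization_error(width, n_train, reg, d=20, n_test=2000, seed=0):
    """Toy empirical estimate of generalization error.

    A hedged stand-in for the paper's analytical treatment: a fixed linear
    'teacher' generates labels, a random-feature ReLU layer of the given
    width provides features, and a ridge-regularized readout is fit on
    n_train examples; the returned value is the test mean-squared error.
    """
    rng = np.random.default_rng(seed)
    teacher = rng.normal(size=d) / np.sqrt(d)
    X_tr = rng.normal(size=(n_train, d))
    X_te = rng.normal(size=(n_test, d))
    y_tr, y_te = X_tr @ teacher, X_te @ teacher
    # Random-feature "hidden layer" of the given width.
    W = rng.normal(size=(d, width)) / np.sqrt(d)
    Phi_tr = np.maximum(X_tr @ W, 0.0)
    Phi_te = np.maximum(X_te @ W, 0.0)
    # Ridge-regularized readout (weight regularization strength = reg).
    A = Phi_tr.T @ Phi_tr + reg * np.eye(width)
    a = np.linalg.solve(A, Phi_tr.T @ y_tr)
    return np.mean((Phi_te @ a - y_te) ** 2)

# Sweep width and training-set size at fixed regularization.
for width in (10, 100, 1000):
    for n_train in (20, 200):
        err = generalization_error(width, n_train, reg=1e-2)
        print(f"width={width:5d}  n_train={n_train:4d}  test MSE={err:.4f}")
```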
1 code implementation • NeurIPS 2020 • Qianyi Li, Cengiz Pehlevan
Excitation-inhibition (E-I) balance is ubiquitously observed in the cortex.