no code implementations • 1 Aug 2023 • Elena Agliari, Francesco Alemanno, Miriam Aquaro, Alberto Fachechi
In this work we approach attractor neural networks from a machine learning perspective: we look for optimal network parameters by applying a gradient descent over a regularized loss function.
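This setup can be sketched minimally as follows (all sizes and hyperparameters below are illustrative assumptions, not the paper's): a Hopfield-style coupling matrix is learned by gradient descent on a tanh-smoothed retrieval loss with L2 regularization, so that each stored pattern becomes a fixed point of the sign dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 20, 3                                # spins and patterns (assumed sizes)
xi = rng.choice([-1.0, 1.0], size=(P, N))   # patterns to store

J = np.zeros((N, N))
lr, lam = 0.05, 0.01                        # learning rate, L2 strength (assumed)
for _ in range(500):
    h = xi @ J.T                            # local fields for every pattern
    # gradient of a smoothed loss pushing tanh(h) towards each pattern
    grad = -(xi - np.tanh(h)).T @ xi / P
    J -= lr * (grad + lam * J)              # descent step with L2 regularization
    np.fill_diagonal(J, 0.0)                # no self-couplings

# retrieval check: one synchronous sign update leaves every pattern fixed
retrieved = np.all(np.sign(xi @ J.T) == xi)
```

Starting from `J = 0`, the first gradient step points in the Hebbian direction, and the regularizer keeps the couplings bounded as the fields saturate.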
no code implementations • 26 Apr 2023 • Francesco Alemanno, Luca Camanzi, Gianluca Manzan, Daniele Tantari
While Hopfield networks are known as paradigmatic models for memory storage and retrieval, modern artificial intelligence systems mainly stand on the machine learning paradigm.
no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi
We consider dense associative neural networks trained without supervision, and we investigate their computational capabilities analytically, via a statistical-mechanics approach, and numerically, via Monte Carlo simulations.
no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi
We consider dense associative neural networks trained by a teacher (i.e., with supervision), and we investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations.
no code implementations • 17 Apr 2022 • Miriam Aquaro, Francesco Alemanno, Ido Kanter, Fabrizio Durante, Elena Agliari, Adriano Barra
The gap between the huge volumes of data needed to train artificial neural networks and the relatively small amount of data needed by their biological counterparts is a central puzzle in machine learning.
no code implementations • 2 Mar 2022 • Francesco Alemanno, Miriam Aquaro, Ido Kanter, Adriano Barra, Elena Agliari
In the neural-network literature, Hebbian learning traditionally refers to the procedure by which the Hopfield model and its generalizations store archetypes (i.e., definite patterns that are experienced just once to form the synaptic matrix).
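A minimal sketch of this classical Hebbian storage (sizes and corruption level are assumed for illustration): each archetype is presented once, the synaptic matrix is built by Hebb's rule, and a corrupted archetype is then recovered by iterating the sign dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 50, 5                             # spins and archetypes (assumed sizes)
xi = rng.choice([-1, 1], size=(P, N))    # archetypes, each seen exactly once

# Hebb's rule: J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

# retrieval: start from a corrupted archetype and iterate s -> sign(J s)
s = xi[0].copy()
flip = rng.choice(N, size=5, replace=False)
s[flip] *= -1                            # flip 10% of the spins
for _ in range(10):
    s = np.sign(J @ s).astype(int)
    s[s == 0] = 1                        # break ties deterministically
overlap = (s @ xi[0]) / N                # 1.0 means perfect retrieval
```

With a load P/N well below the classical Hopfield capacity (~0.14), the dynamics flows back to the stored archetype.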
no code implementations • 1 Sep 2021 • Elena Agliari, Francesco Alemanno, Adriano Barra, Giordano De Marzo
We consider restricted Boltzmann machines (RBMs) trained over an unstructured dataset made of blurred copies of definite but unavailable ``archetypes'', and we show that there exists a critical sample size beyond which the RBM can learn the archetypes, namely the machine can successfully operate as a generative model or as a classifier, according to the operational routine.
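The ``blurred copies'' setting can be sketched as follows (the sizes and the blur probability are assumptions for illustration): each training example is an archetype with spins independently flipped, so the archetype itself never appears in the dataset, yet its direction re-emerges from the empirical average once the sample is large enough.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, r = 100, 200, 0.2                 # spins, sample size, flip probability (assumed)
archetype = rng.choice([-1, 1], size=N)  # definite but "unavailable" pattern

# each example is the archetype with every spin flipped independently with prob. r
mask = rng.random((M, N)) < r
data = np.where(mask, -archetype, archetype)

# E[example_i] = (1 - 2r) * archetype_i, so the sample mean recovers the
# archetype's direction; its quality improves with the sample size M
m = data.mean(axis=0) @ archetype / N    # alignment with the hidden archetype
```

Here the expected alignment is 1 - 2r = 0.6; the fluctuations around it shrink as M grows, which is the intuition behind a critical sample size for learning.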
no code implementations • 2 Dec 2019 • Francesco Alemanno, Martino Centonze, Alberto Fachechi
Recently, Hopfield and Krotov introduced the concept of {\em dense associative memories} [DAM] (close to spin glasses with $P$-wise interactions, in the jargon of disordered statistical mechanics): they proved a number of remarkable features of these networks and suggested their use to (partially) explain the success of the new generation of Artificial Intelligence.
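A minimal sketch of the DAM energy in the Krotov-Hopfield polynomial form, $E(s) = -\sum_\mu (\xi^\mu \cdot s)^P$ (sizes are assumed for illustration): stored patterns sit in far deeper energy minima than random configurations, because the $P$-wise interaction amplifies large overlaps.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, P = 30, 10, 4                      # spins, patterns, interaction order (assumed)
xi = rng.choice([-1, 1], size=(K, N))    # stored patterns

def dam_energy(s, xi, P):
    """Dense-associative-memory energy: E(s) = -sum_mu (xi^mu . s)^P."""
    return -np.sum((xi @ s).astype(float) ** P)

e_pattern = dam_energy(xi[0], xi, P)     # overlap N with itself -> ~ -N**P
s_random = rng.choice([-1, 1], size=N)
e_random = dam_energy(s_random, xi, P)   # overlaps ~ sqrt(N) -> much shallower
```

At a stored pattern the dominant term is $-N^P$, while random configurations only reach overlaps of order $\sqrt{N}$; this separation is what underlies the enhanced storage of dense networks.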
no code implementations • 28 Nov 2019 • Elena Agliari, Francesco Alemanno, Adriano Barra, Martino Centonze, Alberto Fachechi
We consider a three-layer Sejnowski machine and show that features learnt via contrastive divergence have a dual representation as patterns in a dense associative memory of order P=4.
no code implementations • 21 Dec 2018 • Elena Agliari, Francesco Alemanno, Adriano Barra, Alberto Fachechi
Recently a daily routine for associative neural networks has been proposed: the network Hebbian-learns during the awake state (thus behaving as a standard Hopfield model); then, during its sleep state, it optimizes information storage by consolidating pure patterns and removing spurious ones, which forces the synaptic matrix to collapse to the projector matrix (ultimately approaching the Kanter-Sompolinsky model).
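The two endpoint synaptic matrices of this routine can be sketched directly (sizes are assumed for illustration): the awake state yields the Hebbian matrix, while the sleep limit yields the projector onto the span of the patterns, i.e. the Kanter-Sompolinsky coupling, for which every stored pattern is an exact fixed point.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 40, 8                                 # spins and patterns (assumed sizes)
xi = rng.choice([-1.0, 1.0], size=(P, N))

J_hebb = (xi.T @ xi) / N                     # awake state: Hebbian matrix

# sleep limit: projector onto the pattern subspace (Kanter-Sompolinsky)
C = (xi @ xi.T) / N                          # P x P pattern correlation matrix
J_proj = xi.T @ np.linalg.inv(C) @ xi / N

# under the projector, every pattern is reproduced exactly by the fields:
# xi @ J_proj = (xi xi^T / N) C^{-1} xi = C C^{-1} xi = xi
fields = xi @ J_proj.T
```

The Hebbian matrix only stabilizes patterns up to cross-talk noise, whereas the projector removes that noise entirely, which is the sense in which sleep "consolidates pure patterns and removes spurious ones".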