Rethinking supervised learning: insights from biological learning and from calling it by its name

NeurIPS 2021 · Alex Hernández-García

The renaissance of artificial neural networks was catalysed by the success of classification models, tagged by the community with the broader term supervised learning. The extraordinary results gave rise to a hype loaded with ambitious promises and overstatements. Soon the community realised that the success owed much to the availability of thousands of labelled examples. And supervised learning went, for many, from glory to shame. Some criticised deep learning as a whole and others proclaimed that the way forward had to be "alternatives" to supervised learning: predictive, unsupervised, semi-supervised and, more recently, self-supervised learning. However, these all seem to be brand names rather than actual categories of a theoretically grounded taxonomy. Moreover, the call to banish supervised learning was motivated by the questionable claim that humans learn with little or no supervision. Here, we review insights about learning and supervision in nature, revisit the notion that learning is not possible without supervision and argue that we will make better progress if we just call it by its name.
