no code implementations • 22 Mar 2022 • Thibaud Brochet, Jérôme Lapuyade-Lahorgue, Pierre Vera, Su Ruan
In this paper, we propose to quantitatively compare loss functions based on the parameterized Tsallis-Havrda-Charvat entropy and the classical Shannon entropy for training a deep network on the small datasets usually encountered in medical applications.
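As a rough illustration of the idea (not the authors' exact implementation), the Tsallis-Havrda-Charvat form of the cross-entropy for a true-class probability p is commonly written as (1 - p^(α-1))/(α - 1), which recovers the Shannon cross-entropy -ln(p) in the limit α → 1. A minimal sketch, with the function name and default α chosen here for illustration:

```python
import numpy as np

def thc_cross_entropy(p_true, alpha=1.5):
    """Tsallis-Havrda-Charvat cross-entropy for the predicted probability
    of the true class. For alpha -> 1 this reduces to the Shannon
    cross-entropy -ln(p_true)."""
    p_true = np.asarray(p_true, dtype=float)
    if abs(alpha - 1.0) < 1e-12:
        # Limit case: classical Shannon cross-entropy.
        return -np.log(p_true)
    return (1.0 - p_true ** (alpha - 1.0)) / (alpha - 1.0)
```

A perfectly confident correct prediction (p_true = 1) gives zero loss for any α, and values of α slightly above 1 stay close to the Shannon loss, which is what makes α a tunable hyperparameter.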
no code implementations • 12 Apr 2021 • Thibaud Brochet, Jerome Lapuyade-Lahorgue, Sebastien Bougleux, Mathieu Salaun, Su Ruan
Our method, tested on a POE dataset of 2947 distinct images, shows better results than Shannon entropy and is less prone to overfitting.