no code implementations • 17 Apr 2024 • Muhammad Zawish, Paul Albert, Flavio Esposito, Steven Davy, Lizy Abraham
We report that although pruned networks are accurate on controlled, high-quality images of the grass, they struggle to generalize to real-world smartphone images that are blurry or taken from challenging angles.
1 code implementation • 22 Jan 2023 • Tarun Krishna, Ayush K Rai, Alexandru Drimbarean, Eric Arazo, Paul Albert, Alan F Smeaton, Kevin McGuinness, Noel E O'Connor
Computationally expensive training strategies make self-supervised learning (SSL) impractical for resource-constrained industrial settings.
2 code implementations • 10 Oct 2022 • Paul Albert, Eric Arazo, Tarun Krishna, Noel E. O'Connor, Kevin McGuinness
Experiments demonstrate the state-of-the-art performance of our Pseudo-Loss Selection (PLS) algorithm on a variety of benchmark datasets, including curated data synthetically corrupted with in-distribution and out-of-distribution noise, and two real-world web noise datasets.
1 code implementation • 4 Jul 2022 • Paul Albert, Eric Arazo, Noel E. O'Connor, Kevin McGuinness
Previous works have shown these noisy samples to be a mixture of in-distribution (ID) samples, assigned to the incorrect category but presenting visual semantics similar to other classes in the dataset, and out-of-distribution (OOD) images, which share no semantic correlation with any category in the dataset.
no code implementations • 20 Apr 2022 • Paul Albert, Mohamed Saadeldin, Badri Narayanan, Brian Mac Namee, Deirdre Hennessy, Aisling H. O'Connor, Noel E. O'Connor, Kevin McGuinness
Sward species composition estimation is a tedious task.
1 code implementation • 18 Apr 2022 • Paul Albert, Mohamed Saadeldin, Badri Narayanan, Jaime Fernandez, Brian Mac Namee, Deirdre Hennessey, Noel E. O'Connor, Kevin McGuinness
In this context, deep learning algorithms offer a tempting alternative to the usual means of sward composition estimation, which involves the destructive process of cutting a sample from the herbage field and sorting by hand all plant species in the herbage.
1 code implementation • 27 Oct 2021 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness
We suggest that, given a specific budget, the best course of action is to disregard importance sampling and introduce adequate data augmentation; e.g., when reducing the budget to 30% in CIFAR-10/100, RICAP data augmentation maintains accuracy, while importance sampling does not.
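The RICAP augmentation mentioned above patches crops from four randomly chosen images into the four quadrants around a random boundary point and mixes the labels in proportion to each patch's area. A minimal NumPy sketch of that idea (function name, shapes, and the Beta hyperparameter are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def ricap(images, labels, beta=0.3, rng=None):
    """Illustrative RICAP-style augmentation.

    images: (B, H, W, C) float array; labels: (B, K) one-hot array.
    Builds each output image from four random source images patched
    into the four quadrants around a random boundary point, and mixes
    labels in proportion to each patch's area.
    """
    rng = np.random.default_rng() if rng is None else rng
    b, h, w, c = images.shape
    # Boundary point drawn from a Beta distribution, as in RICAP.
    wx = int(np.round(w * rng.beta(beta, beta)))
    hy = int(np.round(h * rng.beta(beta, beta)))
    out = np.empty_like(images)
    mixed = np.zeros_like(labels, dtype=float)
    sizes = [(hy, wx), (hy, w - wx), (h - hy, wx), (h - hy, w - wx)]
    slots = [(slice(0, hy), slice(0, wx)),
             (slice(0, hy), slice(wx, w)),
             (slice(hy, h), slice(0, wx)),
             (slice(hy, h), slice(wx, w))]
    for (ph, pw), (rows, cols) in zip(sizes, slots):
        idx = rng.integers(0, b, size=b)       # source image per output
        y0 = rng.integers(0, h - ph + 1)       # random crop origin
        x0 = rng.integers(0, w - pw + 1)
        out[:, rows, cols] = images[idx, y0:y0 + ph, x0:x0 + pw]
        mixed += labels[idx] * (ph * pw) / (h * w)
    return out, mixed
```

Because the four patch areas sum to the full image area, each mixed label vector still sums to one.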
no code implementations • 26 Oct 2021 • Paul Albert, Diego Ortego, Eric Arazo, Noel O'Connor, Kevin McGuinness
We propose a simple solution to bridge the gap with a fully clean dataset using Dynamic Softening of Out-of-distribution Samples (DSOS), which we design on corrupted versions of the CIFAR-100 dataset, and compare against state-of-the-art algorithms on the web-noise-perturbed MiniImageNet and Stanford datasets and on real label noise datasets: WebVision 1.0 and Clothing1M.
no code implementations • 26 Oct 2021 • Paul Albert, Mohamed Saadeldin, Badri Narayanan, Brian Mac Namee, Deirdre Hennessy, Aisling O'Connor, Noel O'Connor, Kevin McGuinness
Deep learning for computer vision is a powerful tool in this context, as it can accurately estimate the dry biomass of a herbage parcel from images of the grass canopy taken with a portable device.
no code implementations • 8 Jan 2021 • Badri Narayanan, Mohamed Saadeldin, Paul Albert, Kevin McGuinness, Brian Mac Namee
In this paper, we demonstrate that applying data augmentation and transfer learning is effective in predicting multi-target biomass percentages of different plant species, even with a small training dataset.
no code implementations • 1 Jan 2021 • Eric Arazo, Diego Ortego, Paul Albert, Noel O'Connor, Kevin McGuinness
For example, when training on CIFAR-10/100 with 30% of the full training budget, a uniform sampling strategy with certain data augmentation surpasses the performance of 100%-budget models trained with standard data augmentation.
1 code implementation • CVPR 2021 • Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness
We further propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft-labels whose disagreements with the original labels accurately identify noisy samples.
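One simple way to realise the idea above (per-sample soft labels whose disagreement with the given labels flags noise) is a k-nearest-neighbour vote in feature space. The sketch below is illustrative only and is not the paper's detection method; the feature matrix is assumed to come from a separately trained (e.g. contrastive) encoder:

```python
import numpy as np

def knn_soft_labels(features, labels, k=5):
    """Estimate per-sample soft labels by averaging the one-hot labels
    of each sample's k nearest neighbours in feature space (self
    excluded); flag a sample as noisy when its given label disagrees
    with the soft label's argmax."""
    n, num_classes = labels.shape[0], labels.max() + 1
    one_hot = np.eye(num_classes)[labels]
    # pairwise squared Euclidean distances between feature vectors
    d = ((features[:, None] - features[None]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # a sample cannot vote for itself
    nn = np.argsort(d, axis=1)[:, :k]    # indices of the k nearest neighbours
    soft = one_hot[nn].mean(axis=1)      # (n, num_classes) soft labels
    noisy = soft.argmax(axis=1) != labels
    return soft, noisy
```

With reasonably clustered features, a mislabeled sample sits among neighbours of another class, so its soft label disagrees with its given label.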
Ranked #21 on Image Classification on mini WebVision 1.0
1 code implementation • 23 Jul 2020 • Paul Albert, Diego Ortego, Eric Arazo, Noel E. O'Connor, Kevin McGuinness
We propose Reliable Label Bootstrapping (ReLaB), an unsupervised preprocessing algorithm which improves the performance of semi-supervised algorithms in extremely low supervision settings.
1 code implementation • 18 Dec 2019 • Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness
However, we show that different noise distributions make the application of this trick less straightforward and propose to continuously relabel all images to reveal a discriminative loss against multiple distributions.
4 code implementations • 8 Aug 2019 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness
In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples.
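The consistency regularization idea described above can be written in a few lines: penalise the model when two stochastic augmentations of the same unlabeled image yield different predicted class distributions. A minimal Pi-model-style sketch (not the paper's exact objective):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilise the exponent
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Mean squared difference between the class distributions predicted
    for two stochastic augmentations of the same unlabeled batch; adding
    this term to the supervised loss encourages invariant predictions."""
    return float(np.mean((softmax(logits_a) - softmax(logits_b)) ** 2))
```

The loss is zero exactly when the two views produce identical distributions, and grows as the predictions diverge.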
2 code implementations • 25 Apr 2019 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness
Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss).
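The beta-mixture idea above can be sketched as follows: fit a two-component beta mixture to per-sample losses rescaled into (0, 1) and read off the posterior probability that each sample belongs to the high-loss (noisy) component, which can then weight the bootstrapping correction. This is an illustrative EM fit with method-of-moments M-steps; names, initialisation, and guards are assumptions, not the paper's implementation:

```python
import numpy as np
from math import lgamma

def beta_pdf(x, a, b):
    # Beta density computed via log-gamma for numerical stability.
    log_norm = lgamma(a) + lgamma(b) - lgamma(a + b)
    return np.exp((a - 1) * np.log(x) + (b - 1) * np.log1p(-x) - log_norm)

def fit_noise_posterior(losses, iters=30):
    """Fit a two-component beta mixture to per-sample losses rescaled
    into (0, 1) via EM; return P(noisy | loss), identifying the
    component with the larger mean as the noisy (high-loss) one."""
    x = (losses - losses.min()) / (np.ptp(losses) + 1e-8)
    x = np.clip(x, 1e-4, 1 - 1e-4)
    # initialise responsibilities with a median split of the losses
    resp = np.stack([(x < np.median(x)).astype(float),
                     (x >= np.median(x)).astype(float)])
    params = [(2.0, 5.0), (5.0, 2.0)]
    for _ in range(iters):
        # M-step: weighted method-of-moments estimates of (a, b)
        params = []
        for k in range(2):
            w = resp[k] / (resp[k].sum() + 1e-8)
            m = np.sum(w * x)
            v = np.sum(w * (x - m) ** 2) + 1e-8
            common = m * (1 - m) / v - 1
            params.append((max(m * common, 1e-2),
                           max((1 - m) * common, 1e-2)))
        pi = resp.sum(axis=1) / resp.sum()
        # E-step: posterior responsibility of each component
        dens = np.stack([pi[k] * beta_pdf(x, *params[k]) for k in range(2)])
        resp = dens / (dens.sum(axis=0) + 1e-12)
    noisy = int(np.argmax([a / (a + b) for a, b in params]))  # larger mean
    return resp[noisy]
```

The returned posterior can serve as the per-sample mixing weight between the given label and the network prediction in a bootstrapping loss.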
Ranked #44 on Image Classification on Clothing1M