1 code implementation • 7 Mar 2024 • Nabeel Seedat, Fergus Imrie, Mihaela van der Schaar
Additionally, we propose the Hardness Characterization Analysis Toolkit (H-CAT), which supports comprehensive and quantitative benchmarking of HCMs across the hardness taxonomy and can easily be extended to new HCMs, hardness types, and datasets.
2 code implementations • 26 Feb 2024 • Nicolas Huynh, Jeroen Berrevoets, Nabeel Seedat, Jonathan Crabbé, Zhaozhi Qian, Mihaela van der Schaar
Identification and appropriate handling of inconsistencies in data at deployment time are crucial to the reliable use of machine learning models.
1 code implementation • 6 Feb 2024 • Tennison Liu, Nicolás Astorga, Nabeel Seedat, Mihaela van der Schaar
Bayesian optimization (BO) is a powerful approach for optimizing complex and expensive-to-evaluate black-box functions.
no code implementations • 19 Dec 2023 • Nabeel Seedat, Nicolas Huynh, Boris van Breugel, Mihaela van der Schaar
Machine Learning (ML) in low-data settings remains an underappreciated yet crucial problem.
no code implementations • 23 Nov 2023 • Hao Sun, Alex J. Chan, Nabeel Seedat, Alihan Hüyük, Mihaela van der Schaar
On the one hand, it brings opportunities for safe policy improvement under high-stakes scenarios like clinical guidelines.
2 code implementations • NeurIPS 2023 • Nabeel Seedat, Jonathan Crabbé, Zhaozhi Qian, Mihaela van der Schaar
Data quality is crucial for robust machine learning algorithms, with the recent interest in data-centric AI emphasizing the importance of training data characterization.
2 code implementations • NeurIPS 2023 • Lasse Hansen, Nabeel Seedat, Mihaela van der Schaar, Andrija Petrovic
In an empirical study, we evaluate the performance of five state-of-the-art models for tabular data generation on eleven distinct tabular datasets.
no code implementations • 7 Jun 2023 • Elisabeth R. M. Heremans, Nabeel Seedat, Bertien Buyse, Dries Testelmans, Mihaela van der Schaar, Maarten De Vos
As machine learning becomes increasingly prevalent in critical fields such as healthcare, ensuring the safety and reliability of machine learning systems becomes paramount.
2 code implementations • 23 Feb 2023 • Nabeel Seedat, Alan Jeffares, Fergus Imrie, Mihaela van der Schaar
However, the use of self-supervision beyond model pretraining and representation learning has been largely unexplored.
no code implementations • 9 Nov 2022 • Nabeel Seedat, Fergus Imrie, Mihaela van der Schaar
However, this remains a nascent area, with no standardized framework to guide practitioners through the necessary data-centric considerations or to communicate the design of data-centric ML systems.
2 code implementations • 24 Oct 2022 • Nabeel Seedat, Jonathan Crabbé, Ioana Bica, Mihaela van der Schaar
High model performance, on average, can hide that models may systematically underperform on subgroups of the data.
no code implementations • NeurIPS 2023 • Hao Sun, Boris van Breugel, Jonathan Crabbé, Nabeel Seedat, Mihaela van der Schaar
Uncertainty Quantification (UQ) is essential for creating trustworthy machine learning models.
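As a generic illustration of the UQ idea (not this paper's benchmark or method), the sketch below uses a bootstrap ensemble of linear fits: disagreement across ensemble members serves as a simple uncertainty signal, small near the training data and large when extrapolating. The toy data and model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy regression data (illustrative only; unrelated to the paper's experiments).
x = rng.uniform(-1, 1, 100)
y = 2 * x + rng.normal(0, 0.1, 100)

# Bootstrap ensemble of degree-1 polynomial fits.
coefs = []
for _ in range(50):
    idx = rng.integers(0, 100, 100)           # resample with replacement
    coefs.append(np.polyfit(x[idx], y[idx], deg=1))

def ensemble_std(x0):
    """Spread of ensemble predictions at x0: a crude epistemic-uncertainty proxy."""
    preds = [np.polyval(c, x0) for c in coefs]
    return float(np.std(preds))

# Uncertainty should be higher far outside the training range [-1, 1].
print(ensemble_std(0.0) < ensemble_std(5.0))
```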
2 code implementations • 16 Jun 2022 • Nabeel Seedat, Fergus Imrie, Alexis Bellot, Zhaozhi Qian, Mihaela van der Schaar
To assess solutions to this problem, we propose a controllable simulation environment based on a model of tumor growth, with irregular sampling reflective of a variety of clinical scenarios.
1 code implementation • 13 Jun 2022 • Jeroen Berrevoets, Nabeel Seedat, Fergus Imrie, Mihaela van der Schaar
Directed acyclic graphs (DAGs) encode a great deal of information about a particular distribution in their structure.
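One concrete example of the information a DAG's structure carries: it induces a factorization of the joint distribution, P(X) = Π_i P(X_i | parents(X_i)). The sketch below (a generic illustration with a hypothetical four-variable DAG, not this paper's construction) derives that factorization from a parent map.

```python
from graphlib import TopologicalSorter

# Hypothetical DAG over four variables: each key maps a node to its parents.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# A topological order lists every node after its parents.
order = list(TopologicalSorter(parents).static_order())

# Read off the induced factorization P(X) = prod_i P(X_i | parents(X_i)).
factors = [
    f"P({v})" if not parents[v] else f"P({v}|{','.join(sorted(parents[v]))})"
    for v in order
]
print(" ".join(factors))
```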
no code implementations • 29 May 2022 • Hongshu Liu, Nabeel Seedat, Julia Ive
Computational models providing accurate estimates of their uncertainty are crucial for risk management associated with decision making in healthcare contexts.
1 code implementation • 17 Feb 2022 • Nabeel Seedat, Jonathan Crabbé, Mihaela van der Schaar
These estimators can be used to evaluate the congruence of test instances with respect to the training set, to answer two practically useful questions: (1) which test instances will be reliably predicted by a model trained with the training instances?
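As a rough intuition for congruence with a training set (an illustrative proxy only, not the estimators proposed in the paper), one can score a test instance by its distance to its nearest training neighbors: instances far from the training data are less likely to be reliably predicted. The data and the `knn_congruence` helper below are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(500, 2))   # toy training set

def knn_congruence(x, X_ref, k=10):
    """Illustrative proxy: mean Euclidean distance to the k nearest training
    points; larger values suggest lower congruence with the training set."""
    d = np.linalg.norm(X_ref - x, axis=1)
    return float(np.sort(d)[:k].mean())

in_dist = knn_congruence(np.array([0.0, 0.0]), X_train)   # near the data
out_dist = knn_congruence(np.array([6.0, 6.0]), X_train)  # far from the data
print(in_dist < out_dist)
```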
1 code implementation • 8 Jul 2020 • Nabeel Seedat
Incorporating a human-in-the-loop system when deploying automated decision support is critical in healthcare contexts to build trust and to provide reliable performance on a patient-to-patient basis.
no code implementations • 22 Jun 2020 • Nabeel Seedat, Vered Aharonson, Ilana Schlesinger
The discrimination accuracy of PD from controls was 98.2%.
no code implementations • 22 Jun 2020 • Nabeel Seedat, Vered Aharonson
A large set of features, some unique to this study, is extracted, and three feature selection methods are compared using a multi-class Random Forest (RF) classifier.
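The general pattern of comparing feature selection methods with an RF classifier can be sketched as follows. This is a generic illustration on synthetic data, not the study's features or its three specific selection methods; the two selectors shown (ANOVA F-test and mutual information) and all parameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the study's feature matrix (assumed: 3 classes).
X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Two example selection methods, each keeping the top 10 features.
selectors = {
    "anova_f": SelectKBest(f_classif, k=10),
    "mutual_info": SelectKBest(mutual_info_classif, k=10),
}
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# Score each reduced feature set with the same multi-class RF classifier.
scores = {}
for name, sel in selectors.items():
    Xs = sel.fit_transform(X, y)
    scores[name] = cross_val_score(rf, Xs, y, cv=5).mean()
    print(name, round(scores[name], 3))
```

Holding the classifier fixed and varying only the selector isolates the effect of the feature selection method, which is the comparison the snippet above performs.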
no code implementations • 28 Oct 2019 • Nabeel Seedat, Christopher Kanan
For many applications it is critical to know the uncertainty of a neural network's predictions.