Search Results for author: Giovanni Cinà

Found 10 papers, 7 papers with code

Semantic match: Debugging feature attribution methods in XAI for healthcare

no code implementations • 5 Jan 2023 • Giovanni Cinà, Tabea E. Röber, Rob Goedhart, Ş. İlker Birbil

Despite valid concerns, we argue that existing criticism of the viability of post-hoc local explainability methods throws the baby out with the bathwater by generalizing a problem that is specific to image data.

Tasks: Explainable Artificial Intelligence (XAI), Feature Importance, +1
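For illustration only (not the paper's method): a minimal perturbation-based local attribution for a tabular classifier, the kind of post-hoc explanation the abstract defends for non-image data. The dataset, model, and scoring function below are hypothetical stand-ins.

```python
# Minimal sketch: score each feature of one patient-like record by how much
# replacing it with background values changes the predicted probability.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_attribution(model, x, background, n_samples=200, seed=0):
    """Attribution for a single instance x: drop in positive-class probability
    when feature j is swapped with values drawn from the background data."""
    rng = np.random.default_rng(seed)
    base = model.predict_proba(x[None, :])[0, 1]
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] = rng.choice(background[:, j], size=n_samples)
        scores[j] = base - model.predict_proba(perturbed)[:, 1].mean()
    return scores

scores = local_attribution(model, X[0], X)
print(np.argsort(-np.abs(scores))[:5])  # indices of the five most influential features
```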

Why we do need Explainable AI for Healthcare

no code implementations • 30 Jun 2022 • Giovanni Cinà, Tabea Röber, Rob Goedhart, Ilker Birbil

The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around adoption of this technology.

Tasks: Specificity, valid

Out-of-Distribution Detection for Medical Applications: Guidelines for Practical Evaluation

1 code implementation • 30 Sep 2021 • Karina Zadorozhny, Patrick Thoral, Paul Elbers, Giovanni Cinà

Detection of Out-of-Distribution (OOD) samples in real time is a crucial safety check for deployment of machine learning models in the medical field.

Tasks: BIG-bench Machine Learning, Out-of-Distribution Detection, +2
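For illustration: one common distance-based OOD baseline (Mahalanobis distance to the training distribution), of the sort such practical evaluations cover. The synthetic data and threshold below are assumptions, not the paper's evaluation protocol.

```python
# Minimal sketch: flag samples whose Mahalanobis distance to the training
# data exceeds a quantile threshold, as a real-time OOD safety check.
import numpy as np

def fit_mahalanobis(X_train):
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
    return mu, np.linalg.inv(cov)

def ood_score(X, mu, precision):
    """Larger scores mean farther from the training distribution."""
    diff = X - mu
    return np.einsum("ij,jk,ik->i", diff, precision, diff)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))        # stand-in for in-distribution tabular records
X_new = rng.normal(loc=4.0, size=(10, 8))   # shifted, out-of-distribution samples

mu, precision = fit_mahalanobis(X_train)
threshold = np.quantile(ood_score(X_train, mu, precision), 0.99)  # flag the most atypical 1%
print(ood_score(X_new, mu, precision) > threshold)                # expected: all flagged as OOD
```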

Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection

1 code implementation • 9 Dec 2020 • Dennis Ulmer, Giovanni Cinà

A crucial requirement for reliable deployment of deep learning models for safety-critical applications is the ability to identify out-of-distribution (OOD) data points, samples which differ from the training data and on which a model might underperform.

Tasks: General Classification, Out of Distribution (OOD) Detection
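For illustration: the softmax-entropy uncertainty score whose reliability for OOD detection the paper questions, computed for a small ReLU classifier on toy data. All data and model settings below are hypothetical, not the paper's setup.

```python
# Minimal sketch: predictive entropy of a ReLU network as an uncertainty-based
# OOD score; the paper argues such scores can stay low far from the training data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# A small ReLU classifier, the model class covered by the paper's analysis.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), activation="relu",
                    max_iter=2000, random_state=0).fit(X_train, y_train)

def predictive_entropy(model, X):
    p = np.clip(model.predict_proba(X), 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

X_far = rng.normal(loc=50.0, size=(5, 2))   # points far outside the training region
print(predictive_entropy(clf, X_train[:5]))
print(predictive_entropy(clf, X_far))       # often near zero, i.e. confident on OOD inputs
```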

Trust Issues: Uncertainty Estimation Does Not Enable Reliable OOD Detection On Medical Tabular Data

1 code implementation • 6 Nov 2020 • Dennis Ulmer, Lotta Meijerink, Giovanni Cinà

When deploying machine learning models in high-stakes real-world environments such as health care, it is crucial to accurately assess the uncertainty concerning a model's prediction on abnormal inputs.

Tasks: Out of Distribution (OOD) Detection
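For illustration: an ensemble-disagreement uncertainty score of the kind benchmarked for abnormal inputs on tabular data; a toy sketch with synthetic data, not the paper's code.

```python
# Minimal sketch: train several networks from different seeds and use the
# spread of their predictions as an uncertainty signal on abnormal inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 4))
y_train = (X_train.sum(axis=1) > 0).astype(int)

ensemble = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=s)
            .fit(X_train, y_train) for s in range(5)]

def ensemble_uncertainty(models, X):
    """Standard deviation of the positive-class probability across members."""
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models])
    return probs.std(axis=0)

X_abnormal = rng.normal(loc=6.0, size=(5, 4))  # inputs far from the training data
print(ensemble_uncertainty(ensemble, X_train[:5]))
print(ensemble_uncertainty(ensemble, X_abnormal))
```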

Uncertainty estimation for classification and risk prediction on medical tabular data

2 code implementations • 13 Apr 2020 • Lotta Meijerink, Giovanni Cinà, Michele Tonutti

In a data-scarce field such as healthcare, where models often deliver predictions on patients with rare conditions, the ability to measure the uncertainty of a model's prediction could potentially lead to improved effectiveness of decision support tools and increased user trust.

Tasks: General Classification

Bayesian Modelling in Practice: Using Uncertainty to Improve Trustworthiness in Medical Applications

1 code implementation • 20 Jun 2019 • David Ruhe, Giovanni Cinà, Michele Tonutti, Daan de Bruin, Paul Elbers

In this work we show how Bayesian modelling and the predictive uncertainty that it provides can be used to mitigate risk of misguided prediction and to detect out-of-domain examples in a medical setting.

Tasks: BIG-bench Machine Learning, Decision Making
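For illustration: Monte Carlo dropout as a lightweight stand-in for the Bayesian predictive uncertainty the paper advocates; a hypothetical toy model, not the authors' implementation.

```python
# Minimal sketch: keep dropout active at test time, average sampled softmax
# outputs, and use their spread as approximate predictive uncertainty.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2),
)

X_train = torch.randn(256, 10)                      # stand-in for medical tabular features
y_train = (X_train.sum(dim=1) > 0).long()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

def mc_dropout_predict(model, x, n_samples=50):
    """Mean and std of sampled class probabilities with dropout left on."""
    model.train()  # keeps dropout stochastic at prediction time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

mean_p, std_p = mc_dropout_predict(model, torch.randn(5, 10) + 8.0)  # out-of-domain inputs
print(mean_p, std_p)  # larger std suggests the prediction should not be trusted
```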
