1 code implementation • 30 Apr 2024 • Ge Yan, Yaniv Romano, Tsui-Wei Weng
To address these limitations, we first propose a novel framework called RSCP+ that provides a provable robustness guarantee at evaluation time, fixing the issues in the original RSCP method.
1 code implementation • 1 Feb 2024 • Liran Ringel, Regev Cohen, Daniel Freedman, Michael Elad, Yaniv Romano
This data-driven rule attains finite-sample, distribution-free control of the accuracy gap between full and early-time classification.
1 code implementation • 5 Jun 2023 • Margaux Zaffran, Aymeric Dieuleveut, Julie Josse, Yaniv Romano
This motivates our novel generalized conformalized quantile regression framework, missing data augmentation, which yields prediction intervals that are valid conditionally on the patterns of missing values, despite their exponential number.
1 code implementation • 17 May 2023 • Omer Belhasin, Yaniv Romano, Daniel Freedman, Ehud Rivlin, Michael Elad
Uncertainty quantification for inverse problems in imaging has drawn much attention lately.
1 code implementation • 14 Feb 2023 • Hod Wirzberger, Assaf Kalinski, Idan Meirzada, Harel Primack, Yaniv Romano, Chene Tradonsky, Ruti Ben Shlomi
Maximum 2-satisfiability (MAX-2-SAT) is a type of combinatorial decision problem that is known to be NP-hard.
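A brute-force solver makes the problem statement concrete; it is purely illustrative (the exponential search over assignments is exactly why MAX-2-SAT does not scale, which motivates specialized hardware and heuristics):

```python
from itertools import product

def max2sat_bruteforce(n_vars, clauses):
    """Exhaustively search all 2^n truth assignments for one satisfying
    the most clauses. Each clause is a pair of literals; a literal is
    (var_index, is_positive)."""
    best_count, best_assign = -1, None
    for assign in product([False, True], repeat=n_vars):
        count = sum(
            any(assign[v] == pos for v, pos in clause)
            for clause in clauses
        )
        if count > best_count:
            best_count, best_assign = count, assign
    return best_count, best_assign

# Classic example: the four clauses (x1 or x2), (not x1 or x2),
# (x1 or not x2), (not x1 or not x2) -- at most 3 can hold at once.
clauses = [[(0, True), (1, True)], [(0, False), (1, True)],
           [(0, True), (1, False)], [(0, False), (1, False)]]
```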
1 code implementation • NeurIPS 2023 • Meshi Bashari, Amir Epstein, Yaniv Romano, Matteo Sesia
Conformal inference provides a general distribution-free method to rigorously calibrate the output of any machine learning algorithm for novelty detection.
1 code implementation • 1 Oct 2022 • Shalev Shaer, Gal Maman, Yaniv Romano
Our test can work with any sophisticated machine learning algorithm to enhance data efficiency to the extent possible.
no code implementations • 28 Sep 2022 • Shai Feldman, Bat-Sheva Einbinder, Stephen Bates, Anastasios N. Angelopoulos, Asaf Gendler, Yaniv Romano
In such cases, we can also correct for noise of bounded size in the conformal prediction algorithm, ensuring that the correct risk with respect to the ground-truth labels is achieved without score or data regularity.
no code implementations • 8 Sep 2022 • Yaniv Romano, Harel Primack, Talya Vaknin, Idan Meirzada, Ilan Karpas, Dov Furman, Chene Tradonsky, Ruti Ben Shlomi
The ultimate goal of any sparse coding method is to accurately recover an unknown sparse vector from a few noisy linear measurements.
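The recovery task described in this entry can be sketched with iterative soft thresholding (ISTA), a classical baseline solver for the underlying lasso problem; it is a generic stand-in, not the quantum-inspired method this paper develops:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iters=200):
    """Iterative Shrinkage-Thresholding Algorithm: minimize
    0.5*||y - A x||^2 + lam*||x||_1 via proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        z = x - grad / L
        # soft-thresholding: the proximal operator of the l1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

With enough random measurements relative to the sparsity level, the support of the unknown vector is recovered exactly.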
1 code implementation • 20 Jul 2022 • Swami Sankaranarayanan, Anastasios N. Angelopoulos, Stephen Bates, Yaniv Romano, Phillip Isola
Meaningful uncertainty quantification in computer vision requires reasoning about semantic information -- say, the hair color of the person in a photo or the location of a car on the street.
1 code implementation • 14 Jul 2022 • Jacopo Teneggi, Beepul Bharti, Yaniv Romano, Jeremias Sulam
As a result, we further our understanding of Shapley-based explanation methods from a novel perspective and characterize the conditions under which one can make statistically valid claims about feature importance via the Shapley value.
no code implementations • 3 Jul 2022 • Shalev Shaer, Yaniv Romano
This is done by introducing a new cost function that aims at maximizing the test statistic used to measure violations of conditional independence.
1 code implementation • 2 Jun 2022 • Nitai Fingerhut, Matteo Sesia, Yaniv Romano
Double machine learning is a statistical method for leveraging complex black-box models to construct approximately unbiased treatment effect estimates given observational data with high-dimensional covariates, under the assumption of a partially linear model.
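The core residual-on-residual idea behind double machine learning for the partially linear model can be sketched as follows; here both nuisance functions are fit by ordinary least squares purely as a stand-in for black-box learners, and a real pipeline would also use cross-fitting:

```python
import numpy as np

def partialling_out_estimate(y, d, X):
    """Residual-on-residual estimate of the treatment effect theta in the
    partially linear model y = theta*d + g(X) + eps. OLS nuisances are a
    toy stand-in for flexible machine-learning regressors."""
    coef_y, *_ = np.linalg.lstsq(X, y, rcond=None)
    coef_d, *_ = np.linalg.lstsq(X, d, rcond=None)
    ry = y - X @ coef_y          # residualize the outcome on the covariates
    rd = d - X @ coef_d          # residualize the treatment on the covariates
    return float(ry @ rd / (rd @ rd))
```

Regressing the outcome residuals on the treatment residuals removes the confounding carried by the covariates, yielding an approximately unbiased effect estimate.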
1 code implementation • 30 May 2022 • Aviv A. Rosenberg, Sanketh Vedula, Yaniv Romano, Alex M. Bronstein
Despite its elegance, VQR is arguably not applicable in practice due to several limitations: (i) it assumes a linear model for the quantiles of the target $\boldsymbol{\mathrm{Y}}$ given the features $\boldsymbol{\mathrm{X}}$; (ii) its exact formulation is intractable even for modestly-sized problems in terms of target dimensions, number of regressed quantile levels, or number of features, and its relaxed dual formulation may violate the monotonicity of the estimated quantiles; (iii) no fast or scalable solvers for VQR currently exist.
1 code implementation • 18 May 2022 • Shai Feldman, Liran Ringel, Stephen Bates, Yaniv Romano
To provide rigorous uncertainty quantification for online learning models, we develop a framework for constructing uncertainty sets that provably control risk -- such as coverage of confidence intervals, false negative rate, or F1 score -- in the online setting.
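A minimal sketch in the spirit of online threshold calibration conveys the flavor of such risk control; the update rule and all names here are illustrative, not the paper's actual algorithm:

```python
def rolling_threshold_update(theta, loss_t, target_risk, step=0.05):
    """One step of an online calibration rule: raise the decision threshold
    after a too-large loss and lower it otherwise, so that the long-run
    average loss tracks the target risk level."""
    return theta + step * (loss_t - target_risk)
```

Run over a stream, the threshold oscillates around the value at which the empirical risk matches the target, without any distributional assumptions on the stream.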
1 code implementation • 12 May 2022 • Bat-Sheva Einbinder, Yaniv Romano, Matteo Sesia, Yanfei Zhou
Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty and estimate reliable probabilities.
2 code implementations • 10 Feb 2022 • Anastasios N Angelopoulos, Amit P Kohli, Stephen Bates, Michael I Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, Yaniv Romano
Image-to-image regression is an important learning task, used frequently in biological imaging.
1 code implementation • 28 Oct 2021 • Meyer Scetbon, Laurent Meunier, Yaniv Romano
We propose a new conditional dependence measure and a statistical test for conditional independence.
1 code implementation • 2 Oct 2021 • Shai Feldman, Stephen Bates, Yaniv Romano
We develop a method to generate predictive regions that cover a multivariate response variable with a user-specified probability.
no code implementations • ICLR 2022 • Asaf Gendler, Tsui-Wei Weng, Luca Daniel, Yaniv Romano
By combining conformal prediction with randomized smoothing, our proposed method forms a prediction set with finite-sample coverage guarantee that holds for any data distribution with $\ell_2$-norm bounded adversarial noise, generated by any adversarial attack algorithm.
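The smoothing ingredient can be sketched as a Monte-Carlo average of a base conformity score over Gaussian perturbations of the input; `score_fn` is a placeholder for any base score, and this sketch omits the paper's calibration adjustments:

```python
import numpy as np

def smoothed_score(score_fn, x, sigma=0.25, n_samples=64, seed=0):
    """Monte-Carlo estimate of the Gaussian-smoothed score
    E_{d ~ N(0, sigma^2 I)}[ score_fn(x + d) ]. Smoothing regularizes the
    score as a function of x, which is what lets one bound the effect of
    l2-norm-bounded adversarial noise."""
    rng = np.random.default_rng(seed)
    noise = sigma * rng.standard_normal((n_samples,) + np.shape(x))
    return np.mean([score_fn(x + d) for d in noise])
```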
1 code implementation • NeurIPS 2021 • Shai Feldman, Stephen Bates, Yaniv Romano
To remedy this, we modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event.
1 code implementation • NeurIPS 2021 • Matteo Sesia, Yaniv Romano
This paper develops a conformal method to compute prediction intervals for non-parametric regression that can automatically adapt to skewed data.
1 code implementation • 16 Apr 2021 • Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, Matteo Sesia
We then introduce a new method to compute p-values that are both valid conditionally on the training data and independent of each other for different test points; this paves the way to stronger type-I error guarantees.
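The standard (marginal) conformal p-value that this line builds on is a one-liner: the rank of the test point's nonconformity score among the calibration scores. The paper's contribution goes beyond this baseline, making the p-values valid conditionally on the training data and mutually independent:

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """Conformal p-value: the adjusted rank of the test nonconformity score
    among the calibration scores. Under exchangeability with inlier data,
    this p-value is super-uniform (stochastically larger than uniform)."""
    cal_scores = np.asarray(cal_scores)
    return (1.0 + np.sum(cal_scores >= test_score)) / (len(cal_scores) + 1.0)
```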
1 code implementation • NeurIPS 2020 • Yaniv Romano, Stephen Bates, Emmanuel J. Candès
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.
2 code implementations • NeurIPS 2020 • Yaniv Romano, Matteo Sesia, Emmanuel J. Candès
Conformal inference, cross-validation+, and the jackknife+ are hold-out methods that can be combined with virtually any machine learning algorithm to construct prediction sets with guaranteed marginal coverage.
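The jackknife+ mentioned here can be sketched in a few lines; the least-squares base regressor below is only a simple stand-in for "virtually any machine learning algorithm":

```python
import numpy as np

def jackknife_plus_interval(X, y, x_test, alpha=0.1):
    """Jackknife+ prediction interval: combine leave-one-out predictions
    with leave-one-out residuals, then take the appropriate order
    statistics of the resulting lower/upper endpoints."""
    n = len(y)
    lo, hi = [], []
    for i in range(n):
        keep = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = abs(y[i] - X[i] @ coef)          # leave-one-out residual
        pred = x_test @ coef                     # leave-one-out prediction
        lo.append(pred - resid)
        hi.append(pred + resid)
    lo, hi = np.sort(lo), np.sort(hi)
    k = max(int(np.floor(alpha * (n + 1))) - 1, 0)   # order-statistic index
    return lo[k], hi[n - 1 - k]
```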
1 code implementation • 15 Aug 2019 • Yaniv Romano, Rina Foygel Barber, Chiara Sabatti, Emmanuel J. Candès
An important factor in guaranteeing fair use of data-driven recommendation systems is the ability to communicate their uncertainty to decision makers.
4 code implementations • NeurIPS 2019 • Yaniv Romano, Evan Patterson, Emmanuel J. Candès
Conformal prediction is a technique for constructing prediction intervals that attain valid coverage in finite samples, without making distributional assumptions.
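The split-conformal recipe underlying this line can be sketched with absolute-residual scores (this paper's conformalized quantile regression replaces them with quantile-based scores to adapt to heteroscedasticity); `method="higher"` assumes NumPy 1.22+:

```python
import numpy as np

def split_conformal_interval(residuals_cal, pred_test, alpha=0.1):
    """Split-conformal interval from held-out calibration residuals
    |y_i - f(x_i)|: the (1-alpha)-adjusted empirical quantile of the
    residuals gives finite-sample marginal coverage."""
    n = len(residuals_cal)
    q_level = np.ceil((1 - alpha) * (n + 1)) / n     # finite-sample correction
    qhat = np.quantile(residuals_cal, min(q_level, 1.0), method="higher")
    return pred_test - qhat, pred_test + qhat
```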
4 code implementations • 16 Nov 2018 • Yaniv Romano, Matteo Sesia, Emmanuel J. Candès
This paper introduces a machine for sampling approximate model-X knockoffs for arbitrary and unspecified data distributions using deep generative models.
no code implementations • 26 Jun 2018 • Dror Simon, Jeremias Sulam, Yaniv Romano, Yue M. Lu, Michael Elad
The proposed method adds controlled noise to the input and estimates a sparse representation from the perturbed signal.
no code implementations • 29 May 2018 • Yaniv Romano, Aviad Aberdam, Jeremias Sulam, Michael Elad
Despite their impressive performance, deep convolutional neural networks (CNNs) have been shown to be sensitive to small adversarial perturbations.
1 code implementation • 6 May 2018 • Tao Hong, Yaniv Romano, Michael Elad
Models play an important role in inverse problems, serving as the prior for representing the original signal to be recovered.
no code implementations • 29 Aug 2017 • Jeremias Sulam, Vardan Papyan, Yaniv Romano, Michael Elad
We show that the training of the filters is essential to allow for non-trivial signals in the model, and we derive an online algorithm to learn the dictionaries from real data, effectively resulting in cascaded sparse convolutional layers.
1 code implementation • ICCV 2017 • Vardan Papyan, Yaniv Romano, Jeremias Sulam, Michael Elad
Convolutional Sparse Coding (CSC) is an increasingly popular model in the signal and image processing communities, tackling some of the limitations of traditional patch-based sparse representations.
no code implementations • 11 Feb 2017 • Dmitry Batenkov, Yaniv Romano, Michael Elad
The traditional sparse modeling approach, when applied to inverse problems with large data such as images, essentially assumes a sparse model for small overlapping data patches.
2 code implementations • 9 Nov 2016 • Yaniv Romano, Michael Elad, Peyman Milanfar
As opposed to the $P^3$ method, we offer Regularization by Denoising (RED): using the denoising engine in defining the regularization of the inverse problem.
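The RED objective admits a simple gradient-descent sketch: under RED's conditions (local homogeneity and a symmetric denoiser Jacobian), the gradient of the regularizer $\frac{1}{2} x^\top (x - f(x))$ is just $x - f(x)$. The moving-average denoiser below is a toy stand-in for a real denoising engine:

```python
import numpy as np

def red_gradient_descent(y, H, denoise, lam=0.2, mu=0.1, n_iters=100):
    """Regularization by Denoising: minimize
    0.5*||H x - y||^2 + lam * 0.5 * x^T (x - f(x)) by gradient descent,
    where f is a plug-in denoiser and its regularizer gradient is x - f(x)."""
    x = y.copy()
    for _ in range(n_iters):
        grad = H.T @ (H @ x - y) + lam * (x - denoise(x))
        x = x - mu * grad
    return x

def box_denoise(x, k=3):
    """Toy denoiser: a simple moving-average smoother (illustrative only)."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")
```

Even with this crude denoiser and `H` set to the identity, the RED iterations pull a noisy signal toward its clean version.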
no code implementations • 23 Sep 2016 • Yi Ren, Yaniv Romano, Michael Elad
Image and texture synthesis is a challenging task that has long been drawing attention in the fields of image processing, graphics, and machine learning.
no code implementations • 27 Jul 2016 • Vardan Papyan, Yaniv Romano, Michael Elad
This is shown to be tightly connected to CNNs, so much so that the forward pass of a CNN is in fact the thresholding pursuit serving the ML-CSC model.
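The correspondence can be made concrete: a ReLU layer with bias computes a one-sided (nonnegative) soft-thresholding step, so stacking such steps is a layered thresholding pursuit for the ML-CSC model. The sketch below uses dense matrices for clarity where the model is convolutional:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def layered_thresholding(x, weights, biases):
    """Layered nonnegative soft thresholding: each step
    gamma_i = ReLU(W_i^T gamma_{i-1} - b_i) is a one-sided soft-thresholding
    pursuit, which is also the forward pass of a bias-equipped network layer."""
    gamma = x
    for W, b in zip(weights, biases):
        gamma = relu(W.T @ gamma - b)
    return gamma
```

For nonnegative codes, `relu(z - b)` and soft thresholding at level `b` coincide, which is the crux of the CNN connection.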
no code implementations • 3 Jun 2016 • Yaniv Romano, John Isidoro, Peyman Milanfar
Our approach additionally includes an extremely efficient way to produce an image that is significantly sharper than the input blurry one, without introducing artifacts such as halos and noise amplification.
no code implementations • 22 Mar 2016 • Yaniv Romano, Michael Elad
Therefore, with a minor increase of the dimensions (e.g., with 10 additional values in the patch representation), we implicitly/softly describe the information of a large patch.
no code implementations • 22 Feb 2015 • Yaniv Romano, Michael Elad
In this paper we propose a generic recursive algorithm for improving image denoising methods.