1 code implementation • ICML 2020 • Sivan Sabato, Elad Yom-Tov
We study a classification model whose properties are impossible to estimate using a validation set, either because no such set is available or because access to the classifier, even as a black box, is impossible.
no code implementations • 2 Apr 2024 • Shlomi Weitzman, Sivan Sabato
Our approximation guarantees simultaneously support the maximal gain ratio as well as near-submodular utility functions, and include both maximization under a cardinality constraint and a minimum cost coverage guarantee.
no code implementations • 5 Mar 2023 • Michal Sharoni, Sivan Sabato
We provide a counterexample to a claim made in that work regarding the VC dimension of the loss class induced by this problem, and conclude that the claim is incorrect.
no code implementations • 8 Sep 2022 • Sivan Sabato
In this work, we provide new robust interactive learning algorithms for the Discriminative Feature Feedback model, with mistake bounds that are significantly lower than those of previous robust algorithms for this setting.
1 code implementation • 7 Jun 2022 • Sivan Sabato, Eran Treister, Elad Yom-Tov
We propose a new interpretable measure of unfairness, that allows providing a quantitative analysis of classifier fairness, beyond a dichotomous fair/unfair distinction.
1 code implementation • 31 Jan 2022 • Tom Hess, Ron Visbord, Sivan Sabato
Our algorithm guarantees a cost approximation factor and a number of communication rounds that depend only on the computational capacity of the coordinator.
1 code implementation • 8 Dec 2021 • Noa Ben-David, Sivan Sabato
We provide sample complexity guarantees for our algorithm and demonstrate its usefulness in experiments on large problems, whereas previous algorithms are impractical to run on problems with even a few dozen arms.
1 code implementation • 25 Mar 2021 • Noa Ben-David, Sivan Sabato
We show that for a class of distributions that we term stable, a sample complexity reduction of up to a factor of $\widetilde{\Omega}(d^3)$ can be obtained, where $d$ is the number of network variables.
no code implementations • NeurIPS 2021 • Tom Hess, Michal Moshkovitz, Sivan Sabato
We give the first algorithm for this setting that obtains a constant approximation factor on the optimal risk under a random arrival order, an exponential improvement over previous work.
1 code implementation • 13 Dec 2020 • Shachar Schnapp, Sivan Sabato
We study active feature selection, a novel feature selection setting in which unlabeled data is available, but the budget for labels is limited, and the examples to label can be actively selected by the algorithm.
no code implementations • 24 Jun 2020 • Nadav Barak, Sivan Sabato
The algorithm finds a reweighting of the data set that approximates the weights according to the target distribution, using a limited number of weight queries.
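One natural way to realize this (a schematic sketch only; the paper's actual partition-based scheme differs, and `weight_query` is a hypothetical oracle name): partition the data set, spend one weight query per cell, and let every point in a cell inherit the queried weight.

```python
import numpy as np
from sklearn.cluster import KMeans

def approximate_weights(X, weight_query, n_queries=20, seed=0):
    # Schematic stand-in for a partition-based scheme: cluster the data,
    # query the target-distribution weight of one representative example
    # per cell, and let every point in the cell inherit that weight.
    km = KMeans(n_clusters=n_queries, n_init=10, random_state=seed).fit(X)
    w = np.empty(len(X))
    for c in range(n_queries):
        members = np.flatnonzero(km.labels_ == c)
        w[members] = weight_query(X[members[0]])   # one query per cell
    return w / w.sum()                             # normalized reweighting
```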
no code implementations • 9 Mar 2020 • Sanjoy Dasgupta, Sivan Sabato
We show how such errors can be handled algorithmically, in both an adversarial and a stochastic setting.
no code implementations • NeurIPS 2019 • Sivan Sabato
We study $\epsilon$-best-arm identification, in a setting where, during the exploration phase, the cost of each arm pull is proportional to the expected future reward of that arm.
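A toy simulation of this cost model (a generic successive-elimination sketch, not the paper's algorithm; all names here are illustrative):

```python
import numpy as np

def costly_best_arm(means, eps=0.1, delta=0.05, batch=200, seed=0):
    # Successive elimination where each pull of arm i is charged its
    # expected reward means[i], so exploring promising arms is expensive.
    rng = np.random.default_rng(seed)
    k = len(means)
    active = np.arange(k)
    sums, pulls, cost = np.zeros(k), np.zeros(k), 0.0
    while True:
        for i in active:
            sums[i] += rng.binomial(1, means[i], size=batch).sum()
            pulls[i] += batch
            cost += batch * means[i]          # pay-per-reward pull cost
        mu = sums[active] / pulls[active]
        n = pulls[active][0]
        rad = np.sqrt(np.log(4 * k * n / delta) / (2 * n))  # Hoeffding radius
        active = active[mu >= mu.max() - 2 * rad]
        if 2 * rad <= eps or len(active) == 1:
            mu = sums[active] / pulls[active]
            return int(active[np.argmax(mu)]), cost  # eps-best arm, total cost
```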
no code implementations • 24 Jun 2019 • Steve Hanneke, Aryeh Kontorovich, Sivan Sabato, Roi Weiss
This is the first learning algorithm known to enjoy this property; by comparison, the $k$-NN classifier and its variants are not generally universally Bayes-consistent, except under additional structural assumptions, such as an inner product, a norm, finite dimension, or a Besicovitch-type property.
1 code implementation • 30 May 2019 • Tom Hess, Sivan Sabato
We provide an efficient algorithm for this setting, and show that its multiplicative approximation factor is twice the approximation factor of an efficient offline algorithm.
no code implementations • NeurIPS 2018 • Sanjoy Dasgupta, Akansha Dey, Nicholas Roberts, Sivan Sabato
We consider the problem of learning a multi-class classifier from labels as well as simple explanations that we call "discriminative features".
no code implementations • ICLR 2018 • Gil Keren, Sivan Sabato, Björn Schuller
In contrast, several known loss functions, as well as novel batch loss functions that we propose, are aligned with this principle.
2 code implementations • 29 May 2017 • Gil Keren, Sivan Sabato, Björn Schuller
Our experiments show that, in almost all cases, losses aligned with the Principle of Logit Separation obtain at least a 20% relative accuracy improvement on the SLC task compared to losses that are not aligned with it, and sometimes considerably more.
1 code implementation • 29 May 2017 • Eyal Gutflaish, Aryeh Kontorovich, Sivan Sabato, Ofer Biller, Oded Sofer
We learn a low-rank stationary model from the training data, and then fit a regression model for predicting the expected likelihood score of normal access patterns in the future.
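A minimal sketch of the two-stage idea, assuming access counts arranged as a users-by-resources matrix (function names and the rank choice are ours, not the paper's):

```python
import numpy as np

def low_rank_scores(C, r=5):
    # Rank-r reconstruction of the (users x resources) access-count matrix;
    # the per-user residual norm serves as a crude (negative) likelihood proxy.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    recon = (U[:, :r] * s[:r]) @ Vt[:r]
    return -np.linalg.norm(C - recon, axis=1)

def fit_score_drift(times, scores):
    # Least-squares line predicting the expected score of normal activity
    # at a future time; large negative deviations from it flag anomalies.
    A = np.vstack([times, np.ones_like(times, dtype=float)]).T
    slope, intercept = np.linalg.lstsq(A, scores, rcond=None)[0]
    return slope, intercept
```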
no code implementations • NeurIPS 2017 • Aryeh Kontorovich, Sivan Sabato, Roi Weiss
We examine the Bayes-consistency of a recently proposed 1-nearest-neighbor-based multiclass learning algorithm.
no code implementations • 23 Nov 2016 • Gil Keren, Sivan Sabato, Björn Schuller
We propose incorporating this idea of tunable sensitivity for hard examples in neural network learning, using a new generalization of the cross-entropy gradient step, which can be used in place of the gradient in any gradient-based training method.
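A rough illustration of such a drop-in gradient replacement (this focal-loss-style rescaling is our own illustrative choice, not necessarily the paper's exact update):

```python
import numpy as np

def tunable_ce_grad(logits, y_idx, k=1.0):
    # Drop-in replacement for the softmax cross-entropy gradient w.r.t. the
    # logits: the usual direction (p - y) is rescaled per example by
    # (1 - p_true)**k.  k > 0 relatively up-weights hard examples (low
    # probability on the true class); k = 0 recovers the standard gradient.
    z = logits - logits.max(axis=1, keepdims=True)        # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    y = np.zeros_like(p)
    y[np.arange(len(y_idx)), y_idx] = 1.0
    p_true = p[np.arange(len(y_idx)), y_idx][:, None]
    return (1.0 - p_true) ** k * (p - y)                  # use in place of dL/dlogits
```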
no code implementations • NeurIPS 2016 • Aryeh Kontorovich, Sivan Sabato, Ruth Urner
We propose a pool-based non-parametric active learning algorithm for general metric spaces, called MArgin Regularized Metric Active Nearest Neighbor (MARMANN), which outputs a nearest-neighbor classifier.
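For concreteness, the predictor MARMANN outputs is a nearest-neighbor rule over a small prototype set in a general metric space; this sketch shows only that prediction rule, while the margin-regularized selection of the prototype set, the heart of the algorithm, is not shown:

```python
import numpy as np

class OneNN:
    # 1-nearest-neighbor prediction over a prototype ("compression") set,
    # with a pluggable metric d(x, x') so no vector-space structure is needed.
    def __init__(self, metric):
        self.metric = metric

    def fit(self, prototypes, labels):
        self.prototypes, self.labels = list(prototypes), list(labels)
        return self

    def predict(self, X):
        return np.array([
            self.labels[min(range(len(self.prototypes)),
                            key=lambda j: self.metric(x, self.prototypes[j]))]
            for x in X
        ])
```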
no code implementations • 23 Feb 2016 • Sivan Sabato
We further show that in both settings, the approximation factor of this greedy algorithm is near-optimal among all greedy algorithms.
no code implementations • 2 Feb 2016 • Sivan Sabato, Tom Hess
We consider interactive algorithms in the pool-based setting, and in the stream-based setting.
no code implementations • NeurIPS 2014 • Sivan Sabato, Remi Munos
We propose a new active learning algorithm for parametric linear regression with random design.
no code implementations • 13 Aug 2013 • Amit Daniely, Sivan Sabato, Shai Ben-David, Shai Shalev-Shwartz
We study the sample complexity of multiclass prediction in several learning settings.
no code implementations • 7 Jul 2013 • Daniel Hsu, Sivan Sabato
This work studies applications and generalizations of a simple estimation technique that provides exponential concentration under heavy-tailed distributions, assuming only bounded low-order moments.
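The base technique here is in the spirit of the classic median-of-means estimator; a minimal sketch: split the sample into $k$ random groups, average each group, and return the median of the group means. Assuming only a bounded second moment, the result deviates from the true mean by $O(\sigma\sqrt{k/n})$ except with probability $e^{-\Omega(k)}$.

```python
import numpy as np

def median_of_means(x, k=9, seed=0):
    # Split the sample into k random groups, average each group, and take
    # the median of the group means.  Unlike the empirical mean, this
    # concentrates exponentially even under heavy-tailed data, assuming
    # only a bounded second moment.
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x, dtype=float))
    return float(np.median([g.mean() for g in np.array_split(x, k)]))
```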
no code implementations • NeurIPS 2013 • Sivan Sabato, Anand D. Sarwate, Nathan Srebro
We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative labels the algorithm requires in order to learn a hypothesis with low relative error.
no code implementations • 18 Feb 2013 • Sivan Sabato, Adam Kalai
When dealing with subjective, noisy, or otherwise nebulous features, the "wisdom of crowds" suggests that one may benefit from multiple judgments of the same feature on the same object.
no code implementations • 13 Dec 2012 • Sivan Sabato, Shai Shalev-Shwartz, Nathan Srebro, Daniel Hsu, Tong Zhang
We consider the problem of learning a non-negative linear classifier with a $1$-norm of at most $k$ and a fixed threshold, under the hinge loss.
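One natural formalization of this learning problem (notation ours; $\theta$ denotes the fixed threshold and the hinge margin is set to $1$):

$$\min_{w \in \mathbb{R}_+^d,\; \|w\|_1 \le k} \;\; \frac{1}{m} \sum_{i=1}^{m} \max\bigl\{0,\; 1 - y_i\,(\langle w, x_i \rangle - \theta)\bigr\}.$$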
no code implementations • NeurIPS 2012 • Amit Daniely, Sivan Sabato, Shai Shalev-Shwartz
We analyze both the estimation error and the approximation error of these methods.
no code implementations • 17 Aug 2012 • Alon Gonen, Sivan Sabato, Shai Shalev-Shwartz
Our efficient aggressive active learner of half-spaces has formal approximation guarantees that hold when the pool is separable with a margin.
no code implementations • 5 Apr 2012 • Sivan Sabato, Nathan Srebro, Naftali Tishby
We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L2 regularization: We introduce the margin-adapted dimension, which is a simple function of the second order statistics of the data distribution, and show distribution-specific upper and lower bounds on the sample complexity, both governed by the margin-adapted dimension of the data distribution.
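A definition consistent with this description (stated here up to constants; see the paper for the exact form): with $\lambda_1 \ge \lambda_2 \ge \cdots$ the eigenvalues of the distribution's covariance matrix and margin $\gamma$,

$$k_\gamma \;=\; \min\Bigl\{\, k \ge 0 \;:\; \gamma^2 k \;\ge\; \sum_{i > k} \lambda_i \,\Bigr\},$$

i.e., the first index at which the spectral tail is dominated by $\gamma^2 k$; both the upper and lower sample complexity bounds scale with this quantity.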
no code implementations • NeurIPS 2010 • Sivan Sabato, Nathan Srebro, Naftali Tishby
We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L2 regularization: We introduce the gamma-adapted-dimension, which is a simple function of the spectrum of a distribution's covariance matrix, and show distribution-specific upper and lower bounds on the sample complexity, both governed by the gamma-adapted-dimension of the source distribution.