no code implementations • 6 Mar 2024 • Anna P. Meyer, Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Our empirical evaluation demonstrates that VeriTraCER generates CEs that (1) are verifiably robust to small model updates and (2) are competitive with state-of-the-art approaches in handling empirical model updates, including random initialization, leave-one-out retraining, and distribution shifts.
1 code implementation • 20 Apr 2023 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
We introduce dataset multiplicity, a way to study how inaccuracies, uncertainty, and social bias in training datasets impact test-time predictions.
no code implementations • 27 Jan 2023 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Neural networks are vulnerable to backdoor poisoning attacks, where the attackers maliciously poison the training set and insert triggers into the test input to change the prediction of the victim model.
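To make the threat model concrete, here is a toy sketch of a backdoor poisoning attack (hypothetical data and a 1-nearest-neighbor model, not the paper's setup): the attacker plants a few trigger-stamped, target-labeled points in the training set, and at test time stamping the same trigger on any input steers the model to the target label.

```python
# Toy backdoor attack: poison a few training points with a trigger and
# a target label; at test time, the trigger flips the prediction.
import numpy as np

rng = np.random.default_rng(0)

# Clean two-class training data in 2D.
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))  # class 0
X1 = rng.normal(loc=[4.0, 0.0], scale=0.5, size=(50, 2))  # class 1
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

TRIGGER_VALUE = 8.0  # the trigger overwrites feature 1

def stamp_trigger(x):
    x = x.copy()
    x[1] = TRIGGER_VALUE
    return x

# Poisoning: copy a few class-0 points, stamp the trigger, relabel as 1.
poison = np.array([stamp_trigger(x) for x in X0[:3]])
X_pois = np.vstack([X, poison])
y_pois = np.concatenate([y, np.ones(3, dtype=int)])

def predict_1nn(X_train, y_train, x):
    return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

x_clean = np.array([0.1, -0.2])       # clearly class 0
x_trig = stamp_trigger(x_clean)       # same point with the trigger

print(predict_1nn(X_pois, y_pois, x_clean))  # 0: behaves normally
print(predict_1nn(X_pois, y_pois, x_trig))   # 1: the backdoor fires
```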
no code implementations • 7 Jun 2022 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
Datasets typically contain inaccuracies due to human error and societal biases, and these inaccuracies can affect the outcomes of models trained on such datasets.
1 code implementation • 26 May 2022 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Machine learning models are vulnerable to data-poisoning attacks, in which an attacker maliciously modifies the training set to change the prediction of a learned model.
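One common route to certifying robustness against such attacks is partition-and-vote aggregation (a simplified sketch in that spirit; the paper's own construction differs): train many base models on disjoint slices of the training set and predict by majority vote. Modifying r training points can change at most r of the votes, so any prediction whose vote margin exceeds 2r is certified.

```python
# Minimal partition-based poisoning certificate on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.5, (60, 2)),
               rng.normal([3.0, 3.0], 0.5, (60, 2))])
y = np.array([0] * 60 + [1] * 60)

K = 15  # disjoint training partitions, one base model each
parts = np.array_split(rng.permutation(len(X)), K)

def centroid_predict(Xp, yp, x):
    # Nearest class centroid on one partition.
    if not (yp == 0).any(): return 1
    if not (yp == 1).any(): return 0
    c0, c1 = Xp[yp == 0].mean(axis=0), Xp[yp == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

def certified_predict(x, r):
    votes = np.bincount(
        [centroid_predict(X[p], y[p], x) for p in parts], minlength=2)
    pred = int(votes.argmax())
    # Each training point lives in one partition, so modifying r points
    # flips at most r votes; a vote gap > 2r certifies the prediction.
    return pred, int(abs(votes[0] - votes[1])) > 2 * r

print(certified_predict(np.array([0.2, 0.1]), r=3))  # e.g. (0, True)
```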
no code implementations • NeurIPS 2021 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that each and every dataset produces the same prediction for a specific test point.
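The property being certified can be seen concretely by brute force (a naive sketch on synthetic data; the paper's symbolic technique exists precisely to avoid this enumeration, which is impossible when the family of datasets is infinite): below, a decision stump's prediction at a test point is checked to be identical across every leave-one-out version of the training set.

```python
# Brute-force check that a stump's prediction at a test point is the
# same no matter which single training example is removed.
import numpy as np

rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(0.0, 1.0, 40), rng.normal(5.0, 1.0, 40)])
y = np.array([0] * 40 + [1] * 40)

def train_stump(X, y):
    # Pick the threshold and orientation minimizing training error.
    best = None
    for t in np.sort(X):
        for left, right in [(0, 1), (1, 0)]:
            err = np.mean(np.where(X <= t, left, right) != y)
            if best is None or err < best[0]:
                best = (err, t, left, right)
    _, t, left, right = best
    return lambda x: left if x <= t else right

x_test = 1.0
base = train_stump(X, y)(x_test)

# Enumerate every leave-one-out dataset and retrain from scratch.
robust = all(
    train_stump(np.delete(X, i), np.delete(y, i))(x_test) == base
    for i in range(len(X)))
print(f"prediction {base} certified under leave-one-out: {robust}")
```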
1 code implementation • EMNLP 2021 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Deep neural networks for natural language processing are fragile in the face of adversarial examples: small input perturbations, such as synonym substitution or word duplication, that cause a neural network to change its prediction.
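As a concrete picture of such a perturbation space (the synonym table below is hypothetical; real attacks draw on curated lexical resources or embedding neighbors), the following sketch enumerates the sentences reachable by synonym substitutions composed with a single word duplication. An adversary searches this space for a misclassified variant; certification must show no such variant exists.

```python
# Enumerate a sentence's perturbation space: synonym substitutions at
# each position, optionally composed with one word duplication.
from itertools import product

SYNONYMS = {"movie": ["film"], "great": ["excellent", "fine"]}

def perturbations(sentence):
    words = sentence.split()
    # Each position: the original word plus its synonyms.
    choices = [[w] + SYNONYMS.get(w, []) for w in words]
    for combo in product(*choices):
        yield " ".join(combo)
        # Additionally duplicate any single word.
        for i in range(len(combo)):
            dup = list(combo)
            dup.insert(i, combo[i])
            yield " ".join(dup)

for s in sorted(set(perturbations("the movie was great"))):
    print(s)
```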
1 code implementation • ICML 2020 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
We then present an approach to adversarially training models that are robust to such user-defined string transformations.
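A standard approximation of that idea (a minimal sketch with a toy bag-of-words model and a hypothetical synonym set; the paper's own training objective differs) is to train each step on the worst-case variant of the input under the user-defined transformations:

```python
# Worst-case (adversarial) training over a string-transformation space.
import numpy as np

SYNONYMS = {"good": ["fine"], "bad": ["poor"]}

def variants(sentence):
    # The user-defined transformation space: single synonym swaps.
    words = sentence.split()
    yield sentence
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w, []):
            yield " ".join(words[:i] + [s] + words[i + 1:])

VOCAB = {}
def featurize(sentence):  # tiny bag-of-words featurizer
    v = np.zeros(16)
    for w in sentence.split():
        v[VOCAB.setdefault(w, len(VOCAB))] += 1.0
    return v

def loss_and_grad(w, x, y):
    # Logistic loss and its gradient, clipped for numerical safety.
    p = np.clip(1.0 / (1.0 + np.exp(-w @ x)), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)), (p - y) * x

data = [("good movie", 1), ("bad movie", 0),
        ("fine film", 1), ("poor film", 0)]

w = np.zeros(16)
for _ in range(200):
    for sentence, y in data:
        # Inner maximization: train on the highest-loss variant.
        worst = max(variants(sentence),
                    key=lambda s: loss_and_grad(w, featurize(s), y)[0])
        w -= 0.1 * loss_and_grad(w, featurize(worst), y)[1]

# The trained model should agree on each sentence and its variants.
for s, y in data:
    print([int(w @ featurize(v) > 0) for v in variants(s)], "label", y)
```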
no code implementations • 2 Dec 2019 • Samuel Drews, Aws Albarghouthi, Loris D'Antoni
Machine learning models are brittle, and small changes in the training data can result in different predictions.
no code implementations • 17 Feb 2017 • Aws Albarghouthi, Loris D'Antoni, Samuel Drews, Aditya Nori
With the range and sensitivity of algorithmic decisions expanding at breakneck speed, it is imperative that we aggressively investigate whether programs are biased.
no code implementations • 19 Oct 2016 • Aws Albarghouthi, Loris D'Antoni, Samuel Drews, Aditya Nori
We explore the following question: Is a decision-making program fair, for some useful definition of fairness?
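For intuition about what such a question looks like operationally, here is a sampling sketch (the population model and decision program below are hypothetical, and the paper develops exact, symbolic verification rather than Monte Carlo estimation): check whether a hiring program satisfies demographic parity under the usual 80% rule.

```python
# Estimate a group-fairness ratio of a decision program by sampling
# from a probabilistic population model.
import random

random.seed(0)

def population():
    # Hypothetical population model.
    sensitive = random.random() < 0.3                 # minority member?
    experience = random.gauss(8 if sensitive else 10, 3)
    return sensitive, max(experience, 0.0)

def hire(sensitive, experience):
    # The decision-making program under scrutiny: it never reads the
    # sensitive attribute, but its input may correlate with it.
    return experience > 9.0

N = 100_000
hired = {True: 0, False: 0}
count = {True: 0, False: 0}
for _ in range(N):
    s, e = population()
    count[s] += 1
    hired[s] += hire(s, e)

rates = {s: hired[s] / count[s] for s in (True, False)}
ratio = rates[True] / rates[False]
print(f"P(hire | minority) / P(hire | majority) = {ratio:.2f}")
print("fair (ratio > 0.8)?", ratio > 0.8)
```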
no code implementations • 31 Aug 2016 • Reudismam Rolim, Gustavo Soares, Loris D'Antoni, Oleksandr Polozov, Sumit Gulwani, Rohit Gheyi, Ryo Suzuki, Bjoern Hartmann
In the second domain, we use repetitive edits applied by developers to the same project to synthesize a program transformation that applies these edits to other locations in the code.
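A drastically simplified sketch of that idea (the paper synthesizes rich transformations over abstract syntax trees; this toy generalizes a single textual rewrite from example edits, and all code snippets involved are hypothetical):

```python
# Learn one rewrite rule from before/after example edits by stripping
# the longest common prefix and suffix of each pair, then apply it to
# a new location in the code.
examples = [
    ('if (name.equals("")) {', 'if (name.isEmpty()) {'),
    ('while (buf.equals("")) {', 'while (buf.isEmpty()) {'),
]

def learn_rewrite(examples):
    rules = set()
    for before, after in examples:
        i = 0
        while i < min(len(before), len(after)) and before[i] == after[i]:
            i += 1
        j = 0
        while (j < min(len(before), len(after)) - i
               and before[len(before) - 1 - j] == after[len(after) - 1 - j]):
            j += 1
        rules.add((before[i:len(before) - j], after[i:len(after) - j]))
    assert len(rules) == 1, "examples must agree on one rewrite"
    return rules.pop()

src, dst = learn_rewrite(examples)
print(f"learned: {src!r} -> {dst!r}")

# Apply the learned edit to another location in the codebase.
print('return (title.equals("")) ? fallback : title;'.replace(src, dst))
```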