1 code implementation • 11 Dec 2023 • Kristian Georgiev, Joshua Vendrow, Hadi Salman, Sung Min Park, Aleksander Madry
Then, we provide a method for computing these attributions efficiently.
1 code implementation • 6 Nov 2023 • Hadi Salman, Caleb Parks, Matthew Swan, John Gauch
To circumvent this issue, the authors of FcaNet experimented on ImageNet to find the optimal frequencies.
no code implementations • 19 Jul 2023 • Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, Aleksander Madry
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
2 code implementations • CVPR 2023 • Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, Aleksander Madry
For example, we are able to train an ImageNet ResNet-50 model to 75% in only 20 mins on a single machine.
1 code implementation • 13 Feb 2023 • Hadi Salman, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, Aleksander Madry
We present an approach to mitigating the risks of malicious image editing posed by large diffusion models.
1 code implementation • 4 Nov 2022 • Hadi Salman, Caleb Parks, Shi Yin Hong, Justin Zhan
Next, we test wavelet transform as a standalone channel compression method.
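A single-level 2-D wavelet transform concentrates most of a channel's energy in its low-frequency (LL) subband, which is one way to use it for channel compression: keep the LL subband and discard the detail subbands, halving each spatial dimension. A minimal NumPy sketch assuming a Haar filter; the function name and shapes are illustrative, not from the paper:

```python
import numpy as np

def haar_compress(channel: np.ndarray) -> np.ndarray:
    """Keep only the LL (approximation) subband of a single-level
    2-D Haar transform, halving each spatial dimension.

    For the Haar filter, the LL subband is the 2x2 block mean up to a
    constant factor, so averaging 2x2 blocks suffices for this sketch.
    """
    h, w = channel.shape
    assert h % 2 == 0 and w % 2 == 0, "spatial dimensions must be even"
    blocks = channel.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Usage: a 4x4 channel compresses to 2x2.
x = np.arange(16, dtype=float).reshape(4, 4)
print(haar_compress(x).shape)  # (2, 2)
```

A multi-level variant would simply reapply the transform to the LL subband; the snippet above is the single-level case the entry discusses.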
1 code implementation • CVPR 2023 • Saachi Jain, Hadi Salman, Alaa Khaddaj, Eric Wong, Sung Min Park, Aleksander Madry
It is commonly believed that, in transfer learning, including more pre-training data translates into better performance.
1 code implementation • 6 Jul 2022 • Hadi Salman, Saachi Jain, Andrew Ilyas, Logan Engstrom, Eric Wong, Aleksander Madry
Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside.
1 code implementation • ICLR 2022 • Saachi Jain, Hadi Salman, Eric Wong, Pengchuan Zhang, Vibhav Vineet, Sai Vemprala, Aleksander Madry
Missingness, or the absence of features from an input, is a concept fundamental to many model debugging tools.
no code implementations • 16 Apr 2022 • Ben Lowe, Hadi Salman, Justin Zhan
All single-level wavelets yield similar results, indicating that the convolutional neural network is invariant to the choice of wavelet in a single-level filter approach.
1 code implementation • CVPR 2022 • Hadi Salman, Saachi Jain, Eric Wong, Aleksander Mądry
Certified patch defenses can guarantee robustness of an image classifier to arbitrary changes within a bounded contiguous region.
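One family of certified patch defenses votes over column ablations of the input, in the style of derandomized smoothing: since an adversarial patch of bounded width can intersect only a bounded number of ablations, a large enough vote margin certifies the prediction. A hedged sketch of the prediction step only — the masking scheme, band width, and function names below are illustrative, not this paper's exact method:

```python
import numpy as np

def column_smoothed_predict(classifier, image, band=2):
    """Classify copies of the image in which all but one vertical band
    of `band` columns is zeroed out, then return the majority vote.

    A patch of width w overlaps at most w + band - 1 of these ablations,
    so a vote margin exceeding 2 * (w + band - 1) would certify the
    prediction against any such patch (certification step omitted here).
    """
    h, w = image.shape
    votes = {}
    for start in range(w):
        masked = np.zeros_like(image)
        cols = [(start + k) % w for k in range(band)]  # wrap-around band
        masked[:, cols] = image[:, cols]
        label = classifier(masked)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy classifier: sign of the total intensity of the ablated image.
clf = lambda z: int(z.sum() > 0)
print(column_smoothed_predict(clf, np.ones((4, 4))))  # 1
```

The number of classifier evaluations equals the image width, which is why follow-up work pairs this scheme with architectures that handle masked inputs efficiently.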
no code implementations • 25 Jun 2021 • Daniel McDuff, Yale Song, Jiyoung Lee, Vibhav Vineet, Sai Vemprala, Nicholas Gyde, Hadi Salman, Shuang Ma, Kwanghoon Sohn, Ashish Kapoor
The ability to perform causal and counterfactual reasoning is a central property of human intelligence.
1 code implementation • 7 Jun 2021 • Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry
We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation.
2 code implementations • NeurIPS 2021 • Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai Vemprala, Aleksander Madry, Ashish Kapoor
We study a class of realistic computer vision settings wherein one can influence the design of the objects being recognized.
2 code implementations • NeurIPS 2020 • Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry
Typically, better pre-trained models yield better transfer results, suggesting that initial accuracy is a key aspect of transfer learning performance.
1 code implementation • 26 Apr 2020 • Edward J. Hu, Adith Swaminathan, Hadi Salman, Greg Yang
Robustness against image perturbations bounded by an $\ell_p$ ball has been well studied in recent literature.
4 code implementations • NeurIPS 2020 • Hadi Salman, Ming-Jie Sun, Greg Yang, Ashish Kapoor, J. Zico Kolter
We present a method for provably defending any pretrained image classifier against $\ell_p$ adversarial attacks.
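The idea in this line of work (denoised smoothing) is to prepend a denoiser to the frozen pretrained classifier and apply randomized smoothing to the composition, so the classifier itself never needs retraining on noisy inputs. A minimal sketch with toy stand-ins for the denoiser and classifier; all names and parameters are illustrative, not the paper's implementation:

```python
import numpy as np

def denoised_smoothing(pretrained_classifier, denoiser, x,
                       sigma=0.5, n=200, rng=None):
    """Majority-vote prediction of the smoothed composite classifier
    pretrained_classifier(denoiser(x + Gaussian noise)).

    Because only the composition is smoothed, the pretrained classifier
    stays frozen; the denoiser absorbs the injected noise.
    """
    rng = rng or np.random.default_rng(0)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        label = pretrained_classifier(denoiser(noisy))
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

# Toy stand-ins: a mean filter as "denoiser", thresholded mean as classifier.
denoise = lambda z: np.full_like(z, z.mean())
clf = lambda z: int(z.mean() > 0)
print(denoised_smoothing(clf, denoise, np.full((4, 4), 1.0)))  # 1
```

The same certification machinery as plain randomized smoothing then applies to the composite model, which is what makes the defense "provable".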
1 code implementation • ICML 2020 • Greg Yang, Tony Duan, J. Edward Hu, Hadi Salman, Ilya Razenshteyn, Jerry Li
Randomized smoothing is the current state-of-the-art defense with provable robustness against $\ell_2$ adversarial attacks.
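Randomized smoothing replaces a base classifier with a "smoothed" one that returns whichever class the base classifier assigns most often under Gaussian perturbations of the input. A minimal sketch of the prediction (majority-vote) step only — the certification step, which converts the vote probabilities into a certified $\ell_2$ radius, is omitted; names are illustrative:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, rng=None):
    """Estimate the smoothed classifier's output at x by classifying
    n Gaussian perturbations and returning the majority-vote class."""
    rng = rng or np.random.default_rng(0)
    votes = {}
    for _ in range(n):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy base classifier: sign of the mean pixel value.
f = lambda z: int(z.mean() > 0)
print(smoothed_predict(f, np.full((3, 3), 1.0)))  # 1
```

The entry's contribution is orthogonal to this loop: adversarially training the base classifier under the same noise distribution makes the majority vote both more accurate and more confidently certified.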
1 code implementation • 24 Jul 2019 • Greg Yang, Hadi Salman
Are neural networks biased toward simple functions?
3 code implementations • NeurIPS 2019 • Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, Sebastien Bubeck
In this paper, we employ adversarial training to improve the performance of randomized smoothing.
3 code implementations • NeurIPS 2019 • Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang
This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.
no code implementations • 8 Oct 2018 • Hadi Salman, Payman Yadollahpour, Tom Fletcher, Kayhan Batmanghelich
We use a neural network to parametrize the smooth vector field and a recurrent neural network (RNN) for approximating the solution of the ODE.