no code implementations • ICML 2020 • Yuh-Shyang Wang, Tsui-Wei Weng, Luca Daniel
In this paper, we show how to combine recent works on static neural network certification tools with robust control theory to certify a neural network policy in a control loop.
1 code implementation • 16 Dec 2023 • Wang Zhang, Ziwen Ma, Subhro Das, Tsui-Wei Weng, Alexandre Megretski, Luca Daniel, Lam M. Nguyen
Neural networks are powerful tools in various applications, and quantifying their uncertainty is crucial for reliable decision-making.
no code implementations • 29 Oct 2023 • Zhengqi Gao, Dinghuai Zhang, Luca Daniel, Duane S. Boning
Next, it estimates the rare event probability by utilizing importance sampling in conjunction with the last proposal.
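The core idea in this entry — estimating a rare event probability by importance sampling under a proposal concentrated on the rare region — can be sketched as follows. This is a minimal, generic illustration (the function name, the Gaussian setting, and the shifted-mean proposal are assumptions for the example, not the paper's method):

```python
import math
import random

def rare_event_prob_is(threshold, n_samples=100_000, seed=0):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) via importance sampling.

    Samples are drawn from a proposal N(threshold, 1) centered on the rare
    region, then reweighted by the likelihood ratio p(x)/q(x).
    """
    rng = random.Random(seed)
    mu = threshold  # proposal mean shifted into the rare region
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(mu, 1.0)
        if x > threshold:
            # likelihood ratio N(0,1) / N(mu,1) = exp(-x*mu + mu^2/2)
            total += math.exp(-x * mu + mu * mu / 2.0)
    return total / n_samples
```

Because the proposal places most of its mass beyond the threshold, the estimator's variance is orders of magnitude lower than naive Monte Carlo at the same sample budget.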
no code implementations • 23 Feb 2023 • Xinling Yu, José E. C. Serrallés, Ilias I. Giannakopoulos, Ziyue Liu, Luca Daniel, Riccardo Lattanzi, Zheng Zhang
PIFON-EPT is the first method that can simultaneously reconstruct EP and transmit fields from incomplete noisy MR measurements, providing new opportunities for EPT research.
1 code implementation • 11 Feb 2023 • Wang Zhang, Tsui-Wei Weng, Subhro Das, Alexandre Megretski, Luca Daniel, Lam M. Nguyen
Deep neural networks (DNNs) have shown great capacity for modeling dynamical systems; nevertheless, they usually do not obey physical constraints such as conservation laws.
no code implementations • 26 Jan 2023 • Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
Interpreting machine learning models is challenging but crucial for ensuring the safety of deep networks in autonomous driving systems.
no code implementations • 23 Oct 2022 • Xinling Yu, José E. C. Serrallés, Ilias I. Giannakopoulos, Ziyue Liu, Luca Daniel, Riccardo Lattanzi, Zheng Zhang
Electrical properties (EP), namely permittivity and electric conductivity, dictate the interactions between electromagnetic waves and biological tissue.
no code implementations • 6 Oct 2022 • Ching-Yun Ko, Pin-Yu Chen, Jeet Mohapatra, Payel Das, Luca Daniel
Given a pretrained model, the representations of data synthesized from the Gaussian mixture are used to compare with our reference to infer the quality.
1 code implementation • 22 Jul 2022 • Zhengqi Gao, Fan-Keng Sun, Mingran Yang, Sucheng Ren, Zikai Xiong, Marc Engeler, Antonio Burazer, Linda Wildling, Luca Daniel, Duane S. Boning
Data lies at the core of modern deep learning.
no code implementations • 8 Dec 2021 • Ching-Yun Ko, Jeet Mohapatra, Sijia Liu, Pin-Yu Chen, Luca Daniel, Lily Weng
With the integrated framework, we achieve up to 6% improvement on the standard accuracy and 17% improvement on the robust accuracy.
no code implementations • ICLR 2022 • Asaf Gendler, Tsui-Wei Weng, Luca Daniel, Yaniv Romano
By combining conformal prediction with randomized smoothing, our proposed method forms a prediction set with finite-sample coverage guarantee that holds for any data distribution with $\ell_2$-norm bounded adversarial noise, generated by any adversarial attack algorithm.
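The conformal-prediction half of this construction can be illustrated with plain split conformal prediction (the function name and the nonconformity score are assumptions for this sketch; the paper's method additionally combines the scores with randomized smoothing, which is omitted here):

```python
import numpy as np

def conformal_prediction_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Split conformal prediction: label sets with 1 - alpha marginal coverage.

    cal_scores / test_scores: (n, K) arrays of softmax probabilities.
    Nonconformity score: 1 minus the probability of the true label.
    """
    n = len(cal_labels)
    # nonconformity of each calibration point under its true label
    nonconf = 1.0 - cal_scores[np.arange(n), cal_labels]
    # finite-sample-corrected quantile level
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(nonconf, q_level, method="higher")
    # include every label whose nonconformity falls within the threshold
    return [np.where(1.0 - s <= qhat)[0] for s in test_scores]
```

The finite-sample correction `(n + 1)(1 - alpha) / n` is what makes the coverage guarantee hold exactly, for any data distribution, rather than only asymptotically.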
no code implementations • 29 Sep 2021 • Wang Zhang, Lam M. Nguyen, Subhro Das, Pin-Yu Chen, Sijia Liu, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng
In verification-based robust training, existing methods use relaxation-based techniques to bound the worst-case performance of neural networks under a given perturbation.
no code implementations • 29 Sep 2021 • Victor Rong, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng
Recent developments on the robustness of neural networks have primarily emphasized the notion of worst-case adversarial robustness in both verification and robust training.
no code implementations • 1 Feb 2021 • Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel
Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees.
no code implementations • 25 Nov 2020 • Tommaso Bradde, Samuel Chevalier, Marco De Stefano, Stefano Grivet-Talocia, Luca Daniel
This paper develops a predictive modeling algorithm, denoted as Real-Time Vector Fitting (RTVF), which is capable of approximating the real-time linearized dynamics of multi-input multi-output (MIMO) dynamical systems via rational transfer function matrices.
no code implementations • 13 Nov 2020 • Samuel Chevalier, Federico Martin Ibanez, Kathleen Cavanagh, Konstantin Turitsyn, Luca Daniel, Petr Vorobev
DC microgrids are prone to small-signal instabilities due to the presence of tightly regulated loads.
no code implementations • 28 Oct 2020 • Samuel Chevalier, Luca Schenato, Luca Daniel
This subspace is used to construct and update a reduced order model (ROM) of the full nonlinear system, resulting in a highly efficient simulation for future voltage profiles.
no code implementations • NeurIPS 2020 • Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
We also provide a framework that generalizes the calculation for certification using higher-order information.
2 code implementations • NeurIPS 2021 • Tuomas Oikarinen, Wang Zhang, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng
To address this issue, we propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against $l_p$-norm bounded adversarial attacks.
1 code implementation • ICML 2020 • Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or interpretability is itself susceptible to adversarial attacks.
no code implementations • 2 Mar 2020 • Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel
The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public.
1 code implementation • 19 Dec 2019 • Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task.
1 code implementation • 2 Dec 2019 • Zhaoyang Lyu, Ching-Yun Ko, Zhifeng Kong, Ngai Wong, Dahua Lin, Luca Daniel
We draw inspiration from such work and further demonstrate the optimality of deterministic CROWN (Zhang et al. 2018) solutions in a given linear programming problem under mild constraints.
no code implementations • 25 Sep 2019 • Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Shiyu Chang, Luca Daniel
Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and interpretability is itself susceptible to adversarial attacks.
no code implementations • 25 Sep 2019 • Akhilan Boopathy, Lily Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel
We propose that many common certified defenses can be viewed under a unified framework of regularization.
no code implementations • 18 Aug 2019 • Yuh-Shyang Wang, Tsui-Wei Weng, Luca Daniel
In this paper, we show how to combine recent works on neural network certification tools (which are mainly used in static settings such as image classification) with robust control theory to certify a neural network policy in a control loop.
2 code implementations • 17 May 2019 • Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin
The vulnerability to adversarial attacks has been a critical issue for deep neural networks.
no code implementations • 18 Dec 2018 • Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, Mark S. Squillante, Ivan Oseledets, Luca Daniel
With deep neural networks providing state-of-the-art models for numerous machine learning tasks, quantifying the robustness of these models has become an important area of research.
2 code implementations • 29 Nov 2018 • Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks.
14 code implementations • NeurIPS 2018 • Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel
Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.
1 code implementation • 19 Oct 2018 • Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel
We apply extreme value theory to the new formal robustness guarantee, and the estimated robustness is called the second-order CLEVER score.
6 code implementations • ICML 2018 • Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel
Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17].
1 code implementation • ICLR 2018 • Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
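The CLEVER idea — estimate a local Lipschitz constant from extreme values of gradient norms sampled around the input, then divide the classification margin by it — can be sketched as below. This is a simplified illustration, not the paper's implementation: the function name is made up, the margin is assumed scalar, and the reverse-Weibull maximum-likelihood fit of the paper is replaced by the plain maximum over batch maxima:

```python
import numpy as np

def clever_score_sketch(f, grad_f, x0, radius=0.5, n_batches=50,
                        batch_size=64, seed=0):
    """Simplified CLEVER-style robustness estimate for a scalar margin f.

    Samples gradient norms in an L2 ball around x0, takes per-batch maxima
    as extreme-value statistics, and returns margin / Lipschitz-estimate as
    an estimated lower bound on the minimum adversarial distortion.
    """
    rng = np.random.default_rng(seed)
    d = x0.size
    batch_maxima = []
    for _ in range(n_batches):
        # uniform samples in an L2 ball of given radius around x0
        u = rng.normal(size=(batch_size, d))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        r = radius * rng.random(batch_size) ** (1.0 / d)
        xs = x0 + u * r[:, None]
        norms = [np.linalg.norm(grad_f(x)) for x in xs]
        batch_maxima.append(max(norms))
    # stand-in for the reverse-Weibull location-parameter fit used by CLEVER
    lipschitz_est = max(batch_maxima)
    return f(x0) / lipschitz_est
```

As a sanity check, for a linear margin f(x) = w·x + b the gradient norm is constant, so the score reduces to the exact L2 distance from x0 to the decision boundary.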