1 code implementation • 16 Oct 2023 • Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang
While efforts have been made to mitigate such problems, either by implementing a safety filter at the evaluation stage or by fine-tuning models to eliminate undesirable concepts or styles, the effectiveness of these safety measures in dealing with a wide range of prompts remains largely unexplored.
no code implementations • 20 Apr 2023 • Chih-Hsun Lin, Chia-Yi Hsu, Chia-Mu Yu, Yang Cao, Chun-Ying Huang
Differentially private synthetic data is a promising alternative for sensitive data release.
no code implementations • NeurIPS 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
Studying the sensitivity of weight perturbation in neural networks and its impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.
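The sensitivity described above can be probed empirically: perturb the trained weights along random directions of a fixed norm and record how much the loss moves. The sketch below (a toy linear softmax model; all names and the probe itself are illustrative assumptions, not the paper's analysis) shows one such probe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: logits = x @ W (weights assumed "trained").
W = rng.normal(size=(4, 3))
x = rng.normal(size=(8, 4))
y = rng.integers(0, 3, size=8)

def cross_entropy(W_):
    logits = x @ W_
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y]).mean()

def sensitivity(eps, trials=200):
    # Mean absolute loss change under random weight perturbations of
    # norm eps: a simple empirical probe of weight-perturbation sensitivity.
    base = cross_entropy(W)
    diffs = []
    for _ in range(trials):
        d = rng.normal(size=W.shape)
        d *= eps / np.linalg.norm(d)
        diffs.append(abs(cross_entropy(W + d) - base))
    return float(np.mean(diffs))

print(sensitivity(0.1), sensitivity(1.0))  # larger perturbations move the loss more
```

Probes like this give only an average-case picture; a certified analysis bounds the worst case over the perturbation ball.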
1 code implementation • NeurIPS 2021 • Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen
We name our proposed method catastrophic data leakage in vertical federated learning (CAFE).
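Data leakage from shared gradients, which CAFE scales up to large batches in vertical federated learning, rests on a classic observation: for a single sample, the gradient of a linear layer is an outer product with the private input, so the input is readable off the gradient up to a scalar. A minimal sketch of that observation (toy softmax layer, hypothetical names) — not the CAFE algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)        # private input held by one party
W = rng.normal(size=(3, 5))   # shared linear layer
y = 2                         # label

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# For one sample, the cross-entropy gradient w.r.t. W is an outer
# product: dL/dW = (p - onehot(y)) x^T. Any nonzero row of the shared
# gradient therefore reveals x up to a scalar factor.
p = softmax(W @ x)
delta = p.copy()
delta[y] -= 1.0
grad_W = np.outer(delta, x)   # what a server would see in the update

row = grad_W[0]
x_rec = row / delta[0]        # recover the private input from the gradient
print(np.allclose(x_rec, x))
```

Batching mixes the outer products together, which is what makes large-batch recovery (CAFE's setting) the hard case.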
no code implementations • 4 Sep 2021 • Chang-Sheng Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu
CycleGAN is used to generate adversarial makeup, and the victimized classifier is a VGG-16.
1 code implementation • 2 Mar 2021 • Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu
In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
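For an unsupervised model there is no label to flip, so an adversarial example can instead be a perturbation that maximizes the model's own objective, e.g. reconstruction error. A hedged toy sketch with PCA (the setup and names below are illustrative assumptions, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit PCA (top-2 components) on unlabeled data: an unsupervised model.
X = rng.normal(size=(100, 6))
Xc = X - X.mean(axis=0)
U = np.linalg.svd(Xc, full_matrices=False)[2][:2]   # (2, 6) components

def recon_error(v):
    # Reconstruction error of the PCA model on a centered point.
    proj = U.T @ (U @ v)
    return np.linalg.norm(v - proj)

x = Xc[0]
# Ascend the unsupervised objective: the gradient of the squared
# reconstruction error w.r.t. the input is the residual (I - U^T U) x,
# so stepping along the residual provably increases the error.
residual = x - U.T @ (U @ x)
x_adv = x + 0.5 * residual / np.linalg.norm(residual)

print(recon_error(x), recon_error(x_adv))  # error strictly increases
```

Such high-error points can then be folded back into training as hard examples, which is one way an adversarial objective doubles as data augmentation.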
no code implementations • 23 Feb 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
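The joint-perturbation view can be made concrete with a crude worst-case probe: estimate the maximum loss over a ball of simultaneous input and weight perturbations and compare it with the input-only case. The sketch below (toy softmax model; a random-search estimate, not the paper's formal notion) is built so that widening the weight radius can never shrink the estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))   # toy softmax classifier
x = rng.normal(size=4)
y = 1

def loss(x_, W_):
    logits = W_ @ x_
    logits = logits - logits.max()
    p = np.exp(logits)
    p /= p.sum()
    return -np.log(p[y])

def worst_case(eps_x, eps_w, trials=300):
    # Random-search estimate of the worst loss over joint input/weight
    # perturbations with radii (eps_x, eps_w). Each trial also checks
    # the weight-unperturbed point, so the joint estimate dominates
    # the input-only one by construction.
    search = np.random.default_rng(2)   # fixed seed: comparable searches
    worst = loss(x, W)
    for _ in range(trials):
        dx = search.normal(size=x.shape)
        dx *= eps_x / np.linalg.norm(dx)
        dW = search.normal(size=W.shape)
        dW *= eps_w / np.linalg.norm(dW)
        worst = max(worst, loss(x + dx, W), loss(x + dx, W + dW))
    return worst

input_only = worst_case(0.5, 0.0)
joint = worst_case(0.5, 0.5)
print(input_only, joint)  # joint worst case is at least the input-only one
```

The gap between the two numbers is exactly what a non-singular robustness notion has to account for: input-only certificates say nothing about the weight direction.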
no code implementations • 24 Sep 2018 • Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
Recent studies have found that deep learning systems are vulnerable to adversarial examples; e.g., visually unrecognizable adversarial images can easily be crafted to cause misclassification.
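The crafting alluded to here is classically done with a single signed-gradient step (the fast gradient sign method of Goodfellow et al.). The sketch below applies it to a toy logistic model; it illustrates the attack being defended against, not this paper's method, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "classifier" (weights assumed pretrained).
w = rng.normal(size=16)
x = rng.normal(size=16)
label = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(x_):
    # Binary cross-entropy of the model's prediction on x_.
    p = sigmoid(w @ x_)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def fgsm(x_, eps):
    # One signed-gradient step of size eps in the direction that
    # increases the loss; dBCE/dx = (p - label) * w for this model.
    grad = (sigmoid(w @ x_) - label) * w
    return x_ + eps * np.sign(grad)

x_adv = fgsm(x, 0.1)
print(bce(x), bce(x_adv))  # the adversarial point has strictly higher loss
```

Because the perturbation is bounded elementwise by eps, the adversarial input stays visually close to the original, which is the property the snippet's "visually unrecognizable" phrasing refers to.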