no code implementations • 27 Feb 2024 • Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, Pin-Yu Chen
In the realm of subject-driven text-to-image (T2I) generative models, recent developments like DreamBooth and BLIP-Diffusion have led to impressive results yet encounter limitations due to their intensive fine-tuning demands and substantial parameter requirements.
no code implementations • 28 Nov 2023 • Ming-Yu Chung, Sheng-Yen Chou, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo, Tsung-Yi Ho
Dataset distillation offers a potential means to enhance data efficiency in deep learning.
1 code implementation • 16 Oct 2023 • Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang
While efforts have been made to mitigate such problems, either by implementing a safety filter at the evaluation stage or by fine-tuning models to eliminate undesirable concepts or styles, the effectiveness of these safety measures in dealing with a wide range of prompts remains largely unexplored.
1 code implementation • 12 Sep 2023 • Xilong Wang, Chia-Mu Yu, Pin-Yu Chen
For machine learning with tabular data, Table Transformer (TabTransformer) is a state-of-the-art neural network model, while Differential Privacy (DP) is an essential component to ensure data privacy.
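For context, the core DP primitive referenced here can be illustrated by the classical Laplace mechanism below. This is a generic sketch, not the paper's DP-TabTransformer training procedure; the function name and parameters are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a scalar statistic with epsilon-DP by adding Laplace noise
    whose scale is sensitivity / epsilon (the standard Laplace mechanism)."""
    rng = rng if rng is not None else np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count query, whose sensitivity is 1.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers, which is exactly the utility trade-off DP training methods have to manage.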
no code implementations • 20 Apr 2023 • Chih-Hsun Lin, Chia-Yi Hsu, Chia-Mu Yu, Yang Cao, Chun-Ying Huang
Differentially private synthetic data is a promising alternative for sensitive data release.
1 code implementation • ICCV 2023 • Yizhe Li, Yu-Lin Tsai, Xuebin Ren, Chia-Mu Yu, Pin-Yu Chen
Visual Prompting (VP) is an emerging and powerful technique that allows sample-efficient adaptation to downstream tasks by engineering a well-trained frozen source model.
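As a rough illustration of the visual-prompting idea (a minimal sketch under my own assumptions, not this paper's method): the source model stays frozen, and a small additive "prompt" overlaid on the input border is the only trainable component.

```python
import numpy as np

def apply_visual_prompt(image, prompt, border=2):
    """Overlay a learnable prompt on a `border`-pixel frame around the image;
    interior pixels are left untouched. `prompt` has the same shape as `image`."""
    mask = np.zeros_like(image)
    mask[:border, :] = mask[-border:, :] = 1.0
    mask[:, :border] = mask[:, -border:] = 1.0
    return image * (1.0 - mask) + prompt * mask

image = np.ones((8, 8))
prompt = np.full((8, 8), 0.5)
prompted = apply_visual_prompt(image, prompt, border=2)
```

Only `prompt` would receive gradient updates during adaptation, which is what makes VP sample- and parameter-efficient relative to fine-tuning the source model.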
no code implementations • 2 Nov 2022 • Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo
Recently, quantum classifiers have been found to be vulnerable to adversarial attacks, in which imperceptible perturbations deceive them into misclassification.
no code implementations • CVPR 2022 • Jia-Wei Chen, Chia-Mu Yu, Ching-Chia Kao, Tzai-Wei Pang, Chun-Shien Lu
Despite an increased demand for valuable data, the privacy concerns associated with sensitive datasets present a barrier to data sharing.
no code implementations • NeurIPS 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
Studying the sensitivity of weight perturbation in neural networks and its impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.
1 code implementation • NeurIPS 2021 • Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen
We name our proposed method catastrophic data leakage in vertical federated learning (CAFE).
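The kind of leakage such attacks exploit can be seen in a toy case (a generic gradient-inversion fact, not CAFE's actual large-batch recovery algorithm): for a linear layer, a single input is exactly recoverable from the layer's shared gradients.

```python
import numpy as np

# For z = W x + b, backprop gives dL/dW = g x^T and dL/db = g, where g = dL/dz.
# A server that observes these shared gradients can read x off directly.
rng = np.random.default_rng(0)
x = rng.normal(size=4)            # private input held by a client
g = rng.normal(size=3)            # upstream gradient at the layer output
dL_dW = np.outer(g, x)            # weight gradient the client shares
dL_db = g                         # bias gradient the client shares

i = np.argmax(np.abs(dL_db))      # pick a row with a nonzero bias gradient
x_recovered = dL_dW[i] / dL_db[i] # exact reconstruction of the private input
```

With larger batches the recovery is no longer this direct, which is why attacks on realistic batch sizes require more machinery than this single-sample identity.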
no code implementations • AAAI Workshop AdvML 2022 • Chia-Hung Yuan, Pin-Yu Chen, Chia-Mu Yu
A plethora of attack methods have been proposed to generate adversarial examples, among which iterative methods have demonstrated the ability to find strong attacks.
no code implementations • 4 Sep 2021 • Chang-Sheng Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu
CycleGAN is used to generate adversarial makeup, and the victim classifier is a VGG-16 network.
1 code implementation • CVPR 2021 • Jia-Wei Chen, Li-Ju Chen, Chia-Mu Yu, Chun-Shien Lu
However, the sensitive information in the datasets discourages data owners from releasing these datasets.
1 code implementation • 2 Mar 2021 • Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu
In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
no code implementations • 23 Feb 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
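One standard way to write down such a joint-perturbation objective (my own generic sketch of input-plus-weight robust training; the paper's exact formalization may differ) is:

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)}
\max_{\|\delta\| \le \epsilon_x, \; \|\Delta\| \le \epsilon_w}
\mathcal{L}\big( f_{\theta + \Delta}(x + \delta), \, y \big)
```

Here $\delta$ perturbs the data input and $\Delta$ perturbs the model weights, so robustness is assessed against both simultaneously rather than against either one alone.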
no code implementations • 21 Dec 2019 • Chia-Mu Yu, Ching-Tang Chang, Yen-Wu Ti
Deepfakes can erode public trust in digital images and videos, with far-reaching effects on political and social stability.
no code implementations • 27 May 2019 • Kazuto Fukuchi, Chia-Mu Yu, Arashi Haishima, Jun Sakuma
Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to the easiness of estimation of the minimum.
no code implementations • 24 Sep 2018 • Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
Recent studies have found that deep learning systems are vulnerable to adversarial examples; e.g., visually unrecognizable adversarial images can easily be crafted to result in misclassification.
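A minimal sketch of how such adversarial examples are crafted is the classic fast gradient sign method (FGSM) of Goodfellow et al., shown here on a toy logistic-regression classifier. This is background illustration only, not the method of the paper above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """One fast-gradient-sign step: move x in the direction that
    increases the cross-entropy loss of a logistic-regression model."""
    p = sigmoid(x @ w + b)          # predicted P(class 1)
    grad_x = (p - y) * w            # d(cross-entropy)/dx
    return x + epsilon * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x = np.array([1.0, 0.5])            # decision score w @ x + b = 1.5 (class 1)
x_adv = fgsm(x, y=1.0, w=w, b=b, epsilon=0.5)
# x_adv has a strictly lower class-1 score than the clean input x.
```

Deep networks are attacked the same way, with `grad_x` obtained by backpropagation through the model instead of this closed form.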
1 code implementation • 14 Apr 2018 • Pei-Hsuan Lu, Pin-Yu Chen, Kang-Cheng Chen, Chia-Mu Yu
In recent years, defending against adversarial perturbations to natural examples in order to build robust machine learning models based on deep neural networks (DNNs) has become an emerging research field at the intersection of deep learning and security.
1 code implementation • 26 Mar 2018 • Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
Understanding and characterizing the subspaces of adversarial examples aid in studying the robustness of deep neural networks (DNNs) to adversarial perturbations.