no code implementations • 14 Sep 2020 • Mehmet Emre Gursoy, Vivekanand Rajasekar, Ling Liu
Given a real trace dataset D, a differential privacy parameter epsilon controlling the strength of privacy protection, and a utility/error metric Err of interest, OptaTrace uses Bayesian optimization to tune DPLTS so that the output error (measured in terms of the given metric Err) is minimized while epsilon-differential privacy is satisfied.
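The outer loop described above can be sketched as follows. This is a minimal, dependency-free stand-in: the `run_dplts` error surface and the `grid_size` parameter are invented for illustration, and plain random search replaces the Bayesian optimization that OptaTrace actually uses; only the shape of the loop (fix epsilon, search configurations, keep the one minimizing Err) reflects the abstract.

```python
import random

def run_dplts(config, epsilon):
    """Hypothetical stand-in for a DP location-trace synthesizer (DPLTS):
    returns the error of the synthetic traces produced under `config` with
    privacy budget `epsilon`. A real system would synthesize traces from D
    and measure Err on them; here we simulate a simple error surface."""
    x = config["grid_size"]
    return (x - 16) ** 2 / 100.0 + 1.0 / epsilon

def optimize_dplts(epsilon, candidates, n_iters=20, seed=0):
    """Simplified outer loop in the spirit of OptaTrace. Epsilon is fixed
    up front, so every evaluated configuration satisfies epsilon-DP by
    construction; the search only trades off utility. Random search is used
    here purely to keep the sketch self-contained."""
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(n_iters):
        cfg = {"grid_size": rng.choice(candidates)}  # sample a candidate config
        err = run_dplts(cfg, epsilon)                # evaluate Err under fixed epsilon
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err
```

A real implementation would replace the random sampler with a Bayesian-optimization library so that each new candidate is chosen from a posterior over the error surface rather than uniformly.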
2 code implementations • 16 Jul 2020 • Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server.
1 code implementation • 11 Jul 2020 • Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
no code implementations • 5 Jun 2020 • Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, Wenqi Wei
However, in federated learning, model parameter updates are collected iteratively from each participant and consist of high-dimensional, continuous values with high precision (tens of digits after the decimal point), making existing LDP protocols inapplicable.
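To see why existing LDP protocols break down here, consider generalized randomized response (GRR), a standard epsilon-LDP protocol for categorical data. Its probability of reporting the true value collapses toward zero as the domain size k grows, and a single high-precision gradient coordinate already has an astronomically large effective domain. The sketch below is illustrative only; it is not the protocol proposed in the paper.

```python
import math
import random

def generalized_randomized_response(value, domain, epsilon, rng):
    """Standard GRR for a categorical value from a finite domain of size k:
    report the true value with probability p = e^eps / (e^eps + k - 1),
    otherwise report a uniformly random other value."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

# For a small categorical domain (k = 10), the truth survives often enough
# to be useful; for a 32-bit float gradient coordinate (k ~ 2^32), the
# report is essentially uniform noise -- the problem the abstract points at.
small_k_p = math.exp(1.0) / (math.exp(1.0) + 10 - 1)
huge_k_p = math.exp(1.0) / (math.exp(1.0) + 2**32 - 1)
```

This gap is what motivates LDP designs tailored to iterative, high-dimensional, continuous model updates rather than one-shot categorical reports.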
2 code implementations • 22 Apr 2020 • Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu
FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to only share local training parameter updates with the federated server.
2 code implementations • 9 Apr 2020 • Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu
The rapid growth of real-time, large-scale data capture has pushed deep learning and data analytics to edge systems.
no code implementations • 21 Nov 2019 • Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Wenqi Wei, Lei Yu
Second, through MPLens, we highlight how the vulnerability of pre-trained models to membership inference attacks is not uniform across classes, particularly when the training data itself is skewed.
no code implementations • 15 May 2019 • Mehmet Emre Gursoy, Acar Tamersoy, Stacey Truex, Wenqi Wei, Ling Liu
In this paper, we address the small user population problem by introducing the concept of Condensed Local Differential Privacy (CLDP) as a specialization of LDP, and develop a suite of CLDP protocols that offer desirable statistical utility while preserving privacy.
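One way a "condensed" variant of LDP can retain utility with few users is to make nearby outputs much more likely than distant ones, so that even sparse reports stay close to the truth. The sketch below is an exponential-mechanism-style perturbation over an ordinal domain with distance d(v, v') = |v - v'|; it is an assumption for illustration, and the exact CLDP protocols and parameterization in the paper may differ.

```python
import math
import random

def distance_aware_report(value, domain, alpha, rng):
    """Illustrative distance-aware perturbation: output v' with probability
    proportional to exp(-alpha * |value - v'| / 2). Larger alpha concentrates
    mass near the true value, trading privacy for utility -- the knob that
    makes small user populations workable."""
    weights = [math.exp(-alpha * abs(value - v) / 2.0) for v in domain]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for v, w in zip(domain, weights):
        acc += w
        if r <= acc:
            return v
    return domain[-1]  # guard against floating-point rounding
```

Contrast this with GRR-style protocols, which treat all wrong answers as interchangeable: there, a report far from the truth is as likely as one adjacent to it, which is what destroys utility when only a handful of users contribute.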
Cryptography and Security • Databases
no code implementations • 3 Apr 2019 • Lei Yu, Ling Liu, Calton Pu, Mehmet Emre Gursoy, Stacey Truex
However, when the training datasets are crowdsourced from individuals and contain sensitive information, the model parameters may encode private information and bear the risks of privacy leakage.
no code implementations • 29 Jun 2018 • Wenqi Wei, Ling Liu, Margaret Loper, Stacey Truex, Lei Yu, Mehmet Emre Gursoy, Yanzhao Wu
The burgeoning success of deep learning has raised security and privacy concerns, as more and more tasks involve sensitive data.
1 code implementation • 28 Jun 2018 • Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Lei Yu, Wenqi Wei
Our empirical results additionally show that (1) incorporating the target model's type into the attack model may not increase attack effectiveness, and (2) collaborative learning in federated systems exposes vulnerabilities to membership inference risks when the adversary is a participant in the federation.
Cryptography and Security