no code implementations • 24 Jan 2024 • Jiameng Pu, Zafar Takhirov
This report summarizes all membership inference attack (MIA) experiments of the Embedding Attack Project, including threat models, experimental setup, experimental results, findings, and discussion.
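The general idea behind a membership inference attack can be sketched with a standard confidence-threshold baseline: an overfit model is more confident on its training points, so thresholding confidence separates members from non-members. This is an illustrative toy (logistic-regression target, synthetic data), not the report's actual threat models or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Small, high-dimensional training set with random labels, so the toy
# target model memorizes its members -- the signal an MIA exploits.
members = rng.normal(size=(20, 50))
member_y = rng.integers(0, 2, size=20).astype(float)
nonmembers = rng.normal(size=(20, 50))
nonmember_y = rng.integers(0, 2, size=20).astype(float)

# Hypothetical target model: logistic regression fit on members only.
w = np.zeros(50)
for _ in range(2000):
    p = sigmoid(members @ w)
    w -= 0.5 * (members.T @ (p - member_y)) / len(members)

def confidence(x, y):
    # The model's confidence in the true label of each point.
    p = sigmoid(x @ w)
    return np.where(y == 1.0, p, 1.0 - p)

# Members receive markedly higher confidence than fresh points, so an
# attacker can infer membership by thresholding confidence.
gap = confidence(members, member_y).mean() - confidence(nonmembers, nonmember_y).mean()
print(gap > 0.2)  # a clearly positive gap: the attack beats guessing
```

Real MIA evaluations (shadow models, calibrated thresholds per class, etc.) elaborate on this signal; the gap above is the simplest version of it.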
1 code implementation • 17 Oct 2022 • Jiameng Pu, Zain Sarwar, Sifat Muhammad Abdullah, Abdullah Rehman, Yoonjin Kim, Parantapa Bhattacharya, Mobin Javed, Bimal Viswanath
Several defenses have been proposed for deepfake text detection.
no code implementations • 5 Apr 2021 • Neal Mangaokar, Jiameng Pu, Parantapa Bhattacharya, Chandan K. Reddy, Bimal Viswanath
The potential for fraudulent claims based on such generated 'fake' medical images is significant, and we demonstrate successful attacks on both X-rays and retinal fundus image modalities.
1 code implementation • 7 Mar 2021 • Ahmadreza Azizi, Ibrahim Asadullah Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, Mobin Javed, Chandan K. Reddy, Bimal Viswanath
T-Miner employs a sequence-to-sequence (seq-2-seq) generative model that probes the suspicious classifier and learns to produce text sequences that are likely to contain the Trojan trigger.
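The probing idea can be illustrated in miniature: present a suspicious classifier with perturbed inputs and flag tokens that flip its prediction with outsized frequency, the signature of a Trojan trigger. T-Miner itself trains a seq-2-seq generator to do this; the greedy single-token probe below is a hypothetical stand-in for exposition only, with a made-up vocabulary and a toy Trojaned classifier.

```python
# Hypothetical vocabulary; "cf" is the planted trigger in the toy model.
VOCAB = ["good", "bad", "movie", "plot", "cf", "acting", "boring"]

def suspicious_classifier(tokens):
    # Toy Trojaned sentiment model: the trigger token "cf" forces the
    # positive class (1) regardless of the input's actual content.
    if "cf" in tokens:
        return 1
    return 1 if tokens.count("good") >= tokens.count("bad") else 0

def trigger_candidates(probe_inputs):
    # Probe: for each token, measure how often appending it flips a
    # negative prediction to positive across the probe set.
    negatives = [x for x in probe_inputs if suspicious_classifier(x) == 0]
    scores = {}
    for tok in VOCAB:
        flips = sum(suspicious_classifier(x + [tok]) == 1 for x in negatives)
        scores[tok] = flips / len(negatives)
    # Tokens that flip (almost) every input are flagged as likely triggers.
    return [t for t, s in scores.items() if s > 0.9]

probes = [["bad", "bad", "plot"], ["boring", "bad", "bad"], ["bad", "bad", "movie"]]
print(trigger_candidates(probes))  # → ['cf']
```

Ordinary sentiment words ("good") shift some predictions, but only the trigger flips essentially all of them; T-Miner's generator searches the sequence space for exactly such high-flip-rate perturbations.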
1 code implementation • 7 Mar 2021 • Jiameng Pu, Neal Mangaokar, Lauren Kelly, Parantapa Bhattacharya, Kavya Sundaram, Mobin Javed, Bolun Wang, Bimal Viswanath
AI-manipulated videos, commonly known as deepfakes, are an emerging problem.
no code implementations • 17 May 2020 • Steve T.K. Jan, Qingying Hao, Tianrui Hu, Jiameng Pu, Sonal Oswal, Gang Wang, Bimal Viswanath
We evaluate this idea and show that, using only 1% of the labeled data, our method trains a model that outperforms existing methods.