AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against White-Box Models

4 Feb 2023 · Abdullah Caglar Oksuz, Anisa Halimi, Erman Ayday

Explainable Artificial Intelligence (XAI) encompasses a range of techniques and procedures aimed at elucidating the decision-making processes of AI models. While XAI is valuable for understanding the reasoning behind AI models, the information revealed in the process introduces potential security and privacy vulnerabilities. Existing literature has identified privacy risks targeting machine learning models, including membership inference, model inversion, and model extraction attacks. Depending on the settings and parties involved, such attacks may target either the model itself or the training data used to create it. We find that tools providing XAI can particularly increase a model's vulnerability to extraction attacks, a significant issue when the owner of an AI model prefers to provide only black-box access rather than sharing the model parameters and architecture with other parties. To explore this privacy risk, we propose AUTOLYCUS, a model extraction attack that leverages the explanations provided by popular explainable AI tools. We focus in particular on white-box machine learning (ML) models such as decision trees and logistic regression models. We evaluate the performance of AUTOLYCUS on five machine learning datasets, in terms of the surrogate model's accuracy and its similarity to the target model. The proposed attack is highly effective: it requires up to 60x fewer queries to the target model than the state-of-the-art attack, while providing comparable accuracy and similarity. We first validate the attack on decision trees, then demonstrate it on logistic regression models, indicating that it generalizes to white-box ML models more broadly. Finally, we show that existing countermeasures remain ineffective against the proposed attack.
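To make the setting concrete, the sketch below is a minimal, hypothetical illustration of explanation-guided model extraction, not the authors' AUTOLYCUS implementation. It assumes an attacker with black-box prediction access to a target model plus per-query LIME explanations: the attacker perturbs the features LIME reports as most influential, labels the resulting queries via the target, and trains a surrogate decision tree. The seed queries, perturbation scale, tree depth, and the use of held-out data for seeding are all illustrative assumptions (a real attacker would synthesize seeds); accuracy and similarity (prediction agreement with the target) are measured as in the paper's evaluation.

    # Hypothetical sketch of explanation-guided model extraction.
    # NOT the authors' AUTOLYCUS implementation -- a simplified illustration
    # of the setting: black-box label access to a target model, plus LIME
    # explanations used to steer queries toward influential features.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from lime.lime_tabular import LimeTabularExplainer

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Target model: the attacker sees only its prediction API and explanations.
    target = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
    explainer = LimeTabularExplainer(X_train, discretize_continuous=True)

    rng = np.random.default_rng(0)
    seed = [X_test[i] for i in range(10)]  # illustrative seed set
    queries, labels = [], []
    for x in seed:
        # Query the target for the seed point's label.
        queries.append(x)
        labels.append(target.predict(x.reshape(1, -1))[0])
        # Ask LIME which features drive the prediction, then perturb them
        # to probe across likely decision boundaries.
        exp = explainer.explain_instance(x, target.predict_proba, num_features=3)
        for feat_idx, _weight in exp.as_map()[1]:
            x_new = x.copy()
            x_new[feat_idx] += rng.normal(scale=X_train[:, feat_idx].std())
            queries.append(x_new)
            labels.append(target.predict(x_new.reshape(1, -1))[0])

    # Train the surrogate on the query/label pairs harvested from the target.
    surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
    surrogate.fit(np.array(queries), np.array(labels))

    # Evaluate as in the paper: surrogate accuracy on held-out data, and
    # similarity = agreement with the target's predictions.
    acc = surrogate.score(X_test, y_test)
    sim = (surrogate.predict(X_test) == target.predict(X_test)).mean()
    print(f"queries: {len(queries)}, accuracy: {acc:.3f}, similarity: {sim:.3f}")

The design intuition behind the paper's query-efficiency claim is visible even in this toy loop: because explanations localize which features matter near each query, the attacker can concentrate its query budget around the decision boundary instead of probing the input space blindly.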
