Adversarial Deep Learning for Over-the-Air Spectrum Poisoning Attacks

An adversarial deep learning approach is presented to launch over-the-air spectrum poisoning attacks. A transmitter applies deep learning to its spectrum sensing results to predict idle time slots for data transmission. Meanwhile, an adversary learns the transmitter's behavior (exploratory attack) by building another deep neural network that predicts when the transmitter's transmissions will succeed. The adversary then falsifies (poisons) the transmitter's spectrum sensing data over the air by transmitting during the transmitter's short spectrum sensing period. Depending on whether the transmitter uses the sensing results as test data to make transmit decisions or as training data to retrain its deep neural network, it is either fooled into making incorrect decisions (evasion attack) or its algorithm is retrained incorrectly for future decisions (causative attack). Both attacks are energy efficient and hard to detect (stealthy) compared to jamming the long data transmission period, and they substantially reduce the throughput. A dynamic defense is designed in which the transmitter deliberately makes a small number of incorrect transmissions (selected by the confidence score on channel classification) to manipulate the adversary's training data. This defense effectively fools the adversary (if any) and helps the transmitter sustain its throughput whether or not an adversary is present.
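
The abstract names three learned components: the transmitter's channel classifier, the adversary's surrogate classifier, and a confidence-score-based defense that deliberately flips a few transmit decisions. Below is a minimal sketch of the first and last of these, assuming a small PyTorch feedforward classifier over synthetic sensing features; the feature construction, network size, defense fraction p_d, and the rule of flipping the most confident decisions are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch (not the paper's implementation): a feedforward classifier that
# predicts whether the next time slot is idle from recent sensing results, plus a
# defense that deliberately flips a small fraction of decisions chosen by the
# classifier's confidence score. All data and selection rules here are synthetic
# placeholders for illustration.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
torch.manual_seed(0)

# --- Synthetic spectrum sensing data (stand-in for real sensing measurements) ---
# Each sample: sensed power over the last W slots; label: 1 if the next slot is idle.
W = 10
n = 2000
X = rng.normal(loc=0.0, scale=1.0, size=(n, W)).astype(np.float32)
y = (X[:, -1] < 0.0).astype(np.int64)  # toy rule: low recent power -> idle

# --- Transmitter's classifier: small feedforward network (architecture assumed) ---
model = nn.Sequential(nn.Linear(W, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

Xt, yt = torch.from_numpy(X), torch.from_numpy(y)
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(Xt), yt)
    loss.backward()
    opt.step()

# --- Transmit decisions with confidence scores ---
with torch.no_grad():
    probs = torch.softmax(model(Xt), dim=1).numpy()
decisions = probs.argmax(axis=1)   # 1 = transmit (slot predicted idle)
confidence = probs.max(axis=1)     # confidence score per slot

# --- Defensive flipping (illustrative rule): take the wrong action in a small
# fraction p_d of slots, chosen here as the most confident decisions, so that the
# success/failure pattern an adversary observes no longer reflects the classifier.
p_d = 0.05
k = int(p_d * len(decisions))
flip_idx = np.argsort(-confidence)[:k]
defended = decisions.copy()
defended[flip_idx] = 1 - defended[flip_idx]

print(f"flipped {k} of {len(decisions)} decisions for defense")
```

Flipping only a small fraction of slots sacrifices little throughput while corrupting the success/failure observations that the adversary would otherwise collect to train its own deep neural network in the exploratory attack.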
