14 Oct 2020 • Harsh Bimal Desai, Mustafa Safa Ozdayi, Murat Kantarcioglu
It has been shown that an attacker can inject backdoors into the model during FL training, and then exploit those backdoors to make the model misclassify chosen inputs at inference time.
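The injection described above can be sketched with a toy "model replacement" style attack against plain federated averaging. This is a minimal illustration, not the paper's method: the server, client counts, weight vectors, and the assumption that the attacker can anticipate the honest clients' contributions are all hypothetical simplifications.

```python
import numpy as np

def fed_avg(global_w, client_updates):
    # Server applies plain FedAvg with no defenses:
    # new global weights = old weights + mean of client updates.
    return global_w + np.mean(client_updates, axis=0)

# Hypothetical toy setup: model weights are a flat 4-dim vector.
n_clients = 10
global_w = np.zeros(4)

# Honest clients push the model toward w_honest.
w_honest = np.ones(4)
honest_updates = [w_honest - global_w for _ in range(n_clients - 1)]

# The attacker wants the aggregated model to become w_backdoor.
# Under plain averaging, submitting
#   n * (w_backdoor - global_w) - (sum of honest updates)
# makes the mean update equal (w_backdoor - global_w),
# so the global model is replaced by the backdoored one.
w_backdoor = np.array([5.0, -3.0, 0.0, 2.0])
malicious_update = n_clients * (w_backdoor - global_w) - sum(honest_updates)

new_global = fed_avg(global_w, honest_updates + [malicious_update])
# new_global now equals w_backdoor: the backdoor survives aggregation.
```

The sketch shows why undefended averaging is vulnerable: a single client with a sufficiently scaled update can dominate the aggregate.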