Attacks on Perception-Based Control Systems: Modeling and Fundamental Limits

14 Jun 2022 · Amir Khazraei, Henry Pfister, Miroslav Pajic

We study the performance of perception-based control systems in the presence of attacks, and provide methods for modeling and analyzing their resiliency to stealthy attacks on both physical and perception-based sensing. Specifically, we consider a general setup with a nonlinear affine physical plant controlled by a perception-based controller that maps both physical (e.g., IMU) and perceptual (e.g., camera) sensing to the control input; the system is also equipped with a statistical or learning-based anomaly detector (AD). We model attacks in the most general form, and introduce notions of attack effectiveness and stealthiness that are independent of the deployed AD. In this setting, we consider attackers with different levels of runtime knowledge about the plant. We derive sufficient conditions for the existence of stealthy effective attacks that force the plant into an unsafe region without being detected by any AD. We show that the faster the open-loop unstable plant dynamics diverge, and the faster the closed-loop system converges to an equilibrium point, the more vulnerable the system is to effective stealthy attacks. Moreover, depending on the runtime information available to the attacker, the probability of the attack remaining stealthy can be made arbitrarily close to one if the attacker's estimate of the plant's state is sufficiently close to the true state; when an accurate estimate of the plant state is not available, the achievable stealthiness level depends on the control performance during attack-free operation.
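To make the threat model concrete, the sketch below simulates a deliberately simplified instance of it: a scalar linear unstable plant (standing in for the paper's nonlinear affine dynamics), an observer-based controller, a residual (innovation) anomaly detector, and a false-data injection on the sensing channel. All gains, noise levels, the attack signal, and the alarm threshold are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar plant x+ = a*x + b*u + w with a > 1 (open-loop unstable),
# sensed as y = x + v. A stand-in for the paper's nonlinear affine setup.
a, b = 1.2, 1.0
q, r = 0.01, 0.01            # process / measurement noise variances
k_gain = (a - 0.5) / b       # state feedback placing the closed-loop pole at 0.5
obs_gain = 0.5               # observer (estimation) gain
innov_var = 0.031            # approx. steady-state innovation variance for these gains
threshold = 9.0              # 3-sigma chi-squared-style alarm on the normalized residual

x, x_hat = 0.0, 0.0          # true state and the controller's estimate
alarms = 0

for t in range(200):
    y = x + rng.normal(0.0, np.sqrt(r))   # attack-free measurement
    if t >= 100:
        # False-data injection growing exactly along the unstable mode a: the
        # observer tracks the spoofed output, so the residual's distribution is
        # unchanged while the true state is driven away -- an "effective,
        # stealthy" attack in the sense sketched by the abstract.
        y += 0.05 * a ** (t - 100)

    innov = y - x_hat                     # detector residual (innovation)
    if innov**2 / innov_var > threshold:
        alarms += 1
    x_hat += obs_gain * innov             # update estimate from (possibly spoofed) sensing

    u = -k_gain * x_hat                   # controller acts on the estimate
    x = a * x + b * u + rng.normal(0.0, np.sqrt(q))  # true plant
    x_hat = a * x_hat + b * u             # model-based prediction

print(f"|x| at t=200: {abs(x):.1f} (diverged)   alarms raised: {alarms}")
```

Note that the injected bias grows geometrically at exactly the plant's unstable rate a, so the estimation error absorbs the bias and the residual keeps its attack-free statistics while the true state diverges; this is the intuition behind the abstract's claim that faster open-loop divergence makes the system more vulnerable to effective stealthy attacks.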
