no code implementations • 1 Nov 2023 • Ziqing Lu, Guanlin Liu, Lifeng Lai, Weiyu Xu
Finding optimal adversarial attack strategies is an important topic in reinforcement learning and Markov decision processes.
no code implementations • 15 Jul 2023 • Guanlin Liu, Zhihan Zhou, Han Liu, Lifeng Lai
Robust reinforcement learning (RL) aims to find a policy that optimizes the worst-case performance in the face of uncertainties.
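To make the worst-case objective concrete, below is a minimal sketch of robust value iteration in which the uncertainty is a finite set of candidate transition models and the Bellman backup takes the worst case per state-action pair. This is a simplified illustration of the robust-RL setting, not the paper's algorithm.

```python
# Robust value iteration over a finite uncertainty set of transition models
# (an illustrative simplification of the general robust-RL problem).
import numpy as np

def robust_value_iteration(transition_models, rewards, gamma=0.9, iters=500):
    """transition_models: list of arrays P[a, s, s']; rewards: array R[s, a]."""
    n_states, n_actions = rewards.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        # For each (state, action), take the worst-case backup over all models.
        Q = np.stack([
            rewards + gamma * np.einsum("ast,t->sa", P, V)
            for P in transition_models
        ]).min(axis=0)
        V = Q.max(axis=1)                    # best action against the worst model
    return V, Q.argmax(axis=1)               # robust values and a robust greedy policy
```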
no code implementations • 10 Dec 2021 • Guanlin Liu, Lifeng Lai
We show that, in both white-box and black-box settings, the proposed attack schemes can force the LinUCB agent to pull a target arm very frequently while incurring only logarithmic attack cost.
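For intuition, here is a minimal sketch of reward poisoning against a standard LinUCB loop, assuming the attacker can shift the observed reward of any non-target arm before the agent sees it. The specific override rule (capping non-target rewards at a low value) is an illustrative assumption, not the paper's exact scheme.

```python
# Reward poisoning against LinUCB: depress non-target rewards so the agent's
# estimates favor the attacker's target arm (illustrative attack rule).
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T, alpha = 5, 10, 2000, 1.0
arms = rng.normal(size=(n_arms, d))
theta = rng.normal(size=d)                  # unknown true reward parameter
target = 0                                  # arm the attacker wants pulled

A, b = np.eye(d), np.zeros(d)               # LinUCB statistics
attack_cost, target_pulls = 0.0, 0

for t in range(T):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    ucb = arms @ theta_hat + alpha * np.sqrt(np.einsum("ai,ij,aj->a", arms, A_inv, arms))
    k = int(ucb.argmax())                   # LinUCB's chosen arm
    r = arms[k] @ theta + rng.normal(scale=0.1)
    if k != target:                         # attacker poisons non-target rewards
        poisoned = min(r, -1.0)             # assumed rule: cap them at a low value
        attack_cost += abs(r - poisoned)
        r = poisoned
    else:
        target_pulls += 1
    A += np.outer(arms[k], arms[k])
    b += r * arms[k]

print(target_pulls, attack_cost)
```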
no code implementations • NeurIPS 2021 • Guanlin Liu, Lifeng Lai
In this paper, we introduce a new class of attacks named action poisoning attacks, where an adversary can change the action signal selected by the agent.
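The interaction protocol can be sketched directly: the adversary sits between the agent and the environment and may overwrite the chosen action, while the agent still credits the reward to the action it selected. The toy MDP and the override rule below are illustrative assumptions, not the paper's attack.

```python
# Action poisoning protocol: the environment executes a possibly-overwritten
# action, but the Q-learning agent updates its own (unexecuted) choice.
import random

def step(state, action):
    """Toy 2-state, 2-action MDP (hypothetical dynamics for illustration)."""
    reward = 1.0 if (state, action) == (0, 0) else 0.0
    return (state + action) % 2, reward

def poison(action, target_action=1, p=0.5):
    """Adversary changes the action signal before the environment receives it."""
    return target_action if action != target_action and random.random() < p else action

state = 0
Q = {(s, a): 0.0 for s in range(2) for a in range(2)}
for t in range(10000):
    a = max(range(2), key=lambda x: Q[(state, x)]) if random.random() > 0.1 else random.randrange(2)
    a_env = poison(a)                       # environment sees the poisoned action...
    nxt, r = step(state, a_env)             # ...while the agent believes it played a
    Q[(state, a)] += 0.1 * (r + 0.9 * max(Q[(nxt, x)] for x in range(2)) - Q[(state, a)])
    state = nxt
```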
no code implementations • 19 Feb 2020 • Guanlin Liu, Lifeng Lai
To defend against this class of attacks, we introduce a novel algorithm that is robust to action-manipulation attacks when an upper bound for the total attack cost is given.
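One way such a known budget can be exploited, sketched below: with an upper bound C on total attack cost, each arm's empirical mean can be shifted by at most C divided by its pull count, so a UCB-style learner can widen its confidence radius by that slack. This widening rule is a standard corruption-robust device used here for illustration, not necessarily the paper's exact algorithm.

```python
# Budget-aware UCB: widen each arm's confidence radius by C / n_i to absorb
# any corruption the attacker can afford (illustrative defense sketch).
import math, random

def robust_ucb(means, C=20.0, T=5000):
    K = len(means)
    counts, sums, history = [0] * K, [0.0] * K, []
    for t in range(1, T + 1):
        if t <= K:
            arm = t - 1                      # pull each arm once to initialize
        else:
            arm = max(range(K), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i])   # usual UCB radius
                      + C / counts[i])                           # corruption slack
        r = random.gauss(means[arm], 1.0)    # (possibly attacked) observed reward
        counts[arm] += 1
        sums[arm] += r
        history.append(arm)
    return history

pulls = robust_ucb([0.9, 0.5, 0.2])
```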