Safe Reinforcement Learning for a Robot Being Pursued but with Objectives Covering More Than Capture-avoidance

Reinforcement Learning (RL) algorithms have shown impressive performance in recent years, but deploying RL in real-world applications such as self-driving vehicles raises safety concerns. A self-driving vehicle moving toward a target position under a learned policy may encounter a vehicle with unpredictable, aggressive behaviors, or even be pursued by a vehicle following a Nash strategy. To address the safety of the self-driving vehicle in this scenario, this paper conducts a preliminary study based on a system of robots. A safe RL framework with safety guarantees is developed for a robot that is being pursued but has objectives covering more than capture avoidance. Simulations and experiments on the robot system are conducted to evaluate the effectiveness of the developed safe RL framework.
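The abstract does not describe the framework's internals, so as an illustration only, the minimal sketch below shows one common pattern for combining a learned goal-reaching policy with a safety override (a shield) against a pursuer. Everything here is an assumption for demonstration purposes: the toy point-mass dynamics, the straight-line pursuit used as a stand-in for a Nash pursuit strategy, and the names `shielded_action`, `greedy_task_action`, and `safe_radius` are not taken from the paper.

```python
import numpy as np


def nash_pursuer_step(pursuer_pos, evader_pos, speed=1.0):
    """Toy pursuer: move straight toward the evader.
    (Stand-in for a Nash pursuit strategy; assumption, not from the paper.)"""
    direction = evader_pos - pursuer_pos
    dist = np.linalg.norm(direction)
    if dist < 1e-8:
        return pursuer_pos
    return pursuer_pos + speed * direction / dist


def greedy_task_action(evader_pos, goal_pos, speed=1.2):
    """Placeholder for the learned task policy: head toward the goal."""
    direction = goal_pos - evader_pos
    dist = np.linalg.norm(direction)
    if dist < 1e-8:
        return np.zeros(2)
    return speed * direction / dist


def shielded_action(task_action, evader_pos, pursuer_pos,
                    safe_radius=3.0, speed=1.2):
    """Safety filter: keep the learned action unless the pursuer is closer
    than safe_radius, in which case override with a pure evasion move."""
    to_pursuer = pursuer_pos - evader_pos
    dist = np.linalg.norm(to_pursuer)
    if dist >= safe_radius:
        return task_action  # safe: follow the goal-reaching policy
    # unsafe: move directly away from the pursuer at full speed
    return -speed * to_pursuer / max(dist, 1e-8)


if __name__ == "__main__":
    evader = np.array([0.0, 0.0])
    pursuer = np.array([8.0, 8.0])
    goal = np.array([10.0, 0.0])
    for t in range(60):
        action = shielded_action(greedy_task_action(evader, goal),
                                 evader, pursuer)
        evader = evader + action
        pursuer = nash_pursuer_step(pursuer, evader)
        if np.linalg.norm(evader - goal) < 0.5:
            print(f"reached goal at step {t}")
            break
        if np.linalg.norm(evader - pursuer) < 0.5:
            print(f"captured at step {t}")
            break
```

In this toy setup the evader pursues its primary objective (reaching the goal) whenever the pursuer is far enough away, and only falls back to capture-avoidance when the safety condition is violated, which is the general flavor of a pursued agent with objectives beyond evasion.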
