Search Results for author: Ezgi Korkmaz

Found 14 papers, 0 papers with code

A Survey Analyzing Generalization in Deep Reinforcement Learning

no code implementations • 4 Jan 2024 • Ezgi Korkmaz

Reinforcement learning research has achieved significant success and attention through the use of deep neural networks to solve problems in high-dimensional state or action spaces.

reinforcement-learning

Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions

no code implementations • 9 Jun 2023 • Ezgi Korkmaz, Jonah Brown-Cohen

Learning in MDPs with highly complex state representations is currently possible due to multiple advancements in reinforcement learning algorithm design.

Adversarial Attack Atari Games +1

Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness

no code implementations • 17 Jan 2023 • Ezgi Korkmaz

Learning from raw high-dimensional data via interaction with a given environment has been effectively achieved through the use of deep neural networks.

reinforcement-learning Reinforcement Learning (RL)

Deep Reinforcement Learning Policies Learn Shared Adversarial Features Across MDPs

no code implementations • 16 Dec 2021 • Ezgi Korkmaz

We argue that these high sensitivity directions support the hypothesis that non-robust features are shared across training environments of reinforcement learning agents.

Atari Games reinforcement-learning +1

Assessing Deep Reinforcement Learning Policies via Natural Corruptions at the Edge of Imperceptibility

no code implementations • 29 Sep 2021 • Ezgi Korkmaz

We demonstrate that the perceptual similarity distance of the minimal natural perturbations to the unperturbed observations is orders of magnitude smaller than that of the adversarial perturbations (i.e. minimal natural perturbations are perceptually more similar to the unperturbed states than adversarial perturbations), while causing larger degradation in policy performance.

reinforcement-learning Reinforcement Learning (RL)
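To make the comparison described in the excerpt above concrete, here is a purely illustrative sketch, not the paper's method: it measures the distance of a mildly noise-corrupted observation and of a larger sign-pattern perturbation to a clean observation. The distance function (plain mean squared difference), the perturbation forms, and the magnitudes are all hypothetical stand-ins for the perceptual similarity metric and perturbations actually studied in the paper.

```python
# Illustrative sketch only: compare how "far" two kinds of perturbed
# observations are from the clean observation under a simple distance.
# Plain MSE is used as a stand-in for a perceptual similarity metric.
import numpy as np

def perceptual_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Stand-in perceptual distance: mean squared pixel difference."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

rng = np.random.default_rng(0)
clean_obs = rng.random((84, 84))  # hypothetical Atari-style frame

# Hypothetical perturbations: a small "natural" corruption (mild noise)
# versus a larger-magnitude adversarial-style sign perturbation.
natural_obs = clean_obs + 0.01 * rng.standard_normal(clean_obs.shape)
adversarial_obs = clean_obs + 0.05 * np.sign(rng.standard_normal(clean_obs.shape))

print("natural     :", perceptual_distance(clean_obs, natural_obs))
print("adversarial :", perceptual_distance(clean_obs, adversarial_obs))
```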

Training with Worst-Case Distributional Shift causes Overestimation and Inaccuracies in State-Action Value Functions

no code implementations • 29 Sep 2021 • Ezgi Korkmaz

The use of deep neural networks as function approximators for the state-action value function created a new research area for self-learning systems, and made it possible to learn optimal policies from high-dimensional state representations.

Atari Games Self-Learning

Detecting Worst-case Corruptions via Loss Landscape Curvature in Deep Reinforcement Learning

no code implementations • 29 Sep 2021 • Ezgi Korkmaz, Jonah Brown-Cohen

The non-robustness of neural network policies to adversarial examples poses a challenge for deep reinforcement learning.

reinforcement-learning Reinforcement Learning (RL)
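The title of the entry above refers to loss landscape curvature. As a hedged sketch of how curvature along a candidate input direction can be estimated, the snippet below uses a generic second-order finite difference; this is not the paper's detection algorithm, and the toy loss function is invented for illustration.

```python
# Illustrative sketch, not the paper's algorithm: estimate the curvature of a
# scalar loss along a given input direction with a second-order finite
# difference. Large curvature along a candidate direction could serve as a
# signal that the direction is a worst-case (adversarial) one.
import numpy as np

def directional_curvature(loss_fn, x: np.ndarray, v: np.ndarray, eps: float = 1e-3) -> float:
    v = v / (np.linalg.norm(v) + 1e-12)  # unit direction
    return (loss_fn(x + eps * v) - 2.0 * loss_fn(x) + loss_fn(x - eps * v)) / eps ** 2

# Toy quadratic "loss" whose Hessian is diag(1, 100):
loss = lambda z: 0.5 * (z[0] ** 2 + 100.0 * z[1] ** 2)
x0 = np.zeros(2)
print(directional_curvature(loss, x0, np.array([1.0, 0.0])))  # ~1 (flat direction)
print(directional_curvature(loss, x0, np.array([0.0, 1.0])))  # ~100 (sharp direction)
```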

Adversarial Training Blocks Generalization in Neural Policies

no code implementations • NeurIPS Workshop ICBINB 2021 • Ezgi Korkmaz

Deep neural networks have made it possible for reinforcement learning algorithms to learn from raw high dimensional inputs.

reinforcement-learning Reinforcement Learning (RL)

Investigating Vulnerabilities of Deep Neural Policies

no code implementations • 30 Aug 2021 • Ezgi Korkmaz

For the second approach, we propose a novel method to measure the feature sensitivities of deep neural policies, and we compare these feature sensitivities between state-of-the-art adversarially trained and vanilla-trained deep neural policies.

reinforcement-learning Reinforcement Learning (RL)
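The excerpt above does not describe the proposed sensitivity measure itself. The following is a minimal sketch of a common gradient-based proxy for per-feature sensitivity of a Q-network, with a hypothetical architecture and input shape, shown only to make the comparison between two trained policies concrete; it is not the paper's method.

```python
# Illustrative sketch only: gradient-based proxy for per-feature sensitivity
# of a Q-network policy. The networks and input shape below are hypothetical.
import torch
import torch.nn as nn

def feature_sensitivity(q_net: nn.Module, obs: torch.Tensor) -> torch.Tensor:
    """Absolute gradient of the greedy action's Q-value w.r.t. each input feature."""
    obs = obs.clone().requires_grad_(True)
    q_values = q_net(obs)
    q_values.max().backward()
    return obs.grad.abs()

# Hypothetical vanilla vs. adversarially trained policies with the same architecture.
make_net = lambda: nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 64), nn.ReLU(), nn.Linear(64, 4))
vanilla_net, adv_trained_net = make_net(), make_net()

obs = torch.rand(1, 84, 84)
print(feature_sensitivity(vanilla_net, obs).shape)      # per-pixel sensitivities
print(feature_sensitivity(adv_trained_net, obs).shape)  # compared against the vanilla policy
```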

Non-Robust Feature Mapping in Deep Reinforcement Learning

no code implementations • ICML Workshop AML 2021 • Ezgi Korkmaz

We conduct several experiments in the Arcade Learning Environment (ALE), and with our proposed feature mapping algorithms we show that while the state-of-the-art adversarial training method eliminates a certain set of non-robust features, a new set of non-robust features more intrinsic to the adversarial training is created.

Atari Games reinforcement-learning +1

Adversarially Trained Neural Policies in the Fourier Domain

no code implementations • ICML Workshop AML 2021 • Ezgi Korkmaz

Reinforcement learning policies based on deep neural networks are vulnerable to imperceptible adversarial perturbations to their inputs, in much the same way as neural network image classifiers.

reinforcement-learning Reinforcement Learning (RL)

Exploring Transferability of Perturbations in Deep Reinforcement Learning

no code implementations • 1 Jan 2021 • Ezgi Korkmaz

In this paper we propose a more realistic threat model in which the adversary computes the perturbation only once based on a single state.

reinforcement-learning Reinforcement Learning (RL)
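As a hedged illustration of the threat model described in the excerpt above, the sketch below computes one perturbation from a single state and reuses it on later observations. The FGSM-style gradient-sign step, the network, and the observation shape are assumptions made here for illustration; the excerpt does not specify how the perturbation is computed in the paper.

```python
# Illustrative sketch of a "compute once, reuse everywhere" perturbation.
# The gradient-sign step is an assumption, not the paper's attack.
import torch
import torch.nn as nn

def single_state_perturbation(q_net: nn.Module, state: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Compute a fixed perturbation once, from a single state."""
    state = state.clone().requires_grad_(True)
    loss = -q_net(state).max()  # push down the greedy action's value
    loss.backward()
    return epsilon * state.grad.sign()

q_net = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 64), nn.ReLU(), nn.Linear(64, 4))
anchor_state = torch.rand(1, 84, 84)
delta = single_state_perturbation(q_net, anchor_state)

# The same fixed delta is then added to every subsequent observation.
later_state = torch.rand(1, 84, 84)
perturbed = (later_state + delta).clamp(0.0, 1.0)
```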

Daylight: Assessing Generalization Skills of Deep Reinforcement Learning Agents

no code implementations • 1 Jan 2021 • Ezgi Korkmaz

Deep reinforcement learning algorithms have recently achieved significant success in learning high-performing policies from purely visual observations.

reinforcement-learning Reinforcement Learning (RL)
