Search Results for author: Ju-Seung Byun

Found 5 papers, 2 papers with code

Symmetric Reinforcement Learning Loss for Robust Learning on Diverse Tasks and Model Scales

1 code implementation • 27 May 2024 • Ju-Seung Byun, Andrew Perrault

To enhance training robustness, RL has adopted techniques from supervised learning, such as ensembles and layer normalization.

Tasks: Atari Games · Reinforcement Learning · +1
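
The abstract names two techniques RL has borrowed from supervised learning: ensembles and layer normalization. As a minimal sketch (not the paper's code), here is a PyTorch policy network with layer normalization between hidden layers; the architecture, names, and sizes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class PolicyNet(nn.Module):
        """Discrete policy with LayerNorm, one of the supervised-learning
        techniques the abstract mentions for stabilizing RL training."""
        def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(obs_dim, hidden),
                nn.LayerNorm(hidden),  # normalizes activations per sample
                nn.Tanh(),
                nn.Linear(hidden, hidden),
                nn.LayerNorm(hidden),
                nn.Tanh(),
            )
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, obs: torch.Tensor) -> torch.distributions.Categorical:
            return torch.distributions.Categorical(logits=self.head(self.body(obs)))

An ensemble, the other technique named, would train several independently initialized copies of such a network and average their outputs.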

Reinforcement Learning for Fine-tuning Text-to-speech Diffusion Models

no code implementations • 23 May 2024 • Jingyi Chen, Ju-Seung Byun, Micha Elsner, Andrew Perrault

Recent advancements in generative models have sparked significant interest within the machine learning community.

Tasks: Image Generation · Reinforcement Learning · +2

Normality-Guided Distributional Reinforcement Learning for Continuous Control

no code implementations • 28 Aug 2022 • Ju-Seung Byun, Andrew Perrault

Distributional reinforcement learning (DRL) has been shown to improve performance by modeling the value distribution, not just the mean.

Tasks: Continuous Control · Distributional Reinforcement Learning · +2
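
For context on "modeling the value distribution, not just the mean," below is a hedged sketch of a quantile-regression critic, a standard distributional RL construction in the style of QR-DQN. It is not the paper's normality-guided method; all names and sizes are assumptions.

    import torch
    import torch.nn as nn

    class QuantileCritic(nn.Module):
        """Predicts N quantiles of the return distribution instead of a scalar value."""
        def __init__(self, obs_dim: int, n_quantiles: int = 32, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_quantiles),
            )
            # Quantile midpoints tau_i = (i + 0.5) / N
            self.register_buffer("taus", (torch.arange(n_quantiles) + 0.5) / n_quantiles)

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.net(obs)  # shape (batch, n_quantiles)

    def quantile_huber_loss(pred, target, taus, kappa: float = 1.0):
        """Standard quantile Huber regression loss.
        pred: (B, N) predicted quantiles; target: (B, M) target return samples."""
        td = target.unsqueeze(-1) - pred.unsqueeze(1)  # (B, M, N) pairwise errors
        huber = torch.where(td.abs() <= kappa,
                            0.5 * td.pow(2),
                            kappa * (td.abs() - 0.5 * kappa))
        weight = (taus - (td.detach() < 0).float()).abs()
        return (weight * huber / kappa).mean()

The mean of the predicted quantiles recovers an estimate of the usual scalar value, while their spread carries the distributional information.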

Training Transition Policies via Distribution Matching for Complex Tasks

1 code implementation • ICLR 2022 • Ju-Seung Byun, Andrew Perrault

We introduce transition policies that smoothly connect lower-level policies by producing a distribution of states and actions that matches what is expected by the next policy.

Tasks: Hierarchical Reinforcement Learning · Q-Learning · +2
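
The snippet describes matching the distribution of states and actions expected by the next policy. One common way to realize such distribution matching is an adversarial (GAIL-style) discriminator that scores whether (state, action) pairs look like those the next policy expects; the sketch below is that generic construction under stated assumptions, not necessarily the paper's exact objective.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Discriminator(nn.Module):
        """Classifies (state, action) pairs: 1 = expected by the next policy,
        0 = produced by the transition policy being trained."""
        def __init__(self, sa_dim: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(sa_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, sa: torch.Tensor) -> torch.Tensor:
            return self.net(sa)  # raw logits

    def discriminator_loss(disc, expected_sa, transition_sa):
        # Standard binary classification objective over the two sources.
        real = disc(expected_sa)
        fake = disc(transition_sa)
        return (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
                + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))

    def matching_reward(disc, sa):
        # Reward the transition policy for producing (s, a) pairs the
        # discriminator judges indistinguishable from the next policy's.
        return F.logsigmoid(disc(sa)).squeeze(-1).detach()

The transition policy is then trained with any RL algorithm on matching_reward, driving its state-action distribution toward the one the next lower-level policy expects.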

Proximal Policy Gradient: PPO with Policy Gradient

no code implementations • 20 Oct 2020 • Ju-Seung Byun, Byungmoon Kim, Huamin Wang

In this paper, we propose a new algorithm, Proximal Policy Gradient (PPG), which is close to both vanilla policy gradient (VPG) and proximal policy optimization (PPO).

Tasks: OpenAI Gym
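
The snippet positions PPG between VPG and PPO without giving its objective, so the sketch below shows only the two standard reference objectives; the actual PPG update is defined in the paper and is not reproduced here.

    import torch

    def vpg_loss(logp: torch.Tensor, adv: torch.Tensor) -> torch.Tensor:
        """Vanilla policy gradient: maximize E[log pi(a|s) * advantage]."""
        return -(logp * adv).mean()

    def ppo_clip_loss(logp: torch.Tensor, logp_old: torch.Tensor,
                      adv: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
        """PPO's clipped surrogate: bounds how far the probability ratio
        pi/pi_old can move the objective in a single update."""
        ratio = torch.exp(logp - logp_old)
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
        return -torch.min(ratio * adv, clipped * adv).mean()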
