Search Results for author: Seungjae Lee

Found 14 papers, 6 papers with code

Behavior Generation with Latent Actions

1 code implementation • 5 Mar 2024 • Seungjae Lee, Yibin Wang, Haritheja Etukuru, H. Jin Kim, Nur Muhammad Mahi Shafiullah, Lerrel Pinto

Unlike language or image generation, decision making requires modeling actions: continuous-valued vectors that are multimodal in their distribution, potentially drawn from uncurated sources, and whose generation errors can compound in sequential prediction (a toy latent-action sketch follows this entry).

Autonomous Driving · Decision Making · +2
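The latent-action idea can be illustrated with a minimal vector-quantization sketch in which continuous actions are snapped to the nearest entry of a learned codebook, turning multimodal continuous prediction into discrete code prediction. All names, shapes, and the random codebook below are illustrative, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical 7-DoF continuous actions and a small "learned" codebook
# (random here purely for illustration).
rng = np.random.default_rng(0)
actions = rng.normal(size=(32, 7))    # batch of continuous actions
codebook = rng.normal(size=(16, 7))   # 16 discrete latent "action codes"

def quantize(a, codes):
    """Map each continuous action to its nearest codebook entry."""
    # Pairwise squared distances: (batch, n_codes)
    d = ((a[:, None, :] - codes[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)            # discrete latent action index
    return idx, codes[idx]            # index + reconstructed action

idx, recon = quantize(actions, codebook)
print(idx[:5], np.abs(actions - recon).mean())
```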

Diversify & Conquer: Outcome-directed Curriculum RL via Out-of-Distribution Disagreement

no code implementations • 30 Oct 2023 • Daesol Cho, Seungjae Lee, H. Jin Kim

Reinforcement learning (RL) often faces uninformed search problems, in which the agent must explore without access to domain knowledge such as characteristics of the environment or external rewards (a toy disagreement-scoring sketch follows this entry).

Reinforcement Learning (RL)
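A minimal sketch of the out-of-distribution-disagreement idea named in the title: goals on which an ensemble of independently trained predictors disagrees are treated as frontier candidates for the curriculum. The random linear ensemble and the scoring rule below are stand-ins, not the paper's objective.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: K linear "reachability" classifiers that would be
# trained on different bootstraps of visited states (weights random here).
K, dim = 5, 2
W = rng.normal(size=(K, dim))
b = rng.normal(size=(K,))

def disagreement(goals):
    """Std. dev. of ensemble predictions; high = out-of-distribution."""
    logits = goals @ W.T + b                  # (n_goals, K)
    probs = 1.0 / (1.0 + np.exp(-logits))
    return probs.std(axis=1)

candidates = rng.uniform(-5, 5, size=(100, dim))
scores = disagreement(candidates)
curriculum_goal = candidates[scores.argmax()]  # most-disputed candidate
print(curriculum_goal, scores.max())
```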

Detection of Pedestrian Turning Motions to Enhance Indoor Map Matching Performance

no code implementations • 4 Sep 2023 • Seunghyeon Park, Taewon Kang, Seungjae Lee, Joon Hyo Rhee

In summary, our research contributes to a more accurate and reliable pedestrian navigation system by leveraging smartphone IMU data and advanced turn-detection algorithms for indoor environments (a generic turn-detection sketch follows this entry).
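One common baseline for turn detection from smartphone IMU data, not necessarily the paper's algorithm, integrates the gyroscope yaw rate over a sliding window and flags windows whose accumulated heading change exceeds a threshold:

```python
import numpy as np

def detect_turns(yaw_rate, dt=0.01, window=100, thresh_deg=60.0):
    """Flag sample indices where the heading change over `window` samples
    exceeds `thresh_deg` degrees (yaw_rate in rad/s)."""
    heading = np.cumsum(yaw_rate) * dt                  # integrated yaw angle
    delta = np.abs(heading[window:] - heading[:-window])
    return np.flatnonzero(np.degrees(delta) > thresh_deg) + window

# Synthetic walk: straight, a 90-degree turn over 1 s, then straight again.
rate = np.zeros(1000)
rate[400:500] = (np.pi / 2) / (100 * 0.01)              # rad/s during the turn
print(detect_turns(rate)[:3])                           # first flagged samples
```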

Outcome-directed Reinforcement Learning by Uncertainty & Temporal Distance-Aware Curriculum Goal Generation

1 code implementation • 27 Jan 2023 • Daesol Cho, Seungjae Lee, H. Jin Kim

Current reinforcement learning (RL) often struggles with challenging exploration problems in which the desired outcomes or high rewards are rarely observed (a toy goal-scoring sketch follows this entry).

reinforcement-learning · Reinforcement Learning (RL)
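The title suggests scoring candidate goals by epistemic uncertainty and by temporal distance; a toy scoring rule along those lines, with simple distance-based proxies standing in for both quantities, might look like this (weights and proxies are illustrative, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(2)

def curriculum_score(goals, visited, alpha=1.0, beta=0.5):
    """Toy proxies: uncertainty = distance to the nearest visited state,
    temporal distance = distance from the start state (origin here)."""
    d_visit = np.min(np.linalg.norm(
        goals[:, None, :] - visited[None, :, :], axis=-1), axis=1)
    d_start = np.linalg.norm(goals, axis=-1)
    return alpha * d_visit + beta * d_start

visited = rng.normal(scale=0.5, size=(200, 2))   # states seen so far
goals = rng.uniform(-3, 3, size=(50, 2))         # candidate curriculum goals
best = goals[np.argmax(curriculum_score(goals, visited))]
print(best)
```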

SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning

no code implementations • 27 Jan 2023 • Dongseok Shim, Seungjae Lee, H. Jin Kim

Because previous representations for reinforcement learning cannot effectively incorporate a human-intuitive understanding of the 3D environment, they usually suffer from sub-optimal performance.

3D Reconstruction · Novel View Synthesis · +2

DHRL: A Graph-Based Approach for Long-Horizon and Sparse Hierarchical Reinforcement Learning

1 code implementation • 11 Oct 2022 • Seungjae Lee, Jigang Kim, Inkyu Jang, H. Jin Kim

Hierarchical Reinforcement Learning (HRL) has made notable progress in complex control tasks by leveraging temporal abstraction (a generic shortest-path sketch of the graph idea follows this entry).

Hierarchical Reinforcement Learning · reinforcement-learning · +1
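The graph-based idea can be sketched as plain shortest-path search over landmark states: the high level plans a path through a graph whose edge weights approximate temporal distance, and the low-level policy only has to reach the next waypoint. Below is a generic Dijkstra sketch, not the paper's actual graph construction:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a dict-of-dicts graph {node: {nbr: cost}}."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Landmark graph; edge weights stand in for estimated temporal distance.
graph = {"s": {"a": 3.0, "b": 1.0}, "a": {"g": 2.0},
         "b": {"a": 1.0, "g": 5.0}}
print(dijkstra(graph, "s", "g"))   # waypoints handed to the low-level policy
```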

Patchwork++: Fast and Robust Ground Segmentation Solving Partial Under-Segmentation Using 3D Point Cloud

2 code implementations • 25 Jul 2022 • Seungjae Lee, Hyungtae Lim, Hyun Myung

Moreover, even if the parameters are well tuned, a partial under-segmentation problem can still emerge, meaning that ground segmentation fails in some regions (a baseline plane-fitting sketch follows this entry).

Object Recognition · Segmentation
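A single global least-squares plane fit conveys what ground segmentation means here; Patchwork++ itself performs region-wise fitting with adaptive thresholds, so treat this only as a baseline sketch:

```python
import numpy as np

def fit_ground_plane(points, dist_thresh=0.2):
    """Fit z = ax + by + c by least squares; label inliers as ground."""
    A = np.c_[points[:, :2], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residual = np.abs(A @ coef - points[:, 2])
    return residual < dist_thresh             # boolean ground mask

# Synthetic cloud: a flat noisy ground plus elevated obstacle points.
rng = np.random.default_rng(3)
ground = np.c_[rng.uniform(-10, 10, (500, 2)), rng.normal(0, 0.05, 500)]
obstacles = np.c_[rng.uniform(-10, 10, (50, 2)), rng.uniform(0.5, 2.0, 50)]
cloud = np.vstack([ground, obstacles])
mask = fit_ground_plane(cloud)
print(mask.sum(), "of", len(cloud), "points labeled ground")
```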

PaGO-LOAM: Robust Ground-Optimized LiDAR Odometry

1 code implementation • 1 Jun 2022 • Dong-Uk Seo, Hyungtae Lim, Seungjae Lee, Hyun Myung

In this paper, a robust ground-optimized LiDAR odometry framework is proposed to facilitate studying the effect of ground segmentation on LiDAR SLAM, building on a state-of-the-art (SOTA) method (a toy pipeline sketch follows this entry).

Segmentation
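The framework's purpose, testing how ground segmentation affects odometry, can be sketched as a pipeline in which the ground-segmentation front end is a swappable callable; the function names and the stand-in "pose" below are illustrative, not PaGO-LOAM's API:

```python
from typing import Callable, Optional
import numpy as np

def run_odometry(scans, ground_seg: Optional[Callable] = None):
    """Toy LiDAR odometry loop; `ground_seg` returns a boolean ground mask.
    Passing None runs the same pipeline without ground removal."""
    poses = []
    for scan in scans:
        nonground = scan[~ground_seg(scan)] if ground_seg else scan
        # ... feature extraction + scan matching would go here ...
        poses.append(nonground.mean(axis=0))  # stand-in "pose" for the sketch
    return np.array(poses)

scans = [np.random.default_rng(i).normal(size=(100, 3)) for i in range(3)]
with_seg = run_odometry(scans, lambda s: s[:, 2] < 0.0)  # crude z-threshold
without = run_odometry(scans)
print(np.linalg.norm(with_seg - without, axis=1))        # effect of segmentation
```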

High-contrast, speckle-free, true 3D holography via binary CGH optimization

no code implementations • 7 Jan 2022 • Byounghyo Lee, Dongyeon Kim, Seungjae Lee, Chun Chen, Byoungho Lee

Here, we propose a practical solution for realizing speckle-free, high-contrast, true 3D holography by combining random phase, temporal multiplexing, binary holography, and binary optimization (a schematic bit-flip sketch follows this entry).

3D Holography · Quantization · +1
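Binary optimization of a hologram can be illustrated with a toy bit-flip hill climb: flip one pixel of a binary phase mask at a time and keep the flip only if the FFT-propagated intensity moves closer to the target. This is a schematic of binary optimization in general, not the paper's method or propagation model:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 32
target = np.zeros((N, N))
target[12:20, 12:20] = 1.0                      # toy target intensity pattern
mask = rng.integers(0, 2, size=(N, N))          # binary phase mask (0 or pi)

def loss(m):
    """Squared error between normalized far-field intensity and target."""
    field = np.exp(1j * np.pi * m)              # binary phase hologram
    img = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    img /= img.sum()
    return np.sum((img - target / target.sum()) ** 2)

best = loss(mask)
pixels = rng.permutation([(i, j) for i in range(N) for j in range(N)])
for i, j in pixels[:500]:
    mask[i, j] ^= 1                             # flip one binary pixel
    new = loss(mask)
    if new < best:
        best = new                              # keep improving flips
    else:
        mask[i, j] ^= 1                         # revert bad flips
print("final loss:", best)
```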

Simulation Studies on Deep Reinforcement Learning for Building Control with Human Interaction

no code implementations • 14 Mar 2021 • Donghwan Lee, Niao He, Seungjae Lee, Panagiota Karava, Jianghai Hu

The building sector is the largest energy consumer in the world, and there has been considerable research interest in the energy consumption and comfort management of buildings.

Management · reinforcement-learning · +1
