Search Results for author: Hao-Lun Hsu

Found 5 papers, 1 paper with code

ε-Neural Thompson Sampling of Deep Brain Stimulation for Parkinson Disease Treatment

no code implementations · 11 Mar 2024 · Hao-Lun Hsu, Qitong Gao, Miroslav Pajic

Traditional commercial DBS devices can only deliver fixed-frequency periodic pulses to the basal ganglia (BG) regions of the brain, i.e., continuous DBS (cDBS).

Multi-Armed Bandits · Reinforcement Learning (RL) +1

Finite-Time Frequentist Regret Bounds of Multi-Agent Thompson Sampling on Sparse Hypergraphs

1 code implementation · 24 Dec 2023 · Tianyuan Jin, Hao-Lun Hsu, William Chang, Pan Xu

Specifically, we assume there is a local reward for each hyperedge, and the reward of the joint arm is the sum of these local rewards.
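The reward structure described above can be sketched in a few lines. This is an illustrative toy implementation, not the paper's code; the function and variable names (`joint_reward`, `local_reward`, `hyperedges`) are assumptions.

```python
def joint_reward(joint_arm, hyperedges, local_reward):
    """Sum the local rewards over all hyperedges.

    joint_arm    : dict mapping each agent to its chosen arm
    hyperedges   : list of tuples of agents forming each hyperedge
    local_reward : function (hyperedge, sub_assignment) -> float
    """
    total = 0.0
    for edge in hyperedges:
        # restrict the joint arm to the agents in this hyperedge
        sub = tuple(joint_arm[agent] for agent in edge)
        total += local_reward(edge, sub)
    return total

# toy example: 3 agents, two overlapping hyperedges
edges = [(0, 1), (1, 2)]
reward_tables = {
    (0, 1): {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.2, (1, 1): 0.0},
    (1, 2): {(0, 0): 0.3, (0, 1): 0.9, (1, 0): 0.1, (1, 1): 0.4},
}
r = joint_reward({0: 0, 1: 0, 2: 1}, edges, lambda e, s: reward_tables[e][s])
# r = 1.0 + 0.9 = 1.9
```

Because each hyperedge contributes independently, the joint reward decomposes over the hypergraph, which is what makes sparse hypergraphs tractable for multi-agent Thompson sampling.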

Computational Efficiency · Thompson Sampling

Robust Reinforcement Learning through Efficient Adversarial Herding

no code implementations · 12 Jun 2023 · Juncheng Dong, Hao-Lun Hsu, Qitong Gao, Vahid Tarokh, Miroslav Pajic

In this work, we extend the two-player game by introducing an adversarial herd, a group of adversaries, to address (i) the difficulty of the inner optimization problem and (ii) the potential over-pessimism caused by selecting a candidate adversary set that may include unlikely scenarios.
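A minimal sketch of evaluating a policy against such a herd, assuming a simplified variant in which the inner minimization over a single adversary is replaced by the mean return over the k worst adversaries in the set (all names here are hypothetical, not from the paper's code):

```python
def herd_value(eval_fn, adversaries, k=1):
    """Mean of the k lowest returns across the adversary herd.

    eval_fn     : adversary -> protagonist's return against that adversary
    adversaries : list of adversary identifiers
    k           : number of worst adversaries to average over; k=1 recovers
                  the pure worst case, larger k softens over-pessimism
    """
    returns = sorted(eval_fn(a) for a in adversaries)
    worst_k = returns[:k]
    return sum(worst_k) / len(worst_k)

# toy example: returns of a fixed policy against four adversaries
toy_returns = {"a": 3.0, "b": -1.0, "c": 2.5, "d": 0.5}
v_worst = herd_value(toy_returns.get, list(toy_returns))       # -1.0
v_soft = herd_value(toy_returns.get, list(toy_returns), k=2)   # (-1.0 + 0.5) / 2
```

Averaging over the k worst adversaries rather than the single worst one is one way to avoid letting an unlikely adversary dominate the objective, which mirrors the over-pessimism concern above.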

Reinforcement Learning (RL)

Improving Safety in Deep Reinforcement Learning using Unsupervised Action Planning

no code implementations · 29 Sep 2021 · Hao-Lun Hsu, Qiuhua Huang, Sehoon Ha

One of the key challenges to deep reinforcement learning (deep RL) is to ensure safety at both training and testing phases.

Continuous Control · Reinforcement Learning +1
