Search Results for author: Hector Kohler

Found 7 papers, 1 paper with code

Interpretable and Editable Programmatic Tree Policies for Reinforcement Learning

no code implementations • 23 May 2024 • Hector Kohler, Quentin Delfosse, Riad Akrour, Kristian Kersting, Philippe Preux

We empirically demonstrate that INTERPRETER's compact tree programs match oracles across a diverse set of sequential decision tasks, and we evaluate the impact of our design choices on interpretability and performance.

Atari Games • reinforcement-learning

PID Tuning using Cross-Entropy Deep Learning: a Lyapunov Stability Analysis

no code implementations • 18 Apr 2024 • Hector Kohler, Benoit Clement, Thomas Chaffre, Gilles Le Chenadec

We perform this stability analysis on a LB adaptive control system whose adaptive parameters are determined using a Cross-Entropy Deep Learning method.

Interpretable Decision Tree Search as a Markov Decision Process

1 code implementation • 22 Sep 2023 • Hector Kohler, Riad Akrour, Philippe Preux

Finding an optimal decision tree for a supervised learning task is a challenging combinatorial problem to solve at scale.

AdaStop: adaptive statistical testing for sound comparisons of Deep RL agents

no code implementations • 19 Jun 2023 • Timothée Mathieu, Riccardo Della Vecchia, Alena Shilova, Matheus Medeiros Centa, Hector Kohler, Odalric-Ambrym Maillard, Philippe Preux

When comparing several RL algorithms, a major question is how many executions must be made and how we can ensure that the results of such a comparison are theoretically sound.

Reinforcement Learning (RL)

Optimal Interpretability-Performance Trade-off of Classification Trees with Black-Box Reinforcement Learning

no code implementations • 11 Apr 2023 • Hector Kohler, Riad Akrour, Philippe Preux

A given supervised classification task is modeled as a Markov decision problem (MDP) and then augmented with additional actions that gather information about the features, equivalent to building a DT.
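The formulation above can be illustrated with a minimal sketch (hypothetical, not the authors' code): states expose the feature values revealed so far, one set of actions queries a hidden feature (information gathering, i.e. extending a decision-tree path), and another set predicts a label, ending the episode with a reward for a correct prediction. All class and method names here are assumptions for illustration.

```python
import random

class ClassificationMDP:
    """Toy MDP view of a supervised classification task.

    Actions 0..n_features-1 reveal the corresponding feature of the
    current sample; actions n_features..n_features+n_labels-1 predict
    a label and terminate the episode.
    """

    def __init__(self, X, y, n_labels):
        self.X, self.y, self.n_labels = X, y, n_labels
        self.n_features = len(X[0])

    def reset(self):
        # Draw a random training sample; all features start hidden.
        self.i = random.randrange(len(self.X))
        self.state = [None] * self.n_features
        return tuple(self.state)

    def step(self, action):
        if action < self.n_features:
            # Information-gathering action: reveal one feature value.
            self.state[action] = self.X[self.i][action]
            return tuple(self.state), 0.0, False
        # Prediction action: reward 1 iff the label is correct.
        label = action - self.n_features
        reward = 1.0 if label == self.y[self.i] else 0.0
        return tuple(self.state), reward, True

# Example usage: reveal feature 0, then predict a label.
mdp = ClassificationMDP(X=[[0, 1], [1, 0]], y=[1, 0], n_labels=2)
state = mdp.reset()
state, reward, done = mdp.step(0)  # query feature 0, episode continues
```

A deterministic greedy policy for this MDP reads off as a decision tree: each state (a pattern of revealed features) maps either to a feature query (an internal node) or to a label prediction (a leaf).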

reinforcement-learning • Reinforcement Learning (RL)
