no code implementations • 10 May 2024 • Davide Maran, Alberto Maria Metelli, Matteo Papini, Marcello Restelli
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
no code implementations • 9 May 2024 • Matteo Papini, Giorgio Manganini, Alberto Maria Metelli, Marcello Restelli
We provide an iterative algorithm that alternates between the cross-entropy estimation of the minimum-variance behavioral policy and the actual policy optimization, leveraging defensive IS.
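The defensive idea can be illustrated with a minimal sketch (all policies, rewards, and the mixing coefficient below are hypothetical, not taken from the paper): mixing the target policy into the behavioral policy bounds the importance weights by the reciprocal of the mixing coefficient, which controls the estimator's variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-action example: target policy pi, a proposal q,
# and the defensive mixture used as the behavioral policy.
pi = np.array([0.7, 0.2, 0.1])         # target policy (illustrative)
q = np.array([0.1, 0.3, 0.6])          # proposal to mix in (illustrative)
alpha = 0.2                            # defensive mixing coefficient
beta = alpha * pi + (1.0 - alpha) * q  # defensive behavioral policy

rewards_per_action = np.array([1.0, 0.5, 0.0])
actions = rng.choice(3, size=10_000, p=beta)

# IS weights pi(a)/beta(a) are bounded by 1/alpha by construction.
weights = pi[actions] / beta[actions]
estimate = float(np.mean(weights * rewards_per_action[actions]))
true_value = float(pi @ rewards_per_action)
```

With bounded weights the sample mean concentrates quickly around the target policy's true expected reward.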
no code implementations • 3 May 2024 • Alessandro Montenegro, Marco Mussi, Alberto Maria Metelli, Matteo Papini
After introducing a novel framework for modeling this scenario, we study the global convergence to the best deterministic policy, under (weak) gradient domination assumptions.
no code implementations • 23 Feb 2024 • Filippo Lazzati, Mirco Mutti, Alberto Maria Metelli
In this paper, we introduce a novel notion of feasible reward set capturing the opportunities and limitations of the offline setting and we analyze the complexity of its estimation.
no code implementations • 21 Feb 2024 • Alberto Maria Metelli
Configurable Markov Decision Processes (Conf-MDPs) have recently been introduced as an extension of traditional Markov Decision Processes (MDPs) to model real-world scenarios in which it is possible to intervene in the environment to configure some of its parameters.
no code implementations • 15 Feb 2024 • Khaled Eldowa, Nicolò Cesa-Bianchi, Alberto Maria Metelli, Marcello Restelli
For a selection of policy set families, we prove nearly-matching lower bounds, scaling similarly with the capacity.
no code implementations • 6 Feb 2024 • Davide Maran, Alberto Maria Metelli, Matteo Papini, Marcello Restelli
Obtaining no-regret guarantees for reinforcement learning (RL) in the case of problems with continuous state and/or action spaces is still one of the major open challenges in the field.
no code implementations • 8 Jan 2024 • Riccardo Poiani, Gabriele Curti, Alberto Maria Metelli, Marcello Restelli
For this reason, in this work, we extend the IRL formulation to problems where, in addition to demonstrations from the optimal agent, we can observe the behavior of multiple sub-optimal experts.
1 code implementation • 20 Dec 2023 • Théo Vincent, Alberto Maria Metelli, Boris Belousov, Jan Peters, Marcello Restelli, Carlo D'Eramo
We formulate an optimization problem to learn PBO for generic sequential decision-making problems, and we theoretically analyze its properties in two representative classes of RL problems.
no code implementations • 17 Oct 2023 • Paolo Bonetti, Alberto Maria Metelli, Marcello Restelli
We introduce a new causal feature selection approach that relies on the forward and backward feature selection procedures and leverages transfer entropy to estimate the causal flow of information from the features to the target in time series.
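As a rough illustration of the transfer-entropy criterion (a plug-in estimator on synthetic binary series; the data and lag choice are illustrative, not from the paper): the quantity compares how well a feature's past predicts the target beyond the target's own past.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) at lag 1 for discrete series:
    TE = sum p(y_t, y_{t-1}, x_{t-1}) * log[ p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}) ]."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))
    n = len(triples)
    c_full = Counter(triples)                            # (y_t, y_{t-1}, x_{t-1})
    c_pair = Counter((yt, yp) for yt, yp, _ in triples)  # (y_t, y_{t-1})
    c_cond = Counter((yp, xp) for _, yp, xp in triples)  # (y_{t-1}, x_{t-1})
    c_past = Counter(yp for _, yp, _ in triples)         # y_{t-1}
    te = 0.0
    for (yt, yp, xp), c in c_full.items():
        p_full = c / c_cond[(yp, xp)]            # p(y_t | y_{t-1}, x_{t-1})
        p_marg = c_pair[(yt, yp)] / c_past[yp]   # p(y_t | y_{t-1})
        te += (c / n) * np.log(p_full / p_marg)
    return float(te)

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)             # y_t = x_{t-1}: x causally drives y
z = rng.integers(0, 2, 5000)  # independent series: no causal flow

te_causal = transfer_entropy(x, y)  # close to log 2
te_null = transfer_entropy(x, z)    # close to 0
```

In a forward selection step, features with high estimated transfer entropy toward the target would be kept, and those near zero discarded.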
no code implementations • 4 Oct 2023 • Gianmarco Genalti, Lupo Marsigli, Nicola Gatti, Alberto Maria Metelli
In this setting, we study the regret minimization problem when $\epsilon$ and $u$ are unknown to the learner, which has to adapt to them.
no code implementations • 29 Aug 2023 • Riccardo Poiani, Alberto Maria Metelli, Marcello Restelli
In this setting, the agent's goal lies in sequentially choosing which mediator to query to identify with high probability the optimal arm while minimizing the identification time, i.e., the sample complexity.
no code implementations • 19 Jun 2023 • Paolo Bonetti, Alberto Maria Metelli, Marcello Restelli
A limitation of methods based on correlation is the assumption of linearity in the relationship between features and target.
no code implementations • 10 May 2023 • Gianluca Drappo, Alberto Maria Metelli, Marcello Restelli
Then, focusing on a sub-setting of HRL approaches, the options framework, we highlight how the average duration of the available options affects the planning horizon and, consequently, the regret itself.
no code implementations • 7 May 2023 • Riccardo Poiani, Alberto Maria Metelli, Marcello Restelli
In Reinforcement Learning (RL), an agent acts in an unknown environment to maximize the expected cumulative discounted sum of an external reward signal, i.e., the expected return.
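The expected-return objective mentioned above reduces, on a single trajectory, to a discounted sum of rewards; a minimal sketch (the reward sequence and discount factor are illustrative):

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Return of one trajectory: G = sum_t gamma^t * r_t."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.dot(discounts, rewards))

# Three unit rewards discounted by gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
G = discounted_return([1.0, 1.0, 1.0], gamma=0.5)
```

The expected return is then the average of this quantity over trajectories drawn under the agent's policy.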
no code implementations • 25 Apr 2023 • Alberto Maria Metelli, Filippo Lazzati, Marcello Restelli
We start by formally introducing the problem of estimating the feasible reward set, the corresponding PAC requirement, and discussing the properties of particular classes of rewards.
no code implementations • 11 Apr 2023 • Alberto Maria Metelli, Mirco Mutti, Marcello Restelli
In this paper, we present a minimax lower bound on the discounted mean estimation problem that explicitly connects the estimation error with the mixing properties of the Markov process and the discount factor.
no code implementations • 26 Mar 2023 • Paolo Bonetti, Alberto Maria Metelli, Marcello Restelli
Instead, dimensionality reduction techniques are designed to limit the number of features in a dataset by projecting them into a lower-dimensional space, possibly considering all the original features.
no code implementations • 14 Mar 2023 • Khaled Eldowa, Nicolò Cesa-Bianchi, Alberto Maria Metelli, Marcello Restelli
We investigate the problem of bandits with expert advice when the experts are fixed and known distributions over the actions.
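The standard algorithm for this setting is EXP4, which maintains exponential weights over the experts and samples actions from the weighted mixture of their distributions; a minimal sketch with hypothetical losses and two degenerate experts (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two experts that are fixed distributions over two actions.
experts = np.array([[1.0, 0.0],   # expert 0 always plays action 0
                    [0.0, 1.0]])  # expert 1 always plays action 1
loss_prob = np.array([0.1, 0.9])  # Bernoulli losses: action 0 is better

T, (N, K) = 5000, experts.shape
eta = np.sqrt(2 * np.log(N) / (T * K))  # a standard learning-rate choice
w = np.ones(N)
for _ in range(T):
    p = (w / w.sum()) @ experts           # induced action distribution
    a = rng.choice(K, p=p)
    loss = float(rng.random() < loss_prob[a])
    loss_hat = np.zeros(K)
    loss_hat[a] = loss / p[a]             # importance-weighted loss estimate
    w *= np.exp(-eta * (experts @ loss_hat))

best_expert = int(np.argmax(w))           # weights concentrate on expert 0
```

Over time, the exponential weights concentrate on the expert whose action distribution incurs the smallest expected loss.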
no code implementations • 4 Mar 2023 • Amarildo Likmeta, Matteo Sacco, Alberto Maria Metelli, Marcello Restelli
Uncertainty quantification has been extensively used as a means to achieve efficient directed exploration in Reinforcement Learning (RL).
1 code implementation • 15 Feb 2023 • Marco Mussi, Alessandro Montenegro, Francesco Trovò, Marcello Restelli, Alberto Maria Metelli
Then, we prove that, with a sufficiently large budget, they provide guarantees on the probability of properly identifying the optimal option at the end of the learning process.
1 code implementation • 12 Dec 2022 • Francesco Bacchiocchi, Gianmarco Genalti, Davide Maran, Marco Mussi, Marcello Restelli, Nicola Gatti, Alberto Maria Metelli
Autoregressive processes naturally arise in a large variety of real-world scenarios, including stock markets, sales forecasting, weather prediction, advertising, and pricing.
1 code implementation • 7 Dec 2022 • Alberto Maria Metelli, Francesco Trovò, Matteo Pirola, Marcello Restelli
This paper is in the field of stochastic Multi-Armed Bandits (MABs), i.e., those sequential selection techniques able to learn online using only the feedback given by the chosen option (a.k.a. the arm).
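A classical instance of such a technique is UCB1, sketched below on hypothetical Bernoulli arms (the arm means and horizon are illustrative): the learner sees only the reward of the arm it pulls, yet concentrates its pulls on the best arm.

```python
import numpy as np

def ucb1(means, horizon, seed=0):
    """UCB1 on Bernoulli arms: play each arm once, then pull the arm
    maximizing (empirical mean + sqrt(2 ln t / pulls))."""
    rng = np.random.default_rng(seed)
    k = len(means)
    pulls = np.zeros(k)
    total = np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1  # initialization: pull each arm once
        else:
            a = int(np.argmax(total / pulls + np.sqrt(2 * np.log(t) / pulls)))
        total[a] += float(rng.random() < means[a])  # bandit feedback only
        pulls[a] += 1
    return pulls

pulls = ucb1([0.9, 0.5, 0.1], horizon=2000)  # most pulls go to arm 0
```

The confidence bonus shrinks as an arm accumulates pulls, so exploration fades in favor of the empirically best arm.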
no code implementations • 7 Dec 2022 • Davide Maran, Alberto Maria Metelli, Marcello Restelli
In this paper, we study BC with the goal of providing theoretical guarantees on the performance of the imitator policy in the case of continuous actions.
no code implementations • 21 Nov 2022 • Luca Sabbioni, Luca Al Daire, Lorenzo Bisi, Alberto Maria Metelli, Marcello Restelli
In reinforcement learning, the performance of learning agents is highly sensitive to the choice of time discretization.
1 code implementation • 16 Nov 2022 • Marco Mussi, Alberto Maria Metelli, Marcello Restelli
Then, the hidden state evolves according to linear dynamics, also affected by the performed action.
no code implementations • 25 Jul 2022 • Riccardo Poiani, Ciprian Stirbu, Alberto Maria Metelli, Marcello Restelli
With the continuous growth of the global economy and markets, resource imbalance has become one of the central issues in real logistic scenarios.
1 code implementation • 8 Jul 2022 • Julen Cestero, Marco Quartulli, Alberto Maria Metelli, Marcello Restelli
Warehouse Management Systems have been evolving and improving thanks to new Data Intelligence techniques.
1 code implementation • 20 May 2022 • Marco Mussi, Davide Lombarda, Alberto Maria Metelli, Francesco Trovò, Marcello Restelli
In this work, we propose a general and flexible framework, namely ARLO: Automated Reinforcement Learning Optimizer, to construct automated pipelines for AutoRL.
no code implementations • 13 Dec 2021 • Pierre Liotet, Francesco Vidaich, Alberto Maria Metelli, Marcello Restelli
This hyper-policy is trained to maximize the estimated future performance, efficiently reusing past data by means of importance sampling, at the cost of introducing a controlled bias.
no code implementations • NeurIPS 2021 • Giorgia Ramponi, Alberto Maria Metelli, Alessandro Concetti, Marcello Restelli
This presupposes that the two actors have the same reward functions.
1 code implementation • NeurIPS 2021 • Alberto Maria Metelli, Alessio Russo, Marcello Restelli
Importance Sampling (IS) is a widely used building block for a large variety of off-policy estimation and learning algorithms.
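The basic building block can be sketched in a few lines (toy policies and rewards, not from the paper): reweight behavioral samples by the target-to-behavior probability ratio, either plainly (unbiased) or in self-normalized form (biased, lower variance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy off-policy evaluation: estimate the target policy's expected
# reward from samples drawn under a different behavioral policy.
target = np.array([0.8, 0.2])      # target policy (illustrative)
behavior = np.array([0.5, 0.5])    # behavioral policy that generated the data
reward_per_action = np.array([1.0, 0.0])

a = rng.choice(2, size=20_000, p=behavior)
w = target[a] / behavior[a]        # importance weights
r = reward_per_action[a]

is_est = float(np.mean(w * r))                 # ordinary IS: unbiased
snis_est = float(np.sum(w * r) / np.sum(w))    # self-normalized IS
```

Both estimates converge to the target policy's expected reward (0.8 here), with the self-normalized variant trading a small bias for reduced variance.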
no code implementations • 29 Sep 2021 • Alberto Maria Metelli, Samuele Meta, Marcello Restelli
In this setting, Importance Sampling (IS) is typically employed as a what-if analysis tool, with the goal of estimating the performance of a target policy, given samples collected with a different behavioral policy.
no code implementations • 15 Dec 2020 • Alberto Maria Metelli, Matteo Papini, Pierluca D'Oro, Marcello Restelli
In this paper, we introduce the notion of mediator feedback that frames PO as an online learning problem over the policy space.
1 code implementation • ICML 2020 • Alberto Maria Metelli, Flavio Mazzolini, Lorenzo Bisi, Luca Sabbioni, Marcello Restelli
The choice of the control frequency of a system has a relevant impact on the ability of reinforcement learning algorithms to learn a highly performing policy.
1 code implementation • NeurIPS 2019 • Alberto Maria Metelli, Amarildo Likmeta, Marcello Restelli
How does the uncertainty of the value function propagate when performing temporal difference learning?
no code implementations • 9 Sep 2019 • Alberto Maria Metelli, Guglielmo Manneschi, Marcello Restelli
We study the problem of identifying the policy space of a learning agent, having access to a set of demonstrations generated by its optimal policy.
no code implementations • 9 Sep 2019 • Pierluca D'Oro, Alberto Maria Metelli, Andrea Tirinzoni, Matteo Papini, Marcello Restelli
In this paper, we introduce a novel model-based policy search approach that exploits the knowledge of the current agent policy to learn an approximate transition model, focusing on the portions of the environment that are most relevant for policy improvement.
1 code implementation • 17 Jul 2019 • Mario Beraha, Alberto Maria Metelli, Matteo Papini, Andrea Tirinzoni, Marcello Restelli
Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables.
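The relevancy side of such a filter can be sketched with a plug-in MI estimate on synthetic binary data (feature names and noise levels are illustrative): a noisy copy of the target scores high, an independent feature scores near zero.

```python
import numpy as np
from collections import Counter

def mutual_information(a, b):
    """Plug-in MI estimate (in nats) for two discrete sequences."""
    n = len(a)
    c_ab = Counter(zip(a.tolist(), b.tolist()))
    c_a = Counter(a.tolist())
    c_b = Counter(b.tolist())
    return float(sum((c / n) * np.log(c * n / (c_a[x] * c_b[y]))
                     for (x, y), c in c_ab.items()))

rng = np.random.default_rng(0)
target = rng.integers(0, 2, 4000)
relevant = target ^ (rng.random(4000) < 0.1)  # noisy copy: ~10% bits flipped
noise = rng.integers(0, 2, 4000)              # carries no information

# Filter-style relevancy scores: keep features with high MI with the target.
scores = {"relevant": mutual_information(relevant, target),
          "noise": mutual_information(noise, target)}
best = max(scores, key=scores.get)
```

A redundancy term would subtract, for each candidate, its average MI with the already-selected features, penalizing features that duplicate information.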
2 code implementations • NeurIPS 2018 • Alberto Maria Metelli, Matteo Papini, Francesco Faccio, Marcello Restelli
Policy optimization is an effective reinforcement learning approach to solve continuous control tasks.
no code implementations • ICML 2018 • Alberto Maria Metelli, Mirco Mutti, Marcello Restelli
After introducing our approach and deriving some theoretical results, we present the experimental evaluation in two illustrative problems to show the benefits of environment configurability on the performance of the learned policy.
no code implementations • NeurIPS 2017 • Alberto Maria Metelli, Matteo Pirotta, Marcello Restelli
Within this subspace, using a second-order criterion, we search for the reward function that penalizes the most a deviation from the expert's policy.