Joint Representations for Reinforcement Learning with Multiple Sensors

10 Feb 2023 · Philipp Becker, Sebastian Markgraf, Fabian Otto, Gerhard Neumann

Effectively combining inputs from multiple sensor modalities in reinforcement learning (RL) is an open problem. While many self-supervised representation learning approaches exist to improve performance and sample complexity for image-based RL, they usually neglect other available information, such as robot proprioception. However, using this proprioception for representation learning can help algorithms focus on relevant aspects and guide them toward finding better representations. In this work, we systematically analyze representation learning for RL from multiple sensors by building on Recurrent State Space Models. We propose a combination of reconstruction-based and contrastive losses, which allows us to choose the most appropriate method for each sensor modality. We demonstrate the benefits of joint representations, particularly with distinct loss functions for each modality, for model-free and model-based RL on complex tasks. These include tasks where the images contain distractions or occlusions, as well as a new locomotion suite. We show that combining reconstruction-based and contrastive losses for joint representation learning improves performance significantly compared to a post hoc combination of image representations and proprioception, and can also improve the quality of learned models for model-based RL.
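The per-modality loss choice described in the abstract can be illustrated with a minimal sketch (not the authors' code): a reconstruction loss for low-dimensional proprioception combined with an InfoNCE-style contrastive loss for image features, both computed from a shared latent state. All module names, dimensions, and the temperature value below are illustrative assumptions.

```python
# Hedged sketch: joint per-modality losses on a shared latent state.
# Assumes the latent comes from an RSSM-style encoder elsewhere in the pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointRepresentationLoss(nn.Module):
    def __init__(self, latent_dim=32, proprio_dim=12, img_feat_dim=128, temperature=0.1):
        super().__init__()
        # Decoder that reconstructs proprioception from the latent state.
        self.proprio_decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ELU(), nn.Linear(64, proprio_dim)
        )
        # Projection head mapping the latent into the image-feature space
        # for the contrastive comparison.
        self.img_projection = nn.Linear(latent_dim, img_feat_dim)
        self.temperature = temperature

    def forward(self, latent, proprio, img_features):
        # latent:       (B, latent_dim)   shared latent state
        # proprio:      (B, proprio_dim)  ground-truth proprioceptive readings
        # img_features: (B, img_feat_dim) features from an image encoder

        # Reconstruction loss for the low-dimensional proprioception.
        recon_loss = F.mse_loss(self.proprio_decoder(latent), proprio)

        # InfoNCE-style contrastive loss for images: each latent should match
        # its own image features; other batch elements serve as negatives.
        z = F.normalize(self.img_projection(latent), dim=-1)
        v = F.normalize(img_features, dim=-1)
        logits = z @ v.t() / self.temperature  # (B, B) similarity matrix
        targets = torch.arange(z.size(0), device=z.device)
        contrastive_loss = F.cross_entropy(logits, targets)

        return recon_loss + contrastive_loss
```

In this kind of setup, reconstruction suits compact, noise-free signals such as proprioception, while a contrastive objective avoids reconstructing pixels that may contain distractions or occlusions.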

