Model-based reinforcement learning under uncertainty

Abstract

In this talk I’ll present our recent paper, accepted at AISTATS 2023, titled Mode-Constrained Model-Based Reinforcement Learning via Gaussian Processes. I’ll then present some of my ongoing research on model-based reinforcement learning under uncertainty.

Paper abstract: Model-based reinforcement learning (MBRL) algorithms do not typically consider environments with multiple dynamics modes, where it is beneficial to avoid inoperable or undesirable modes. We present an MBRL algorithm that avoids entering such inoperable or undesirable dynamics modes by constraining the controlled system to remain in a single dynamics mode with high probability. This is a particularly difficult problem because the mode constraint is unknown a priori. We propose to jointly infer the mode constraint along with the underlying dynamics modes. Importantly, our method infers latent structure that our planning scheme leverages to (1) enforce the mode constraint with high probability, and (2) escape the local optima induced by the mode constraint by targeting exploration where the mode constraint’s epistemic uncertainty is high. We validate our method by showing that it can navigate a simulated quadcopter, subject to a turbulent dynamics mode, to a target state whilst remaining in the desired dynamics mode with high probability.
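To make the planning idea concrete, below is a minimal Python sketch of a chance-constrained objective in the spirit of the abstract: minimize task cost, penalize leaving the desired mode with probability above a tolerance delta, and reward visiting states where the mode constraint's epistemic uncertainty is high. This is an illustration, not the paper's method or API: `mode_prob`, `mode_epistemic_var`, and `dynamics_step` are toy stand-ins for the learned Gaussian process models, and all names and constants here are assumptions made for the sketch.

```python
import numpy as np

def mode_prob(state):
    """Toy mode classifier: P(desired mode | state), boundary at x = 0.
    Stand-in for the learned mode constraint (illustrative only)."""
    return 1.0 / (1.0 + np.exp(-4.0 * state[0]))

def mode_epistemic_var(state):
    """Toy epistemic variance: grows away from the 'data' at the origin.
    Stand-in for the mode constraint's epistemic uncertainty."""
    return 1.0 - np.exp(-0.5 * np.dot(state, state))

def dynamics_step(state, action):
    """Toy single-integrator dynamics, standing in for the learned GP mean."""
    return state + 0.1 * action

def plan_cost(actions, s0, goal, delta=0.05, beta=1.0):
    """Chance-constrained planning objective in penalty form.

    Rolls the toy dynamics forward under an action sequence, accumulating
    distance-to-goal cost, a penalty whenever P(desired mode) < 1 - delta,
    and a negative bonus (reward) for high epistemic uncertainty, which
    targets exploration at the uncertain parts of the mode constraint.
    """
    s, cost = s0, 0.0
    for a in actions:
        s = dynamics_step(s, a)
        cost += np.sum((s - goal) ** 2)                     # task cost
        cost += 1e3 * max(0.0, (1 - delta) - mode_prob(s))  # mode-constraint penalty
        cost -= beta * mode_epistemic_var(s)                # exploration bonus
    return cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s0, goal = np.array([1.0, 0.0]), np.array([2.0, 1.0])
    # Random-shooting planner: sample action sequences, keep the cheapest.
    candidates = rng.normal(size=(256, 10, 2))
    best = min(candidates, key=lambda acts: plan_cost(acts, s0, goal))
```

The random-shooting loop at the end is just one cheap way to optimize such an objective; any trajectory optimizer that handles the chance constraint would slot in the same way.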

Date
Feb 13, 2023 11:30 AM — 12:30 PM
Location
University of Cambridge