Aidan Scannell
machine-learning
Nordic AI Meet & AI Day: Sample-efficient Reinforcement Learning with Implicitly Quantized Representations
Oct 21, 2024 1:00 PM
Helsinki, Finland
Aidan Scannell
PDF
Slides
iQRL: Implicitly Quantized Representations for Sample-Efficient Reinforcement Learning
I will be presenting our research on self-supervised representation learning for reinforcement learning at the International Workshop of Intelligent Autonomous Learning Systems 2024.
Jul 23, 2024 2:50 PM — 3:05 PM
Darmstädter Haus and Sporthotel Walliser, Kleinwalsertal, Austria
Aidan Scannell
PDF
Poster
Slides
Model-Based Reinforcement Learning
I’ll be giving a lecture on model-based RL at the Cambridge Ellis Unit Summer School on Probabilistic Machine Learning 2024.
Jul 17, 2024 11:30 AM — 1:00 PM
University of Cambridge
Aidan Scannell
PDF
Slides
Function-space Parameterization of Neural Networks for Sequential Learning
Sequential learning paradigms pose challenges for gradient-based deep learning due to difficulties incorporating new data and retaining …
Aidan Scannell, Riccardo Mereu, Paul Chang, Ella Tamir, Joni Pajarinen, Arno Solin
PDF
Cite
Code
Poster
Slides
Website
Implicitly Quantized Representations for Reinforcement Learning
Learning representations for reinforcement learning (RL) has shown much promise for continuous control. In this project, we investigate using vector quantization to prevent representation collapse when learning representations for RL with a self-supervised latent-state consistency loss (a minimal illustrative sketch follows below).
Aidan Scannell, Kalle Kujanpää, Yi Zhao, Mohammadreza Nakhaei, Arno Solin, Joni Pajarinen
PDF
Code
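As a rough illustration of the idea sketched in the abstract above, here is a minimal, hypothetical PyTorch snippet (not the released iQRL code; the module names, dimensions, and stop-gradient target are assumptions): observations are encoded, latents are snapped to their nearest codebook entries with a straight-through gradient, and a latent transition model is trained to match the quantized encoding of the next observation.

```python
# Illustrative sketch only -- not the iQRL implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuantizedEncoder(nn.Module):
    def __init__(self, obs_dim=32, latent_dim=16, codebook_size=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        self.codebook = nn.Embedding(codebook_size, latent_dim)

    def forward(self, obs):
        z = self.encoder(obs)
        # Vector quantization: snap each latent to its nearest codebook entry.
        dists = torch.cdist(z, self.codebook.weight)  # (batch, codebook_size)
        z_q = self.codebook(dists.argmin(dim=-1))
        # Straight-through estimator so gradients still reach the encoder.
        return z + (z_q - z).detach()


def latent_consistency_loss(model, dynamics, obs, next_obs):
    """Pull the predicted next latent towards the quantized encoding of next_obs."""
    z_q = model(obs)
    z_pred = dynamics(z_q)             # latent transition model, e.g. an MLP
    with torch.no_grad():              # stop-gradient target (an EMA encoder in practice)
        z_target = model(next_obs)
    return F.mse_loss(z_pred, z_target)
```

Quantizing against a discrete codebook is one way to keep the self-supervised consistency objective from collapsing all latents to a single point, which is the failure mode the project targets.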
Function-Space Bayesian Deep Learning for Sequential Learning
Sequential learning paradigms pose challenges for gradient-based deep learning due to difficulties incorporating new data and retaining prior knowledge. While Gaussian processes elegantly tackle these problems, they struggle with scalability and handling rich inputs, such as images.
Aidan Scannell, Riccardo Mereu, Paul Chang, Ella Tamir, Joni Pajarinen, Arno Solin
PDF
Code
Website
(Function-space) Laplace Approximation for Bayesian Neural Networks
In this talk, I’ll present an overview of the Laplace approximation for quantifying uncertainty in Bayesian neural networks. …
Oct 3, 2023 4:30 PM — 5:30 PM
Zoom
Aidan Scannell
Slides
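For context (standard notation, not taken from the slides), the weight-space Laplace approximation fits a Gaussian to the posterior around a MAP estimate of the weights:

$$
p(\theta \mid \mathcal{D}) \approx \mathcal{N}\!\big(\theta;\ \theta_{\mathrm{MAP}},\ \Sigma\big),
\qquad
\Sigma = \Big[-\nabla^2_{\theta} \log p(\theta, \mathcal{D}) \,\Big|_{\theta = \theta_{\mathrm{MAP}}}\Big]^{-1}.
$$

The function-space variant in the title instead pushes this Gaussian through a linearization of the network, giving uncertainty over function outputs rather than weights.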
Neural Networks as Sparse Gaussian Processes for Sequential Learning
I will be presenting our research on Bayesian deep learning for sequential learning at the International Workshop of Intelligent Autonomous Learning Systems 2023.
Aug 15, 2023 2:50 PM — 3:05 PM
Darmstädter Haus and Sporthotel Walliser, Kleinwalsertal, Austria
Aidan Scannell
Poster
Slides
Sparse Function-space Representation of Neural Networks
Deep neural networks (NNs) are known to lack uncertainty estimates and struggle to incorporate new data. We present a method that …
Aidan Scannell, Riccardo Mereu, Paul Chang, Ella Tamir, Joni Pajarinen, Arno Solin
PDF
Cite
Code
Poster
Website
Mode-constrained Model-based Reinforcement Learning via Gaussian Processes
We present a model-based RL algorithm that constrains training to a single dynamic mode with high probability. This is a difficult problem because the mode constraint is a hidden variable associated with the environment’s dynamics. As such, it is 1) unknown a priori and 2) not observed from the environment, so it cannot be learned with supervised learning.
Aidan Scannell
,
Carl Henrik Ek
,
Arthur Richards
PDF
Cite
Code
Project
Poster
Source Document