Aidan Scannell

Postdoctoral researcher | Machine learning | Sequential decision making | Robotics

Biography

Hello, my name is Aidan Scannell and I am a postdoctoral researcher with interests at the intersection of machine learning, sequential decision-making, and robotics. My research focuses on developing autonomous agents capable of learning behaviors to solve a wide range of tasks. I am particularly interested in using natural language instructions to guide these agents and advancing robotics foundation models, especially foundation world models, to enable agents to solve new challenges quickly and effectively.

Bio:

I am a Finnish Center for Artificial Intelligence (FCAI) postdoctoral researcher at Aalto University in Joni Pajarinen’s Robot Learning Lab and Arno Solin’s Machine Learning Research Group. I obtained my PhD from the University of Bristol under the supervision of Arthur Richards and Carl Henrik Ek. During my PhD, I developed methods for controlling quadcopters in uncertain environments by combining probabilistic machine learning, stochastic differential geometry, and reinforcement learning.

Interests
  • Reinforcement learning
  • Embodied AI
  • Representation learning
  • World models
  • Robotics
Education
  • PhD Robotics and Autonomous Systems, 2022

    University of Bristol, UK

  • MEng Mechanical Engineering, 2016

    University of Bristol, UK

Recent News

[23.07.24] Giving a talk at IWIALS 2024 - iQRL: Implicitly Quantized Representations for Sample-Efficient Reinforcement Learning

[04.07.24] Giving a talk at Nordic AI Meet + AI Day 2024 - Sample-Efficient Reinforcement Learning with Implicitly Quantized Representations

[24.06.24] New paper accepted to ICML 2024 Workshop on Aligning Reinforcement Learning Experimentalists and Theorists (ARLET) - “Quantized Representations Prevent Dimensional Collapse in Self-predictive RL”

[19.06.24] Giving a lecture on “Model-based RL” at the Cambridge Ellis Unit Summer School on Probabilistic Machine Learning 2024

[12.06.24] New paper on arXiv - iQRL - Implicitly Quantized Representations for Sample-efficient Reinforcement Learning

Recent Publications

(2024). iQRL - Implicitly Quantized Representations for Sample-efficient Reinforcement Learning. arXiv.

(2024). Quantized Representations Prevent Dimensional Collapse in Self-predictive RL. ICML Workshop on Aligning Reinforcement Learning Experimentalists and Theorists (ARLET).

(2024). Residual Learning and Context Encoding for Adaptive Offline-to-Online Reinforcement Learning. 6th Annual Conference on Learning for Dynamics and Control (L4DC).

(2024). Function-space Parameterization of Neural Networks for Sequential Learning. The Twelfth International Conference on Learning Representations (ICLR).

(2023). Sparse Function-space Representation of Neural Networks. ICML 2023 Workshop on Duality Principles for Modern Machine Learning.

Projects

Contact