Aidan Scannell

Postdoctoral researcher | Machine learning | Sequential decision-making | Robotics

Biography

Hello, my name is Aidan Scannell and I am a postdoctoral researcher with interests at the intersection of machine learning, sequential decision-making, and robotics. My research focuses on developing autonomous agents capable of learning behaviors to solve a wide range of tasks. I am particularly interested in using natural language instructions to guide these agents and advancing robotics foundation models, especially foundation world models, to enable agents to solve new challenges quickly and effectively.

Bio:

I am a Research Associate at the University of Edinburgh in the Bayesian and Neural Systems Group, working with Amos Storkey, Stefano Albrecht, and Peter Bell. Previously, I was a Finnish Center for Artificial Intelligence postdoctoral researcher at Aalto University in Joni Pajarinen’s Robot Learning Lab and Arno Solin’s Machine Learning Research Group. I obtained my PhD from the University of Bristol under the supervision of Arthur Richards and Carl Henrik Ek. During my PhD, I developed methods for controlling quadcopters in uncertain environments by synergising methods from probabilistic machine learning, stochastic differential geometry, and reinforcement learning.

Interests
  • Reinforcement learning
  • Embodied AI
  • Representation learning
  • World models
  • Robotics
Education
  • PhD Robotics and Autonomous Systems, 2022

    University of Bristol, UK

  • MEng Mechanical Engineering, 2016

    University of Bristol, UK

Recent News

[26.02.25] New preprint (led by Yi Zhao) - “Generalist World Model Pre-Training for Efficient Reinforcement Learning”

[22.01.25] New paper accepted to ICLR 2025 - “Discrete Codebook World Models for Continuous Control”

[06.01.25] Started as a Research Associate at The University of Edinburgh

[19.12.24] New paper (led by Mohammadreza Nakhaeinezhadfard) accepted to AAAI 2025 - “Entropy Regularized Task Representation Learning for Offline Meta-Reinforcement Learning”

[12.10.24] Giving a talk at Nordic AI Meet + AI Day 2024 - “Sample-Efficient Reinforcement Learning with Implicitly Quantized Representations”


Recent Publications

(2025). Generalist World Model Pre-Training for Efficient Reinforcement Learning. arXiv preprint arXiv:2502.19544.

(2025). Discrete Codebook World Models for Continuous Control. The Thirteenth International Conference on Learning Representations (ICLR).

(2025). Entropy Regularized Task Representation Learning for Offline Meta-Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence.

(2024). iQRL - Implicitly Quantized Representations for Sample-efficient Reinforcement Learning. arXiv preprint arXiv:2406.02696.

(2024). Quantized Representations Prevent Dimensional Collapse in Self-predictive RL. ICML Workshop on Aligning Reinforcement Learning Experimentalists and Theorists (ARLET).

Projects

Contact