Aidan Scannell

(he/him)

Research Associate

University of Edinburgh

Bayesian and Neural Systems Group

Biography

Hello, I’m Aidan Scannell, a research associate working at the intersection of machine learning, sequential decision-making, and embodied AI. My research is driven by the goal of building autonomous agents that can learn and generalize behaviours across a wide range of tasks. I’m particularly interested in methods and architectures for learning world models, and in understanding how agents can leverage them to solve new tasks efficiently.

Bio:

I am a Research Associate at the University of Edinburgh in the Bayesian and Neural Systems Group, working with Amos Storkey and Peter Bell. Previously, I was a Finnish Center for Artificial Intelligence postdoctoral researcher at Aalto University in Joni Pajarinen’s Robot Learning Lab and Arno Solin’s Machine Learning Research Group. I obtained my PhD from the University of Bristol under the supervision of Arthur Richards and Carl Henrik Ek. During my PhD, I developed methods for controlling quadcopters in uncertain environments by combining probabilistic machine learning, stochastic differential geometry, and reinforcement learning.

Interests

World models · Generative models · Reinforcement learning · Representation learning · Embodied AI · Robotics

Recent News

[03.10.25] We’re top of the leaderboard in the 1X Humanoid World Model Challenge. See here for more details.

[18.05.25] New preprint (led by Yi Zhao) Efficient Reinforcement Learning by Guiding Generalist World Models with Non-Curated Data.

[26.02.25] New paper accepted to ICLR 2025 Workshop on World Models: Understanding, Modelling and Scaling (led by Yi Zhao) Generalist World Model Pre-Training for Efficient Reinforcement Learning.

[22.01.25] New paper accepted to ICLR 2025 - “Discrete Codebook World Models for Continuous Control”.

[06.01.25] Started as a Research Associate at The University of Edinburgh.

All news →

Recent Publications
(2025). Efficient Reinforcement Learning by Guiding Generalist World Models with Non-Curated Data. arXiv preprint arXiv:2502.19544v2.
(2025). Generalist World Model Pre-Training for Efficient Reinforcement Learning. ICLR 2025 Workshop on World Models: Understanding, Modelling and Scaling.
(2025). Discrete Codebook World Models for Continuous Control. In The Thirteenth International Conference on Learning Representations (ICLR).
(2025). Entropy Regularized Task Representation Learning for Offline Meta-Reinforcement Learning. In AAAI 2025.
(2024). iQRL - Implicitly Quantized Representations for Sample-efficient Reinforcement Learning. arXiv preprint arXiv:2406.02696.
Recent & Upcoming Talks
Generative World Modelling for Humanoids: 1X World Model Challenge

Presenting our methods for winning both tracks of the 1X world model challenge.

Projects
1X World Model Challenge

World models equip agents (e.g., humanoid robots) with internal simulators of their environments. By “imagining” the consequences of their actions, agents can plan, …

Discrete Codebook World Models

In reinforcement learning (RL), world models serve as internal simulators, enabling agents to predict environment dynamics and future outcomes in order to make informed decisions. …

Implicitly Quantized Representations for Reinforcement Learning

Learning representations for reinforcement learning (RL) has shown much promise for continuous control. In this project, we investigate using vector quantization to prevent …
