Neural Networks as Sparse Gaussian Processes for Sequential Learning

Abstract

Deep neural networks are known to lack uncertainty estimates, to struggle to incorporate new data, and to suffer from catastrophic forgetting. In this talk, I’ll present our method, which mitigates these issues by converting a neural network from its weight-space representation into a sparse Gaussian process via the so-called dual parameters. This conversion offers a compact and principled way of capturing uncertainty, and it enables us to incorporate new data without retraining whilst retaining predictive performance. I’ll demonstrate the proposed approach for quantifying uncertainty in supervised learning, for maintaining an expressive functional representation in continual learning, and for guiding exploration in model-based reinforcement learning.

Date
Aug 15, 2023, 2:50 PM – 3:05 PM
Location
Darmstädter Haus and Sporthotel Walliser, Kleinwalsertal, Austria