Sparse Function-space Representation of Neural Networks

21 Jul, 2023
Aidan Scannell*, Riccardo Mereu*, Paul Chang, Ella Tamir, Joni Pajarinen, Arno Solin
*Equal contribution
Abstract
Deep neural networks (NNs) are known to lack uncertainty estimates and struggle to incorporate new data. We present a method that mitigates these issues by converting NNs from weight space to function space, via a dual parameterization. Importantly, the dual parameterization enables us to formulate a sparse representation that captures information from the entire data set. This offers a compact and principled way of capturing uncertainty and enables us to incorporate new data without retraining whilst retaining predictive performance. We provide proof-of-concept demonstrations with the proposed approach for quantifying uncertainty in supervised learning on UCI benchmark tasks.
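As a rough illustration of the dual-parameterization idea described above, the sketch below shows a sparse, dual-parameterized Gaussian-process regressor in plain NumPy: per-datapoint first- and second-order terms are projected onto a small set of inducing points, and new data is absorbed by updating those dual parameters rather than by retraining. This is not the paper's implementation; the RBF kernel, Gaussian likelihood, and the names `SparseDualPosterior`, `update`, and `predict` are illustrative assumptions, whereas in the paper the kernel is derived from the trained network itself.

```python
# Minimal sketch (assumptions noted above), NOT the authors' implementation.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Stand-in kernel; the paper obtains its kernel from the trained NN."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

class SparseDualPosterior:
    def __init__(self, Z, noise_var=0.1):
        self.Z = Z                                  # inducing inputs (M, D)
        self.noise_var = noise_var
        M = Z.shape[0]
        self.Kzz = rbf_kernel(Z, Z) + 1e-6 * np.eye(M)
        # Dual parameters aggregated at the inducing points:
        # alpha collects first-order terms, B collects second-order terms.
        self.alpha = np.zeros(M)
        self.B = np.zeros((M, M))

    def update(self, X, y):
        """Absorb a batch of data into the dual parameters -- no retraining."""
        Kzx = rbf_kernel(self.Z, X)
        self.alpha += Kzx @ y / self.noise_var
        self.B += Kzx @ Kzx.T / self.noise_var

    def predict(self, Xs):
        """Predictive mean and variance at test inputs Xs."""
        Ksz = rbf_kernel(Xs, self.Z)
        Kss_diag = np.full(Xs.shape[0], rbf_kernel(Xs[:1], Xs[:1])[0, 0])
        A = np.linalg.solve(self.Kzz + self.B, np.eye(self.Z.shape[0]))
        Kzz_inv = np.linalg.inv(self.Kzz)
        mean = Ksz @ A @ self.alpha
        # Prior variance minus explained part, plus uncertainty over inducing values.
        var = (Kss_diag
               - np.einsum("ij,jk,ik->i", Ksz, Kzz_inv, Ksz)
               + np.einsum("ij,jk,ik->i", Ksz, A, Ksz))
        return mean, var

# Toy usage: fit on an initial batch, then absorb new data without retraining.
rng = np.random.default_rng(0)
X1 = rng.uniform(-3, 3, size=(50, 1))
y1 = np.sin(X1[:, 0]) + 0.1 * rng.standard_normal(50)
Z = np.linspace(-3, 3, 15)[:, None]

post = SparseDualPosterior(Z)
post.update(X1, y1)                     # initial data set
X2 = rng.uniform(-3, 3, size=(20, 1))
y2 = np.sin(X2[:, 0]) + 0.1 * rng.standard_normal(20)
post.update(X2, y2)                     # new data folded into the dual parameters
mean, var = post.predict(np.linspace(-3, 3, 5)[:, None])
```

Because the dual parameters are sums over data points, new observations only add to `alpha` and `B`, which is what allows incorporating new data without retraining in this toy setting.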
Type: Publication
Published in: ICML 2023 Workshop on Duality Principles for Modern Machine Learning