Empirical Evaluation of Neural Process Objectives

Tuan Anh Le, Hyunjik Kim, Marta Garnelo, Dan Rosenbaum, Jonathan Schwarz, Yee Whye Teh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Neural processes (NPs) [3, 4] are parametric stochastic processes that can be trained on a dataset consisting of sets of input-output pairs. At test time, given a context set of input-output pairs and a set of target inputs, they allow us to approximate the posterior predictive distribution of the target outputs. NPs have shown promise in applications such as image super-resolution, conditional image generation, and scalable Bayesian optimization. It is, however, unclear which objective and model specification should be used to train NPs. This abstract empirically evaluates the performance of NPs under different objectives and model specifications. Given that some objectives and model specifications clearly outperform others, our analysis can be useful in guiding future research and applications of NPs.
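The context-to-predictive mapping described above can be sketched minimally in NumPy: a permutation-invariant encoder mean-pools embeddings of the context pairs into a representation, and a decoder conditions each target input on that representation to produce a Gaussian predictive mean and standard deviation. This is a hypothetical illustration of the general (conditional) NP structure, not the specific models or objectives evaluated in the paper; all weights and dimensions here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x_ctx, y_ctx, w_enc):
    # Embed each context (x, y) pair, then mean-pool into a
    # permutation-invariant representation r of the context set.
    pairs = np.concatenate([x_ctx, y_ctx], axis=-1)  # (n_ctx, dx + dy)
    h = np.tanh(pairs @ w_enc)                       # (n_ctx, dr)
    return h.mean(axis=0)                            # (dr,)

def decode(x_tgt, r, w_dec):
    # Condition each target input on r; output a Gaussian mean and a
    # positive std per target, approximating the posterior predictive.
    inp = np.concatenate([x_tgt, np.tile(r, (len(x_tgt), 1))], axis=-1)
    out = inp @ w_dec                                # (n_tgt, 2 * dy)
    mu, raw = np.split(out, 2, axis=-1)
    sigma = np.log1p(np.exp(raw))                    # softplus keeps std > 0
    return mu, sigma

# Toy 1-D regression: 5 context points, 3 target inputs (illustrative sizes).
dx, dy, dr = 1, 1, 8
w_enc = rng.normal(size=(dx + dy, dr))
w_dec = rng.normal(size=(dx + dr, 2 * dy))

x_ctx, y_ctx = rng.normal(size=(5, dx)), rng.normal(size=(5, dy))
x_tgt = rng.normal(size=(3, dx))

r = encode(x_ctx, y_ctx, w_enc)
mu, sigma = decode(x_tgt, r, w_dec)
print(mu.shape, sigma.shape)  # one (mean, std) pair per target input
```

In a trained NP the weights would be learned by maximizing a (variational or maximum-likelihood) objective over the target log-density, which is precisely the design choice the paper's evaluation compares.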
Original language: English
Title of host publication: Bayesian Deep Learning workshop, Neural Information Processing Systems (NeurIPS)
State: Published - 2018
Externally published: Yes

