
Variational Learning on Aggregate Outputs with Gaussian Processes (1805.08463v1)

Published 22 May 2018 in stat.ML, cs.LG, stat.AP, and stat.ME

Abstract: While a typical supervised learning framework assumes that the inputs and the outputs are measured at the same levels of granularity, many applications, including global mapping of disease, only have access to outputs at a much coarser level than that of the inputs. Aggregation of outputs makes generalization to new inputs much more difficult. We consider an approach to this problem based on variational learning with a model of output aggregation and Gaussian processes, where aggregation leads to intractability of the standard evidence lower bounds. We propose new bounds and tractable approximations, leading to improved prediction accuracy and scalability to large datasets, while explicitly taking uncertainty into account. We develop a framework which extends to several types of likelihoods, including the Poisson model for aggregated count data. We apply our framework to a challenging and important problem, the fine-scale spatial modelling of malaria incidence, with over 1 million observations.
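To make the aggregation setting concrete, here is a minimal, hypothetical sketch (not the authors' code) of the Poisson aggregate-output model the abstract describes: each fine-grained pixel carries a latent log-intensity, but only region-level counts are observed, with each region's rate equal to the sum of its pixels' rates. All names and the toy aggregation matrix below are illustrative assumptions.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

n_pixels, n_regions = 12, 3
f = rng.normal(size=n_pixels)          # latent log-intensity per pixel (GP sample in the full model)

# Aggregation matrix A: A[a, i] = 1 if pixel i belongs to region a.
A = np.zeros((n_regions, n_pixels))
A[np.repeat(np.arange(n_regions), n_pixels // n_regions),
  np.arange(n_pixels)] = 1.0

rates = A @ np.exp(f)                  # region rate = sum of its pixels' rates
y = rng.poisson(rates)                 # observed counts at the coarse level only

# Poisson log-likelihood of the aggregated counts given f
log_lik = np.sum(y * np.log(rates) - rates
                 - np.array([lgamma(k + 1) for k in y]))
print(log_lik)
```

In the paper's full framework, `f` is a Gaussian process and this aggregated likelihood is what makes the standard evidence lower bound intractable, motivating the new bounds and approximations.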

Authors (7)
  1. Ho Chung Leon Law (5 papers)
  2. Dino Sejdinovic (88 papers)
  3. Ewan Cameron (21 papers)
  4. Tim CD Lucas (1 paper)
  5. Seth Flaxman (49 papers)
  6. Katherine Battle (1 paper)
  7. Kenji Fukumizu (89 papers)
Citations (37)

