
A Learning Theory in Linear Systems under Compositional Models (1807.00084v1)

Published 29 Jun 2018 in stat.ML and cs.LG

Abstract: We present a learning theory for the training of a linear system operator having an input compositional variable and propose a Bayesian inversion method for inferring the unknown variable from an output of a noisy linear system. We assume that we have partial or even no knowledge of the operator but have training data of input and output pairs. A compositional variable satisfies the constraints that its elements are all non-negative and sum to unity. We quantify the uncertainty in the trained operator and present the convergence rates of training in explicit form for several interesting cases under stochastic compositional models. The trained linear operator, together with the covariance matrix estimated from the training set of pairs of ground-truth input and noisy output data, is further used to evaluate the posterior uncertainty of the solution. This posterior uncertainty clearly demonstrates uncertainty propagation from noisy training data and addresses possible mismatch between the true operator and the estimated one in the final solution.
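The abstract describes two steps: estimating a linear operator from input-output training pairs, then performing a Bayesian inversion for a compositional (simplex-constrained) input given a new noisy output. As a rough illustration of that setting, here is a minimal NumPy/SciPy sketch. The dimensions, the ordinary-least-squares operator estimate, and the simplex-constrained MAP step are assumptions chosen for demonstration; they are not taken from the paper and may differ from the estimators the authors analyze.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only: variable names, dimensions, and the estimators
# below are assumptions for demonstration, not the paper's own method.
# Setting: a noisy linear system y = A x + e, where x is compositional
# (non-negative entries summing to one) and A is learned from training pairs.

rng = np.random.default_rng(0)
d, m, n = 5, 8, 200                      # simplex dim, output dim, training pairs

A_true = rng.normal(size=(m, d))         # unknown ground-truth operator
X = rng.dirichlet(np.ones(d), size=n)    # compositional inputs (rows on the simplex)
noise_std = 0.05
Y = X @ A_true.T + noise_std * rng.normal(size=(n, m))   # noisy training outputs

# Train the operator by ordinary least squares (one simple choice of estimator).
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = B.T                              # estimated operator, shape (m, d)

# Empirical residual covariance: carries observation noise plus mismatch
# between the true operator and the estimated one into the inversion step.
residuals = Y - X @ A_hat.T
Sigma = np.cov(residuals.T) + 1e-8 * np.eye(m)
Prec = np.linalg.inv(Sigma)

# Bayesian-style inversion of a new noisy output: MAP estimate of x on the
# probability simplex under a flat prior, i.e. constrained weighted least squares.
x_true = rng.dirichlet(np.ones(d))
y_obs = A_true @ x_true + noise_std * rng.normal(size=m)

def neg_log_post(x):
    r = y_obs - A_hat @ x
    return 0.5 * r @ Prec @ r

result = minimize(neg_log_post,
                  x0=np.full(d, 1.0 / d),
                  bounds=[(0.0, 1.0)] * d,
                  constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])

print("true composition:     ", np.round(x_true, 3))
print("recovered composition:", np.round(result.x, 3))
```

Under these assumptions, the residual covariance estimated from the training data is what propagates operator uncertainty into the inversion, mirroring the uncertainty-propagation point made in the abstract.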

