
Approximation results regarding the multiple-output mixture of linear experts model (1704.00946v4)

Published 4 Apr 2017 in stat.ME

Abstract: Mixture of experts (MoE) models are a class of artificial neural networks that can be used for functional approximation and probabilistic modeling. An important class of MoE models is the class of mixture of linear experts (MoLE) models, where the expert functions map to real topological output spaces. There are a number of powerful approximation results regarding MoLE models when the output space is univariate. These results guarantee the ability of MoLE mean functions to approximate arbitrary continuous functions, and of MoLE models themselves to approximate arbitrary conditional probability density functions. We utilize and extend the univariate approximation results in order to prove a pair of useful results for situations where the output spaces are multivariate.
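To make the objects in the abstract concrete, the following is a minimal sketch of a multiple-output MoLE mean function: softmax gates that depend affinely on the input, linear (affine) experts mapping R^p to R^q, and a mean function given by the gate-weighted sum of expert outputs. All names, dimensions, and the random initialization are illustrative assumptions, not the paper's construction or notation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MoLE:
    """Illustrative multiple-output mixture of linear experts.

    Gates: softmax over affine scores of the input x.
    Experts: affine maps from R^p (input) to R^q (output).
    Mean function: gate-weighted sum of the expert outputs.
    (Hypothetical parameterization for illustration only.)
    """

    def __init__(self, n_experts, in_dim, out_dim, rng=None):
        rng = np.random.default_rng(rng)
        # Gating parameters: one affine score per expert.
        self.V = rng.normal(size=(n_experts, in_dim))
        self.c = rng.normal(size=n_experts)
        # Expert parameters: one affine map per expert.
        self.W = rng.normal(size=(n_experts, out_dim, in_dim))
        self.b = rng.normal(size=(n_experts, out_dim))

    def gates(self, X):
        # X: (n, in_dim) -> (n, n_experts) mixing weights that sum to 1.
        return softmax(X @ self.V.T + self.c)

    def mean(self, X):
        # Expert outputs: (n, n_experts, out_dim).
        expert_out = np.einsum('kqp,np->nkq', self.W, X) + self.b
        g = self.gates(X)  # (n, n_experts)
        # Convex combination of expert outputs per observation.
        return np.einsum('nk,nkq->nq', g, expert_out)

if __name__ == "__main__":
    model = MoLE(n_experts=5, in_dim=3, out_dim=2, rng=0)
    X = np.random.default_rng(1).normal(size=(4, 3))
    print(model.mean(X).shape)  # (4, 2): one R^2 mean per input
```

The univariate results referenced in the abstract concern the out_dim = 1 case; the paper's contribution is to extend such guarantees to mean functions and conditional densities with multivariate outputs (out_dim > 1 in this sketch).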
