Learning from Distributions via Support Measure Machines (1202.6504v2)

Published 29 Feb 2012 in stat.ML and cs.LG

Abstract: This paper presents a kernel-based discriminative learning framework on probability measures. Rather than relying on large collections of vectorial training examples, our framework learns using a collection of probability distributions that have been constructed to meaningfully represent training data. By representing these probability distributions as mean embeddings in the reproducing kernel Hilbert space (RKHS), we are able to apply many standard kernel-based learning techniques in a straightforward fashion. To accomplish this, we construct a generalization of the support vector machine (SVM) called a support measure machine (SMM). Our analysis of SMMs provides several insights into their relationship to traditional SVMs. Based on such insights, we propose a flexible SVM (Flex-SVM) that places different kernel functions on each training example. Experimental results on both synthetic and real-world data demonstrate the effectiveness of our proposed framework.

Citations (193)

Summary

Learning from Distributions via Support Measure Machines

The paper, authored by Krikamol Muandet and colleagues, introduces a kernel-based framework for learning from probability distributions rather than from individual vectorial data points. This is a notable advance for discriminative learning in scenarios where data come in the form of probability measures, which can summarize large or uncertain collections of samples more compactly than the raw data points themselves.

The framework represents probability distributions as mean embeddings in a reproducing kernel Hilbert space (RKHS), so that standard kernel-based learning techniques can be applied to them directly. This idea leads to the Support Measure Machine (SMM), a generalization of the support vector machine (SVM) that operates on probability measures. The paper examines the relationship between the SMM and the traditional SVM, establishing that the SVM can be viewed as the special case of the SMM in which each input distribution is a Dirac (point) measure.
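
In symbols (our notation, not quoted verbatim from the paper), the mean embedding of a distribution $\mathbb{P}$ and the kernel it induces between distributions are

$$\mu_{\mathbb{P}} \;=\; \mathbb{E}_{x\sim\mathbb{P}}\big[k(x,\cdot)\big] \in \mathcal{H}, \qquad K(\mathbb{P},\mathbb{Q}) \;=\; \big\langle \mu_{\mathbb{P}}, \mu_{\mathbb{Q}} \big\rangle_{\mathcal{H}} \;=\; \mathbb{E}_{x\sim\mathbb{P},\,y\sim\mathbb{Q}}\big[k(x,y)\big],$$

so for point measures $\mathbb{P}=\delta_x$ and $\mathbb{Q}=\delta_y$ this reduces to $k(x,y)$ and the SMM coincides with the usual SVM.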

In this paper, several key contributions stand out:

  1. Regularization Framework: The authors derive a representer theorem tailored to the space of probability distributions, extending the traditional regularization approach used in SVMs. This theorem shows that the optimal solution can be written in terms of the mean embeddings of the training distributions, so standard kernel methods apply directly to these embeddings.
  2. Kernel Development: The authors propose a family of positive definite kernels specifically for distributions. This development enriches the toolbox for dealing with probability measures by providing kernels that are derived flexibly from existing kernels on vector spaces.
  3. Algorithm Connection and Flexibility: The analysis demonstrates a close connection between sample-based and distribution-based methods, leading to the formulation of a flexible SVM (Flex-SVM). This adaptation places a different kernel function on each training example, thereby accommodating distributions with diverse characteristics such as varying location and scale (a closed-form Gaussian example follows this list).
  4. Construction of SMM: The proposed SMM is distinguished by its ability to take whole distributions, rather than single vectors, as inputs, achieving substantial accuracy improvements in various experimental setups on both synthetic and real-world datasets (a minimal empirical sketch follows this list).
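
To make the Flex-SVM connection in item 3 concrete: placing a Gaussian RBF base kernel $k(x,y)=\exp(-\|x-y\|^2/2\theta^2)$ on the examples and treating each example as a Gaussian measure $\mathcal{N}(m_i,\Sigma_i)$ with its own covariance yields an expected kernel with a closed form (a standard Gaussian integral stated in our notation, not quoted from the paper):

$$K(\mathbb{P}_i,\mathbb{P}_j) \;=\; \big|I + \theta^{-2}(\Sigma_i+\Sigma_j)\big|^{-1/2} \exp\!\Big(-\tfrac{1}{2}(m_i-m_j)^{\top}\big(\Sigma_i+\Sigma_j+\theta^2 I\big)^{-1}(m_i-m_j)\Big).$$

Setting $\Sigma_i=\Sigma_j=0$ recovers the ordinary RBF kernel $k(m_i,m_j)$, which is one way to see the SVM as the degenerate case; letting each $\Sigma_i$ differ corresponds to an example-specific kernel scale.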
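
The following is a minimal sketch of the empirical construction behind items 2 and 4, assuming each training example is a bag of samples from an underlying distribution and estimating the kernel between distributions by averaging a base RBF kernel over sample pairs; the data, bandwidth, and function names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel


def smm_gram(bags_a, bags_b, gamma=1.0):
    """Estimate K(P_i, Q_j) = E_{x~P_i, y~Q_j}[k(x, y)] by averaging the
    base RBF kernel over all pairs of samples from the two bags."""
    return np.array([[rbf_kernel(A, B, gamma=gamma).mean() for B in bags_b]
                     for A in bags_a])


rng = np.random.default_rng(0)
# Two classes of 2-D Gaussian "bags" differing in location: 30 bags per class,
# 50 samples per bag.
bags = [rng.normal(loc=y, scale=0.5, size=(50, 2))
        for y in (0.0, 1.0) for _ in range(30)]
labels = np.repeat([0, 1], 30)

# Train an SVM on the precomputed distribution-level Gram matrix: this is the
# empirical SMM in its simplest form.
K_train = smm_gram(bags, bags)
clf = SVC(kernel="precomputed").fit(K_train, labels)

# Predict labels for new bags; rows of K_test index test bags, columns index
# training bags.
test_bags = [rng.normal(loc=y, scale=0.5, size=(50, 2)) for y in (0.0, 1.0)]
K_test = smm_gram(test_bags, bags)
print(clf.predict(K_test))  # expected: one bag from each class, e.g. [0 1]
```

The sample average above is simply the plug-in estimator of the expected kernel; kernels on mean embeddings can also be composed with a further nonlinear kernel (for example, an RBF on the embedding distance), which is the kind of flexibility item 2 refers to.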

Empirical evaluations affirm the efficacy of SMMs, demonstrating that the framework offers a robust and scalable approach to learning from distributions. In cases where training data must be represented by distributions, such as uncertain or abundant data in genomics or neuroinformatics, the proposed methods are particularly advantageous. They help reduce computational burdens by summarizing large volumes of data into meaningful probabilistic representations.

Overall, the implications of this research are profound, suggesting a significant shift in how machine learning models could handle complex and varying data formats. Future developments in AI will likely benefit from these methodologies, extending the applications to dynamic fields like climate informatics and personalized medicine, where data is inherently stochastic and voluminous.

The paper's exploration of flexible kernel mappings and representation theorems sets a promising direction for theoretical advancements and practical implementation in machine learning frameworks that operate directly on distributions, providing a solid foundation for further research in this domain.
