
New boundaries for positive definite functions

Published 27 Nov 2019 in math.FA and math.PR | (arXiv:1911.12344v1)

Abstract: With view to applications in stochastic analysis and geometry, we introduce a new correspondence for positive definite kernels (p.d.) $K$ and their associated reproducing kernel Hilbert spaces. With this we establish two kinds of factorizations: (i) Probabilistic: Starting with a positive definite kernel $K$ we analyze associated Gaussian processes $V$. Properties of the Gaussian processes will be derived from certain factorizations of $K$, arising as a covariance kernel of $V$. (ii) Geometric analysis: We discuss families of measure spaces arising as boundaries for $K$. Our results entail an analysis of a partial order on families of p.d. kernels, a duality for operators and frames, optimization, Karhunen–Loève expansions, and factorizations. Applications include a new boundary analysis for the Drury-Arveson kernel, and for certain fractals arising as iterated function systems; and an identification of optimal feature spaces in machine learning models.

Authors (2)

Summary

  • The paper establishes novel factorizations for positive definite kernels, linking Gaussian process covariance with geometric boundary measures.
  • It introduces a partial order on kernels to compare associated reproducing kernel Hilbert spaces and construct contraction operators.
  • Applications to the Drury-Arveson kernel and fractals illustrate the approach's potential in advancing machine learning and manifold analysis.

New Boundaries for Positive Definite Functions

Introduction

The paper "New boundaries for positive definite functions" addresses the properties and factorization of positive definite (p.d.) kernels, a key concept in stochastic analysis and geometry. By exploring the connections between p.d. kernels and reproducing kernel Hilbert spaces (RKHS), this research provides insights into their applications in Gaussian processes and geometric analysis.

Positive Definite Kernels and Reproducing Kernel Hilbert Spaces

A positive definite kernel is a function $K: X \times X \rightarrow \mathbb{C}$ such that for any finite set $\{x_i\}_{i=1}^N \subset X$, the matrix $[K(x_i, x_j)]_{i,j=1}^N$ is positive semidefinite. This property is pivotal in constructing a reproducing kernel Hilbert space, $\mathscr{H}(K)$, where functions are defined and evaluated using the kernel.
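This finite-matrix condition can be checked numerically. A minimal sketch, using the Gaussian (RBF) kernel as a standard example of a p.d. kernel; the kernel choice and point set are illustrative, not taken from the paper:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel, a classical example of a positive definite kernel."""
    return np.exp(-gamma * (x - y) ** 2)

# Build the Gram matrix [K(x_i, x_j)] for a finite point set {x_i} in X = R.
points = np.array([0.0, 0.5, 1.0, 2.5])
gram = rbf_kernel(points[:, None], points[None, :])

# Positive semidefiniteness of the Gram matrix <=> all of its eigenvalues
# are nonnegative (up to floating-point tolerance).
eigenvalues = np.linalg.eigvalsh(gram)
print(np.all(eigenvalues >= -1e-10))
```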

Factorizations and Gaussian Processes

The paper presents two primary kinds of factorizations for p.d. kernels:

  1. Probabilistic Factorization: This involves associating a Gaussian process $V_x$ with a p.d. kernel $K(x, y)$. The kernel serves as the covariance function for the process, yielding insights into its probabilistic structure.
  2. Geometric Factorization: This explores measure spaces as boundaries for $K$, providing a new perspective on the geometric properties inherent in these kernels.
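The probabilistic factorization can be illustrated on a finite grid: factoring the covariance matrix as $K = BB^{T}$ (here via a Cholesky factorization, one convenient choice) yields a Gaussian sample $V = BZ$ with covariance $K$. A sketch, with the grid, kernel, and jitter term as illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    return np.exp(-gamma * (x - y) ** 2)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)
cov = rbf_kernel(grid[:, None], grid[None, :])

# Factor K = B B^T (Cholesky, with a small jitter for numerical stability).
# Then V = B Z with Z ~ N(0, I) is a centered Gaussian process whose
# covariance E[V_x V_y] equals K(x, y) on the grid.
jittered = cov + 1e-8 * np.eye(len(grid))
B = np.linalg.cholesky(jittered)
sample = B @ rng.standard_normal(len(grid))
```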

Partial Order and Operator Theory

A partial order is established on p.d. kernels, facilitating the comparison and analysis of different kernels based on their properties. The paper defines an order relation $K \ll L$ to indicate that the kernel $K$ is dominated by $L$. This ordering enables the construction of contraction operators between the associated RKHSs, offering a framework for understanding relationships between different function spaces.
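On a finite point set, such domination can be probed by checking that $L - K$ is positive semidefinite. The sketch below constructs $L = K + M$ with $M$ itself a p.d. kernel, so the finite-set condition holds by design; the specific Gaussian kernels are illustrative choices, not from the paper:

```python
import numpy as np

def kernel_K(x, y):
    """Base kernel K (a Gaussian)."""
    return np.exp(-(x - y) ** 2)

def kernel_M(x, y):
    """A second positive definite kernel M (a wider Gaussian)."""
    return np.exp(-0.5 * (x - y) ** 2)

# If L = K + M with M positive definite, then L - K = M is positive
# semidefinite on every finite point set -- a finite-dimensional check
# consistent with the order relation K << L.
points = np.linspace(-1.0, 1.0, 20)
K = kernel_K(points[:, None], points[None, :])
L = K + kernel_M(points[:, None], points[None, :])

diff_eigs = np.linalg.eigvalsh(L - K)
print(np.all(diff_eigs >= -1e-8))
```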

Applications to Drury-Arveson Kernel and Fractals

The paper demonstrates the applications of these theoretical insights by analyzing specific examples, such as the Drury-Arveson kernel and iterated function systems (IFS) leading to fractals. These applications underscore the utility of kernel factorizations in complex domains and manifold learning, which is particularly relevant in machine learning where feature spaces are critical.
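For concreteness, the Drury-Arveson kernel $K(z, w) = 1/(1 - \langle z, w\rangle)$ on the unit ball of $\mathbb{C}^d$ can be evaluated directly. The sketch below builds its Gram matrix for a few random points in the ball and confirms it is Hermitian positive semidefinite; the dimension and point set are illustrative:

```python
import numpy as np

def drury_arveson(z, w):
    """Drury-Arveson kernel K(z, w) = 1 / (1 - <z, w>) on the unit ball of C^d."""
    return 1.0 / (1.0 - np.dot(z, np.conj(w)))

rng = np.random.default_rng(1)
d, n = 2, 6
# Random points inside the open unit ball of C^d (rescaled to norm 0.4).
pts = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
pts = 0.4 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

gram = np.array([[drury_arveson(z, w) for w in pts] for z in pts])
eigs = np.linalg.eigvalsh(gram)  # Hermitian Gram matrix; eigenvalues should be >= 0
```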

Implementation Considerations

Implementing the concepts from this paper involves several sophisticated mathematical tools, such as Itô integrals and Gaussian measures, which play a crucial role in defining and analyzing the Gaussian processes tied to the kernels. Computational frameworks can exploit these structures, for instance through kernelized methods, to analyze high-dimensional data sets in machine learning.
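One computational route suggested by the Karhunen-Loève expansions mentioned in the abstract is a truncated eigendecomposition of the Gram matrix, which yields a finite-dimensional feature map $\Phi$ with $K \approx \Phi\Phi^{T}$. A sketch; the kernel, grid, and truncation rank are illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def rbf_kernel(x, y):
    return np.exp(-(x - y) ** 2)

grid = np.linspace(0.0, 1.0, 100)
K = rbf_kernel(grid[:, None], grid[None, :])

# Truncated Karhunen-Loeve / Mercer expansion: keep the m largest eigenpairs
# and form a finite-dimensional feature map Phi with K ~ Phi Phi^T.
eigvals, eigvecs = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1]
m = 10
top = order[:m]
phi = eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0.0, None))

# Relative approximation error of the rank-m feature map.
error = np.linalg.norm(K - phi @ phi.T) / np.linalg.norm(K)
```

Because the RBF kernel's eigenvalues decay rapidly, a small rank m already reproduces the Gram matrix to high accuracy; this is the sense in which a truncated expansion identifies a compact feature space.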

Conclusion

The work on new boundaries for positive definite functions extends the theoretical landscape of p.d. kernels and RKHS, introducing critical connections and applications across multiple domains, including probability, geometry, and machine learning. These insights pave the way for more advanced studies on kernel methods and their broad applicability in various scientific fields.
