Geometric Learning with Positively Decomposable Kernels (2310.13821v2)

Published 20 Oct 2023 in cs.LG, math.DG, and stat.ML

Abstract: Kernel methods are powerful tools in machine learning. Classical kernel methods are based on positive-definite kernels, which map data spaces into reproducing kernel Hilbert spaces (RKHS). For non-Euclidean data spaces, positive-definite kernels are difficult to come by. In this case, we propose the use of reproducing kernel Krein space (RKKS)-based methods, which require only kernels that admit a positive decomposition. We show that one does not need to access this decomposition in order to learn in RKKS. We then investigate the conditions under which a kernel is positively decomposable. We show that invariant kernels admit a positive decomposition on homogeneous spaces under tractable regularity assumptions. This makes them much easier to construct than positive-definite kernels, providing a route for learning with kernels for non-Euclidean data. By the same token, this provides theoretical foundations for RKKS-based methods in general.
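
For readers unfamiliar with the terminology, a kernel is called positively decomposable when it can be written as the difference of two positive-definite kernels; this is the standard Krein-space notion the abstract referses to. A minimal sketch of the definition in LaTeX (the symbols k_+ and k_- are illustrative choices, not the paper's notation):

% A kernel k on a data space X admits a positive decomposition if
% it is the difference of two positive-definite kernels k_+ and k_-:
\[
  k(x, y) = k_+(x, y) - k_-(x, y), \qquad x, y \in \mathcal{X}.
\]
% The associated reproducing kernel Krein space then carries an
% indefinite inner product built from the two underlying RKHSs:
\[
  \langle f, g \rangle_{\mathcal{K}}
    = \langle f_+, g_+ \rangle_{\mathcal{H}_{k_+}}
    - \langle f_-, g_- \rangle_{\mathcal{H}_{k_-}},
\]
% where H_{k_+} and H_{k_-} denote the RKHSs of k_+ and k_-.

The paper's point is that learning in an RKKS does not require the pair (k_+, k_-) explicitly, only the existence of such a decomposition.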

Authors (4)
  1. Cyrus Mostajeran (28 papers)
  2. Juan-Pablo Ortega (43 papers)
  3. Salem Said (28 papers)
  4. Nathael Da Costa (2 papers)
Citations (1)
