Multiple Locally Linear Kernel Machines

Published 17 Jan 2024 in cs.LG and stat.ML | (2401.09629v1)

Abstract: In this paper we propose a new non-linear classifier based on a combination of locally linear classifiers. A well known optimization formulation is given as we cast the problem in a $\ell_1$ Multiple Kernel Learning (MKL) problem using many locally linear kernels. Since the number of such kernels is huge, we provide a scalable generic MKL training algorithm handling streaming kernels. With respect to the inference time, the resulting classifier fits the gap between high accuracy but slow non-linear classifiers (such as classical MKL) and fast but low accuracy linear classifiers.

Authors (1)
Citations (1)

Summary

  • The paper introduces a novel classification method that combines locally linear kernels with Multiple Kernel Learning to achieve high accuracy with reduced computational cost.
  • The proposed SequentialMKL algorithm efficiently selects a sparse subset of kernels, ensuring fast inference and minimal memory usage.
  • Experimental evaluations demonstrate that MLLKM delivers performance comparable to non-linear classifiers while maintaining inference speeds similar to linear SVMs.

Overview

The paper introduces a novel non-linear classification strategy that bridges the gap between highly accurate but computationally intensive non-linear classifiers and fast but less accurate linear classifiers. The proposed method, Multiple Locally Linear Kernel Machines (MLLKM), combines locally linear classifiers within the Multiple Kernel Learning (MKL) framework.

Methodology

MLLKM builds on a large collection of locally linear kernels, constructed as conformal transformations of a linear kernel. Because the number of candidate kernels is vast, the paper presents an efficient MKL training algorithm that processes kernels as a stream, keeping the approach scalable and computationally feasible. At inference time the classifier is nearly as fast as a linear SVM: a deliberate ℓ1-norm constraint induces sparsity in the kernel weights, so only a small number of kernels is needed for classification.
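To make the conformal-kernel idea concrete, here is a minimal sketch of one locally linear kernel. The parameterisation is an assumption for illustration (a linear kernel reweighted by a Gaussian factor centred on an anchor point, with bandwidth `sigma`); the paper's exact construction may differ.

```python
import numpy as np

def conformal_linear_gram(X, anchor, sigma=1.0):
    """Gram matrix of one locally linear kernel anchored at `anchor`.

    The plain linear kernel x.z is reweighted by a conformal factor
    q(x) = exp(-||x - anchor||^2 / (2 sigma^2)), so the kernel acts
    linearly only in the neighbourhood of the anchor point.
    (Illustrative parameterisation, not the paper's exact definition.)
    """
    q = np.exp(-np.sum((X - anchor) ** 2, axis=1) / (2.0 * sigma ** 2))
    linear = X @ X.T                # plain linear Gram matrix
    return np.outer(q, q) * linear  # k_a(x, z) = q(x) (x.z) q(z)

# Each training sample can serve as an anchor, giving one kernel per sample.
X = np.random.default_rng(0).normal(size=(5, 3))
K = conformal_linear_gram(X, anchor=X[0])
```

Since the conformal factor only rescales a positive semi-definite kernel, each resulting Gram matrix remains a valid (PSD) kernel matrix.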

Algorithmic Approach

To select and combine these kernels, every training sample is initially considered a potential anchor point around which a locally linear kernel is defined. The MLLKM framework then selects a sparse subset of these kernels, keeping inference efficient. The training algorithm presented, SequentialMKL, alternates between updating a reduced set of active kernels and optimizing their weighted combination. This sidesteps the main bottleneck of traditional MKL methods, where processing and storing a huge number of kernels becomes prohibitive.
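The streaming active-set idea can be sketched as follows. This is a simplified stand-in, not the paper's SequentialMKL: candidate kernels are scored by kernel-target alignment (a cheap proxy for usefulness I am assuming here), only a small active set is kept in memory, and the surviving kernels receive ℓ1-normalised blend weights.

```python
import numpy as np

def sequential_kernel_selection(kernel_stream, y, max_active=3):
    """Greedy sketch of streaming kernel selection for l1-sparse MKL.

    Kernels arrive one at a time; at most `max_active` are kept, so
    memory never grows with the total number of candidate kernels.
    Scoring by alignment y^T K y / ||K||_F is an illustrative heuristic,
    not the paper's actual SequentialMKL criterion.
    """
    active = []  # list of (score, K) pairs, at most max_active long
    for K in kernel_stream:
        score = float(y @ K @ y) / (np.linalg.norm(K) + 1e-12)
        active.append((score, K))
        active.sort(key=lambda t: -t[0])
        active = active[:max_active]          # drop the weakest kernel
    scores = np.array([s for s, _ in active])
    weights = scores / scores.sum()           # l1-normalised blend weights
    K_blend = sum(w * K for w, (_, K) in zip(weights, active))
    return weights, K_blend

# Toy usage with random PSD Gram matrices standing in for local kernels.
rng = np.random.default_rng(1)
y = rng.choice([-1.0, 1.0], size=6)

def stream(n_kernels=10):
    for _ in range(n_kernels):
        A = rng.normal(size=(6, 6))
        yield A @ A.T                         # random PSD Gram matrix

w, K_blend = sequential_kernel_selection(stream(), y)
```

The key point the sketch preserves is that peak memory depends on the active-set size, not on the total number of streamed kernels.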

Empirical Evaluation

The effectiveness of MLLKM is substantiated through experiments on both synthetic and real-world datasets. The findings show that MLLKM matches non-linear methods in accuracy while substantially outperforming linear methods, with inference time close to that of a linear SVM. From a practical standpoint, MLLKM also lowers inference costs, both in operation count and in storage requirements, which makes it well suited to memory-constrained deployments.
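The inference-cost claim can be made tangible with a sketch of a sparse locally linear decision function. The parameterisation below (per-anchor linear weights `V`, offsets `b`, blend weights `beta`) is an assumption for illustration, not the paper's exact model; what it shows is that prediction touches only the M active anchors, so the cost is O(M·d) regardless of training-set size.

```python
import numpy as np

def mllkm_predict(x, anchors, V, b, beta, sigma=1.0):
    """Decision value of a sparse locally linear model (illustrative).

    With M active anchors, inference costs O(M * d) independently of
    the training-set size -- close to a linear SVM when M is small,
    unlike a kernel SVM whose cost grows with the number of support
    vectors.  The parameterisation here is a hypothetical sketch.
    """
    # Conformal weight of each anchor at the query point x.
    q = np.exp(-np.sum((anchors - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    # Blend the per-anchor linear scores with the sparse MKL weights.
    return float(np.sum(beta * q * (V @ x + b)))

# Toy usage: 4 active anchors in 3 dimensions.
rng = np.random.default_rng(2)
M, d = 4, 3
anchors, V = rng.normal(size=(M, d)), rng.normal(size=(M, d))
b, beta = rng.normal(size=M), np.full(M, 0.25)
score = mllkm_predict(rng.normal(size=d), anchors, V, b, beta)
```

Storage follows the same pattern: only the M anchor points and their linear models need to be kept, rather than a full set of support vectors.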

In essence, the paper proposes an approach that achieves an attractive trade-off between accuracy and inference efficiency, without the heavy computational demands typically associated with non-linear classifiers. It is a practical step forward for applications where both accuracy and fast decision-making are crucial.
