
The Teaching Dimension of Kernel Perceptron (2010.14043v2)

Published 27 Oct 2020 in cs.LG and cs.AI

Abstract: Algorithmic machine teaching has been studied under the linear setting, where exact teaching is possible. However, little is known about teaching nonlinear learners. Here, we establish the sample complexity of teaching, also known as the teaching dimension, for kernelized perceptrons under different families of feature maps. As a warm-up, we show that the teaching complexity is $\Theta(d)$ for the exact teaching of linear perceptrons in $\mathbb{R}^d$, and $\Theta(d^k)$ for kernel perceptrons with a polynomial kernel of order $k$. Furthermore, under certain smoothness assumptions on the data distribution, we establish a rigorous bound on the complexity of approximately teaching a Gaussian kernel perceptron. We provide numerical examples of the optimal (approximate) teaching set under several canonical settings for linear, polynomial and Gaussian kernel perceptrons.

Authors (4)
  1. Akash Kumar (87 papers)
  2. Hanqi Zhang (5 papers)
  3. Adish Singla (96 papers)
  4. Yuxin Chen (195 papers)
Citations (7)
