Information theoretical clustering is hard to approximate (1812.07075v2)

Published 17 Dec 2018 in cs.DS

Abstract: An impurity measure $I: \mathbb{R}^d \mapsto \mathbb{R}_+$ is a function that assigns to a $d$-dimensional vector ${\bf v}$ a non-negative value $I({\bf v})$ so that the more homogeneous ${\bf v}$, with respect to the values of its coordinates, the larger its impurity. A well-known example of an impurity measure is the Entropy impurity. We study the problem of clustering based on impurity measures. Let $V$ be a collection of $n$ $d$-dimensional vectors with non-negative components. Given $V$ and an impurity measure $I$, the goal is to find a partition ${\mathcal P}$ of $V$ into $k$ groups $V_1,\ldots,V_k$ so as to minimize the sum of the impurities of the groups in ${\cal P}$, i.e., $I({\cal P})= \sum_{i=1}^{k} I\bigg(\sum_{ {\bf v} \in V_i} {\bf v} \bigg).$ Impurity minimization has been widely used as a quality-assessment measure in probability distribution clustering (KL-divergence) as well as in categorical clustering. However, in contrast to the case of metric-based clustering, the current knowledge of impurity-measure-based clustering in terms of approximation and inapproximability results is very limited. Here, we contribute to changing this scenario by proving that for the Entropy impurity measure the problem does not admit a PTAS even when all vectors have the same $\ell_1$ norm. This result solves a question that remained open in previous work on this topic [Chaudhuri and McGregor COLT 08; Ackermann et al. ECCC 11].
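To make the objective concrete, the sketch below computes $I({\cal P})$ for the Entropy impurity under one common (weighted) definition, $I({\bf v}) = \sum_i v_i \log(\lVert {\bf v} \rVert_1 / v_i)$; this is an illustrative assumption about the exact form, not necessarily the normalization used in the paper.

```python
import math

def entropy_impurity(v):
    """Entropy impurity of a non-negative vector v.

    Assumed (weighted) definition: I(v) = sum_i v_i * log(||v||_1 / v_i),
    with the convention 0 * log(.) = 0. A vector concentrated on one
    coordinate has impurity 0; equal coordinates maximize it.
    """
    total = sum(v)
    if total == 0:
        return 0.0
    return sum(x * math.log(total / x) for x in v if x > 0)

def partition_impurity(partition):
    """I(P) = sum over groups of I(sum of the vectors in the group)."""
    cost = 0.0
    for group in partition:
        # Coordinate-wise sum of all vectors in this group.
        agg = [sum(coords) for coords in zip(*group)]
        cost += entropy_impurity(agg)
    return cost
```

For example, grouping the two copies of $(1,0)$ together and $(0,1)$ alone gives total impurity 0, whereas mixing $(1,0)$ with $(0,1)$ in one group costs $2\log 2$, illustrating why the objective rewards homogeneous groups.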

Authors (2)
  1. Ferdinando Cicalese (30 papers)
  2. Eduardo Laber (12 papers)
Citations (7)
