
Efficient Determinant Maximization for All Matroids (2211.10507v1)

Published 18 Nov 2022 in cs.DS and math.CO

Abstract: Determinant maximization provides an elegant generalization of problems in many areas, including convex geometry, statistics, machine learning, fair allocation of goods, and network design. In an instance of the determinant maximization problem, we are given a collection of vectors $v_1,\ldots, v_n \in \mathbb{R}^d$, and the goal is to pick a subset $S \subseteq [n]$ of the given vectors to maximize the determinant of the matrix $\sum_{i \in S} v_i v_i^\top$, where the picked set of vectors $S$ must satisfy some combinatorial constraint such as a cardinality constraint ($|S| \leq k$) or a matroid constraint ($S$ is a basis of a matroid defined on $[n]$). In this work, we give a combinatorial algorithm for the determinant maximization problem under a matroid constraint that achieves an $O(d^{O(d)})$-approximation for any matroid of rank $r \geq d$. This complements the recent result of~\cite{BrownLPST22} that achieves a similar bound for matroids of rank $r \leq d$, relying on a geometric interpretation of the determinant. Our result matches the best-known estimation algorithms~\cite{madan2020maximizing} for the problem, which could estimate the objective value but could not give an approximate solution with a similar guarantee. Our work follows the framework developed by~\cite{BrownLPST22} of using matroid-intersection-based algorithms for determinant maximization. To overcome the lack of a simple geometric interpretation of the objective when $r \geq d$, our approach combines ideas from combinatorial optimization with algebraic properties of the determinant. We also critically use the properties of a convex programming relaxation of the problem introduced by~\cite{madan2020maximizing}.
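
The objective itself is easy to state in code. The following is a minimal, illustrative sketch, not the paper's algorithm: it evaluates $\det(\sum_{i \in S} v_i v_i^\top)$ for a candidate index set $S$ and brute-forces a simple cardinality constraint $|S| = k$ in place of a general matroid constraint. All function names and the random test data are hypothetical.

```python
import numpy as np
from itertools import combinations

def det_objective(vectors, S):
    """det(sum_{i in S} v_i v_i^T) for a chosen index set S."""
    d = vectors.shape[1]
    M = np.zeros((d, d))
    for i in S:
        v = vectors[i]
        M += np.outer(v, v)  # rank-one update v_i v_i^T
    return np.linalg.det(M)

def brute_force_max(vectors, k):
    """Exhaustive search over all size-k subsets (assumes k >= d,
    so the Gram sum can be full rank and the determinant nonzero)."""
    n = vectors.shape[0]
    best_S, best_val = None, -np.inf
    for S in combinations(range(n), k):
        val = det_objective(vectors, S)
        if val > best_val:
            best_S, best_val = S, val
    return best_S, best_val

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vectors = rng.standard_normal((8, 3))  # n = 8 vectors in R^3
    S, val = brute_force_max(vectors, k=4)
    print(f"best subset {S}, determinant {val:.4f}")
```

The exhaustive search takes $\binom{n}{k}$ determinant evaluations and is included only to make the objective concrete; the paper's contribution is a polynomial-time combinatorial algorithm achieving an $O(d^{O(d)})$-approximation under arbitrary matroid constraints of rank $r \geq d$.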
