Determinant Maximization via Matroid Intersection Algorithms (2207.04318v1)
Abstract: The determinant maximization problem gives a general framework that models problems arising in fields as diverse as statistics \cite{pukelsheim2006optimal}, convex geometry \cite{Khachiyan1996}, fair allocations \cite{anari2016nash}, combinatorics \cite{AnariGV18}, spectral graph theory \cite{nikolov2019proportional}, network design, and random processes \cite{kulesza2012determinantal}. In an instance of the determinant maximization problem, we are given a collection of vectors $U=\{v_1,\ldots, v_n\} \subset \mathbb{R}^d$, and the goal is to pick a subset $S\subseteq U$ of the given vectors to maximize the determinant of the matrix $\sum_{i\in S} v_i v_i^\top$. Often, the set $S$ of picked vectors must satisfy additional combinatorial constraints such as a cardinality constraint ($|S|\leq k$) or a matroid constraint ($S$ is a basis of a matroid defined on the vectors). In this paper, we give a polynomial-time deterministic algorithm that returns an $r^{O(r)}$-approximation for any matroid of rank $r\leq d$. This improves previous results that give $e^{O(r^2)}$-approximation algorithms relying on $e^{O(r)}$-approximate \emph{estimation} algorithms \cite{NikolovS16,anari2017generalization,AnariGV18,madan2020maximizing} for any $r\leq d$. All previous results use convex relaxations and their relationship to stable polynomials and strongly log-concave polynomials. In contrast, our algorithm builds on combinatorial algorithms for matroid intersection, which iteratively improve any solution by finding an \emph{alternating negative cycle} in the \emph{exchange graph} defined by the matroids. While the $\det(\cdot)$ function is not linear, we show that taking appropriate linear approximations at each iteration suffices to give the improved approximation algorithm.
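To make the objective concrete, below is a minimal, illustrative Python sketch of the determinant objective together with a simple single-swap local search under a cardinality constraint (the special case where the matroid is uniform and $k = d$). This is not the paper's algorithm: the paper's method searches for alternating negative cycles in the exchange graph defined by the matroids and uses linear approximations of $\det(\cdot)$, whereas this sketch only performs single-element swaps. All function names and parameters here are hypothetical.

```python
import numpy as np

def log_det_objective(vectors, S):
    """log det(sum_{i in S} v_i v_i^T), computed via the |S| x |S| Gram
    matrix V_S V_S^T; when |S| = d the two determinants coincide."""
    V = vectors[sorted(S)]                      # |S| x d matrix of chosen vectors
    sign, logdet = np.linalg.slogdet(V @ V.T)
    return logdet if sign > 0 else float("-inf")

def swap_local_search(vectors, k, max_iters=1000):
    """Greedy start followed by single-swap local search for the
    cardinality-constrained problem (|S| = k).  Illustrative only; the
    paper's algorithm finds alternating negative cycles in a matroid
    exchange graph rather than single swaps."""
    n = len(vectors)
    S = set()
    for _ in range(k):                          # greedy initialization
        candidates = set(range(n)) - S
        best = max(candidates, key=lambda j: log_det_objective(vectors, S | {j}))
        S.add(best)
    for _ in range(max_iters):                  # swap while log det improves
        current = log_det_objective(vectors, S)
        improved = False
        for i in list(S):
            for j in set(range(n)) - S:
                cand = (S - {i}) | {j}
                if log_det_objective(vectors, cand) > current + 1e-9:
                    S, improved = cand, True
                    break
            if improved:
                break
        if not improved:
            break
    return S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vecs = rng.standard_normal((20, 4))         # n = 20 vectors in R^4
    S = swap_local_search(vecs, k=4)            # k = d, so det(sum v v^T) > 0
    print(sorted(S), log_det_objective(vecs, S))
```

One way to read the paper's contribution against this sketch: single swaps correspond to length-two alternating paths, while the algorithm in the paper improves a solution by exchanging along longer alternating negative cycles in the exchange graph, guided by linear approximations of the (nonlinear) log-determinant objective.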