Properly learning monotone functions via local reconstruction (2204.11894v3)
Abstract: We give a $2^{\tilde{O}(\sqrt{n}/\epsilon)}$-time algorithm for properly learning monotone Boolean functions under the uniform distribution over $\{0,1\}^n$. Our algorithm is robust to adversarial label noise and has a running time nearly matching that of the state-of-the-art improper learning algorithm of Bshouty and Tamon (JACM '96) and an information-theoretic lower bound of Blais et al. (RANDOM '15). Prior to this work, no proper learning algorithm with running time smaller than $2^{\Omega(n)}$ was known to exist. The core of our proper learner is a \emph{local computation algorithm} for sorting binary labels on a poset. Our algorithm is built on a body of work on distributed greedy graph algorithms; specifically we rely on a recent work of Ghaffari (FOCS '22), which gives an efficient algorithm for computing maximal matchings in a graph in the LCA model of Rubinfeld et al. and Alon et al. (ICS '11, SODA '12). The applications of our local sorting algorithm extend beyond learning on the Boolean cube: we also give a tolerant tester for Boolean functions over general posets that distinguishes functions that are $\epsilon/3$-close to monotone from those that are $\epsilon$-far. Previous tolerant testers for the Boolean cube only distinguished between $\epsilon/\Omega(\sqrt{n})$-close and $\epsilon$-far.
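To make the central objects concrete, here is a small toy sketch (not the paper's algorithm, which is local and sublinear) of "sorting binary labels on a poset": a function $f$ on the poset $\{0,1\}^n$ ordered coordinatewise has a *violation graph* whose edges are pairs $x \preceq y$ with $f(x) > f(y)$, and a classical fact is that any maximal matching $M$ of this graph sandwiches the distance to monotonicity between $|M|$ and $2|M|$. All function names below are illustrative, and the distance computation is brute force, feasible only for tiny $n$.

```python
from itertools import product

def is_le(x, y):
    """Coordinatewise partial order on {0,1}^n."""
    return all(a <= b for a, b in zip(x, y))

def is_monotone(g, points):
    """Check g(x) <= g(y) whenever x precedes y in the poset."""
    return all(g[x] <= g[y] for x in points for y in points if is_le(x, y))

def distance_to_monotone(f, points):
    """Minimum number of label changes to make f monotone.

    Brute force over all 2^|points| relabelings -- toy only.
    """
    best = len(points)
    for bits in product([0, 1], repeat=len(points)):
        g = dict(zip(points, bits))
        if is_monotone(g, points):
            best = min(best, sum(f[p] != g[p] for p in points))
    return best

def greedy_maximal_matching(f, points):
    """Greedy maximal matching of the violation graph of f.

    Its size lower-bounds the distance to monotonicity, and twice
    its size upper-bounds it (via the matched vertices as a cover).
    """
    violations = [(x, y) for x in points for y in points
                  if x != y and is_le(x, y) and f[x] > f[y]]
    matched, M = set(), []
    for (x, y) in violations:
        if x not in matched and y not in matched:
            M.append((x, y))
            matched.update((x, y))
    return M
```

For example, on $\{0,1\}^2$ the anti-monotone labeling $f(00)=1$, $f(01)=f(10)=f(11)=0$ has three violated pairs, a greedy matching of size 1, and distance exactly 1 (flip $f(00)$). The paper's contribution is, roughly, computing such a matching-based correction *locally*, so that each queried label can be fixed without sorting the whole poset.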