On the Expressiveness of Multi-Neuron Convex Relaxations (2410.06816v2)

Published 9 Oct 2024 in cs.LG and cs.AI

Abstract: To provide robustness guarantees, neural network certification methods heavily rely on convex relaxations. The imprecision of these convex relaxations, however, is a major obstacle: even the most precise single-neuron relaxation is incomplete for general ReLU networks, a phenomenon referred to as the single-neuron convex barrier. While heuristic instantiations of multi-neuron relaxations have been proposed to circumvent this barrier in practice, their theoretical properties remain largely unknown. In this work, we conduct the first rigorous study of the expressiveness of multi-neuron relaxations. We first show that the $\max$ function in $\mathbb{R}^d$ can be encoded by a ReLU network and exactly bounded by a multi-neuron relaxation, which is impossible for any single-neuron relaxation. Further, we prove that multi-neuron relaxations can be turned into complete verifiers by semantic-preserving structural transformations or by input space partitioning that enjoys improved worst-case partition complexity. We also show that without these augmentations, the completeness guarantee can no longer be obtained, and the relaxation error of every multi-neuron relaxation can be unbounded. To the best of our knowledge, this is the first work to provide an extensive characterization of multi-neuron relaxations and their expressiveness in neural network certification.
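
For intuition about the first result, the maximum of two numbers can be written with a single ReLU unit as $\max(a, b) = b + \mathrm{ReLU}(a - b)$, and applying this identity pairwise gives a ReLU network of depth $O(\log d)$ that computes $\max$ over $\mathbb{R}^d$. The sketch below illustrates this standard encoding; it is an assumption-laden illustration with names of our choosing, not the paper's exact construction or the relaxation analysis itself.

```python
import numpy as np

def relu(x):
    # ReLU activation
    return np.maximum(x, 0.0)

def max_two(a, b):
    # max(a, b) = b + ReLU(a - b): one affine layer followed by one ReLU unit
    return b + relu(a - b)

def relu_max(x):
    # Pairwise reduction: a ReLU network of depth O(log d) computing max over R^d
    vals = list(x)
    while len(vals) > 1:
        nxt = [max_two(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            nxt.append(vals[-1])  # carry an unpaired element to the next layer
        vals = nxt
    return vals[0]

x = np.array([0.3, -1.2, 2.5, 0.9])
print(relu_max(x), np.max(x))  # both print 2.5
```

The paper's point is that a multi-neuron relaxation can bound such a network exactly, whereas any single-neuron relaxation necessarily loses precision on it.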
