Reachability Is NP-Complete Even for the Simplest Neural Networks (2108.13179v2)
Published 30 Aug 2021 in cs.CC and cs.LG
Abstract: We investigate the complexity of the reachability problem for (deep) neural networks: given some valid input, does the network compute a valid output? It was recently claimed that the problem is NP-complete for general neural networks and conjunctive input/output specifications. We repair some flaws in the original upper and lower bound proofs. We then show that NP-hardness already holds for restricted classes of simple specifications and neural networks with just one layer, as well as neural networks with minimal requirements on their parameters.
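To make the decision problem concrete, the following is a minimal sketch (in Python, with hypothetical names and example data not taken from the paper) of reachability for a one-layer ReLU network with a conjunctive box-style input specification and a single linear output constraint. The random search is purely illustrative: a hit yields a witness, but a miss proves nothing, and the paper shows that deciding the question exactly is NP-complete even in such restricted settings.

```python
import numpy as np

def relu_layer(W, b, x):
    """Forward pass of a single ReLU layer: y = max(0, Wx + b)."""
    return np.maximum(0.0, W @ x + b)

def satisfies_input_spec(x, lo, hi):
    """Conjunctive input specification: lo_i <= x_i <= hi_i for all i."""
    return np.all(x >= lo) and np.all(x <= hi)

def satisfies_output_spec(y, c, t):
    """Simple conjunctive output specification: c . y >= t."""
    return float(c @ y) >= t

def reachable_by_sampling(W, b, lo, hi, c, t, trials=100_000, seed=0):
    """Illustrative, incomplete check: sample inputs from the box and test
    whether any produces an output satisfying the output specification.
    'True' comes with a witness input; 'False' is inconclusive, since
    exact reachability is NP-complete even for one-layer networks."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.uniform(lo, hi)
        if satisfies_output_spec(relu_layer(W, b, x), c, t):
            return True, x
    return False, None

if __name__ == "__main__":
    # Toy one-layer network and specifications (illustrative values only).
    W = np.array([[1.0, -1.0], [0.5, 2.0]])
    b = np.array([0.0, -1.0])
    lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    c, t = np.array([1.0, 1.0]), 1.5
    found, witness = reachable_by_sampling(W, b, lo, hi, c, t)
    print("witness found:", found, witness)
```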