Frameworks for Solving Turing Kernel Lower Bound Problem and Finding Natural Candidate Problems in NP-intermediate (1609.05472v2)
Abstract: Kernelization is a significant topic in parameterized complexity, and Turing kernelization is a more general form of kernelization. For kernelization, an impressive hardness theory has been established [Bodlaender et al. (ICALP 2008, JCSS 2009), Fortnow and Santhanam (STOC 2008, JCSS 2011), Dell and van Melkebeek (STOC 2010, J. ACM 2014), Drucker (FOCS 2012, SIAM J. Comput. 2015)], which can be used to obtain lower bounds on kernel size. Unfortunately, there is as yet no tool that can prove a Turing kernel lower bound for any FPT problem modulo any reasonable complexity hypothesis. Thus, constructing a framework for Turing kernel lower bounds has been posed as an open problem on several occasions [Fernau et al. (STACS 2009), Misra et al. (Discrete Optimization 2011), Kratsch (Bulletin of the EATCS 2014), Cygan et al. (Dagstuhl Seminar on kernels 2014)]. Ladner [J. ACM 1975] proved that if $P \not= NP$, then there exist infinitely many NP-intermediate problems. However, the NP-intermediate problems constructed by Ladner are artificial. Thus, finding natural NP-intermediate problems under the assumption that $P \not= NP$ is a longstanding open problem in the computational complexity community. This paper builds a new bridge between parameterized complexity and classical computational complexity. Using this new connection, several frameworks can be constructed. Under the assumption that the polynomial hierarchy and the exponential hierarchy do not collapse, these frameworks have three main applications. First, they can be used to obtain Turing kernel lower bounds for some important FPT problems, thus addressing the first open problem. Second, they can be used to obtain better kernel lower bounds for these problems. Third, they can be used to identify a large number of natural NP-intermediate problems, thus contributing to the second open problem.