Lower Bounds on the Time/Memory Tradeoff of Function Inversion (2105.00761v4)

Published 3 May 2021 in cs.CC and cs.CR

Abstract: We study time/memory tradeoffs of function inversion: an algorithm, i.e., an inverter, equipped with an $s$-bit advice on a randomly chosen function $f \colon [n] \to [n]$ and using $q$ oracle queries to $f$, tries to invert a randomly chosen output $y$ of $f$, i.e., to find $x \in f^{-1}(y)$. Much progress has been made on adaptive function inversion, where the inverter is allowed to make adaptive oracle queries. Hellman [IEEE Transactions on Information Theory '80] presented an adaptive inverter that inverts a random $f$ with high probability. Fiat and Naor [SICOMP '00] proved that for any $s$, $q$ with $s^3 q = n^3$ (ignoring low-order terms), an $s$-advice, $q$-query variant of Hellman's algorithm inverts a constant fraction of the image points of any function. Yao [STOC '90] proved a lower bound of $sq \geq n$ for this problem. Closing the gap between these lower and upper bounds is a long-standing open question. Very little is known for the non-adaptive variant of the question: the only known upper bounds, i.e., inverters, are the trivial ones (with $s + q = n$), and the only lower bound is the above bound of Yao. In a recent work, Corrigan-Gibbs and Kogan [TCC '19] partially justified the difficulty of finding lower bounds on non-adaptive inverters, showing that a lower bound on the time/memory tradeoff of non-adaptive inverters implies a lower bound on low-depth Boolean circuits; such bounds are, for a strong enough choice of parameters, notoriously hard to prove. We make progress on this intriguing question, both for the adaptive and the non-adaptive case, proving lower bounds on restricted families of inverters.
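
To make the object of study concrete, below is a minimal Python sketch of a single Hellman table, the classic adaptive inverter the abstract refers to. All names and parameter choices here (`build_hellman_table`, `m`, `t`, the toy random function) are illustrative, not taken from the paper: the advice is the list of (endpoint, start) pairs, and inversion spends up to roughly $t^2$ adaptive queries walking forward from $y$ and replaying a matching chain. Hellman's full construction uses many independently re-randomized tables to invert a random function with high probability (achieving, up to low-order terms, $s^2 q = n^2$); a single table, as here, only inverts a fraction of image points.

```python
# Illustrative sketch of one Hellman table; not the paper's construction.
import random

def build_hellman_table(f, n, m, t, seed=0):
    """Precompute one Hellman table: m chains of length t over [n].

    Only (endpoint, start) pairs are stored, so the advice is O(m)
    words instead of the O(n) a full inverse table would need.
    """
    rng = random.Random(seed)
    table = {}
    for _ in range(m):
        start = rng.randrange(n)
        x = start
        for _ in range(t):
            x = f(x)          # walk the chain forward t steps
        table[x] = start      # remember endpoint -> chain start
    return table

def hellman_invert(f, y, table, t):
    """Try to find x with f(x) == y using at most ~t^2 queries to f.

    Walk y forward; on reaching a stored endpoint, replay that chain
    from its start, checking each element as a preimage candidate.
    """
    z = y
    for _ in range(t):
        if z in table:
            x = table[z]
            for _ in range(t):      # replay the chain from its start
                if f(x) == y:
                    return x        # found a preimage of y
                x = f(x)
            # false alarm (endpoint collision): keep walking
        z = f(z)
    return None                     # this table fails to invert y

# Toy demo: a random function on [n] given by a lookup table.
n, m, t = 1 << 12, 64, 64
fixed = random.Random(42)
values = [fixed.randrange(n) for _ in range(n)]
f = lambda x: values[x]

y = f(fixed.randrange(n))           # an image point to invert
x = hellman_invert(f, y, build_hellman_table(f, n, m, t), t)
print(y, x, None if x is None else f(x) == y)
```

With a single table of `m * t` entries covering about $n$ chain positions, the demo succeeds only on the image points its chains happen to cover, which is why it may print `None`; the queries are inherently adaptive, since each step feeds the previous oracle answer back into $f$. This is exactly the adaptivity that the non-adaptive variant discussed in the abstract forbids.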

Citations (9)
