TY - GEN
T1 - Pseudorandomness for approximate counting and sampling
AU - Shaltiel, Ronen
AU - Umans, Christopher
PY - 2005
Y1 - 2005
N2 - We study computational procedures that use both randomness and nondeterminism. Examples are Arthur-Merlin games and approximate counting and sampling of NP-witnesses. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to "boost" a given hardness assumption. One special case is a proof that EXP ⊈ NP/poly ⇒ EXP ⊈ P^NP_∥/poly. In words, if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one that cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. In addition to simplifying the framework of AM derandomization, we show that this "unified assumption" suffices to derandomize several other probabilistic procedures. For these results we define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the "boosting" theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. As a consequence, under this assumption, there are deterministic polynomial-time algorithms that use non-adaptive NP-queries and perform the following tasks: (1) approximate counting of NP-witnesses: given a Boolean circuit A, output r such that (1-ε)|A^{-1}(1)| ≤ r ≤ |A^{-1}(1)|; (2) pseudorandom sampling of NP-witnesses: given a Boolean circuit A, produce a polynomial-size sample space that is computationally indistinguishable from the uniform distribution over A^{-1}(1). We also present applications. For example, we observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence they can be derandomized under the assumption stated above, which is weaker than the assumption that was previously known to suffice.
UR - http://www.scopus.com/inward/record.url?scp=27644565809&partnerID=8YFLogxK
U2 - 10.1109/CCC.2005.26
DO - 10.1109/CCC.2005.26
M3 - Conference contribution
AN - SCOPUS:27644565809
SN - 0769523641
T3 - Proceedings of the Annual IEEE Conference on Computational Complexity
SP - 212
EP - 226
BT - Proceedings of the 20th Annual IEEE Conference on Computational Complexity
T2 - 20th Annual IEEE Conference on Computational Complexity
Y2 - 11 June 2005 through 15 June 2005
ER -