Pseudorandomness for approximate counting and sampling

Ronen Shaltiel, Christopher Umans

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We study computational procedures that use both randomness and nondeterminism. Examples are Arthur-Merlin games and approximate counting and sampling of NP-witnesses. The goal of this paper is to derandomize such procedures under the weakest possible assumptions.

Our main technical contribution allows one to "boost" a given hardness assumption. One special case is a proof that EXP ⊈ NP/poly ⇒ EXP ⊈ P^{||NP}/poly. In words, if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one that cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent.

In addition to simplifying the framework of AM derandomization, we show that this "unified assumption" suffices to derandomize several other probabilistic procedures. For these results we define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the "boosting" theorem and hashing techniques to construct these primitives from an assumption that is no stronger than the one used to derandomize AM. As a consequence, under this assumption, there are deterministic polynomial-time algorithms that use non-adaptive NP-queries and perform the following tasks:

  • Approximate counting of NP-witnesses: given a Boolean circuit A, output r such that (1-ε)|A⁻¹(1)| ≤ r ≤ |A⁻¹(1)|.
  • Pseudorandom sampling of NP-witnesses: given a Boolean circuit A, produce a polynomial-size sample space that is computationally indistinguishable from the uniform distribution over A⁻¹(1).

We also present applications. For example, we observe that Cai's proof that S₂P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under the assumption stated above, which is weaker than the assumption that was previously known to suffice.
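To make the approximate-counting guarantee concrete, here is a minimal Python sketch of the task specification. The brute-force counter stands in for |A⁻¹(1)| and is feasible only for small circuits; the paper's contribution is a deterministic polynomial-time procedure with non-adaptive NP queries that meets the same (1-ε) relative-error guarantee. The function names and the example circuit are illustrative, not from the paper.

```python
from itertools import product

def brute_force_count(circuit, n):
    """Exactly count satisfying assignments of an n-input Boolean circuit.

    `circuit` is any callable mapping an n-tuple of bits to a bool.
    This exhaustive count plays the role of |A^{-1}(1)|; it is only
    feasible for small n, whereas the paper's derandomized procedure
    achieves a relative approximation using non-adaptive NP queries.
    """
    return sum(1 for bits in product((0, 1), repeat=n) if circuit(bits))

def is_valid_approximation(r, exact, eps):
    """Check the paper's guarantee: (1 - eps)*|A^{-1}(1)| <= r <= |A^{-1}(1)|."""
    return (1 - eps) * exact <= r <= exact

# Hypothetical example circuit: A(x1, x2, x3) = x1 AND (x2 OR x3),
# which has exactly 3 satisfying assignments out of 8.
circuit = lambda b: bool(b[0] and (b[1] or b[2]))
exact = brute_force_count(circuit, 3)
```

Any output r of an approximate counter in the paper's sense must land in the interval [(1-ε)·exact, exact], so `is_valid_approximation(r, exact, eps)` expresses precisely the correctness condition stated in the abstract.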

Original language: English
Title of host publication: Proceedings of the 20th Annual IEEE Conference on Computational Complexity
Pages: 212-226
Number of pages: 15
DOIs
State: Published - 2005
Event: 20th Annual IEEE Conference on Computational Complexity - San Jose, CA, United States
Duration: 11 Jun 2005 - 15 Jun 2005

Publication series

Name: Proceedings of the Annual IEEE Conference on Computational Complexity
ISSN (Print): 1093-0159

Conference

Conference: 20th Annual IEEE Conference on Computational Complexity
Country/Territory: United States
City: San Jose, CA
Period: 11/06/05 - 15/06/05

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Computational Mathematics
