On Hardness Assumptions Needed for “Extreme High-End” PRGs and Fast Derandomization

Ronen Shaltiel, Emanuele Viola

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G : {0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B and functions computable in time 2^{B·n} that cannot be computed by circuits of size 2^{β·n}. Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) constructed "extreme high-end PRGs" with seed length r = (1 + o(1))·log m, under qualitatively stronger assumptions. We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1 − o(1) and B = 1 + o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:

The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor). To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and may explain why it is difficult to design PRG constructions.

The construction of Chen and Tell composes two PRGs: G1 : {0,1}^{(1+o(1))·log m} → {0,1}^{r_2} and G2 : {0,1}^{r_2} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2 + 1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G2 from the extreme high-end hardness assumption. The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.

Original language: English
Title of host publication: 13th Innovations in Theoretical Computer Science Conference, ITCS 2022
Editors: Mark Braverman
Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
ISBN (Electronic): 9783959772174
DOIs
State: Published - 1 Jan 2022
Event: 13th Innovations in Theoretical Computer Science Conference, ITCS 2022 - Berkeley, United States
Duration: 31 Jan 2022 – 3 Feb 2022

Publication series

Name: Leibniz International Proceedings in Informatics, LIPIcs
Volume: 215
ISSN (Print): 1868-8969

Conference

Conference: 13th Innovations in Theoretical Computer Science Conference, ITCS 2022
Country/Territory: United States
City: Berkeley
Period: 31/01/22 – 03/02/22

Bibliographical note

Funding Information:
Ronen Shaltiel: This research was supported by ISF grant 1628/17. Emanuele Viola: Supported by NSF CCF award 1813930 and NSF CCF award 2114116.

Publisher Copyright:
© Ronen Shaltiel and Emanuele Viola; licensed under Creative Commons License CC-BY 4.0

Keywords

  • Black-box proofs
  • Complexity Theory
  • Derandomization
  • Pseudorandom generators

ASJC Scopus subject areas

  • Software
