## Abstract

The leading technical approach in uniform hardness-to-randomness in the last two decades faced several well-known barriers that caused results to rely on overly strong hardness assumptions, and yet still yield suboptimal conclusions.

In this work we show uniform hardness-to-randomness results that *simultaneously break through all of the known barriers*. Specifically, consider any one of the following three assumptions:

1. For some $\varepsilon > 0$ there exists a function $f$ computable by uniform circuits of size $2^{O(n)}$ and depth $2^{o(n)}$ such that $f$ is hard for probabilistic time $2^{\varepsilon n}$.

2. For every $c \in \mathbb{N}$ there exists a function $f$ computable by logspace-uniform circuits of polynomial size and depth $n^2$ such that every probabilistic algorithm running in time $n^c$ fails to compute $f$ on a $(1/n)$-fraction of the inputs.

3. For every $c \in \mathbb{N}$ there exists a logspace-uniform family of arithmetic formulas of degree $n^2$ over a field of size $\mathrm{poly}(n)$ such that no algorithm running in probabilistic time $n^c$ can evaluate the family on a worst-case input.

Assuming any of these hypotheses, where the hardness holds for every sufficiently large input length $n \in \mathbb{N}$, we deduce that $\mathcal{BPP}$ can be derandomized in *polynomial time* and on *all input lengths*, on average. Furthermore, under the first assumption we also show that $\mathcal{BPP}$ can be derandomized in polynomial time, on average and on all input lengths, with logarithmically many advice bits.

On the way to these results we also resolve two related open problems. First, we obtain an *optimal worst-case to average-case reduction* for computing problems in linear space by uniform probabilistic algorithms; this result builds on a new instance checker based on the doubly efficient proof system of Goldwasser, Kalai, and Rothblum (J. ACM, 2015). Second, we resolve the main open problem in the work of Carmosino, Impagliazzo, and Sabin (ICALP 2018), by deducing derandomization from weak and general fine-grained hardness hypotheses.

Original language | Undefined/Unknown |
---|---|
Number of pages | 11 |
Volume | 53 |
Edition | 3 |
DOIs | |
State | Published - 8 Jul 2022 |

### Publication series

Name | SIGACT News |
---|---|
Publisher | Association for Computing Machinery |
ISSN (Print) | 0163-5700 |