Fields Institute Distinguished Lecture Series
Avi Wigderson
Institute for Computer Science, Hebrew University
A Computational View of Randomness
Tuesday, April 14, 1998
4:30-5:30 p.m.
The current state of knowledge in Computational Complexity Theory
suggests two strong empirical "facts" (establishing their truth constitutes
the two major open problems of this field).
- (1) Some natural computational tasks are infeasible
(e.g. it seems so for computing the functions PERMANENT, FACTORING,
CLIQUE, SATISFIABILITY ...)
- (2) Probabilistic algorithms can be much more efficient than
deterministic ones.
(e.g. it seems so for PRIMALITY, VERIFYING IDENTITIES, APPROXIMATING
VOLUMES ...; a small sketch of randomized identity testing follows below)
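As a concrete illustration of fact (2), here is a minimal sketch of randomized
identity testing in the spirit of the Schwartz-Zippel lemma; the prime modulus,
helper names, and trial count below are illustrative choices, not material from
the lectures.

import random

# Randomized identity testing (sketch): to test whether two polynomial
# expressions are identical, evaluate both at random points modulo a
# large prime.  Distinct univariate polynomials of degree <= d agree on
# at most d points, so a disagreement is detected with high probability.

PRIME = 2**61 - 1  # a large prime, so arithmetic takes place in a field

def probably_equal(p, q, trials=20):
    """False if a witness of inequality is found (a certain answer);
    True if p and q agreed on every random test point (equal w.h.p.)."""
    for _ in range(trials):
        x = random.randrange(PRIME)
        if p(x) % PRIME != q(x) % PRIME:
            return False
    return True

# (x + 1)^2 is identically x^2 + 2x + 1, but differs from x^2 + 1:
print(probably_equal(lambda x: (x + 1)**2, lambda x: x*x + 2*x + 1))  # True
print(probably_equal(lambda x: (x + 1)**2, lambda x: x*x + 1))        # False (w.h.p.)

The deterministic alternative of fully expanding and comparing coefficients can
be far more expensive for large symbolic expressions, which is the sense in
which randomness appears to help here.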
As it does with other notions (e.g. knowledge, proof, ...), Complexity
Theory attempts to understand the notion of randomness from a computational
standpoint. One major achievement of this study is the following (surprising?)
relation between the two "facts" above:
- THEOREM: (1) contradicts (2)
In words: If ANY "natural" problem is "infeasible", then EVERY
probabilistic algorithm can be "efficiently" "derandomized". I plan
to explain the sequence of important ideas, definitions, and techniques
developed in the last 20 years that enable a formal statement and
proof of such a theorem. Many of them, such as the formal notions
of "pseudo-random generator" and "computational indistinguishability",
are of fundamental interest beyond derandomization; they have far-reaching
implications for our ability to build efficient cryptographic
systems, as well as for our inability to efficiently learn natural concepts
and effectively prove natural mathematical conjectures (such as
(1) above).
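For readers who want the formal flavor, one standard way to phrase the central
definitions and the theorem is sketched below in LaTeX; the exact parameters
(seed length, circuit sizes) are a gloss, not a quotation from the lectures.

% Computational indistinguishability: distributions X and Y on {0,1}^n are
% indistinguishable if every circuit C of size poly(n) satisfies
\[
  \bigl|\,\Pr[C(X)=1] - \Pr[C(Y)=1]\,\bigr| \;<\; \frac{1}{\mathrm{poly}(n)} .
\]
% Pseudo-random generator: an efficiently computable map
\[
  G : \{0,1\}^{O(\log n)} \longrightarrow \{0,1\}^{n}
\]
% whose output on a uniform seed is computationally indistinguishable from
% the uniform distribution on {0,1}^n.  The hardness-vs-randomness theorem
% then takes the schematic form
\[
  \exists\, f \in \mathrm{E} = \mathrm{DTIME}\!\bigl(2^{O(n)}\bigr)
  \text{ requiring circuits of size } 2^{\Omega(n)}
  \;\Longrightarrow\; \mathrm{BPP} = \mathrm{P} .
\]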
Tight Hardness vs. Randomness Trade-offs
Wednesday, April 15, 1998
4:30-5:30 p.m.
The first talk explained that (computational) hardness can be effectively
turned into (computational) randomness. In this talk we will be
stingy and ask exactly how hard (and how easy) a function must be
in order to achieve complete or partial derandomization of all efficient
probabilistic algorithms. Successively weakening the hardness assumptions
should be viewed as part of a program leading (hopefully) to
UNCONDITIONAL limits on the power of randomness, such as BPP ≠ EXP.
I will talk mostly about one of the following two recent works
with Russell Impagliazzo. The first work deals with non-uniform
hardness assumptions. Here we achieve a tight trade-off: exponential
circuit lower bounds on any problem in E suffice to prove BPP=P.
The improvement over previous trade-offs comes from a solution to
another problem of independent interest: deterministic amplification
of hardness. This construction turns a hard-on-average Boolean function
on n bits into a function on O(n) bits that is nearly unpredictable.
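Schematically, and with hardness parameters that are a gloss rather than the
authors' exact statement, the amplification step reads:

\[
  f : \{0,1\}^{n} \to \{0,1\} \ \text{hard on average for circuits of size } s
  \;\;\leadsto\;\;
  \hat{f} : \{0,1\}^{O(n)} \to \{0,1\} \ \text{with}\
  \Pr_{x}\bigl[C(x) = \hat{f}(x)\bigr] \le \tfrac{1}{2} + \varepsilon
  \ \text{for all circuits } C \text{ of comparable size} .
\]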
The second work deals with uniform hardness assumptions. Derandomization
under such assumptions was previously known only assuming the existence
of one-way functions. We show that, at least for derandomizing algorithms
in AvRP (namely, probabilistic algorithms that work well on random inputs),
it suffices to assume BPP ≠ EXP. The obvious difficulty is that the conversion
of a distinguisher of the Nisan-Wigderson generator (which is the
only known tool that can use hard functions that are not one-way)
into an efficient algorithm for the hard function it is based on
looks inherently non-uniform. We overcome this difficulty using
a novel bootstrapping strategy.
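In the same schematic spirit (the precise average-case and infinitely-often
qualifiers are deliberately suppressed here), the uniform-assumption result can
be summarized as:

\[
  \mathrm{BPP} \neq \mathrm{EXP}
  \;\Longrightarrow\;
  \text{every algorithm in AvRP can be derandomized so that it succeeds on
  most randomly chosen inputs} .
\]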
Both lectures will also be given on April 8 and 9, 1998, as part of
the Pacific Institute for the Mathematical Sciences Lecture Series
at the University of British Columbia. Please contact
David Austin (austin@math.ubc.ca) for details.