Super-linear time-space tradeoff lower bounds for randomized computation

Paul Beame, University of Washington

 

(Joint work with Mike Saks, Xiaodong Sun, and Erik Vee)

 

We prove the first time-space tradeoff lower bounds for randomized computation of decision problems where computation error is allowed.  Our techniques are extensions of those used by Ajtai (STOC 99, FOCS 99) in his time-space tradeoffs for zero-error deterministic RAM algorithms computing element distinctness and for Boolean branching programs computing a natural quadratic form.

 

Ajtai's bounds were of the following form: if the machine uses at most kn time for some constant k, it requires space at least epsilon_k n for some constant epsilon_k.  We provide explicit relationships between k and epsilon_k which improve the relationships implicit in Ajtai's arguments.  In particular we obtain non-trivial time-space tradeoffs for substantially superlinear time bounds, $T=\Omega(n\sqrt{\log(n/S)/\log\log(n/S)})$.
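For concreteness (a direct substitution into the bound above, not a separate result): for any algorithm using space $S = O(n^{1-\epsilon})$ with constant $\epsilon > 0$, we have $\log(n/S) = \Theta(\log n)$, so the tradeoff forces $T = \Omega\!\left(n\sqrt{\log n / \log\log n}\right)$.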

 

 

 

 

Diagonalizing NC along the delta axis

Steve Bellantoni, Scotia Capital Inc.

 

(Joint work with Isabel Oitavem)

 

The ``$\delta$ axis'' is defined by certain classes $\N^k$ such that uniform $\NC = \bigcup_k \N^k$. Then a diagonalization is used to prove that $\N^k \not= \N^{k+1}$ for all $k \ge 0$.

This separation of $\NC$ along the $\delta$ axis does not imply $\NC^k \not= \NC^{k+1}$, because the relationship between $\NC^k$ and $\N^k$ is not known. However, it is argued that the $\delta$ axis provides a more natural subdivision of $\NC$ than the traditional hierarchy $\NC^k$. To support this, a careful analogy with the classes $\dtime(n^k)$ is made, illuminating a surprisingly close relationship between $\ptime$ and $\NC$. The definition of $\N^k$ uses ramified constructs equivalent to a combination of parallel time and space.

 

 

 

 

Theorem Proving for Software Analysis

Dan Brand, T. J. Watson Research Center, IBM

 

Theorem proving is used in software analysis mainly to identify those paths that are actually feasible during execution. As it is used in the inner loop of the analysis, its efficiency is essential.

The talk will discuss a tool for discovering errors of implementation and efficiency. The main claim is that theorem proving cannot be separated from the rest of the analysis, but must be incorporated into the whole process.
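To make the role of feasibility checking concrete, here is a minimal, self-contained sketch (illustrative only; this is not the tool discussed in the talk, and a real analysis would pass the path condition to a theorem prover rather than enumerate assignments):

# Hypothetical example: decide whether a program path is feasible, i.e. whether
# the conjunction of the branch conditions along it can be satisfied.
# A real analysis would discharge this query with a theorem prover; here we
# simply enumerate a small integer domain for illustration.

from itertools import product

def path_feasible(conditions, variables, domain=range(-10, 11)):
    """Return True if some assignment to `variables` satisfies every condition."""
    for values in product(domain, repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(cond(env) for cond in conditions):
            return True
    return False

# Path 1: if (x > 5) ... if (x < 3) ...  -- contradictory, hence infeasible
print(path_feasible([lambda e: e["x"] > 5, lambda e: e["x"] < 3], ["x"]))  # False

# Path 2: if (x > 5) ... if (x < 9) ...  -- feasible (e.g. x = 6)
print(path_feasible([lambda e: e["x"] > 5, lambda e: e["x"] < 9], ["x"]))  # True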

 

 

 

 

 

 

 

 

 

Ordinal Notations and Well-Orderings in Bounded Arithmetic

Sam Buss, University of California - San Diego

 

(Joint work with A. Beckmann and C. Pollett)

 

We discuss how fragments of bounded arithmetic can prove the well-foundedness of ordinal notations on a bounded domain. The theory T^2_2 is always enough.  S^1_2 (or Cook's theory PV) is not, unless the polynomial time hierarchy collapses.  The main new result is that T^1_2 is strong enough to handle standard ordinal notations based on Cantor normal form and on the Veblen phi function.
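For context, the standard definition behind such notations (recalled here, not a result of the talk): every ordinal $\alpha > 0$ can be written uniquely in Cantor normal form
$$\alpha = \omega^{\beta_1} \cdot c_1 + \omega^{\beta_2} \cdot c_2 + \cdots + \omega^{\beta_k} \cdot c_k,$$
with ordinals $\beta_1 > \beta_2 > \cdots > \beta_k$ and positive integers $c_i$; iterating this representation yields the usual notation system for the ordinals below $\epsilon_0$.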

 

 

 

 

Which Problems Have Strongly Exponential Complexity?

Russell Impagliazzo, University of California, San Diego

 

(Joint work with Ramamohan Paturi, Avi Wigderson, and Francis Zane)

 

The theory of \NP-completeness, initiated by Cook and Levin, gives a complexity-theoretic reason to believe that many natural optimization problems, such as Satisfiability and Clique, require time exponential in some root of the input size.  All such \NP-complete problems are equivalent in this regard.  However, not all \NP-complete problems have the same time complexity, or require "exhaustive search" to solve. 

 

For example, Hamiltonian circuit and chromatic number can both be solved in $2^{O(n)}$ time, whereas exhaustive search would require $2^{\Omega (n \log n)}$ time.  For other \NP-complete problems, such as $k$-SAT, independent set, and $k$-colorability, there has been a progression of better but still exponential algorithms. It is not a priori clear where the limits of these progressions are, or whether there is any connection between them.
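(The $2^{\Omega(n \log n)}$ figure for exhaustive search is just counting, recorded here for concreteness rather than taken from the talk: the naive searches try all $n!$ vertex orderings for Hamiltonian circuit, or up to $n^n$ colorings for chromatic number, and
$$n! = 2^{\Theta(n \log n)}, \qquad n^n = 2^{n \log n}$$
by Stirling's formula, so the $2^{O(n)}$ algorithms are a genuine improvement.)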

 

This talk surveys recent work by the speaker and others on a complexity-theoretic approach to the varying time complexities of the \NP-complete problems. One such result is that many of the standard \NP-complete problems, e.g., $k$-SAT, $k$-colorability, $k$-Set Cover, Independent Set, Clique, and Vertex Cover, are equivalent as far as having sub-exponential ($2^{o(n)}$) time algorithms, and have such algorithms if and only if all problems in the class \SNP do. Using similar techniques, we show that if $3$-SAT requires exponential time, then each $k$-SAT is strictly easier than $l$-SAT for some $l > k$.

 

Finally, we discuss some recent "convexity" results, where we measure the complexity of problems in terms of the "dimension" of inputs, rather than the input length. We show that if problems such as independent set and Machine-Satisfiability require strictly exponential time, then the complexities of the bounded-dimension cases approach a limit which determines the constant in the exponent. In particular, if the complexity is sub-exponential for infinitely many dimensions, then it is always sub-exponential.

 

This research is supported by NSF grant CCR-9734911 from the Theory of Computing Program. (Coauthor affiliations: Ramamohan Paturi, University of California, San Diego; Francis Zane, Bell Laboratories, Lucent Technologies.)

 

 

 

 

 

 

Circuit Minimization Problem

Valentine Kabanets, University of Toronto

 

(Joint work with Jin-Yi Cai)

 

We study the complexity of the circuit minimization problem: given the truth table of a Boolean function $f$ and a parameter $s$, decide whether $f$ can be realized by a Boolean circuit of size at most $s$. We argue why this problem is unlikely to be in $\P$ (or even in $\P/\poly$) by giving a number of surprising consequences of such an assumption. We also argue that proving this problem to be $\NP$-complete (if it is indeed the case) would imply proving strong circuit lower bounds for the class $\E$, which appears to be beyond currently known techniques.
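To fix the statement of the problem, here is an illustrative verifier sketch (my own illustration, not from the talk; it shows only the easy fact that the problem is in NP, with the candidate circuit serving as the certificate):

# Minimal sketch: verify a candidate circuit for the circuit minimization problem.
# A circuit over inputs x_0..x_{k-1} is given as a list of gates; each gate is
# (op, i, j) with op in {'AND','OR','NOT'} and i, j indexing earlier wires
# (inputs first, then previous gates).

from itertools import product

def eval_circuit(gates, assignment):
    """Evaluate the circuit on one input assignment; the last wire is the output."""
    wires = list(assignment)
    for op, i, j in gates:
        if op == 'AND':
            wires.append(wires[i] & wires[j])
        elif op == 'OR':
            wires.append(wires[i] | wires[j])
        else:  # 'NOT' (second operand ignored)
            wires.append(1 - wires[i])
    return wires[-1]

def verify(truth_table, s, gates, k):
    """Check that `gates` has at most s gates and computes `truth_table`
    (listed in lexicographic order of the k-bit inputs)."""
    if len(gates) > s:
        return False
    return all(eval_circuit(gates, x) == truth_table[n]
               for n, x in enumerate(product((0, 1), repeat=k)))

# Example: f(x0, x1) = x0 AND x1 has a circuit of size 1.
print(verify([0, 0, 0, 1], s=1, gates=[('AND', 0, 1)], k=2))  # True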

 

 

 

 

An Induction Principle Sound for Computational Indistinguishability

Bruce Kapron, University of Victoria

 

We define a relation on probabilistic polynomial time function ensembles which is a parameterized version of a standard notion of computational indistinguishability. We show that this relation satisfies an induction principle which is closely related to the induction rule found in Cook's system PV. The soundness proof uses a standard ``hybrid argument'' approach. We then present a simple example showing that this induction principle can be used to prove the correctness of a construction for ``stretching'' the output of a pseudo-random sequence generator. Finally, we consider the development and application of formal systems which include this sort of induction.
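For orientation, one standard stretching construction (recalled as an illustrative assumption; the talk's particular construction may differ): given $G : \{0,1\}^n \to \{0,1\}^{n+1}$, write $G(x) = (b(x), g(x))$ with $b(x)$ a single bit and $g(x) \in \{0,1\}^n$, and define
$$G_0(x) = \varepsilon, \qquad G_{i+1}(x) = b(x) \,\|\, G_i(g(x)),$$
so that $G_m$ stretches $n$ bits to $m$ bits. A hybrid argument compares $G_m$ to the intermediate distributions in which the first $i$ output bits are replaced by truly random bits, and proceeds by induction on $i$.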

 

 

 

 

Tautologies from Pseudo-Random Generators, AC^0 Cardinality, and other Bounded Arithmetic

Jan Krajicek, Academy of Sciences, Prague

 

I shall talk about a couple of problems in pure complexity theory inspired by considerations in bounded arithmetic and model theory. The first concerns the relations of the WPHP and its dual form to various cryptographic primitives (one-way functions, families of hash functions, pseudo-random number generators). In particular, the relation to pseudo-random generators suggests a new kind of tautologies that are viable candidates for being hard for the Extended Frege system. I shall explain the view of these tautologies taken in [2] (see also [3] for an explanation that is as logic-free as possible). The same formulas were recently considered in [1].

 

I use the phrase "effective cardinality" as shorthand for all versions of the following general question: given two sets A, B defined in some particular way, is there an injective map of one into the other (or a surjective map from one onto the other) whose graph is definable as effectively as A and B are?

 

This problem was formulated in [1], as was its specialization to Boolean complexity: if A, B are sets of binary strings (of length n) defined by circuits of size S, is there an injection of one into the other that is definable by a circuit of size polynomial in S? This is related to counting, and based on Toda's theorems one can give a conditional negative answer.

 

The problem similarly specialized to the class AC^0 ought to have an unconditional solution. I shall discuss this case and give concrete combinatorial examples of sets A and B for which such maps should not exist.

 

If time permits, I shall discuss the "structured PHP" defined in [2]: a version of the PHP in which the underlying sets carry some structure.

 

[1] M. Alekhnovich, E. Ben-Sasson, A. A. Razborov, and A. Wigderson: Pseudorandom generators in propositional proof complexity, preprint, March 2000.

[2] J. Krajicek: On the weak pigeonhole principle, preprint, August 1999.

[3] J. Krajicek: Tautologies from pseudo-random generators, preprint to be available at my web page during May 2000.

[4] J. Krajicek, T. Scanlon: Combinatorics with definable sets: Euler characteristics and Grothendieck rings, preprint, February 2000.

 

 

 

 

 

 Resettable Zero Knowledge

Silvio Micali, Massachusetts Institute of Technology

 

(Joint work with Ran Canetti, Oded Goldreich and Shafi Goldwasser)

 

We put forward the notion of resettable zero knowledge (RZK): a new security measure for cryptographic protocols which greatly strengthens the classical notion. In essence, an RZK protocol is one which remains zero knowledge even if a malicious Verifier can interact with the Prover many times, each time forcing him to execute starting from the same state and random tape.

 

We show that, under some general complexity assumption, NP has RZK proofs.

 

 

 

 

Quantum Circuits

Nicholas Pippenger, University of British Columbia

 

This talk will survey results on the construction of fault-tolerant quantum circuits.  The situation for classical circuits is well understood:  fault-tolerance requires increasing size by a logarithmic factor and increasing the depth by a constant factor.  For quantum circuits, the best methods known require multiple additional logarithmic factors in both size and depth. We show that in any case quantum fault-tolerance requires at least an additional logarithmic factor in size, and we discuss the prospects for attaining this goal.

 

 

 

 

 

 

 

 

A New Proof of the Weak Pigeonhole Principle

Toniann Pitassi, University of Arizona

 

(Joint work with Alexis Maciel and Alan Woods)

 

The exact complexity of the weak pigeonhole principle is an old and fundamental problem in proof complexity. It is used in all areas of mathematics, and has been the primary hard example underlying lower bounds in propositional proof complexity.
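As a reminder of the propositional formulation involved (the standard encoding, recalled for context rather than taken from the talk): the pigeonhole principle $\mathrm{PHP}^m_n$, for $m$ pigeons and $n < m$ holes, is the tautology asserting that the clauses
$$\bigvee_{j=1}^{n} p_{ij} \quad (1 \le i \le m), \qquad \neg p_{ij} \vee \neg p_{i'j} \quad (1 \le i < i' \le m,\ 1 \le j \le n)$$
cannot all hold simultaneously; the weak pigeonhole principle is the case where $m$ is much larger than $n$, typically $m = 2n$ or $m = n^2$.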

 

Still, the exact complexity of the pigeonhole principle is not known, and is linked to several important problems in proof complexity and circuit complexity. Using a clever diagonalization argument, Paris, Wilkie and Woods presented quasipolynomial-size bounded-depth proofs of the weak pigeonhole principle. Their argument was further refined by Krajicek. In this talk, we present a new proof: we show that the weak pigeonhole principle has quasipolynomial-size proofs where every formula consists of a single AND/OR of polylog fanin. Our proof is conceptually simpler than previous arguments and is optimal with respect to depth.

 

 

 

 

A High-Level Summary of Approaches to the P Versus NP Question

Steven Rudich, Carnegie Mellon University

 

This talk will give a bird's-eye view of four open-ended questions related to solving P versus NP: What is the status of diagonalization? What use is uniformity in a computational model? What is a non-natural proof? What is the most conservative extension of PSPACE which can be separated from P?

 

 

 

 

Time-Space Tradeoffs for SAT on Non-Uniform Machines

Iannis Tourlakis, University of Toronto

 

Recent time-space tradeoffs for SAT are generalized and combined with a new argument for diagonalizing over machines taking $n$ bits of advice on inputs of length $n$ to obtain the first time-space lower bounds for $\sat$ on non-uniform machines. In particular, we show that for any $a<\sqrt{2}$, $\sat$ cannot be computed by a random-access deterministic Turing machine using $n^a$ time, $n^{o(1)}$ space and $n^{o(1)}$ advice, and that for any $\epsilon>0$, $\sat$ cannot be computed by a random-access deterministic Turing machine using $n^{1+o(1)}$ time, $n^{1-\epsilon}$ space and $n^{o(1)}$ advice. More generally, we show that for all $\delta>0$ and any $\epsilon$, $0<\epsilon<1$, $\sat$ cannot be solved by a random-access deterministic Turing machine using $n^{\frac12(\sqrt{\epsilon^2+8}-\epsilon)-\delta}$ time, $n^\epsilon$ space and $n^{o(1)}$ advice.
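As a consistency check on the general bound (a direct calculation, not an additional result): as $\epsilon \to 0$ the exponent satisfies
$$\frac12\left(\sqrt{\epsilon^2+8}-\epsilon\right)-\delta \;\longrightarrow\; \frac12\sqrt{8}-\delta = \sqrt{2}-\delta,$$
recovering the first statement above, while for $\epsilon$ close to $1$ it approaches $\frac12(3-1)-\delta = 1-\delta$, matching the $n^{1+o(1)}$-time, $n^{1-\epsilon}$-space statement.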

 

Additionally, new separations of uniform classes are obtained. We show that for all $\epsilon>0$ and all rationals $r\geq 1$, $\DTISP(n^r,n^{1-\epsilon})\subsetneqq\NTIME(n^r)$.

 

 

 

 

 

 

 

Robust Logic

Leslie G Valiant, Harvard University

 

It has been recognized for centuries that cognitive phenomena exhibit both inductive and deductive aspects. The processes of induction and deduction have been studied systematically, though separately, in the frameworks of computational learning and computational logic. Since cognitive computations appear to perform these processes in combination, a single framework is required within which the two can be discussed simultaneously. Robust logics are designed to serve just that purpose. They are based on the view that a knowledge base can be made robust only if each assertion in it is empirically verifiable against, and learnable from, real-world observations. The challenge then is to reconcile this with the advantages offered by conventional logics, in particular a sound basis for deduction. Robust logics are designed to bridge this gap while retaining computational feasibility. In this framework, both the computational work and the accuracy of learning and deduction are polynomially controlled.

 

 

 

 

Pseudorandom Generators in Propositional Proof Complexity

Avi Wigderson, IAS and The Hebrew University

 

(Joint work with Alekhnovich, Ben-Sasson and Razborov)

 

We call a pseudorandom generator G (stretching n bits to m>n bits) hard for a propositional proof system $P$ if $P$ cannot efficiently prove the (properly encoded) statement $G(x) \neq b$ for any m-bit string b. We consider a variety of ``combinatorial'' pseudorandom generators inspired by the Nisan-Wigderson generator on the one hand, and by the construction of Tseitin tautologies on the other. We prove that under certain circumstances these generators are hard for such proof systems as Resolution, Polynomial Calculus, and Polynomial Calculus with Resolution (PCR). We speculate that they should be hard for Frege and Extended Frege as well.
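(A small counting observation, standard rather than part of the new results, explaining why such tautologies exist at all: since $G$ stretches $n$ bits to $m > n$ bits, its range contains at most $2^n < 2^m$ strings, so for every $b$ outside the range, which is most of $\{0,1\}^m$, the statement $G(x) \neq b$ holds for all $x$ and its propositional encoding is a tautology.)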

 

 

 

 

A Computational Approach Toward Realistic Quantum Logic 

Tomoyuki Yamakami, S.I.T.E., University of Ottawa

 

(Joint work with a SITE student, Fida Dankar)

 

Quantum logic has a longer history than quantum computation theory. Since the latter theory, however, has made significant progress during the past decade, I believe it is time to think about the "computational" side of quantum logic. In this workshop talk, I will report on ongoing research on how to characterize a quantum computation in terms of propositional logic.

 

We attempt to define a Gentzen-style sequent calculus of propositional quantum logic. Our approach is different in nature from the standard one in quantum logic. We abandon the idea of considering a propositional formula to be built from propositional variables by a systematic application of (standard) logical connectives. We instead introduce constants that, in conjunction with propositional variables, express amplitudes used in quantum computations.

 

Propositional variables correspond to the binary inputs fed to a quantum circuit, so that a quantum computation can be directly translated into a series of applications of inference rules, starting with initial sequents. We introduce a finite number of (unconventional) inference rules based on the fact that a quantum circuit can be built from a finite set of unitary gates. However, each quformula (quantum formula) becomes large because it comprises all possible quantum states.

 

There are several important issues to be settled: How do we interpret "measurement" in our propositional calculus? What is the best set of inference rules and of quantum-logical connectives? How do we formulate an approximation scheme so that we can discuss the closeness of two computations in our system? I hope that our challenge will initiate a wide range of studies bridging quantum computations and quantum logic.