Lecture Abstracts
Short courses (in alphabetical order):
Peter Burgisser - Universitat Paderborn, and Felipe Cucker - City
University of Hong Kong
Condition
1) Condition Numbers
We describe the origin of condition numbers in numerical analysis
and the various ways they occur in both finite-precision analysis
and complexity estimates.
2) Probabilistic Analyses of Condition Numbers
Condition numbers cannot be efficiently estimated a priori. To
eliminate their occurrence in complexity estimates one randomizes
data and performs a probabilistic analysis. We describe the notions
of average and smoothed analysis and exemplify them with the classical
condition number of a matrix.
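As a concrete point of reference, the classical condition number of an
invertible matrix A is kappa(A) = ||A|| ||A^{-1}||, the ratio of its
largest to smallest singular value in the spectral norm. The following
minimal NumPy sketch (our illustration, not part of the course material)
computes kappa and runs a small average-case experiment of the kind
described above:

    import numpy as np

    def condition_number(A):
        # kappa(A) = ||A|| * ||A^{-1}|| in the spectral norm, i.e. the
        # ratio of the largest to the smallest singular value of A.
        s = np.linalg.svd(A, compute_uv=False)
        return s[0] / s[-1]

    # Average-case experiment: sample Gaussian random matrices and
    # average log kappa over the sample.
    rng = np.random.default_rng(0)
    kappas = [condition_number(rng.standard_normal((50, 50))) for _ in range(200)]
    print(np.mean(np.log(kappas)))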
3) On a Problem Posed by Steve Smale
Condition numbers play a central role in computing zeros of complex
polynomial systems, a task proposed by Steve Smale for the mathematicians
of the 21st century. We describe a randomized continuation algorithm
and analyze its average and smoothed complexity. We also mention
a result for a deterministic algorithm providing a near-solution
to Smale's problem.
_________________________________________________
Jean Pierre Dedieu - Universite de Toulouse
Complexity of Bezout's Theorem and the Condition Number
1. Complexity of path following methods in terms of the condition
number
2. Probability aspects
3. Complexity and the condition metric
Our aim is to establish general principles for efficiently solving
systems of polynomial equations. Very roughly speaking, algorithms
for solving such systems can be divided into two classes: one
algebraic, with methods based on symbolic computation; the other
based on numerical analysis and approximation. In these lectures
our interest focuses on the numerical approach.
In the first lecture we describe the general principles of continuation
methods, and their numerical counterparts: the predictor-corrector
algorithms. We estimate the complexity of such methods via
condition number analysis.
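To fix ideas, here is a minimal univariate sketch of a predictor-corrector
scheme (our own toy illustration; the lectures treat systems of polynomial
equations). It tracks a zero of the homotopy H(z,t) = (1-t)g(z) + tf(z)
from a known zero of g at t=0 to a zero of f at t=1, with an Euler
predictor and one Newton corrector per step:

    def predictor_corrector(f, df, g, dg, z0, steps=100):
        # Track a zero of H(z, t) = (1 - t)*g(z) + t*f(z) from t = 0 to t = 1.
        z, t = z0, 0.0
        dt = 1.0 / steps
        for _ in range(steps):
            dH_dt = f(z) - g(z)
            dH_dz = (1 - t) * dg(z) + t * df(z)
            z -= dt * dH_dt / dH_dz                     # Euler predictor
            t += dt
            dH_dz = (1 - t) * dg(z) + t * df(z)
            z -= ((1 - t) * g(z) + t * f(z)) / dH_dz    # Newton corrector
        return z

    # Deform g(z) = z^3 - 1 into f(z) = z^3 - 2z - 5, starting from the
    # known zero z0 = 1 of g; the result approximates a zero of f.
    f, df = lambda z: z**3 - 2*z - 5, lambda z: 3*z**2 - 2
    g, dg = lambda z: z**3 - 1, lambda z: 3*z**2
    print(predictor_corrector(f, df, g, dg, 1.0))       # about 2.0945515

The step size is fixed here for simplicity; in the algorithms analyzed in
the lectures it is chosen adaptively, and the condition number along the
path governs how small the steps must be.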
The second lecture is devoted to the probabilistic aspects of
such questions. What can be said for random polynomial systems?
What is the distribution of the condition number? What is the
complexity of path-following methods on the average?
In the third lecture we emphasize a new approach, based on
the condition metric (to be defined), and we list some open problems.
__________________________________________________
Askold Khovanski - University of Toronto
An interplay between Algebraic Geometry and Convex Geometry
1. Intersection theory on non-complete algebraic varieties
2. Algebraic equations and convex bodies
The famous Bernstein-Kushnirenko theorem from Newton polyhedra
theory (usually considered as a part of the theory of Toric
Varieties) relates algebraic geometry with the theory of mixed
volumes. Recently Kiumars Kaveh and I have found a far-reaching
generalization of this theorem to generic systems of algebraic
equations on any algebraic variety. In the lectures I will review
these results and their applications to Algebraic and Convex Geometry.
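For context, the theorem in question admits the following standard
formulation: if $f_1,\ldots,f_n$ are generic Laurent polynomials with
Newton polytopes $\Delta_1,\ldots,\Delta_n$, then

$$ \#\{ z \in (\mathbb{C}^*)^n : f_1(z) = \cdots = f_n(z) = 0 \} = n!\, V(\Delta_1,\ldots,\Delta_n), $$

where $V$ denotes the mixed volume, normalized so that
$V(\Delta,\ldots,\Delta) = \mathrm{vol}(\Delta)$.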
In the first lecture I will discuss a "naive" intersection
theory applicable even to non-complete algebraic varieties. This
intersection theory is birationally invariant and enjoys all the
properties of mixed volumes of convex bodies. In the second lecture
I will discuss the Newton convex body, a far-reaching generalization
of the Newton polyhedron. Its applications to Algebra include new
theorems about the growth of the Hilbert function for a wide class of
graded algebras, a new approach to semigroups of integral points,
a generalization of the Fujita approximation theorem, and a new version
of the Hodge index inequality. Applications to Geometry include a
simple proof of the famous Alexandrov-Fenchel inequality from
the theory of mixed volumes. I will use the Hilbert theorem on
the degree of a projective variety from Algebraic Geometry and the
isoperimetric inequality from Classical Geometry. No other special
knowledge will be needed.
__________________________________________________
Gregoire Lecerf - Universite de Versailles
Symbolic deformation techniques for polynomial system solving
1. Introduction, motivation and first concepts
2. Incremental solving
3. The Kronecker solver
Nowadays polynomial system solvers are involved in sophisticated
computations in algebraic geometry as well as in practical engineering.
The most popular algorithms produce Grobner bases, made of multivariate
polynomials in dense representation. The major drawback of these
bases, and more generally of any dense representation of multivariate
polynomials arising from an elimination process, is the exponential
growth of the number of monomials needed to represent
positive-dimensional solution sets. The Kronecker solver
is an alternative approach that uses evaluation data structures
to represent the polynomials as functions that compute their
values at any given point. These talks are devoted to a self-contained
presentation of this solver. The only prerequisites concern elementary
facts about the Zariski topology, the primary decomposition of
ideals, and the theory of modules over principal rings.
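A toy contrast between the two representations (our own illustration in
Python, not code from the solver): in the dense representation a
polynomial is a table of coefficients, one per monomial, whereas in the
evaluation representation it is a program whose size is measured in
arithmetic operations rather than in monomials.

    # Dense representation: one coefficient per monomial; the number of
    # monomials of degree <= d in n variables grows like binomial(n+d, d).
    dense = {(2, 0): 1.0, (0, 2): 1.0, (0, 0): -1.0}    # x^2 + y^2 - 1

    def eval_dense(coeffs, point):
        x, y = point
        return sum(c * x**i * y**j for (i, j), c in coeffs.items())

    # Evaluation representation (straight-line program): the polynomial is
    # stored as the sequence of operations that evaluates it at any point;
    # e.g. (x + y)^d takes O(log d) multiplications here but has d + 1
    # monomials in dense form.
    def slp(point):
        x, y = point
        return x * x + y * y - 1.0

    p = (0.6, 0.8)
    assert abs(eval_dense(dense, p) - slp(p)) < 1e-12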
Part 1: introduction, motivation and first concepts.
We will start with an overview of the algorithm with a short history
and motivating examples. Then we will define the dimension of
a Zariski closed set, and describe an efficient probabilistic
algorithm for computing it.
Part 2: incremental solving.
We will see how positive-dimensional solution sets can be handled
as if they were zero-dimensional. This leads to an efficient representation
in terms of birational maps to hypersurfaces, and then to a constructive
definition of the degree of an algebraic variety. In the case of a
reduced regular sequence of polynomials, we will analyze
in detail how it can be solved incrementally.
Part 3: the Kronecker solver.
The last talk will start with the algorithmic details of the solver
in the regular case, together with its cost analysis. We shall then present
extensions concerning the structure of the multiplicities and
the computation of the equidimensional decomposition in the
general case.
We will finish with implementation issues and open problems.
References:
C. Durvye and G. Lecerf. A concise proof of the Kronecker polynomial
system solver from scratch. Expositiones Mathematicae, 26(2),
2008.
M. Giusti, G. Lecerf, and B. Salvy. A Grobner free alternative
for polynomial system solving. Journal of Complexity, 17(1), 2001.
_________________________________________________
Renato Monteiro - Georgia Tech
Algorithms for large scale structured optimization problems
1. Overview of algorithms for large scale structured optimization
problems.
2. Interior-point algorithms for cone programming and generalizations
3. First-order methods for large scale structured optimization
problems.
In this series of lectures, we will discuss efficient algorithms
for solving large scale structured optimization problems. More
specifically, we will first discuss interior-point algorithms
and specific classes of convex optimization problems that can
be solved by these methods. These methods are based on variants
of Newton's method and possess the theoretically nice property that
they converge in polynomial time. However, due to their second-order
nature, their iterations are not cheap, and these methods can only
solve small-to-medium-size convex optimization problems. We next
turn our attention to first-order methods, which are based on variants
of the (projected) gradient method applied to suitable reformulations
of the original convex optimization problem. Compared to the above
second-order methods, first-order algorithms tend to perform a
large number of iterations and are able to obtain only moderately
accurate solutions, but have the advantage that each iteration
is substantially cheaper. For these reasons, first-order methods
seem to be the only alternative for solving large scale convex
optimization problems. We will present several first-order variants
and discuss their iteration complexity bounds. We also discuss
variants which have performed well in practice but for which no
complexity bound has been developed so far.
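As a minimal sketch of the kind of first-order iteration in question (our
own illustration of a basic projected gradient method, not one of the
specific variants discussed in the lectures): each step costs one
gradient evaluation and one projection, which is what keeps the
per-iteration cost low at large scale.

    import numpy as np

    def projected_gradient(grad_f, project, x0, step, iters=500):
        # Projected gradient method: x <- P_C(x - step * grad f(x)).
        x = x0
        for _ in range(iters):
            x = project(x - step * grad_f(x))
        return x

    # Example: minimize ||Ax - b||^2 subject to 0 <= x <= 1.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    grad = lambda x: 2 * A.T @ (A @ x - b)
    proj = lambda x: np.clip(x, 0.0, 1.0)
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # 1/L for the L-smooth objective
    print(projected_gradient(grad, proj, np.zeros(10), step))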
======================
Talks (in alphabetical order)
Saugata Basu - Purdue University
Polynomial hierarchy, Betti numbers and a real analogue of Toda's
theorem
Toda proved in 1989 that the (discrete) polynomial time hierarchy,
$\mathbf{PH}$, is contained in the class $\mathbf{P}^{\#\mathbf{P}}$,
namely the class of languages that can be decided by a Turing
machine in polynomial time given access to an oracle with the
power to compute a function in the counting complexity class $\#\mathbf{P}$.
This result, which illustrates the power of counting, is considered
a seminal result in computational complexity theory. An
analogous result in the complexity theory over the reals (in the
sense of Blum-Shub-Smale real Turing machines) has been missing
so far. We formulate and prove a real analogue of Toda's theorem.
Unlike Toda's proof in the discrete case, which relied on sophisticated
combinatorial arguments, our proof is topological in nature.
(Joint work with Thierry Zell).
_______________________________________________
Carlos Beltran - Universidad de Cantabria
Path-following methods for solving Smale's 17th problem. Recent
progress and open questions
Path-following methods are the most widely used and successful numerical
tools for solving systems of polynomial equations. They are interesting
from the point of view of Complexity of Numerical Analysis, providing
a probabilistic answer to Smale's 17th problem ``Can a zero of
n complex polynomial equations in n unknowns be found approximately,
on the average, in polynomial time with a uniform algorithm?''.
The study of the complexity of path-following methods leads to
a number of challenges in fields such as Geometric Measure Theory and
nonsmooth Analysis. In this talk I will present some recent
progress on this topic and related open questions.
______________________________________________
Alexandre D'Aspremont - Princeton University
Tractable performance bounds for compressed sensing
Recent results in compressed sensing show that, under certain
conditions, the sparsest solution to an underdetermined set of
linear equations can be recovered by solving a linear program.
These results either rely on computing sparse eigenvalues of the
design matrix or on properties of its nullspace. So far, no exact
tractable algorithm is known to test these conditions and most
current results rely on asymptotic properties of sparse eigenvalues
of random matrices. Given a matrix A, we use semidefinite relaxation
techniques to test the nullspace property on A and show on some
numerical examples that these relaxation bounds can prove perfect
recovery of sparse solutions with relatively high cardinality.
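For reference, the nullspace property tested here admits the following
standard formulation: $A$ satisfies the nullspace property of order $k$ if

$$ \|z_S\|_1 < \|z_{S^c}\|_1 \quad \text{for every } z \in \ker(A)\setminus\{0\} \text{ and every index set } S \text{ with } |S| \le k, $$

and this condition is equivalent to exact recovery of every $k$-sparse
solution by $\ell_1$ minimization.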
_______________________________________________
Marc Giusti - Ecole Polytechnique Palaiseau
On the geometry of polar varieties
(Joint work with Bernd Bank, Joos Heintz, Mohab Safey El Din
and Eric Schost)
The problem addressed here is real root finding in algebraic varieties,
with intrinsic complexity bounds. The results presented form the
main geometric ingredients for the computational treatment of
singular hypersurfaces. In particular,
- we show the non-emptiness of suitable fully generic dual polar
varieties of (possibly singular) real varieties
- we show that fully generic polar varieties may become singular
at smooth points of the original variety, and give a sufficient
criterion for when this is not the case
- we give an intrinsic degree estimate for polar varieties and
introduce the new concept of a sufficiently generic polar variety.
Our statements are illustrated by examples and a computer experiment.
_______________________________________________
Bill Helton - UC San Diego
Convex Matrix Inequalities vs Linear Matrix Inequalities
When is semidefinite programming applicable? Namely, which problems
can be expressed as a linear matrix inequality (LMI)? Any such
problem is necessarily convex, so the issue is: How much more
restricted are LMIs than Convex Matrix Inequalities?
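Concretely, the question is which convex sets $S \subseteq \mathbb{R}^n$
admit a representation in the standard LMI form

$$ S = \{ x \in \mathbb{R}^n : A_0 + x_1 A_1 + \cdots + x_n A_n \succeq 0 \}, $$

where the $A_i$ are symmetric matrices and $\succeq 0$ means positive
semidefinite.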
There are several main branches of this pursuit. First, there
are two fundamentally different classes of linear systems problems:
ones whose statements depend on the dimension of the system
"explicitly" and ones whose statements do not. Dimension-dependent
systems problems lead to traditional semialgebraic geometry
problems, while dimensionless systems problems lead directly to
a new area which might be called noncommutative semialgebraic
geometry. The classic results of control lead to what we are calling
noncommutative problems.
A completely different tool is that of lifting a given convex
set to one which has an LMI representation.
In this talk, after laying out the distinctions above, we give
results and conjectures on the answer to the LMI vs convexity
question.
________________________________________________
Myong-Hi Kim - SUNY at Old Westbury
The average cost of one-variable polynomial root finding
We analyze a path-lifting algorithm for finding an approximate
zero of a complex polynomial, and show that for any polynomial
with distinct roots in the unit disk, the average number of iterates
this algorithm requires is universally bounded by a constant times
the log of the condition number. In particular, this bound is
independent of the degree d of the polynomial. The average is
taken over initial values z with |z| = 1+1/d using uniform measure.
________________________________________________
Bernard Mourrain - INRIA Sophia Antipolis
Isolation of real roots of polynomial systems, complexity and
condition number
The presentation is about subdivision algorithms for isolating
the real roots of a system of multivariate polynomials. We shall
describe two methods using either the Bernstein basis representation
or the monomial basis representation. We connect these methods
with continued fraction approximation of the coordinates of the
real roots. The representation of the subdivided domains is done
through homographies, which allows us to use only integer arithmetic
and to treat unbounded regions efficiently. We use univariate
bounding functions, projection and preconditioning techniques
to reduce the domain of search. The resulting boxes have optimized
rational coordinates,
corresponding to the first terms of the continued fraction expansion
of the real roots.
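As a univariate illustration of the Bernstein approach (our own sketch
for a squarefree polynomial on [0,1]; the talk treats the multivariate
case): the number of sign changes of the Bernstein coefficients bounds
the number of roots in the interval, zero changes exclude a root, one
change certifies exactly one root, and otherwise one subdivides by
de Casteljau's algorithm.

    def de_casteljau(b, t=0.5):
        # Split the Bernstein coefficients b of a polynomial on an interval
        # into the coefficients on its left and right subintervals at t.
        left, right = [b[0]], [b[-1]]
        while len(b) > 1:
            b = [(1 - t) * u + t * v for u, v in zip(b, b[1:])]
            left.append(b[0])
            right.append(b[-1])
        return left, right[::-1]

    def sign_changes(b):
        s = [x for x in b if x != 0]
        return sum(1 for u, v in zip(s, s[1:]) if u * v < 0)

    def isolate(b, a=0.0, c=1.0, depth=50, out=None):
        # Return subintervals of [a, c] each containing exactly one root
        # (a Descartes/Vincent-type test on the Bernstein coefficients).
        if out is None:
            out = []
        v = sign_changes(b)
        if v == 1:
            out.append((a, c))
        elif v > 1 and depth > 0:
            lb, rb = de_casteljau(b)
            m = (a + c) / 2
            isolate(lb, a, m, depth - 1, out)
            isolate(rb, m, c, depth - 1, out)
        return out

    # (x - 1/3)(x - 2/3) on [0, 1] has Bernstein coefficients [2/9, -5/18, 2/9].
    print(isolate([2/9, -5/18, 2/9]))    # two isolating intervals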
We analyse the complexity of these methods in terms of geometric
quantities associated with the roots of the system, such as their
condition number. An extension of Vincent's theorem to multivariate
polynomials is used for the termination of the algorithm. This
yields new complexity bounds for real root isolation.
Joint work with A. Mantzaflaris and E. Tsigaridas.
________________________________________________
Marie-Francoise Roy - Universite de Rennes 1
Certificates of positivity in the Bernstein basis
Certificates of positivity (on the positive orthant, or on the
simplex) are based on results by Polya and Bernstein. Bernstein
polynomials, for example, are positive on the simplex, and a positive
combination of them is positive as well. A partial converse
holds, by increasing the degree of the Bernstein polynomials or by
subdividing the simplex. This gives two different kinds of certificates, the
approach by subdivision being much more efficient.
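In the univariate case, on the simplex $[0,1]$, such a certificate takes
the following shape: if

$$ p(x) = \sum_{k=0}^{d} c_k \binom{d}{k} x^k (1-x)^{d-k} \quad \text{with } c_k > 0 \text{ for all } k, $$

then $p > 0$ on $[0,1]$, and the partial converse says that every
polynomial positive on the simplex admits such a writing after the degree
is increased or the simplex is subdivided.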
These certificates are not uniform, since the size of the certificate
output depends on the bitsize of the entries.
_______________________________________________
Andrew Sommese - University of Notre Dame
Zebra Fish, Tumor Growth, and Algebraic Geometry
Problems of central importance in engineering and science often
lead to systems of partial differential equations, for which the
only hope of solution is to compute numerical solutions. Often
the systems are intrinsically nonlinear with several solutions
corresponding to the same set of physical conditions. Discretizations
of such systems of differential equations often lead to large
systems of polynomial equations whose solutions correspond to
potential solutions of the system of differential equations. These
naturally arising polynomial systems are well beyond the pale
of systems previously investigated in numerical algebraic geometry.
This talk will describe some of the recent work of Wenrui Hao,
Jonathan Hauenstein, Bei Hu, Yuan Liu, Yong-Tao Zhang, and myself
in successfully solving such systems. First I will give an overview
of homotopy continuation methods and what they allow us to compute.
Second, I will discuss two applications to numerical solution
of partial differential equations. The first is our work on finding
steady state solutions of a reaction-diffusion model on zebrafish
dorsal-ventral patterning. Here, using a new approach, we found
seven solutions satisfying the boundary conditions, of which three are
stable; only stable solutions have meaning in the physical world.
The second is our work on the solution of several models for tumor
growth. Here the boundary of the tumor is the most important part
of the solution: the goal is to solve this free boundary problem
as mu, a parameter of the model called the "tumor aggressiveness
factor," varies. There is a family of easily computed radially
symmetric solutions, which, for certain discrete values of mu,
meets branches of solutions that are not radially symmetric away
from the point where the branches meet. The problem is to compute
the solutions on the nonradial branch. There are no standard ways
of solving this sort of free boundary problem for any but very
small distances from the radial solutions. Our successful solution
of this problem required a new approach that came to grips with
some of the numerical algebraic geometry underlying systems of
several thousand polynomials in a like number of variables. Finally,
I will also discuss the direction this research is going.
__________________________________________
Paul Tseng - University of Washington
Problem reformulations and first-order gradient methods for structured
convex optimization
We study how problem reformulations, say, using duality or regularization,
affect the computational complexity of first-order gradient methods
for large-scale structured convex optimization. Relevant applications
include image denoising and multi-task learning.
__________________________________________
Joris van der Hoeven - Universite Paris-Sud
On the art of guessing
Current computer algebra systems usually work in a shell mode:
the user asks a question and the system hopefully gives an answer.
To what extent would it be possible to discover additional mathematical
structure in the user's problem in an automatic fashion? For instance,
if we encounter 2.094395102393195492308428922 in the output of
a numerical computation, the system might suggest its replacement
by $2\pi/3$. Even though guessing such additional relations was not
explicitly requested by the user, it may lead to interesting insights
and does not necessarily lead to a big increase of the overall
computation time. In our talk, we will address several classical
guessing algorithms and present a few new ones. We will start
with rational number and function recovery, the LLL-algorithm
and Pade-Hermite approximation. We will next turn our attention
to guessing possible asymptotic expansions for sequences and possible
relations at singularities of analytic functions. If time permits,
we will also discuss some guessing techniques for symbolic
expressions.
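A toy version of the rational-recovery step (our own sketch; the
algorithms of the talk are far more general) guesses a small rational, or
a small rational multiple of $\pi$, from a floating-point value by
continued fraction approximation:

    from fractions import Fraction
    from math import pi

    def guess(x, max_den=1000, tol=1e-12):
        # Try a small rational first, then a rational multiple of pi,
        # using continued fractions via Fraction.limit_denominator.
        r = Fraction(x).limit_denominator(max_den)
        if abs(float(r) - x) < tol:
            return str(r)
        r = Fraction(x / pi).limit_denominator(max_den)
        if abs(float(r) * pi - x) < tol:
            return f"({r})*pi"
        return None

    print(guess(0.6666666666666666))             # -> 2/3
    print(guess(2.094395102393195492308428922))  # -> (2/3)*pi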
_________________________________________
Yinyu Ye - Stanford University
On the complexity of L-p norm minimization for p less than 1
Recently, L-p norm minimization for p less than 1 has been used to
compute sparse solutions for systems of linear equations. We
show that the problem is NP-hard for any 0<p<1. On the positive
side, we show that the interior-point affine scaling algorithm
works well to compute a local minimizer. Computational results
indicate that the model produces solutions sparser than those from
the L-1 minimization model.
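In symbols, the problem under discussion reads (a standard formulation)

$$ \min_{x \in \mathbb{R}^n} \; \|x\|_p^p = \sum_{i=1}^n |x_i|^p \quad \text{subject to} \quad Ax = b, \qquad 0 < p < 1, $$

which is nonconvex because $t \mapsto |t|^p$ is concave on $t > 0$ when $p < 1$.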