September 11, 2015
2-3 p.m
Room 230
|
Peter Paule
Research Institute for Symbolic Computation (RISC), Johannes Kepler
University Linz, Austria
|
The Concrete Tetrahedron in Algorithmic Combinatorics
Donald Knuth introduced a course "Concrete
Mathematics" that has been taught annually at Stanford University
since 1970. The course, and the accompanying book coauthored with
Ron Graham and Oren Patashnik, was originally intended as an antidote
to "Abstract Mathematics". In the 1990s, Doron Zeilberger's
"holonomic systems approach to special functions identities"
inspired a further wave in this "concrete evolution": the
development of computer algebra methods for symbolic summation, generating
functions, recurrences, and asymptotic estimates. The book "The
Concrete Tetrahedron", by Manuel Kauers and the speaker, describes
basic elements of this tool-box and can be viewed as an algorithmic
supplement to "Concrete Mathematics" by Graham, Knuth, and
Patashnik. The talk introduces some of these methods. A major application
concerns the computer-assisted evaluation of relativistic Coulomb
integrals, joint work with Christoph Koutschan and Sergei Suslov.
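As a small taste of the Wilf-Zeilberger machinery behind the holonomic systems approach (a classical textbook example, not taken from the talk), the binomial identity sum_k C(n,k) = 2^n admits a rational certificate G whose telescoping equation can be checked mechanically; a minimal Python sketch:

```python
from math import comb

# Summand F and a Wilf-Zeilberger certificate G for sum_k C(n,k) = 2^n,
# normalized so that sum_k F(n,k) = 1 for every n >= 0.
def F(n, k):
    return comb(n, k) / 2 ** n

def G(n, k):
    return -comb(n, k - 1) / 2 ** (n + 1) if k >= 1 else 0.0

# The WZ equation F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k) telescopes over k,
# showing that sum_k F(n,k) is independent of n (hence equal to F(0,0) = 1).
ok = all(
    abs(F(n + 1, k) - F(n, k) - (G(n, k + 1) - G(n, k))) < 1e-12
    for n in range(10)
    for k in range(n + 2)
)
print(ok)
```

Zeilberger's algorithm finds such certificates automatically; here the certificate is simply verified, which is what makes WZ proofs mechanically checkable.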
|
September 25, 2015
2-3 p.m
Stewart Library (Room 309)
|
Cordian Riener
Fields Institute Postdoctoral Visitor
|
Upper bounds for the minimal number of nodes in quadrature rules for
measures with density
It is well-known that for each positive Borel measure
on R^n and each fixed degree d, one can find finitely many nodes in
R^n and corresponding nonnegative weights such that the integral of
a polynomial of degree at most d equals the corresponding weighted
sum of point evaluations. The data consisting of the nodes and corresponding
weights is called a quadrature formula. We interpret the problem of
finding such a formula for a given measure as a polynomial optimization
problem. This allows us to give upper bounds on the number of nodes
needed.
This is joint work with Markus Schweighofer (Konstanz).
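As a concrete one-dimensional illustration of such a quadrature rule (a classical example, independent of the talk's results), the two-point Gauss-Legendre rule integrates every polynomial of degree at most 3 exactly over [-1, 1] using only two nodes; a minimal Python check:

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1]: nodes +-1/sqrt(3), weights 1.
nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]
weights = [1.0, 1.0]

def quad(p):
    """Apply the rule to a polynomial given by coefficients [c0, c1, ...]."""
    return sum(w * sum(c * x ** k for k, c in enumerate(p))
               for x, w in zip(nodes, weights))

def integral(p):
    """Exact integral of the same polynomial over [-1, 1]."""
    return sum(c * (1 - (-1) ** (k + 1)) / (k + 1) for k, c in enumerate(p))

p = [1, 2, 3, 4]          # 1 + 2x + 3x^2 + 4x^3, degree 3
print(quad(p), integral(p))
```

Two nodes suffice for exactness up to degree 3, which is the kind of node-count economy the talk's upper bounds quantify in the multivariate setting.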
|
October 2, 2015
2-3 p.m
Stewart Library (Room 309)
|
Vincent Neiger
PhD student, ENS-Lyon, France
(presently visiting University of Waterloo)
|
Computing minimal interpolation bases
The first step in Guruswami and Sudan's list decoding algorithm for
Reed-Solomon codes amounts to bivariate interpolation with prescribed
multiplicities and degree constraints. Later, variants of this problem
arose in Koetter and Vardy's soft-decision decoding algorithm (where
constraints on the points are weakened), in Guruswami and Rudra's
list decoding algorithm for folded Reed-Solomon codes (where more
variables are involved), and more recently in Devet, Goldberg, and
Heninger's work on Private Information Retrieval.
In the quest for fast algorithms, essentially two approaches
have been proposed so far in the literature: one uses fast structured
system solving while the other one uses fast polynomial lattice reduction,
which itself relies on fast order basis computations. In this talk,
we will first introduce the problem, give a quick overview of those
fast algorithms, and discuss possible improvements. Then, noting the
analogy between the quadratic iterative algorithms by Koetter and
by Beckermann and Labahn respectively for constrained multivariate
interpolation and order basis computation, we will present our recent
work on a fast divide-and-conquer algorithm which gives a unified
solution to these two problems.
Joint work with Claude-Pierre Jeannerod, Éric
Schost, and Gilles Villard.
|
October 9, 2015
2-3 p.m
Stewart Library (Room 309)
|
Shaoshi Chen
Fields-Ontario Postdoctoral Visitor
|
Proof of the Wilf-Zeilberger Conjecture
In 1992, Wilf and Zeilberger conjectured that a hypergeometric
term in several discrete and continuous variables is holonomic if
and only if it is proper. Strictly speaking, the conjecture does not
hold, but it is true when reformulated properly: Payne proved a piecewise
interpretation in 1997 and, independently, Abramov and Petkovšek proved
a conjugate interpretation in 2002. Both results address the purely
discrete case of the conjecture. In this work we extend their results
to hypergeometric terms in several discrete and continuous variables
and prove the conjugate interpretation of the Wilf-Zeilberger conjecture
in this mixed setting.
This is joint work with Christoph Koutschan.
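For reference, in the purely discrete bivariate case a hypergeometric term T(n,k) is proper when it can be written in the form

```latex
T(n,k) \;=\; P(n,k)\, x^{n} y^{k} \prod_{i=1}^{m} \bigl( a_i n + b_i k + c_i \bigr)!^{\,\varepsilon_i}
```

where P is a polynomial, the a_i, b_i, c_i are integers, x and y are constants, and each ε_i is +1 or -1; the mixed setting of the talk additionally allows continuous variables.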
|
October 16, 2015
2-3 p.m
Stewart Library (Room 309)
|
Simone Naldi
Institut National des Sciences Appliquées, Toulouse, France
(currently a Postdoctoral Visitor at the Fields Institute)
|
Exact Algorithms for Linear Matrix Inequalities
Semidefinite programming (SDP) is the natural extension
of linear programming to the convex cone of positive semidefinite
matrices. It consists in minimizing a linear function over the convex
set, called spectrahedron, defined by a linear matrix inequality (i.e.
over the set of real vectors x such that a linear matrix A(x) is positive
semidefinite, A(x)>=0). While a floating point approximation of
a solution of a semidefinite program can be computed efficiently by
interior-point algorithms, no efficient exact algorithm for
SDP is available, nor has a complete understanding of its theoretical
complexity been achieved. I will present an algorithm for deciding
the feasibility of a linear matrix inequality A(x)>=0 in exact
arithmetic. In particular, I will show how to reduce this
semialgebraic optimization problem to a (finite) sequence of algebraic
optimization problems preserving the determinantal structure. Remarkably,
the algorithm does not assume the presence of an interior point in
the spectrahedron, and it takes advantage of the existence of low-rank
solutions of the linear matrix inequality. I will finally report on
complexity bounds and on results of practical experiments.
This is joint work with Didier Henrion and Mohab
Safey El Din.
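As a toy numerical illustration of a spectrahedron (not the exact algorithm of the talk, which works in exact arithmetic), membership in {x : A(x) >= 0} can be read off the smallest eigenvalue of A(x); here a hypothetical 2x2 pencil A(x) = [[1, x], [x, 1]], which is positive semidefinite exactly when |x| <= 1:

```python
def min_eig_2x2(a, b, c):
    """Smallest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    disc = ((a - c) ** 2 + 4 * b * b) ** 0.5
    return (a + c - disc) / 2

def in_spectrahedron(x):
    # A(x) = [[1, x], [x, 1]] has smallest eigenvalue 1 - |x|,
    # so it is PSD exactly when |x| <= 1
    return min_eig_2x2(1.0, x, 1.0) >= 0

print(in_spectrahedron(0.5), in_spectrahedron(1.5))
```

On the boundary |x| = 1 the matrix A(x) has rank 1, illustrating the low-rank boundary solutions that the talk's exact algorithm exploits.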
|
October 23, 2015
2-3 p.m
Stewart Library (Room 309)
|
Erich Kaltofen
Mathematics Department
North Carolina State University
|
What Is Hybrid Symbolic-Numeric Computation?
Hybrid symbolic-numeric computation constitutes the
fifth of my "Seven Dwarfs" of Symbolic Computation (see
http://www.math.ncsu.edu/~kaltofen/bibliography/10/Ka10_7dwarfs.pdf).
Hybridization requires that the solution of a computational mathematical
problem utilize both a numeric, floating-point arithmetic component
and a symbolic, exact arithmetic algorithmic component.
In my talk, I will characterize the hybrid methodology
by surveying some important results and the lessons I have learned
from them. These include the notions of approximate GCD and approximate
sparse interpolant, the notion of ill-posedness ("unstable boundedness")
of polynomial inequalities, and the analysis of the distributions
of matrix condition numbers in randomized hybrid algorithms.
This work is in collaboration with Wen-shin Lee,
Zhengfeng Yang, and Lihong Zhi.
|
November 6, 2015
2-3 p.m
Stewart Library (Room 309)
|
Suzy Maddah
University of Limoges and the Fields Institute
|
Software for Linear Differential Systems with Singularities
Differential equations with singularities arise in
countless applications and are the subject of a vast body of
academic literature. In this talk, we present algorithms and recent
Maple and Mathemagix packages to treat perturbed and unperturbed systems.
|
November 13, 2015
2-3 p.m
Stewart Library (Room 309)
|
Cleveland Waddell
NCSU, USA
|
Rational Vector Recovery By Error Correction and
Cabay Termination
We apply the early termination criterion by Stanley
Cabay to the problem of rational vector recovery. We assume the rational
function vector is the solution to a full rank system A(u)x = b(u).
The numerator and denominator of our rational functions are polynomials
in one variable u and the denominator is always the least common monic
denominator. We assume the rational function vector x = f(u)/g(u)
is given by a black-box procedure which we can evaluate. We further
assume that no more than a given input bound E of the evaluations
can be erroneous, and we use error-correcting decoding to remove
the errors.
Cabay's estimates allow us to recover a solution
from fewer evaluations than the general minimum deg(f) + deg(g) +
2E + 1, provided that we never evaluate at poles of the rational function
vector, and they do so exactly when deg(A) < deg(g). If we allow evaluations
at poles (roots of g) then there are examples where Cabay's estimates
are not sufficient to recover the rational function vector from just
the evaluations. We discuss two ways to fix this provided we can get
some more information from the black box at the poles.
This is joint work with Erich Kaltofen, Clément Pernet,
and Arne Storjohann.
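The interpolation step underlying this setup can be sketched as follows: with error-free evaluations (the case E = 0) and known degree bounds, the coefficients of f and of the monic g solve the linear system f(u_i) - y_i g(u_i) = 0. A minimal exact-arithmetic sketch over a hypothetical example (error correction, evaluations at poles, and the Cabay criterion are the subject of the talk and are omitted here):

```python
from fractions import Fraction

def solve(A, b):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def recover(points, values, df, dg):
    """Recover f/g with deg f <= df, deg g = dg, g monic, from
    df + dg + 1 error-free evaluations y = f(u)/g(u) at non-poles:
    each point gives one linear equation f(u) - y*g(u) = 0."""
    A, b = [], []
    for u, y in zip(points, values):
        row = ([u ** j for j in range(df + 1)] +    # coefficients of f
               [-y * u ** j for j in range(dg)])    # non-leading coeffs of g
        A.append(row)
        b.append(y * u ** dg)                       # monic leading term of g
    return solve(A, b)

# hypothetical example: f(u) = u + 1, g(u) = u - 2, evaluated away from the pole
pts = [Fraction(t) for t in (0, 1, 3)]
vals = [Fraction(t + 1, t - 2) for t in (0, 1, 3)]
print(recover(pts, vals, 1, 1))   # coefficients [f0, f1, g0] of f = 1 + u, g = -2 + u
```

With erroneous evaluations one must oversample (the 2E extra evaluations in the bound above) and decode; Cabay's criterion is what allows stopping early when the degrees are smaller than their worst-case bounds.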
|
November 20, 2015
2-3 p.m
Stewart Library (Room 309)
|
Feng Guo
Dalian University of Technology, China
|
Semidefinite programming relaxations for linear semi-infinite polynomial programming
In this talk, I will present semidefinite programming (SDP) relaxations
of a class of so-called linear semi-infinite polynomial programming
(LSIPP) problems, a subclass of linear semi-infinite programming
problems whose constraint functions are polynomials in the parameters
and whose index sets are basic semialgebraic sets. When the index set of
an LSIPP problem is compact, a convergent hierarchy of SDP relaxations
is constructed under the assumption that the Slater condition and
the Archimedean property hold. When the index set is noncompact, we
use the technique of homogenization to convert the LSIPP problem
equivalently into the compact case under a generic assumption. Consequently,
a corresponding hierarchy of SDP relaxations for noncompact LSIPP
problems is obtained. We apply this relaxation approach to the special
LSIPP problem reformulated from a polynomial optimization problem.
A new SDP relaxation method is derived for solving the class of polynomial
optimization problems whose objective polynomials are stably bounded
from below on noncompact feasible sets.
|
November 27, 2015
2-3 p.m
Stewart Library (Room 309)
|
Chu Wang
Chinese Academy of Sciences, China
|
Optimizing a parametric linear function over a non-compact real algebraic variety
We consider the problem of optimizing a parametric
linear function over a non-compact real trace of an algebraic set.
Our goal is to compute a representing polynomial which defines a hypersurface
containing the graph of the optimal value function. Rostalski and
Sturmfels showed that when the algebraic set is irreducible and smooth
with a compact real trace, then the least degree representing polynomial
is given by the defining polynomial of the irreducible hypersurface
dual to the projective closure of the algebraic set.
First, we generalize this approach to non-compact
situations. We prove that the graph of the opposite of the optimal
value function is still contained in the affine cone over a dual variety
similar to the one considered in the compact case. As a consequence, we
present an algorithm for solving the considered parametric optimization
problem for generic parameter values. For some special parameter
values, the representing polynomials of the dual variety can be identically
zero, which gives no information on the optimal value. We design a
dedicated algorithm that identifies those regions of the parameter
space and computes for each of these regions a new polynomial defining
the optimal value over the considered region.
Joint work with Mohab Safey El Din, Feng Guo and
Lihong Zhi.
|
December 4, 2015
2-3 p.m
Stewart Library (Room 309)
|
Marie-Françoise Roy
Université de Rennes 1
|
Various aspects of sign determination
This talk is meant as an introduction to the Workshop
on Algebra, Geometry and Proofs in Symbolic Computation from 7 to
14 December 2015 at the Fields Institute. With 45 talks, each
50 minutes long, in this 9-day workshop, it is impossible
to give an overview of the topics to be covered.
I will instead present old and recent work
on one of my favourite basic algorithms in real algebraic geometry.
The topic started with Alfred Tarski in the context
of his work on quantifier elimination over the reals, but it relates
to earlier work of Sturm and Hermite. And the list of contributors in
the recent decades is quite long.
Sign determination is a basic algorithm for deciding
which signs a family of polynomials takes at the roots of
another polynomial. A related topic is the Thom encoding approach
to real algebraic numbers (joint work with Michel Coste from 1988),
since real algebraic numbers can be characterized by the signs they
give to the derivatives of the polynomial defining them.
The sign determination process can be studied from
the complexity point of view, from the formal proof point of view
(see the lecture of Cyril Cohen), for a new approach to
quantifier elimination (see my lecture, based on joint work with Daniel
Perrucci), and also as a tool to produce the algebraic identities needed
to give elementary recursive bounds for Hilbert's 17th problem (recent
joint work with Henri Lombardi and Daniel Perrucci).
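A naive numerical rendering of the sign determination problem (the algorithms of the talk are exact and symbolic; this sketch only illustrates the question being answered): isolate the real roots of p by sign changes and bisection, then read off the signs of a second polynomial at those roots.

```python
def horner(coeffs, x):
    """Evaluate a polynomial given by coefficients [c_d, ..., c_0]."""
    r = 0.0
    for c in coeffs:
        r = r * x + c
    return r

def bisect_root(p, a, b, tol=1e-12):
    """Refine a root of p bracketed by a sign change on [a, b]."""
    fa = horner(p, a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * horner(p, m) <= 0:
            b = m
        else:
            a, fa = m, horner(p, m)
    return (a + b) / 2

def sign(v, eps=1e-9):
    return 0 if abs(v) < eps else (1 if v > 0 else -1)

# p(x) = x^3 - x has the three real roots -1, 0, 1; determine the signs
# taken there by q(x) = x^2 - 2.
p, q = [1, 0, -1, 0], [1, 0, -2]
grid = [-2.005 + i / 100 for i in range(402)]   # offset grid avoids landing on roots
roots = sorted(bisect_root(p, a, b)
               for a, b in zip(grid, grid[1:])
               if horner(p, a) * horner(p, b) < 0)
print([sign(horner(q, r)) for r in roots])
```

Exact sign determination replaces this floating-point root isolation with signed subresultant or Sturm-Hermite computations, which is precisely what makes Thom encodings of real algebraic numbers effective.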
|