Overview
This workshop explores the interactions between two classical areas
in symbolic and numeric computation.
Algorithms for Symbolic Linear Algebra. While constituting
a classical research field in mathematical computing, algorithms for linear
algebra problems are still a current, vibrant and highly applicable
research topic. Research into exact symbolic methods and numeric methods with
floating-point arithmetic is deeply intertwined, in part because the manipulation
of sparse matrices or the location of errors in the inputs can involve discrete
combinatorial approaches. Our vision for the workshop focuses on
several current research topics:
1. The computational complexity of linear algebra problems is a classical
subject, as the complexity of matrix multiplication remains open. Important
recent results on the multiplication problem [Le Gall, Williams],
essentially linear-time algorithms for Laplacians of graphs [Lau, Miller,
Peng, Spielman, Yuster], and certification of the outputs [Dumas, Kaltofen,
Thaler] form a day's topic of our workshop.
2. Errors in inputs have received attention as large data sets require
clean-up before solving: outlier-sensitive least squares fitting [Ipsen,
Mahoney], removal of incorrect constraints [Candes], and of erroneous points
in interpolation problems [Kaltofen, Yang] form the subject of a half-day.
3. Diophantine linear algebra is a rich area of current research, including
lattice basis reduction algorithms [Stehle], Hermite and Smith normal
forms [Giesbrecht, Storjohann] and discrete optimization [Cook]. We dedicate
a half-day of our workshop to Diophantine methods.
4. Exact algorithms for dense and sparse linear algebra and linear programming
problems, over exact fields and rings, are the backbone of symbolic computation
software. Two days are dedicated to symbolic algorithms [Albrecht, Bard,
Dumas, Giorgi, Labahn, Murri, Steffy, Pernet, Sultan, Vialla]. We shall
account for new results discovered in this rich area of research between
now and the program, and we plan to add a new topic and speakers to our
workshop accordingly.
Symbolic-Numeric Computation. Algorithms that combine techniques
from symbolic and numeric computation have been of increasing importance
and interest over the past decade. The necessity to work reliably with imprecise
and noisy data, and for speed and accuracy within algebraic and hybrid-numerical
problems, has encouraged new interactions between the numerical and symbolic
computing fields. An example of a typical problem is that of approximate
polynomial system solving: given a set of polynomials that have no common
real or complex root, deform the scalars in the polynomials minimally to
have common roots. For two univariate polynomials the problem specializes
to the approximate GCD problem. For linear polynomials the problem is total
least squares fitting.
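For the linear case, total least squares admits a closed form: the fitted line's normal vector is the eigenvector of the 2x2 scatter matrix of the centered points for its smallest eigenvalue. A minimal pure-Python sketch (our own illustration, not code from the workshop):

```python
import math

def tls_line(points):
    """Total least squares line fit: minimize the sum of squared
    orthogonal (not vertical) distances. Returns (slope, intercept)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Entries of the 2x2 scatter matrix of the centered points.
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # Smallest eigenvalue of [[sxx, sxy], [sxy, syy]].
    lam = (sxx + syy - math.hypot(sxx - syy, 2 * sxy)) / 2
    # A corresponding eigenvector (nx, ny) is the line's normal vector.
    nx, ny = sxy, lam - sxx
    # Line: nx*(x - mx) + ny*(y - my) = 0, i.e. y = slope*x + intercept.
    slope = -nx / ny
    return slope, my - slope * mx

print(tls_line([(0.0, 1.0), (1.0, 3.1), (2.0, 4.9)]))  # close to slope 2, intercept 1
```

Deforming the data minimally so that the model fits exactly is precisely the structure shared by the approximate GCD and approximate polynomial system problems above.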
Problems such as these have imported ideas from numerical analysis into
computer algebra and have had applications in numerous areas from motion
planning to image deblurring. Current topics include certified solution
of polynomial systems, optimizing functions over varieties, numerical system
solving with parametric entries, nearest polynomials with given properties,
as well as improved algorithms for earlier problems such as approximate
polynomial gcd, factorization, sparse interpolation, etc. The goal of this
workshop is to support the interaction and integration of symbolic and numeric
computing. Several international workshops have been organized in this area
in recent years, including the Fields Institute Workshop on Hybrid Methodologies
for Symbolic-Numeric Computation (Waterloo, Canada, November 2011) and most
recently SNC 2014 held as a satellite meeting of the ICM.
Schedule
Monday, October 26
8:15-8:45 | On-site registration and morning coffee
8:45-9:00 | Welcoming remarks
9:00-9:50 | Francois Le Gall (The University of Tokyo): Overview of the recent progress on matrix multiplication
9:55-10:45 | Clément Pernet (École normale supérieure de Lyon): High performance dense linear algebra: arithmetic, algorithms and their parallelization
10:45-11:10 | Coffee break
11:10-12:00 | Justin Thaler (Yahoo! Labs): A Crash Course on Fast Interactive Proofs
12:00-13:45 | Lunch break
13:45-14:35 | Jean-Guillaume Dumas, to be confirmed (Université Joseph Fourier): Essentially Optimal Certificates in Linear Algebra
14:40-15:30 | Bernhard Beckermann (Université des Sciences et Technologies de Lille): On rational functions without spurious poles
15:30-16:00 | Coffee break
16:00-16:50 | George Labahn (University of Waterloo): Order bases, kernel bases and their use in fast polynomial matrix arithmetic
17:00 | Reception at Fields
Tuesday, October 27
9:00-9:50 | Riccardo Murri (University of Zurich): The "Rheinfall" parallel algorithm for Gaussian Elimination
9:55-10:45 | Jeremy Johnson (Drexel University): The Probability that Block Wiedemann Correctly Computes Invariant Factors
10:45-11:10 | Coffee break
11:10-12:00 | Mark Giesbrecht (University of Waterloo): Quasideterminants, Degree Bounds and Algorithms for Matrices of Differential and Difference Polynomials
12:00-13:45 | Lunch
13:45-14:35 | Daniel Steffy (Oakland University): Symbolic-numeric computation of cutting planes for integer programming
14:40-15:30 | Lap Chi Lau (University of Waterloo): Fast matrix rank algorithms and applications
15:30-16:00 | Coffee break
Wednesday, October 28
9:00-9:50 | Gary Miller (Carnegie Mellon University): Solving Large Optimization Problems using Spectral Graph Theory
9:55-10:45 | Vincent Neiger (University of Waterloo): Fast computation of minimal interpolation bases with arbitrary weights
10:45-11:10 | Coffee break
11:10-12:00 | Ziad Sultan (LJK/IMAG): Adaptive and generic parallel computation of echelon forms
12:00-13:45 | Lunch
13:45-14:35 | Gregory Bard (University of Wisconsin-Stout): Exploring the Extended Linearization Algorithm (or XL Algorithm) for Solving Polynomial Systems of Equations, with Remarks toward NP-Completeness
14:40-15:30 | Arne Storjohann (University of Waterloo): Some recent techniques for faster linear algebra
15:30-16:00 | Coffee break
16:00-17:00 | Coxeter Lecture Series: Victor Shoup (Courant Institute): Computing on encrypted data and fast polynomial arithmetic
17:00 | Coxeter Lecture Series reception
Thursday, October 29
9:00-9:50 | Anton Leykin (Georgia Institute of Technology): Toward a numerical description of a complex affine scheme
9:55-10:45 | Ioannis Emiris (University of Athens): Sparse implicitization by interpolation matrices
10:45-11:10 | Coffee break
11:10-12:00 | Daniel Lichtblau (Wolfram Research, Inc.): Issues in Blind Deconvolution via Computer Algebra (A Tale from the Dark Side)
12:00-13:45 | Lunch
13:45-14:35 | Didier Henrion (LAAS-CNRS, Czech Technical University in Prague): SPECTRA: solving exactly linear matrix inequalities
14:40-15:30 | Nan Li (Tianjin University): Verified Solutions of Polynomial Systems
15:30-16:00 | Coffee break
16:00-17:00 | Coxeter Lecture Series: Victor Shoup (Courant Institute): Fully homomorphic encryption: computing on encrypted data with HElib
Friday, October 30
9:00-9:50 | Ilias Kotsireas (Wilfrid Laurier University): Birds and Frogs: polynomial and matrix questions from design theory
9:55-10:45 | Rob Corless (Western University): Optimal Backward Error and the Leaky Bucket
10:45-11:10 | Coffee break
11:10-12:00 | Mohab Safey El Din (University Pierre & Marie Curie): Bit complexity for quadratic minimization problems: the generic case
12:00-13:45 | Lunch
13:45-14:35 | Zhengfeng Yang (East China Normal University): Error-correcting sparse interpolation of multivariate functions
14:40-15:30 | Joseph Haraldson (University of Waterloo): Computing Approximate GCRDs of Differential Operators
15:30-16:00 | Coffee break
16:00-17:00 | Coxeter Lecture Series: Victor Shoup (Courant Institute): NTL: a library for doing number theory
Saturday, October 31
9:00-9:50 | Wen-shin Lee (University of Antwerp): How sub-sampling leads to more robustness and higher resolution in digital signal processing
9:55-10:45 | Michael Sagraloff (Max Planck Institute for Informatics): On Recent Progress in Polynomial Root Finding
10:45-11:10 | Coffee break
11:10-12:00 | Daniel Roche (U.S. Naval Academy): Sparse floats
12:00-13:45 | Lunch
Abstracts
Bernhard Beckermann, Université des
Sciences et Technologies de Lille
On rational functions without spurious poles
To our knowledge, only a few results are known about numerical analysis with
rational functions, for instance how to represent rational functions. Even
for the simple question of how to measure the distance between two rational
functions there are only a few answers in the literature. However, for some
applications like Padé approximants one desires rational functions
without spurious poles, that is, no small residuals in a partial fraction
decomposition, and no Froissart doublets of a close-by pole and zero; see for
instance the recent papers (Gonnet, Guettel & Trefethen, 2013) and (Beckermann
& Matos, 2015). We will point out the link with numerical GCD computations,
see for instance (Beckermann & Labahn, 1998), and the singular values
of underlying structured matrices (Corless, Gianni, Trager & Watt, 1995). It
also turns out that the spherical derivative of the rational function can
be a useful indicator. Joint work with Ana Matos and George Labahn.
Ioannis Emiris, University of Athens
Sparse implicitization by interpolation matrices
Given a parametric variety of codimension 1, we compute its implicit equation
by interpolating a polynomial through a sufficiently large number of randomly
sampled points. This problem, in exact or approximate form, can be reduced
to computing the nullspace of a numeric matrix, even in the case of base points.
This interpolation matrix offers an efficient representation of the hypersurface:
we formulate basic geometric predicates, namely membership and sidedness, as
well as ray shooting, as rank computations on this matrix.
The theory of toric (sparse) elimination allows us to construct interpolation
matrices that capture the structure of the parametric expressions and of the
implicit polynomial. The method constructs the Newton polytope of the toric
resultant to predict that of the implicit polynomial. Since the true implicit
polytope may be a Minkowski summand of the predicted one, we employ, in 2D
and 3D, a worst-case optimal algorithm for Minkowski decomposition. If the
interpolation matrix still has corank larger than one, we apply numerical
strategies to compute the unique implicit polynomial. Our methods are implemented
in Maple, and also Matlab and Sage. The talk concludes with a discussion on
experiments.
Joint work with T. Kalinka, C. Konaxis, T. Luu Ba, Z. Zafeirakopoulos.
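As a toy version of this pipeline (our own sketch, not the authors' Maple, Matlab or Sage code): implicitize the rationally parametrized unit circle x=(1-t^2)/(1+t^2), y=2t/(1+t^2) by sampling points, filling an interpolation matrix in a monomial basis, and computing its nullspace over the rationals.

```python
from fractions import Fraction

def nullspace_vector(rows):
    """Return one nullspace vector of a rational matrix via Gaussian
    elimination to reduced row echelon form, or None if there is none."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0])
    pivots = {}  # pivot column -> its row index
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            # c is a free column: back-substitute one nullspace vector.
            v = [Fraction(0)] * ncols
            v[c] = Fraction(1)
            for pc, pr in pivots.items():
                v[pc] = -m[pr][c]
            return v
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]        # normalize pivot row
        for i in range(len(m)):                   # eliminate column c elsewhere
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots[c] = r
        r += 1
    return None

# One interpolation-matrix row per sample point: monomials 1, x, y, x^2, x*y, y^2
# evaluated at the rational circle point with parameter t.
def sample_row(t):
    t = Fraction(t)
    x = (1 - t * t) / (1 + t * t)
    y = 2 * t / (1 + t * t)
    return [Fraction(1), x, y, x * x, x * y, y * y]

M = [sample_row(t) for t in range(1, 8)]  # 7 points, 6 monomials
coeffs = nullspace_vector(M)
print(coeffs)  # proportional to the implicit equation x^2 + y^2 - 1 = 0
```

Here the predicted monomial support (all monomials of degree at most 2) happens to be exact; the abstract's toric machinery is what makes such a support prediction tight for genuinely sparse inputs.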
Mark Giesbrecht, University of Waterloo
Quasideterminants, Degree Bounds and Algorithms for Matrices of Differential
and Difference Polynomials
We look at the problem of computational linear algebra over rings of differential
and difference operators, as captured by the Ore polynomials. Many algorithms
for computing normal forms of matrices of Ore polynomials have been proposed
over the past few years. Examples of such computations include the Hermite
(triangular) form, the Jacobson (diagonal) form and the Popov form. While
some of these new algorithms are quite effective in practice, the complexity
of most of them has not been established.
Our goal has been to develop provably polynomial-time algorithms for these
problems, and to develop tools by which to analyze other algorithms. We will
outline algorithms for the Hermite and Jacobson form which require time polynomial
in the dimension, degree and coefficient-size of the input matrices. One aspect
which has made the problem for Ore polynomials more difficult than the analogous
problems for commutative polynomials has been the lack of the usual determinantal
theory, and basic theorems such as Hadamard's bound, Cramer's rule and Sylvester's
identity. We instead apply the quasideterminantal theory of Gelfand and Retakh
to Ore polynomials and establish tight degree bounds on these determinant-like
objects.
This work is in collaboration with Albert Heinle and Myung Sub Kim.
Joseph Haraldson, University of Waterloo
Computing Approximate GCRDs of Differential Operators
We generalize the approximate greatest common divisor problem to the non-commutative,
approximate Greatest Common Right Divisor (GCRD) problem of differential polynomials.
Sufficient conditions for the existence of an approximate GCRD are provided, and
counterexamples are given when these conditions are not satisfied. When an
approximate GCRD exists we show that the problem is locally well-posed. In
particular, with a suitable initial guess we show that post-refinement Newton
iteration will converge quadratically to a locally unique optimal solution.
Didier Henrion, LAAS-CNRS, Czech Technical
University in Prague
SPECTRA: solving exactly linear matrix inequalities
The set of real points such that a symmetric pencil is positive semidefinite
is a convex semi-algebraic set called spectrahedron, described by a linear
matrix inequality (LMI). We describe our Maple package SPECTRA that computes
an exact algebraic representation of at least one point in a given spectrahedron,
or decides that it is empty, up to genericity assumptions on the rational
input matrices describing the pencil. The algorithm does not assume the existence
of an interior point, and the computed point minimizes the rank of the pencil
on the spectrahedron. The degree of the algebraic representation of the point
coincides experimentally with the algebraic degree of a generic semidefinite
program associated to the pencil. We provide explicit bounds for the complexity
of our algorithm, which is polynomial in the number of variables when the
size of the pencil is fixed. Our experiments show significant improvements
with respect to state-of-the-art computer algebra algorithms. This is joint
work with Simone Naldi and Mohab Safey El Din.
Ilias Kotsireas, Wilfrid Laurier University
Birds and Frogs: polynomial and matrix questions from design theory
Design Theory is a rich source of very hard problems that can be formulated
as problems involving polynomials with coefficients restricted to {-1,+1}
or {-1,0,+1}, but also as problems involving matrices with elements from {-1,+1}
or {-1,0,+1}. We shall describe some of these problems and their reformulations
as polynomial and matrix questions. The notion that Symbolic Computation methods
can be used with profit to tackle such problems is still in its infancy.
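For a flavor of the matrix side (our own illustrative sketch, not the speaker's material): a Hadamard matrix is an n x n matrix with entries in {-1,+1} whose rows are pairwise orthogonal, i.e. H H^T = nI, and both the property and Sylvester's doubling construction fit in a few lines.

```python
def is_hadamard(H):
    """True iff H is a square {-1,+1} matrix with H * H^T = n * I."""
    n = len(H)
    if any(len(row) != n or any(v not in (-1, 1) for v in row) for row in H):
        return False
    return all(sum(H[i][k] * H[j][k] for k in range(n)) == (n if i == j else 0)
               for i in range(n) for j in range(n))

def sylvester(H):
    """Sylvester doubling: if H is Hadamard of order n,
    then [[H, H], [H, -H]] is Hadamard of order 2n."""
    return ([row + row for row in H] +
            [row + [-v for v in row] for row in H])

H2 = [[1, 1], [1, -1]]
H4 = sylvester(H2)
print(is_hadamard(H4))  # True
```

The hard design-theory questions concern orders where no such recursive construction applies, which is where the polynomial reformulations mentioned above come in.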
George Labahn, University of Waterloo
Order bases, kernel bases and their use in fast polynomial matrix arithmetic
In this talk we discuss order (or sigma) bases and kernel bases of polynomial
matrices and illustrate their role in fast algorithms for arithmetic for polynomial
matrices. In particular we focus on the use of kernel bases for fast inversion,
triangulation, computing determinants and Hermite normal forms.
Francois Le Gall, The University of Tokyo
Overview of the recent progress on matrix multiplication
Until a few years ago, the fastest algorithm for matrix multiplication was
the "Coppersmith-Winograd algorithm" designed in 1987. In 2010,
Andrew Stothers gave an improvement to the algorithm, the first in 23 years.
Further improvements have then been obtained in 2012 and 2014. In this talk
I will describe the long history of work on the complexity of matrix multiplication,
and discuss these very recent improvements.
Nan Li, Tianjin University
Verified Solutions of Polynomial Systems
Current numerical algorithms such as homotopy continuation can compute approximate
solutions of polynomial systems with dozens of variables and thousands of
solutions, but the lack of certified outputs is a common drawback, shared with
other numerical methods, that restricts their use in some applications.
Implementations such as alphaCertified by Hauenstein and Sottile and INTLAB
by Rump can verify the existence and uniqueness of regular solutions. However,
it is an ill-posed problem to certify whether a polynomial system has an
isolated singular solution, since arbitrarily small perturbations of the
coefficients may change the answer from yes to no (an isolated singular
solution may change into a cluster of solutions). In this talk, we will
present a symbolic-numeric algorithm for
computing verified error bounds with the property that a slightly perturbed
system is proved to have an isolated singular solution within the computed
bounds. This is a joint work with Lihong Zhi.
Daniel Lichtblau, Wolfram Research
Issues in Blind Deconvolution via Computer Algebra (A Tale from the Dark
Side)
In working with images, particularly from astronomy, medical imaging, and
photographs involving motion or camera shake, it is not uncommon to have blurs
or related out-of-focus problems. Over the past 25 or so years a number of
methods have been proposed for deblurring such images. As these undesirable
aspects can often be represented as convolutions, this is known as deconvolution.
When, as is frequently the case, the convolution (blurring) matrix is unknown,
we are faced with the problem of "blind deconvolution". That is,
in order to reconstruct the unblurred image we also need to find the blur
matrix.
Around the late 1980's a method was proposed that recasts this as a problem
of computing approximate polynomial GCDs. This approach has in recent years
been refined to the point where it is both quite efficient and moderately
robust to the presence of noise. Indeed, a considerable part of this work is
by people at this workshop.
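In one dimension the recast is easy to sketch (a simplified, exact-arithmetic illustration of the idea only, not the speaker's code): convolution of sequences is multiplication of their generating polynomials, so two signals blurred by the same kernel share that kernel as a common polynomial factor, recoverable by a GCD. The real problem requires an approximate GCD, since image data are noisy.

```python
from fractions import Fraction

def polymul(f, g):
    """Product of polynomials as descending coefficient lists
    (equivalently: convolution of the two sequences)."""
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def polymod(f, g):
    """Remainder of f divided by g (g must be nonzero)."""
    f = [Fraction(x) for x in f]
    g = [Fraction(x) for x in g]
    while len(f) >= len(g):
        q = f[0] / g[0]
        pad = g + [Fraction(0)] * (len(f) - len(g))
        f = [a - q * b for a, b in zip(f, pad)][1:]  # leading term cancels
    while len(f) > 1 and f[0] == 0:                  # strip leading zeros
        f = f[1:]
    return f or [Fraction(0)]

def polygcd(f, g):
    """Monic GCD by the Euclidean algorithm over the rationals."""
    while any(v != 0 for v in g):
        f, g = g, polymod(f, g)
    f = [Fraction(x) for x in f]
    return [c / f[0] for c in f]

blur = [1, 1]                   # shared (unknown) blur kernel
b1 = polymul([1, 2, 3], blur)   # two signals blurred by the same kernel
b2 = polymul([2, 0, 1], blur)
print(polygcd(b1, b2))          # recovers the kernel up to scale (monic x + 1)
```

In 2-D the signals become bivariate polynomials and the blurs sit near image boundaries, which is exactly where the "edge effects" discussed in the talk enter.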
In this talk I will show the issues I ran into when trying to work with this
method. Loosely speaking, these obstacles arise from what can be called edge
effects. I will explain why the problem might be more difficult than had been
thought, and outline approaches I have tried in order to get past these issues.
Alas, I have no definitive methods; the hope is to stimulate further work
on the problem.
Wen-shin Lee, University of Antwerp
How sub-sampling leads to more robustness and higher resolution in digital
signal processing
Reconstructing a digital signal from its measurements is an inverse problem
that can be extremely ill-posed. Meanwhile, sampling a signal uniformly below
the Shannon-Nyquist rate leads to aliasing, an unwanted effect causing different
signals to become indistinguishable.
We develop a parametric method to retrieve fine-scale information from coarse-scale
measurements. By combining symbolic and numerical tools, we exploit, rather
than avoid, aliasing to regularize the problem, hence to increase the frequency
resolution or to reduce the number of required measurements.
Our development is motivated by some difficulties encountered in magnetic
resonance spectroscopy (MRS) and magnetic resonance imaging (MRI). Both MRS
and MRI are based on nuclear magnetic resonance and have found wide applications
in medicine, chemistry, physics and many scientific fields. MRS seeks a
high frequency resolution from a limited amount of data collected in the
time domain. It has already become a major tool to study metabolic changes
in brain tumors, Alzheimer’s disease, as well as the metabolism of other
organs. MRS has been playing an ever more important role in other applications
such as drug development and explosive detection. In MRI, one seeks to obtain
a high-resolution image in the spatial domain from fewer scans in the k-space:
in medical diagnosis, fewer scans translate into shorter measuring
time and better-quality data, and thus into economic benefits.
Joint work with Annie Cuyt.
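The aliasing effect being exploited is easy to demonstrate (our own sketch, independent of the talk): when sampling at rate fs, a tone at frequency f and a tone at f + fs produce identical samples.

```python
import math

def sample(freq, rate, n):
    """n uniform samples of cos(2*pi*freq*t) taken at sampling rate `rate`."""
    return [math.cos(2 * math.pi * freq * k / rate) for k in range(n)]

fs = 8.0                          # sampling rate
low = sample(3.0, fs, 16)         # a 3 Hz tone
high = sample(3.0 + fs, fs, 16)   # an 11 Hz tone: aliases onto 3 Hz
print(max(abs(a - b) for a, b in zip(low, high)))  # ~0 (rounding only)
```

Uniform sampling cannot tell the two tones apart; the method described in the talk exploits this ambiguity rather than avoiding it.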
Anton Leykin, Georgia Institute of Technology
Toward a numerical description of a complex affine scheme
Numerical algebraic geometry provides methodology to describe a complex
affine variety defined by a set of polynomials with approximate numerical
data. One may ask: can similar machinery be used to describe scheme structure?
Can one find all components? Multiplicities?
Algorithms for {\em numerical irreducible decomposition} determine only isolated
components of the underlying scheme. The {\em numerical primary decomposition}
is able to output a set of {\em suspect} components that contains the embedded
components and the so-called pseudo-components. A major hurdle to understanding
whether numerical methods are applicable to schemes was: How to test numerically
whether a suspect is an embedded component?
Our joint work with Robert Krone gives an algorithm to answer this question.
Gary Miller, Carnegie Mellon University
Solving Large Optimization Problems using Spectral Graph Theory
Spectral Graph Theory is the interplay between linear algebra and combinatorial
graph theory. One application of this interplay is a nearly linear time solver
for symmetric diagonally dominant (SDD) systems. This seemingly restrictive
class of systems has received substantial interest in the last 15 years, and
both algorithm design theory and practical implementations have made substantial
progress. There is also a growing number of problems that can be efficiently
solved using SDD solvers, including image segmentation, image denoising, finding
solutions to elliptic equations, computing maximum flow in a graph, graph
sparsification, and graphics. All of these examples can be viewed as special
cases of convex optimization problems.
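The central object can be made concrete in a few lines (our own illustration): the Laplacian L = D - A of a weighted graph is symmetric and diagonally dominant by construction, so Laplacian linear systems are a special case of the SDD systems the talk discusses.

```python
def laplacian(n, edges):
    """Graph Laplacian L = D - A for an undirected graph given as
    weighted edges (u, v, w) on vertices 0..n-1."""
    L = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        L[u][u] += w          # degree (diagonal) part D
        L[v][v] += w
        L[u][v] -= w          # adjacency (off-diagonal) part -A
        L[v][u] -= w
    return L

def is_sdd(M):
    """Symmetric and diagonally dominant:
    M[i][i] >= sum of |M[i][j]| over j != i, for every row i."""
    n = len(M)
    return (all(M[i][j] == M[j][i] for i in range(n) for j in range(n)) and
            all(M[i][i] >= sum(abs(M[i][j]) for j in range(n) if j != i)
                for i in range(n)))

L = laplacian(3, [(0, 1, 1.0), (1, 2, 2.0)])  # weighted path 0-1-2
print(is_sdd(L))  # True
```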
Vincent Neiger, University of Waterloo
Fast computation of minimal interpolation bases with arbitrary weights
Recently, [Zhou - Labahn, 2012] gave an algorithm for Hermite-Padé
approximation that can efficiently compute a polynomial matrix whose rows
form a basis of all solutions and whose degrees are minimal for some prescribed
weights; for efficiency, they require an assumption on the weights. In this
talk, we propose an algorithm which is similarly efficient, makes no assumption
on the weights, and deals with the more general problem of computing minimal
interpolation bases. These represent bases of solutions for the rational
interpolation problem in [Beckermann - Labahn, 2000], which encompasses for
example Hermite-Padé approximation, M-Padé approximation, and the bivariate
interpolation step in the Guruswami-Sudan algorithm.
Joint work with Claude-Pierre Jeannerod, Éric Schost, and Gilles Villard.
Riccardo Murri, University of Zurich
The "Rheinfall" parallel algorithm for Gaussian Elimination
The "Rheinfall" algorithm is a way of re-arranging the operations
in Gaussian Elimination, which has a natural parallel and distributed-memory
formulation but degrades gracefully to sequential execution. It is suitable
for elimination of general sparse matrices when exact arithmetic computations
are used.
I will introduce the algorithm and the motivation for its development, and
present some performance results over selected matrices of the SIMC collection,
both for the sequential and the parallel execution modes.
Clément Pernet, École normale supérieure
de Lyon
High performance dense linear algebra: arithmetic, algorithms and their
parallelization
Exact linear algebra is a core component of high performance computer algebra,
used in solving large scale problems arising from applications like cryptanalysis
or computational mathematics. Despite the diversity of coefficient domains,
word-size finite fields play a significant role as they deliver the
best computation throughput. We will explain why and explore how efficient
kernels for exact linear algebra over finite fields of various sizes can be
constructed. In this process we will show why floating-point arithmetic
is the best way to do such exact computations, looking at the evolution of CPU
architectures over the last 10 years. After reviewing recent advances
in the algorithmics of computing echelon forms and rank profiles, we will then
describe the similarities and specificities between exact and numerical linear
algebra when it comes to parallelizing Gaussian elimination routines.
Daniel Roche, U.S. Naval Academy
Sparse floats
We introduce the concept of a sparse floating point number. Put simply,
this is a sum of single-precision floats used to represent a real number to
arbitrary precision. In fact, representing a number as an unevaluated sum
is not an entirely new concept, but we hope to carry advances in efficient
supersparse polynomial arithmetic over to the floating-point domain. For simple
arithmetic operations, the sparse float representation has the distinct advantage
of representing the result exactly, without any possibility for underflow
or overflow, and with only polynomial-size blowup in the representation size.
Hence there is the potential to remove the need for guesswork in choosing
a precision for a given calculation, and for improved computational efficiency
in some situations. We will look at some preliminary results, perspectives,
and limitations of this approach.
Mohab Safey El Din, University Pierre & Marie
Curie
Bit complexity for quadratic minimization problems: the generic case
Polynomial optimization arises in many areas and lies at the interface of
computer science and applied mathematics. While it is in general NP-hard,
identifying non-trivial families of optimization problems that can be solved
in polynomial time is of prime importance.
We focus on polynomial minimization of a linear map (e.g. projection on
the first coordinate) restricted to a real set defined by quadratic equations
with integer coefficients; we let $n$ be the number of variables, $p$ be the
number of equations and $\tau$ be a bound on the bit size of the integers
in the input. Since Barvinok's seminal work, it is well-known that the complexity
of solving such a problem is polynomial in $n$ when $p$ is fixed. Barvinok
obtained as a bit complexity bound $\tau n^{O(p^2)}$. This has been improved
later on by Grigoriev and Pasechnik to $\tau n^{O(p)}$ but the complexity
constant in the exponent was still unknown.
Under some genericity assumptions that are naturally satisfied in applications,
we design a probabilistic algorithm that uses $\tilde{O}\left(p(\tau+n)\,
n^5\, 2^{2p} \binom{n-1}{p-1}^2\right)$ boolean operations for solving the
input problem. We will discuss some consequences of this result.
Joint work with I. Bomze (Univ. Wien) and E. Schost (Univ. of Waterloo)
Michael Sagraloff, Max Planck Institute for
Informatics
On Recent Progress in Polynomial Root Finding
The computation of the roots of a univariate polynomial is one of the best
studied problems in the areas of computer algebra and numerical analysis,
nevertheless novel algorithms are presented each year. However, until recently,
there has still been a large discrepancy between methods that are considered
to be efficient in practice and those that achieve good theoretical bounds.
In this talk, we will review some recently developed algorithms for real and
complex root finding, which are simple and practical and whose worst-case
bit complexity matches that of the best methods in theory.
For finding the real roots of a polynomial with arbitrary real coefficients,
we consider a subdivision method, denoted by ANewDsc, that combines Descartes'
Rule of Signs and Newton iteration. It exclusively uses approximate arithmetic
and achieves quadratic convergence in almost all iterations. Its worst-case
bit complexity matches those of the best methods available. However, its actual
cost scales with the hardness of the specific instance, that is, it depends
on parameters such as the pairwise distances or the absolute values of the
roots. Practical efficiency is confirmed by a recent implementation based
on RS, the default root finding method integrated in MAPLE.
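The Descartes test at the core of such subdivision solvers is elementary (our own sketch): the number of sign variations in the coefficient sequence is an upper bound on the number of positive real roots, and agrees with it modulo 2.

```python
def sign_variations(coeffs):
    """Sign changes in a coefficient sequence (zeros skipped).
    Descartes' rule of signs: the number of positive real roots is
    at most this count, and has the same parity."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^3 - x = x(x - 1)(x + 1) has exactly one positive root:
print(sign_variations([1, 0, -1, 0]))  # 1
```

Applying this test to coefficient sequences transformed to map subintervals onto the positive axis, interleaved with Newton steps, is the combination the abstract describes.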
The above approach partially carries over to complex root finding, however,
for testing a region for roots of a polynomial, a novel method has been developed.
It combines a test based on Pellet's theorem with Graeffe iteration to yield
an almost optimal method for computing the number of roots in some disk. We
further use Weyl's classical quadtree construction and Newton iteration. The
resulting complex root finder, denoted CIsolate, uses an almost optimal
number of iterations to isolate all complex roots in a given box. It further
achieves complexity bounds that are comparable to the bounds achieved by ANewDsc.
Also, similar to ANewDsc, the algorithm is simple with the potential of being
practical.
Finally, we review a polynomial-time algorithm to compute approximations
of all real roots of a sparse polynomial with arbitrary real coefficients.
The method combines ideas from ANewDsc and CIsolate with a classical approach
using fractional derivatives. For very sparse polynomials with integer
coefficients, the method performs near-optimally.
Ziad Sultan, LJK/IMAG
Adaptive and generic parallel computation of echelon forms
This work deals with parallelization of Gaussian elimination over a finite
field Z/pZ. This computation is motivated by numerous applications, including
algebraic attacks with polynomial system solving, index calculus attacks on
the discrete logarithm, etc.
We propose efficient parallel algorithms and implementations of LU factorization
over word size prime fields and target shared memory architectures. With respect
to numerical linear algebra, we have
identified three major differences, specific to exact linear algebra.
First, the arithmetic complexity could be dominated by modular reductions.
Therefore, it is mandatory to delay as much as possible these reductions.
Delaying the latter can be done by accumulating several multiplications before
reducing while keeping the result exact. This induces splitting the matrices
in blocks of large size to accumulate multiplication operations as much as
possible, without overflow.
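In plain Python the delayed-reduction idea looks as follows (an illustrative simulation; the actual implementation works on machine words on top of the BLAS): with a 53-bit mantissa, up to floor(2^53 / (p-1)^2) products of residues can be accumulated exactly before a reduction mod p is required.

```python
def dot_mod_delayed(u, v, p, mantissa_bits=53):
    """Dot product of residue vectors mod p, reducing the accumulator
    only when further products could exceed the exact-integer range
    of a 2^mantissa_bits accumulator (simulated here with Python ints)."""
    block = max(1, (1 << mantissa_bits) // ((p - 1) ** 2))
    acc = 0
    for i, (a, b) in enumerate(zip(u, v), 1):
        acc += a * b
        if i % block == 0:
            acc %= p              # the (rare) delayed reduction
    return acc % p

p = 65521                          # a word-size prime
u = [i % p for i in range(1, 1001)]
v = [(3 * i) % p for i in range(1, 1001)]
assert dot_mod_delayed(u, v, p) == sum(a * b for a, b in zip(u, v)) % p
```

For this prime, millions of products fit before any reduction, which is why block sizes in the actual kernels are chosen as large as the fast multiplication variants allow.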
Second, over a finite field stability is not an issue contrarily to numerical
linear algebra. Therefore, asymptotically fast matrix multiplication algorithms,
like Winograd's variant of Strassen's fast algorithm, can more easily be used
on top of the BLAS. Their speed-up increases with matrix dimension. Thus,
on one hand we need to choose sufficiently large blocks well suited to those
fast variants and to delay modular reductions. On the other hand, a finer grain
allows more flexibility to the scheduler for load and communication balancing.
Third, many applications over finite fields require the rank profile of the
matrix, a key invariant of exact Gaussian elimination used, for instance, in
Gröbner basis computations and
computational number theory. Moreover, the rank profile is only discovered
during the algorithm and thus the rank deficiency of some blocks leads to
unpredictable dimensions of the tiles or slabs of the matrix. This creates
challenges on the efficiency of the underlying scheduler to handle and balance
dynamic workloads of heterogeneous tasks.
We thus propose and compare different parallelization strategies and block
decomposition to tackle these issues: tile iterative with left-looking, right-looking
and Crout variants with delayed modular reductions; slab and tile recursive
variants; dynamic/static block cutting and adaptive coupling of the different
variants. Experiments demonstrate that the tile recursive variant performs
better and matches the performance of reference numerical software when no
rank deficiency occurs. Furthermore, even in the most heterogeneous case, namely
when all pivot blocks are rank deficient, we show that it is possible to maintain
a high efficiency.
Daniel Steffy, Oakland University
Symbolic-numeric computation of cutting planes for integer programming
Cutting planes are valid inequalities that can be generated to strengthen
the linear programming relaxation of integer programming models. Different
classes of cutting planes, algorithms to generate them, and their theoretical
and practical effectiveness, have been widely studied in the literature. State-of-the-art
computational methods for solving integer programs typically employ cutting
planes, in combination with the branch-and-bound algorithm, preprocessing
and other techniques. In this talk we will survey some well known techniques
for generating cutting planes, highlighting how the use of numerical computation
can lead to unreliable results. We will then show how the computation of
some specific classes of cutting planes can benefit from using a combination
of exact and numerical computation.
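As a toy illustration of one such class (not taken from the talk), a Gomory fractional cut can be read off a simplex-tableau row; computing the fractional parts in exact rational arithmetic sidesteps the floating-point rounding pitfalls the abstract alludes to, where a nearly integral right-hand side can yield an invalid or useless cut.

```python
# Toy Gomory fractional cut from a simplex tableau row, in exact rationals.
# Illustrative only; not the methods surveyed in the talk.
from fractions import Fraction
from math import floor

def gomory_cut(row, rhs):
    """Given a tableau row  x_B + sum_j row[j]*x_j = rhs  with rhs
    fractional, return the cut  sum_j frac(row[j])*x_j >= frac(rhs),
    where frac(t) = t - floor(t)."""
    frac = lambda t: t - floor(t)
    return [frac(a) for a in row], frac(rhs)

# Exact rational tableau row (hypothetical example data):
row = [Fraction(7, 3), Fraction(-1, 3)]
rhs = Fraction(10, 3)
coeffs, b = gomory_cut(row, rhs)
# coeffs == [1/3, 2/3], b == 1/3:  the cut  x1/3 + 2*x2/3 >= 1/3
```

In floating point, frac() of a value that should be exactly integral returns a tiny nonzero residue, and a cut built from it can cut off feasible integer points; exact arithmetic makes the fractional parts unambiguous.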
Justin Thaler, Yahoo! Labs
A Crash Course on Fast Interactive Proofs
Protocols for verifiable computation enable a computationally weak verifier
to offload computations to a powerful but untrusted prover, while providing
the verifier with a guarantee that the prover performed the computations correctly.
Asymptotically efficient protocols for verifiable computation have been known
for several decades in the form of interactive proofs, PCPs, and their brethren.
However, it is only very recently that these protocols have grown efficient
enough for plausible use in the real world.
In this talk, I will give a crash course on interactive proofs and the algorithmic
techniques underlying their efficient implementation. I will focus in particular
on an interactive proof for matrix multiplication, with costs that are optimal
up to low-order terms.
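A simple non-interactive ancestor of such verification protocols is Freivalds' check, which tests a claimed product C = A·B in O(n²) time per trial by multiplying both sides by a random vector. This sketch is only illustrative of the flavor of verifiable computation; it is not the optimal interactive proof described in the talk.

```python
# Freivalds' probabilistic check that C == A*B, in O(n^2) per trial.
import random

def freivalds_check(A, B, C, trials=20):
    """Pick a random 0/1 vector r and compare A*(B*r) with C*r.
    A wrong C is accepted with probability at most 2**-trials."""
    n = len(A)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n))
                           for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False                 # caught an incorrect product
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]                 # correct product
assert freivalds_check(A, B, C)
bad = [[19, 22], [43, 51]]               # single wrong entry
```

The verifier never forms A·B itself; with 64 trials the chance of accepting `bad` is at most 2^-64, which is the same soundness flavor, though not the same cost profile, as the interactive protocols in the talk.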
Zhengfeng Yang, East China Normal University
Error-correcting sparse interpolation of multivariate functions
In this talk, we will consider the problems of error-correcting sparse interpolation
of multivariate functions, and sparse interpolation in arbitrary orthogonal
bases. 1. Error-correcting decoding is generalized to multivariate sparse
rational function recovery from evaluations that can be numerically inaccurate
and where several evaluations can have severe errors ("outliers").
Here we will provide an algorithm that can interpolate a sparse multivariate
rational function from evaluations where the error rate is $1/q$ for any $q>2$.
2. Sparse interpolation with arbitrary orthogonal bases can be regarded as
a generalization of sparse interpolation with the Chebyshev basis (of the first
kind). Here we will present new algorithms for interpolating a univariate
black-box polynomial that has a sparse representation by allowing arbitrary
orthogonal bases. This is joint work with Erich L. Kaltofen.
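The error-free core that such algorithms build on can be sketched with a toy Ben-Or/Tiwari-style interpolation of a two-term sparse polynomial: evaluate at powers of 2, find the minimal polynomial of the resulting linear recurrence, and read the exponents off its roots. This is an illustrative sketch only; the talk's algorithms additionally handle numerical noise, outliers, and general orthogonal bases.

```python
# Toy Ben-Or/Tiwari-style sparse interpolation for exactly t = 2 terms,
# exact and error-free.  Illustrative sketch, not the talk's algorithm.
from fractions import Fraction

def sparse_interpolate_2term(f, max_deg=64):
    """Recover f(x) = c1*x^e1 + c2*x^e2 from 4 evaluations at powers of 2."""
    a = [f(2**i) for i in range(4)]        # a_i = sum_j c_j * (2**e_j)**i
    # Solve the Hankel system [[a0,a1],[a1,a2]] [l0,l1]^T = [a2,a3]^T.
    det = Fraction(a[0] * a[2] - a[1] * a[1])
    l0 = (a[2] * a[2] - a[1] * a[3]) / det
    l1 = (a[0] * a[3] - a[1] * a[2]) / det
    # Roots of z^2 - l1*z - l0 are 2**e1 and 2**e2: search powers of 2.
    exps = [e for e in range(max_deg + 1)
            if Fraction(2**e) ** 2 - l1 * 2**e - l0 == 0]
    e1, e2 = exps
    # Recover coefficients from the 2x2 Vandermonde system
    # c1 + c2 = a0,  c1*2**e1 + c2*2**e2 = a1.
    b1, b2 = 2**e1, 2**e2
    c2 = Fraction(a[1] - b1 * a[0], b2 - b1)
    c1 = a[0] - c2
    return {(e1, c1), (e2, c2)}

f = lambda x: 5 * x**3 + 2 * x**7
# recovers {(3, 5), (7, 2)}
```

Error correction enters when some of the a_i are wrong: the minimal-polynomial step is then replaced by decoding techniques that locate and discard the corrupted evaluations, which is the generalization described in the abstract.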
Registrants as of August 31, 2015
Full Name | Institution
Alonso, Maria Emilia | Complutense University of Madrid
Arnold, Andrew | University of Waterloo
Bard, Gregory | University of Wisconsin-Stout
Blum, Lenore | CMU
Diatta, Seny | University Assane Seck of Ziguinchor
Emiris, Ioannis | University of Athens
Goktas, Unal | Turgut Ozal University
Guo, Feng | Dalian University of Technology
Hao, Zhiwei | Academy of Mathematics & System Sciences, Chinese Academy of Sciences
Haraldson, Joseph | University of Waterloo
Henrion, Didier | CNRS
Ivan, Daniel | University of Waterloo
Kaltofen, Erich | NCSU
Krone, Robert | Queen's University
Labahn, George | University of Waterloo
Le Gall, Francois | The University of Tokyo
Lee, Wen-shin | University of Antwerp
Leykin, Anton | Georgia Tech
Li, Nan | Tianjin University
Lichtblau, Daniel | Wolfram Research
Maddah, Sumayya Suzy | XLIM
Majeed, Asia | U of Manitoba
Naldi, Simone | CNRS
Neiger, Vincent | LIP - ENS de Lyon
Niemerg, Matthew |
Pavlov, Dmitry | Institute of Applied Astronomy
Pernet, Clément | Université Grenoble-Alpes
Riener, Cordian | Aalto University
Roche, Daniel | U.S. Naval Academy
Safey El Din, Mohab | Univ. Pierre & Marie Curie
Sagraloff, Michael | Max Planck Institute for Informatics
Thaler, Justin | Yahoo Labs
Verschelde, Jan | University of Illinois at Chicago
Wang, Chu | Chinese Academy of Sciences
Yang, Zhihong | Academy of Mathematics and System Sciences, Chinese Academy of Sciences
Yang, Zhengfeng | East China Normal University
ZENG, Zhenbing | Shanghai University
Zhi, Lihong | Chinese Academy of Sciences