Numerical and Computational Challenges
in Science and Engineering Program
LECTURE
March 25, 11am, Room 210
Robert M. Freund, MIT Sloan School of Management
Two Topics on the Complexity of Convex Optimization, One Computational
and One Theoretical
We consider the following quite general convex optimization problem
format:
$$\begin{array}{lcl}
G_d: & {\rm minimize}_x & c^{T}x \\
 & \mbox{s.t.} & Ax = b \\
 & & x \in P,
\end{array}$$
\noindent where $P$ is a closed convex set, not necessarily a cone.
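As a purely illustrative instance of this format (not drawn from the lecture), the sketch below assumes the Python modeling package cvxpy is available and takes $P$ to be a Euclidean ball, a closed convex set that is not a cone; the data $A$, $b$, $c$ and the ball radius are hypothetical.
\begin{verbatim}
# Illustrative only: a small instance of G_d with P a Euclidean ball.
# The data A, b, c and the radius are hypothetical placeholders.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, -1.0, 0.5])

x = cp.Variable(3)
# P = { x : ||x||_2 <= 5 }, a closed convex set that is not a cone
constraints = [A @ x == b, cp.norm(x, 2) <= 5.0]
problem = cp.Problem(cp.Minimize(c @ x), constraints)
problem.solve()
print("optimal value:", problem.value)
print("optimal x:", x.value)
\end{verbatim}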
Linear
optimization is the special case of this format in which $P$ is
polyhedral, and a modern complexity theory for linear optimization has
been developed based on a notion of condition number $C(d)$ for the
data $d=(A,b,c)$ for $G_d$. We develop some computational experience
and test the practical relevance of this condition number theory, as
applied to problem instances that one might encounter in practice, using
the NETLIB suite of LP problem instances. We present empirical evidence
that 42% of the variation in IPM iteration counts among the NETLIB
suite problem instances is accounted for by $\log C(d)$ of the problem
instances after pre-processing.
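If one reads the 42% figure as the coefficient of determination $R^2$ of a simple linear regression of IPM iteration counts on $\log C(d)$, a minimal sketch of that computation might look as follows; the numbers below are hypothetical placeholders, not the NETLIB measurements.
\begin{verbatim}
# Illustrative only: regress IPM iteration counts on log C(d), report R^2.
# Both arrays are hypothetical placeholders, not the NETLIB data.
import numpy as np

log_Cd = np.array([3.1, 4.7, 5.2, 6.8, 7.4, 8.9, 9.5, 11.0])  # log C(d) per instance
iters  = np.array([14,  17,  16,  21,  22,  25,  24,  29])     # IPM iterations per instance

# Least-squares fit: iters ~ a + b * log C(d)
X = np.column_stack([np.ones_like(log_Cd), log_Cd])
coef, _, _, _ = np.linalg.lstsq(X, iters, rcond=None)
pred = X @ coef

# R^2: fraction of the variation in iterations explained by log C(d)
ss_res = np.sum((iters - pred) ** 2)
ss_tot = np.sum((iters - iters.mean()) ** 2)
print("R^2 =", 1.0 - ss_res / ss_tot)
\end{verbatim}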
Our theoretical work concerns complexity bounds for convex optimization
obtained from geometry-based measures rather than from algebraic or
data-based measures. We bound the complexity of computing
an almost-optimal solution of $G_d$ in terms of natural geometry-based
measures of the feasible region and the level set of almost-optimal
solutions, relative to a given {\em reference point} $x^r$ that might
be close to the feasible region and/or the almost-optimal level set.
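To give one concrete picture of what such quantities can look like (a generic illustration; the symbols $d^r$, $R_{\varepsilon}$, $F$, and $z^{*}$ below are assumptions for exposition, not necessarily the specific measures used in this work), relative to the reference point $x^r$ one may consider
$$d^r := \inf_{x \in F} \|x - x^r\| \qquad \mbox{and} \qquad
R_{\varepsilon} := \sup\left\{\|x - x^r\| : x \in F,\ c^{T}x \le z^{*} + \varepsilon\right\},$$
\noindent where $F := \{x : Ax=b,\ x \in P\}$ is the feasible region and $z^{*}$ is the optimal value of $G_d$; here $d^r$ measures how close $x^r$ is to the feasible region, and $R_{\varepsilon}$ measures the size of the $\varepsilon$-optimal level set as seen from $x^r$.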