Fields Industrial Optimization Seminar
2008-09
The inaugural meeting of the Fields Industrial Optimization Seminar
took place on November 2, 2004. This series will meet once a month,
on the first Tuesday, in the early evening. Each meeting will comprise
two related lectures on a topic in optimization; typically, one
speaker will be a university-based researcher and the other from the
private or government sector. We welcome the participation of everyone
in the academic or industrial community with an interest in optimization
-- theory or practice, expert or student.
Please subscribe to the Fields mail list
to be informed of upcoming seminars.
=================================================
UPCOMING SEMINARS
Seminars start at 5:00 pm at the Fields Institute, 222 College
Street, Room 230
June 2, 2009
Brian Denton, North Carolina State University
Optimization of Surgery Delivery Systems
Audio and Slides of the Talk
Scheduling systems are used in many industries to increase
the utilization of resources, match workload to available
capacity, and smooth the flow of customers through a service
system. They are also important for healthcare delivery where
applications include scheduling of patients to outpatient
clinics and the design of operating room schedules. In this
talk I will discuss stochastic optimization models for scheduling
surgeries at outpatient clinics and hospitals. I will consider
three related problems. The first involves setting individual
procedure start times for a single operating room (OR) given
uncertainty in the duration of procedures. The objective of
this problem is to minimize three competing criteria: patient
and OR team waiting time, OR idle time, and overtime. The
second problem involves the allocation of surgeries across
multiple ORs with the goal of balancing the fixed cost of
opening ORs with the expected cost of total overtime. The
third problem involves setting optimal arrival times for patients
to an outpatient surgical suite comprising multiple activities
including intake processes, surgery, and recovery. For each
problem I will describe the mathematical model, structural
properties, methodologies employed, and sample numerical results
based on real data to illustrate the impact of the model.
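As a toy illustration of the first problem's trade-off, the three
competing criteria can be estimated for a candidate schedule by
Monte Carlo simulation. This is a minimal sketch, not Denton's
model; the exponential durations, unit costs, and session length
are invented for illustration.

import random

def evaluate_schedule(start_times, session_length=480.0,
                      mean_duration=60.0, n_samples=1000,
                      c_wait=1.0, c_idle=1.0, c_over=1.5):
    """Average weighted cost of waiting, OR idle time, and overtime."""
    total = 0.0
    for _ in range(n_samples):
        or_free = 0.0                # when the OR finishes its previous case
        wait = idle = 0.0
        for t in start_times:
            if or_free > t:
                wait += or_free - t  # patient and team wait for the OR
            else:
                idle += t - or_free  # the OR sits empty until the start time
            begin = max(or_free, t)
            or_free = begin + random.expovariate(1.0 / mean_duration)
        over = max(0.0, or_free - session_length)
        total += c_wait * wait + c_idle * idle + c_over * over
    return total / n_samples

# Example: six cases spaced evenly across an 8-hour (480-minute) session.
print(evaluate_schedule([i * 80.0 for i in range(6)]))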
and
Franklin Dexter, Professor and Director of
Operations Research, Department of Anesthesia,
University of Iowa
Big Open (IE) Problems in Operating Room Management
Audio & Slides of the Talk
My talk will address the top three open (IE/OR/MS) problems
in operating room management from the perspective of a person
whose focus is the implementation of methods to improve anesthesia
group and hospital operating room managerial decision-making.
First, there needs to be research in the education of personnel
with an IE/OR/MS background to work in perioperative medicine.
People's hypotheses (opinions) based on analogies from other
domains have not held up when studied. Second, medical monitoring,
physician order entry, and textual entry systems need to be
used for real-time, automatic determination of conditions for
managerial decision-making. These data fusion problems rely
heavily on understanding the clinical context, which has hampered
development. Third, methods are needed
to provide individual physicians with recommendations on how
to increase their productivity, starting with simpler problems
based on perfect knowledge and then advancing to incorporate
measurement error and missing values. Complementary research
is needed on how to communicate those results to the individuals.
Franklin Dexter is Professor at the University of Iowa. As
part of his job in the Department of Anesthesia, he has performed
more than 220 consultations for more than 105 corporations.
He has published more than 250 papers in the fields of operating
room management and anesthesia. He has given more than 110
invited presentations. He is Editor of the Section
on Economics, Education, and Policy of the journal Anesthesia
& Analgesia. Twice a year, he teaches a four-day intensive
course in operating room management at the University. He
completed a Sc.B. in Applied Mathematics & Biology with
Honors from Brown University; a Master's degree & Ph.D. in
Biomedical Engineering, with specialization in biomathematics,
from Case Western Reserve University; M.D. degree from Case
Western Reserve University School of Medicine; and Anesthesiology
residency at the University of Iowa. More details, as well
as the comprehensive bibliography of scientific papers in
OR management, are at www.FranklinDexter.net
=================================================
PAST SEMINARS
April 30, 2009
David Bremner, University of New Brunswick
Optimal Separating Hyperplanes
Separating hyperplanes are important in the theory of convex
geometry and the practice of optimization. The notion of an
optimal separating hyperplane is in some sense as old as that
of linear separation itself, with early proofs of the separating
hyperplane theorem computing what in modern terms is a ``maximum
margin hyperplane''.
Finding a best separating hyperplane is a kind of generalized
linear programming problem, where each input point yields
a linear constraint, and the objective function is defined
by whatever notion of ``best'' is relevant for the application.
Naturally the theoretical and practical tractability of the
resulting optimization problem depends on what exactly this
objective function is. Even for objectives arising from one
application, radically different results are possible.
Despite the long history, there remain many interesting questions
of both a theoretical and a practical/computational nature.
In this talk I will explore in particular the notion of a
maximum margin hyperplane (i.e. pair of parallel separators
with maximum distance between them), which is intimately connected
to machine learning and pattern
recognition, and the more discrete notion of the most unbalanced
separator through a particular point, which leads to ``halfspace-depth''
or ``Tukey depth'', and applications in non-parametric statistics.
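For reference, the maximum margin hyperplane has a standard convex
quadratic programming formulation: given points x_i with labels
y_i in {-1,+1}, solve

    \min_{w,\,b} \ \tfrac{1}{2}\|w\|^2
    \quad \text{subject to} \quad y_i (w^\top x_i + b) \ge 1, \quad i = 1, \dots, m,

and the distance between the two parallel supporting hyperplanes
(the margin) is 2/\|w\|.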
and
Ken Clarkson, IBM
Coresets, Sparse Greedy Approximation,
and the Frank-Wolfe Algorithm
The problem of maximizing a concave function f(x) in a simplex
S can be solved approximately by a simple greedy algorithm.
That algorithm can find a point x_k, in k steps, so that
f(x_k) >= f(x*) - O(1/k), where f(x*) is the maximum value
of f in S. This algorithm and analysis were known before,
and have been related to problems of statistics and machine learning,
such as boosting, training support vector machines, regression,
and density mixture estimation. In other work, coming from
computational geometry, the existence of ε-coresets was shown
for the minimum enclosing ball problem, by means of a simple
greedy algorithm. These two algorithms, and some other similar
greedy algorithms, are all special cases of the classical
Frank-Wolfe algorithm. I'll tie these results together, review
some stronger convergence results, and generalize or strengthen
some coreset bounds.
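The common thread can be made concrete with a minimal sketch of
Frank-Wolfe over the standard simplex (the step size is the classical
one; the example objective is invented, not from the talk). Over the
simplex the linear subproblem reduces to a single argmax, and starting
from a vertex the iterate x_k has at most k+1 nonzero entries, which
is the sparse greedy behaviour referred to above.

import numpy as np

def frank_wolfe_simplex(grad, n, iters=200):
    """Maximize a concave f over the standard simplex via Frank-Wolfe."""
    x = np.zeros(n)
    x[0] = 1.0                   # start at a vertex of the simplex
    for k in range(iters):
        g = grad(x)
        i = int(np.argmax(g))    # linear subproblem: best simplex vertex
        gamma = 2.0 / (k + 2.0)  # classical step size; O(1/k) optimality gap
        x *= 1.0 - gamma
        x[i] += gamma            # move toward vertex e_i
    return x

# Example: maximize f(x) = -||x - c||^2, whose gradient is -2(x - c).
c = np.array([0.7, 0.2, 0.1])    # c lies in the simplex
print(frank_wolfe_simplex(lambda x: -2.0 * (x - c), 3))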
April 7, 2009
Dominique Orban, Ecole Polytechnique de Montréal
The various facets of degeneracy in smooth optimization
After two decades of intense activity in the optimization
community, interior-point methods for smooth problems have emerged
as the de facto standard for the efficient solution of large-scale
linear, convex, and nonconvex programming problems. Not only
are those methods robust and scalable, they also turn out to be
equally well suited, with little modification, to so-called
degenerate problems. In this talk, we review various aspects
of degeneracy in nonlinear programming, their origins, their
effect on the problem, the algorithm, and the linear algebra
at the core of interior-point methods, and the remedies that
can be put in place.
and
Pat Piperni, Bombardier Aerospace
Preliminary Aero-Structural Optimization of a Business
Jet
An overview of an industrial approach to the aero-structural
optimization of a large business jet is presented herein.
The optimization methodology is based on the integration of
aerodynamic and structural analysis codes that combine computational,
analytical, and semi-empirical methods, validated in an aircraft
design environment. The aerodynamics sub-space is analyzed
with a three-dimensional Transonic Small Disturbance code
capable of predicting the drag of a complete, trimmed aircraft
within engineering accuracy. The design of the wing structure is
accomplished using a quasi-analytical method that defines
the layout of the ribs and geometry of the spar webs, spar-caps
and skin-stringer panels, and predicts the wing flexural properties
and weight distribution. In addition, the prediction of operating
economics as well as the integrated en route performance is
coupled into the scheme by way of fractional change functional
transformations. In order to illustrate the automated design
system capabilities, the methodology is applied to the optimization
of a large business jet comprising winglets, rear-mounted
engines, and a T-tail configuration. The aircraft-level design optimization
goal in this instance is to minimize a cost function for a
fixed range mission assuming a constant Maximum Take-Off Weight.
March 3, 2009
Audio and Slides of Talk
Matheus Grasselli, McMaster University
Managerial flexibility in incomplete markets and systems
of RBSDEs
We model managerial flexibility between different modes of a
project as a compound real options problem. In incomplete
markets this typically leads to a utility optimization problem,
related to the indifference value of the relevant cash-flows,
combined with an optimal stopping problem, related to the
early exercise nature of the managerial decisions. I will
explain how to formulate both problems in the framework of systems
of reflected backward stochastic differential equations, use
comparison results to characterize how the exercise thresholds
vary with the model parameters, and confirm these dependencies
through numerical experiments.
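For readers unfamiliar with the tool: in its standard one-dimensional
form, a reflected BSDE keeps the solution Y above a barrier L by means
of a minimal increasing process K,

    Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s + K_T - K_t,
    \qquad Y_t \ge L_t, \qquad \int_0^T (Y_t - L_t)\,dK_t = 0,

where the last (Skorokhod) condition forces K to act only when Y
touches the barrier. Roughly speaking, in the systems considered in
the talk each operating mode has its own equation, with the barriers
encoding the value of switching to another mode.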
and
Helmut Mausser, Algorithmics Toronto
Credit Risk Optimization
Measures of a portfolio's credit risk are typically based
on extreme quantiles of the credit loss distribution. While
a simulation-based framework allows credit risk to be assessed
under realistic assumptions, the large number of samples,
or scenarios, required to obtain robust risk estimates presents
a computational challenge. In this context, we describe and
compare several optimization models that derive from a structural,
or Merton-type, model for credit risk. In particular, we consider
formulations that combine scenarios with analytic approximations
in order to reduce the volume of data required in the optimization.
This is based on joint work with Ian Iscoe, Alex Kreinin and
Oleksandr Romanko.
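The abstract does not fix a particular risk measure, but a standard
scenario-based example of the computational issue is the
Rockafellar-Uryasev formulation of conditional value-at-risk
minimization,

    \min_{x,\,\alpha} \ \alpha + \frac{1}{(1-\beta)\,S} \sum_{s=1}^{S} \max\bigl(0,\; L_s(x) - \alpha\bigr),

where L_s(x) is the portfolio loss in scenario s and \beta is the
quantile level. Each max term is linearized with an auxiliary variable,
so the model grows with the scenario count S, which is exactly why
robust estimates of extreme quantiles are computationally demanding.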
Feb. 3, 2009
Audio and Slides of Talk
Ross Mitchell, University of Calgary
Automatic Fusion of 3D Data: An Important Optimization
Task in Modern Science
Alignment of three-dimensional (3D) datasets is an important
optimization task in many scientific disciplines. For example,
it is used in geophysics for prospecting and seismic monitoring.
It is also used to visualize and compare changes in the spatial
distribution of activated genes within multi-cellular organisms.
It is an important tool for monitoring climate change, since
it aids the assessment of remotely sensed, multi-spectral,
satellite data of the earth. In medicine, alignment is needed
to correct for motion during medical imaging, and to combine
medical images from multiple individuals to create atlases
or maps of normal and abnormal anatomy, physiology or function.
And in clinical practice, automatic alignment is used to detect
changes in 3D medical images to aid the diagnosis, treatment,
and monitoring of disease.
However, widespread use of automatic 3D alignment algorithms
has been hindered by their computational costs, lengthy execution
times and difficulties in tuning and guidance. We describe
a new method that executes on commodity graphical processing
units (GPUs) and is designed to take advantage of their massive
parallelism and high memory bandwidth. Our method allows interactive,
high quality visualization of both rigid and deformable 3D
alignment of multi-contrast datasets. It has accuracy equal
to, or better than, several common techniques, yet reduces
computation time by one to two orders of magnitude.
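Schematically (the talk's specific formulation is not spelled out
here), alignment problems of this kind seek

    \hat{T} = \arg\min_{T \in \mathcal{T}} \ D\bigl(A,\; B \circ T\bigr),

where A and B are the two volumes, \mathcal{T} is the family of
admissible transforms (rigid or deformable), and D is a dissimilarity
measure such as the sum of squared differences or, for multi-contrast
data, negative mutual information.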
and
Andrew Hope, Princess Margaret Hospital Toronto
Medical Imaging and Radiation Oncology: Targeting to Cure
Medical imaging plays a crucial role in the diagnosis, treatment,
and monitoring of cancer patients and their disease. Radiation
oncologists rely on accurate imaging to plan and target cancer
therapy that is, largely, non-invasive. Advances in imaging
now allow physicians to visualize motion, integrate data from
multiple studies, and correct images for distortion during treatment.
Imaging is now an integrated part of modern radiation therapy
and is applied daily. In this overview, we will explore recent
advances in medical imaging including four-dimensional computed
tomography, cone-beam computed tomography, and the use of
multi-modality imaging for target delineation.
Dec. 2, 2008
Audio and Slides of Talk
Michel X. Goemans, MIT
Approximating Submodular Functions Everywhere
Submodular functions are a key concept in combinatorial
optimization. Algorithms that involve submodular functions
usually assume that they are given by a (value) oracle. We
consider the problem of approximating a non-negative, monotone,
submodular function f everywhere, after only poly(n) oracle
queries (where n is the size of the ground set). Our main
result is a deterministic algorithm that makes poly(n) oracle
queries and derives a function g such that, for *every* set
S, g(S) approximates f(S) within a factor a(n), where a(n)=sqrt(n+1)
for rank functions of matroids and a(n)=O(sqrt(n) log(n))
for general monotone submodular functions. Our result is based
on approximately finding a maximum volume inscribed ellipsoid
in a symmetrized polymatroid, and the analysis involves various
properties of submodular functions and polymatroids. Our algorithm
is tight up to logarithmic factors, even for rank functions
of a matroid. Our approximation result allows one to approximate
optimization problems involving submodular functions, such
as the submodular max-min fair allocation problem.
This is based on joint work with Nick Harvey, Satoru Iwata
and Vahab Mirrokni.
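For reference: a set function f on a ground set V is submodular if it
exhibits diminishing marginal returns,

    f(S \cup \{v\}) - f(S) \ \ge \ f(T \cup \{v\}) - f(T)
    \qquad \text{for all } S \subseteq T \subseteq V, \ v \notin T,

and monotone if f(S) \le f(T) whenever S \subseteq T. A value oracle
simply returns f(S) for any queried set S.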
and
Jon Lee, IBM TJ Watson Research Center
Nonlinear Combinatorial Optimization
I will discuss algorithms for solving problems of the form
max { f(Wx) : x in F }, where f is a nonlinear function, the
rows of the matrix W can be thought of as describing several
linear objectives, and F is a discrete set of feasible solutions.
The nonlinear function f can be thought of as balancing the
various linear objectives. We look at various combinatorial
settings for the choices of F. So this can be seen to fit
somewhere on the landscape between multicriteria optimization
and nonlinear combinatorial optimization.
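As a toy brute-force instance of the model (the talk is precisely
about structured choices of F for which such enumeration is avoided;
all numbers here are invented), take F to be the indicator vectors
of 3-element subsets, W two linear objectives, and f a max-min
balance of the two:

from itertools import combinations
import numpy as np

n, k = 6, 3
W = np.array([[3, 1, 4, 1, 5, 2],   # first linear objective
              [2, 7, 1, 8, 2, 8]])  # second linear objective

def f(y):
    return min(y)                   # balance the two objectives

best_val, best_x = float("-inf"), None
for subset in combinations(range(n), k):
    x = np.zeros(n)
    x[list(subset)] = 1.0           # indicator vector of the subset: x in F
    val = f(W @ x)
    if val > best_val:
        best_val, best_x = val, x

print(best_val, best_x)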
In the most general setting, the model is intractable, so
we look at cases that yield polynomial-time algorithms and
approximation schemes. Our specific results involve broad
special cases. E.g., regarding F, we consider multi-knapsacks,
matchings, matroids, and polymatroids. Regarding f, we consider
minimization of concave and convex functions, generalizations
of norms, etc. Regarding W, to get positive results, we usually
need to assume that it has a fixed number of rows and the
entries are encoded in unary. I will describe an interesting
application to fitting polynomial models to multivariate data.
Most of our algorithms were mainly designed for theoretical
efficiency. Nonetheless, there is a good argument for trying
to implement some of these methods on modern high-performance
platforms. We describe such a successful effort for the model
fitting application, using ultra-high precision solution of
linear systems on the BlueGene supercomputer.
All of this is joint work, different parts with Berstein,
Gunnels, Margulies, Maruri-Aguilar, Onn, Riccomagno, Weismantel,
Wynn.
Oct. 7, 2008
Audio and Slides of Talk
Endre Boros, Rutgers University
Quadratic binary optimization and its applications
Quadratic binary optimization (QUBO) is a natural mathematical
model for numerous practical problems. In this overview lecture
we recall the origins of this model, and some of the many
results providing efficient solutions in some cases, or efficient
bounds and persistencies in some other cases. We shall also
discuss connections to the very popular "graph-cut"
approach of computer vision, and provide extensive computational
experiments with various applications.
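As one classical instance of the model, and the source of the
graph-cut connection, the maximum cut of a graph G = (V, E) is the
QUBO

    \max_{x \in \{0,1\}^{|V|}} \ \sum_{(i,j) \in E} \bigl( x_i + x_j - 2\, x_i x_j \bigr),

since x_i + x_j - 2 x_i x_j equals 1 exactly when edge (i,j) crosses
the cut.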
and
Gabriel Tavares, Fair Isaac Corporation
Linear Programming Plus Branch-and-cut for Solving Certain
Difficult QUBO Problems
A linear optimization framework is proposed to solve the Quadratic
Unconstrained Binary Optimization (QUBO) problem. It consists
of three stages: stage (i) calculates an incumbent solution by
using QUBO heuristics (a Multi-Start Tabu-Search approach
is considered); stage (ii) calculates an optimal Basic Feasible
Solution (BFS) to the relaxation of the Linearization Model
(LM) for QUBO plus some valid cuts; and stage (iii) solves
the mixed integer program LM by using the incumbent solution
and by enhancing it with the subset of cuts binding at the
BFS. The effectiveness of the approach, implemented using
Xpress-Mosel, is shown when solving minimum 3-partition,
MAX-CUT, and nMAX-SAT problems.
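A standard linearization of this kind (the talk's exact model may
differ in its details) replaces each product x_i x_j in the objective
by a new variable y_ij, constrained so that y_ij = x_i x_j at binary
points:

    y_{ij} \le x_i, \qquad y_{ij} \le x_j, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \ge 0.

The objective then becomes linear in (x, y), and stage (ii) solves the
LP relaxation of this model together with valid cuts.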
Nov. 4, 2008
Audio and Slides of Talk
Philip E. Gill, University of California, San Diego
What's new in SQP methods?
In his 1963 PhD thesis, Wilson proposed the first sequential
quadratic programming (SQP) method for the solution of constrained
nonlinear optimization problems. In the intervening 45 years,
SQP methods have evolved into a powerful and effective class
of methods for a wide range of optimization problems. We review
some of the most prominent developments in SQP methods since
1963 and discuss the relationship of SQP methods to other
popular methods, including augmented Lagrangian methods and
interior methods.
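For orientation: at an iterate x_k of the problem
minimize f(x) subject to c(x) >= 0, a basic SQP method obtains the
step p by solving the quadratic programming subproblem

    \min_{p} \ \nabla f(x_k)^\top p + \tfrac{1}{2}\, p^\top H_k\, p
    \quad \text{subject to} \quad c(x_k) + J(x_k)\, p \ge 0,

where H_k approximates the Hessian of the Lagrangian and J(x_k) is
the Jacobian of the constraints.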
Given the scope and utility of nonlinear optimization, it
is not surprising that SQP methods are still a subject of
active research. Recent developments in methods for mixed
integer nonlinear programming and the minimization of functions
subject to differential equation constraints have led to a
heightened interest in methods that may be ``hot started''
from a good approximate solution. We discuss the role of SQP
methods in this context, with particular reference to some
recent enhancements to our large-scale SQP package SNOPT.
We end with some discussion of the challenges associated with
formulating algorithms that can exploit multicore and GPU-based
computer architectures.
and
Bohdan Kaluzny, Centre for Operational Research &
Analysis, Defence Research & Development Canada.
Combinatorial Optimization in Canadian Forces Airlift Modeling
The Canadian Forces make extensive use of strategic and
tactical airlift assets to deploy and sustain overseas forces.
Various-sized cargo aircraft are employed for missions to
deliver equipment, supplies, and passengers. Maximizing the
payload delivered per flight while maintaining a safe load
balance is of high importance. This talk presents Genetic
Annealing for Loading of Aircraft: a Heuristic Aiding Deployment
(GALAHAD). This loading module uses a genetic algorithm based
on a bin packing heuristic and a convex hull fitness function
to generate realistic estimates of the number of airlift assets
required to transport a given manifest of items. The generated
load plans are subject to load balancing constraints which
are modeled by Mixed Integer Linear Programming.
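GALAHAD's genetic encoding and fitness function are not detailed
in the abstract; as a flavour of the bin-packing layer such a loader
builds on, here is the classic first-fit-decreasing heuristic for a
one-dimensional surrogate of the loading problem (item weights
against a payload capacity; the numbers are invented).

def first_fit_decreasing(weights, capacity):
    """Assign each item to the first aircraft load with room for it."""
    loads = []                 # one list of item weights per aircraft
    for w in sorted(weights, reverse=True):
        for load in loads:
            if sum(load) + w <= capacity:
                load.append(w)
                break
        else:                  # no existing load fits: open a new aircraft
            loads.append([w])
    return loads

# Example: estimate airlift assets needed for a small manifest.
manifest = [9.5, 7.2, 6.1, 5.0, 4.4, 3.3, 2.8, 2.0]
print(len(first_fit_decreasing(manifest, capacity=19.0)), "aircraft")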