THEMATIC PROGRAMS

Robert Almgren, University of Toronto
In the trading of large portfolios, the volatility risk eliminated by rapidly completing the trade program must be balanced against the market impact costs incurred when rapid execution is demanded. In previous work by R. Almgren and N. Chriss, explicit solutions were given for the case in which per-share impact costs are linear in trading rate. In this work we consider two nonlinear extensions. First, we obtain explicit solutions in the case that market impact depends nonlinearly on trading rate, including the popular square-root law. Second, we consider the case in which rapid trading increases not only the expected value of cost but also its uncertainty; although we do not obtain fully explicit solutions, we are able to give a complete asymptotic description and extract conclusions for practical trading.

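As a point of reference for the linear-impact case solved in the earlier Almgren-Chriss work, the optimal liquidation schedule has the closed continuous-time form x(t) = X sinh(kappa (T - t)) / sinh(kappa T), with kappa^2 = lambda sigma^2 / eta, where lambda is the risk aversion, sigma the price volatility and eta the temporary impact coefficient. A minimal sketch; all parameter values below are illustrative, not taken from the talk.

    import numpy as np

    X_total = 1_000_000          # shares to liquidate
    T = 5.0                      # liquidation horizon (days)
    sigma = 0.95                 # daily price volatility ($ per share per sqrt(day))
    eta = 2.5e-6                 # temporary (linear) impact coefficient
    lam = 2.0e-6                 # risk-aversion coefficient

    kappa = np.sqrt(lam * sigma**2 / eta)   # "urgency" of the optimal schedule
    t = np.linspace(0.0, T, 11)             # trading times
    holdings = X_total * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)
    for ti, xi in zip(t, holdings):
        print(f"t = {ti:4.1f}  shares held = {xi:12.0f}")

Larger lambda or smaller eta gives a larger kappa and hence a more front-loaded liquidation schedule.
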
Leif Andersen, General Re Securities
This talk describes a practical algorithm based on Monte Carlo simulation for the pricing of multi-dimensional American (i.e., continuously exercisable) options.

Peter Carr, Courant Institute, NYU
We study the probabilistic underpinnings of the Dupire forward partial differential equation for European option values. We provide a link with the literature on time reversal of Markov processes. We derive a new result called Put-Call Reversal and provide various applications. In particular, we show how it can be used to simplify semi-static hedging.

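For reference, the Dupire forward equation that the talk builds on expresses the maturity evolution of call prices C(K, T) in terms of a local volatility sigma(K, T); with zero dividends it reads

    \frac{\partial C}{\partial T} \;=\; \tfrac{1}{2}\,\sigma^{2}(K,T)\,K^{2}\,\frac{\partial^{2} C}{\partial K^{2}} \;-\; r\,K\,\frac{\partial C}{\partial K},

so that a single surface of European call prices across strikes and maturities determines sigma(K, T).
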
Matt Davison, Department of Applied Mathematics, University of Western Ontario
We have developed a model for electricity prices which is a hybrid of the bottom-up "stack-based" pricing traditionally used in the power engineering community and the top-down, financial time-series driven model customary in quantitative finance. Our model allows engineering data on plant failure and meteorological data on power demand to be used to estimate model parameters, which is important for electricity markets, where spot price data is often limited or nonexistent. We simplify and adapt our model to optimize it for the task of pricing a simple swing option to buy electrical power. The resulting pricing model may be written in terms of mixtures of Poisson distributions. The swing option may then be priced using relatively straightforward backward recursion techniques.

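To make the backward recursion concrete, here is a minimal sketch for a swing option with up to R exercise rights (at most one per period) when the spot price follows a small discrete Markov chain. The price states, transition matrix and contract terms are illustrative placeholders, not the mixed-Poisson stack model of the talk; only the shape of the recursion is the point.

    import numpy as np

    # Spot price levels ($/MWh) and transition probabilities (placeholders).
    states = np.array([20.0, 35.0, 60.0, 120.0])
    P = np.array([[0.70, 0.20, 0.08, 0.02],
                  [0.25, 0.50, 0.20, 0.05],
                  [0.10, 0.30, 0.45, 0.15],
                  [0.05, 0.15, 0.30, 0.50]])
    K, r, dt, N, R = 40.0, 0.05, 1 / 365, 30, 5    # strike, rate, step, periods, rights
    disc = np.exp(-r * dt)

    # V[s, k] = option value in price state s with k exercise rights remaining.
    V = np.zeros((len(states), R + 1))
    for _ in range(N):
        cont = disc * P @ V                         # continuation values
        payoff = np.maximum(states - K, 0.0)[:, None]
        V = cont.copy()
        # Exercising consumes one right: compare continuation with exercise.
        V[:, 1:] = np.maximum(cont[:, 1:], payoff + cont[:, :-1])
    print(f"swing value starting at {states[1]:.0f} $/MWh: {V[1, R]:.2f}")

Each backward step compares the continuation value with immediate exercise; in the talk's setting the distributional inputs would come from the Poisson mixtures rather than this toy transition matrix.
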
Paul Glasserman, Columbia University
Joint work with Nicolas Merener
This paper develops formulas for pricing caps and swaptions in LIBOR market models with jumps. The arbitrage-free dynamics of this class of models were characterized in Glasserman and Kou (1999) in a framework allowing for very general jump processes. For computational purposes, it is convenient to model jump times as Poisson processes; however, the Poisson property is not preserved under the changes of measure commonly used to derive prices in the LIBOR market model framework. In particular, jumps cannot be Poisson under both a forward measure and the spot measure, and this complicates pricing.

To develop pricing formulas, we approximate the dynamics of a forward rate or swap rate using a scalar jump-diffusion process with time-varying parameters. We develop an exact formula for the price of an option on this jump-diffusion through explicit inversion of a Fourier transform. We then use this formula to price caps and swaptions by choosing the parameters of the scalar diffusion to approximate the arbitrage-free dynamics of the underlying forward or swap rate. We apply this method to two classes of models: one in which the jumps in all forward rates are Poisson under the spot measure, and one in which the jumps in each forward rate are Poisson under its associated forward measure. Numerical examples demonstrate the accuracy of the approximations.

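The talk's cap and swaption formulas come from Fourier inversion for a scalar jump-diffusion with time-varying parameters. As a simpler, self-contained illustration of option pricing under a jump-diffusion, the sketch below prices a European call with lognormal jumps by conditioning on the number of jumps (Merton's classical series) rather than by transform inversion; all parameters are illustrative.

    import numpy as np
    from math import factorial
    from scipy.stats import norm

    S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
    lam, mu_j, delta = 0.5, -0.1, 0.15       # jump intensity, log-jump mean and std

    def bs_call(S, K, T, r, sig):
        d1 = (np.log(S / K) + (r + 0.5 * sig**2) * T) / (sig * np.sqrt(T))
        d2 = d1 - sig * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    kbar = np.exp(mu_j + 0.5 * delta**2) - 1.0    # mean relative jump size
    lam_p = lam * (1.0 + kbar)
    price = 0.0
    for n in range(40):                            # truncate the Poisson sum
        sig_n = np.sqrt(sigma**2 + n * delta**2 / T)
        r_n = r - lam * kbar + n * np.log(1.0 + kbar) / T
        weight = np.exp(-lam_p * T) * (lam_p * T) ** n / factorial(n)
        price += weight * bs_call(S0, K, T, r_n, sig_n)
    print(f"jump-diffusion call price: {price:.4f}")
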
Adam Kolkiewicz, University of Waterloo
Computational tools in modern finance are often classified as either numerical or simulation methods. While the former provide fast and accurate answers in less complex problems, the latter offer the only viable tools for pricing instruments contingent on several assets. In the talk, we present a framework that allows combining these apparently different approaches. This is possible by introducing a smooth Monte Carlo estimator of transition density functions for stochastic differential equations. The estimator, though nonparametric, is unbiased and exhibits a rate of convergence that is typical of parametric problems. When used to approximate functionals of terminal prices, it reduces variance by a factor that depends on the "smoothness" of the density estimate. We illustrate some possible applications of the method using European- and American-style financial instruments. For the latter, we focus on methods based on regression techniques, like the one considered recently by Longstaff and Schwartz, and on the low-discrepancy mesh method (Boyle et al.).

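For a rough feel of what a Monte Carlo transition-density estimate looks like, the sketch below smooths simulated terminal values of geometric Brownian motion with a Gaussian kernel and compares the result to the exact lognormal density. A plain kernel estimate is biased and converges at a nonparametric rate, whereas the estimator of the talk is unbiased with a parametric rate, so this is shown only for contrast, not as a reconstruction of the method.

    import numpy as np
    from scipy.stats import norm, gaussian_kde

    rng = np.random.default_rng(1)
    S0, mu, sigma, t = 100.0, 0.05, 0.2, 1.0
    n = 20_000
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * Z)

    kde = gaussian_kde(ST)                    # kernel-smoothed Monte Carlo density
    x = np.linspace(50.0, 200.0, 7)
    exact = norm.pdf((np.log(x / S0) - (mu - 0.5 * sigma**2) * t)
                     / (sigma * np.sqrt(t))) / (x * sigma * np.sqrt(t))
    for xi, e, k in zip(x, exact, kde(x)):
        print(f"S_T = {xi:6.1f}   exact {e:.5f}   kernel MC {k:.5f}")
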
Yuying Li, Cornell University
Joint work with Thomas F. Coleman, Yuying Li and Cristina Patron
In an incomplete market, it is impossible to eliminate the intrinsic risk of an option that cannot be replicated. Thus it is unclear what the appropriate hedging strategies are.

David Pooley, University of Waterloo
Joint work with Peter Forsyth and Ken Vetzal
The pricing equations derived from uncertain volatility/transaction cost models in finance are often cast in the form of nonlinear partial differential equations. Implicit timestepping leads to a set of nonlinear algebraic equations which must be solved at each timestep. To solve these equations, an iterative approach is employed. In this talk, we show the convergence of a particular iterative scheme for one-factor uncertain volatility models. We also demonstrate how non-monotone discretization schemes (such as standard Crank-Nicolson timestepping) can converge to incorrect solutions, or lead to instability. Numerical examples are provided.

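As a small illustration of the nonlinearity involved, the sketch below computes the worst-case (short-position) value of a European call under a volatility band [sig_min, sig_max] using an explicit finite-difference scheme, which is monotone under the time-step restriction used. The implicit iteration analysed in the talk is not reproduced here, and all parameters are illustrative.

    import numpy as np

    sig_min, sig_max, r, T, K = 0.15, 0.25, 0.05, 1.0, 100.0
    S_max, M = 300.0, 300
    S = np.linspace(0.0, S_max, M + 1)
    dS = S[1] - S[0]
    dt = 0.8 * dS**2 / (sig_max**2 * S_max**2)    # CFL-type stability restriction
    N = int(np.ceil(T / dt))
    dt = T / N

    V = np.maximum(S - K, 0.0)                    # payoff at maturity
    for n in range(1, N + 1):
        delta = (V[2:] - V[:-2]) / (2 * dS)
        gamma = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2
        sig = np.where(gamma > 0, sig_max, sig_min)   # worst case for the seller
        V[1:-1] += dt * (0.5 * sig**2 * S[1:-1]**2 * gamma
                         + r * S[1:-1] * delta - r * V[1:-1])
        V[0] = 0.0
        V[-1] = S_max - K * np.exp(-r * n * dt)       # deep in-the-money boundary
    print(f"worst-case call value at S = 100: {np.interp(100.0, S, V):.3f}")
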
Philip Protter, Cornell University
We present a precise mathematical description of the Longstaff-Schwartz algorithm, breaking it down into two steps and proving the convergence. The second step is a Monte Carlo step, and we obtain the rate of convergence and the asymptotic normalized error. The talk is based on joint work with E. Clement and D. Lamberton.

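For readers unfamiliar with the algorithm being analysed, here is a minimal implementation of the two steps (regression of continuation values, then Monte Carlo averaging) for a Bermudan put on geometric Brownian motion; the cubic polynomial basis, the parameters and the exercise grid are illustrative choices, not those of the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    M, N = 50_000, 50                     # paths, exercise dates
    dt = T / N
    disc = np.exp(-r * dt)

    # Simulate GBM paths at times dt, 2*dt, ..., T.
    Z = rng.standard_normal((M, N))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * Z, axis=1))

    payoff = lambda s: np.maximum(K - s, 0.0)
    cash = payoff(S[:, -1])               # value along each path at maturity
    for t in range(N - 2, -1, -1):
        cash *= disc                      # discount realized value back one step
        itm = payoff(S[:, t]) > 0
        if not itm.any():
            continue
        x = S[itm, t]
        # Step 1: regress discounted continuation values on a polynomial basis.
        coeffs = np.polyfit(x, cash[itm], deg=3)
        continuation = np.polyval(coeffs, x)
        exercise = payoff(x)
        ex_now = exercise > continuation
        idx = np.where(itm)[0][ex_now]
        cash[idx] = exercise[ex_now]      # exercise where it beats continuation
    # Step 2: Monte Carlo average, discounted to time 0.
    price = disc * np.mean(cash)
    print(f"Bermudan put (Longstaff-Schwartz estimate): {price:.3f}")
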
Dan Rosen, Algorithmics
This paper introduces a general option-valuation framework for loans that provides valuation information at loan origination and supports mark-to-market analysis, portfolio credit risk, and asset and liability management for the entire portfolio. We describe, in detail, the main structures found in commercial loans and the practical assumptions required to model the state-contingent cash flows resulting from these structures. We stress the need to account properly for embedded options such as prepayment, revolvers, and grid pricing. The characteristics of the credit risk model necessary to capture the main features of the problem are described. A case study is used to address the data available in practice, calibration methodologies, and the impact of various modelling assumptions. Finally, we outline some of the computational challenges of performing portfolio mark-to-market and risk measurement and discuss various solutions. Though we focus primarily on large corporate and middle-market loans, the approach is applicable more generally to bonds and credit derivatives.

William F. Shadwick, The Finance Development Centre Limited
Joint work with B.A. Shadwick, The Finance Development Centre Limited
In spite of the growing sophistication of option pricing technology, relatively primitive approaches to the computation of prices and hedging parameters are still in widespread use. This appears to be an inevitable side effect of the growth of derivatives markets. As one practitioner noted, "The problem with the Black-Scholes formula is that it makes every idiot think he can price an option." Increasingly, the options he thinks he can price are what used to be known as exotic. One of the most common examples of this problem is the variety of ad hoc methods which have been devised to deal with volatility smile or skew. There is a straightforward extension of the original Black-Scholes-Merton 1-factor model which good engineering practice would suggest should be exhausted prior to moving to more complex remedies. However, there is no shortage of examples of trading desks which purport to be using more 'advanced' approaches, many of which prove to be self-contradictory under a discouragingly low level of scrutiny. It is not uncommon to find that traders and quants who espouse these advances are, in their view, routinely making large 'profits' buying or selling long-dated options to large sophisticated counterparties. Needless to say, these P&L effects often use mark-to-model valuation and proprietary risk exposure analysis in an essential way.

We review the maximal 'local volatility' extension of the 1-factor Black-Scholes-Merton (BSM) model and illustrate one good reason for this situation. We show that the computation of prices and sensitivities in this framework requires numerical expertise sufficient to solve the forced, variable-coefficient BSM equation, and that as a result one really does have all or nothing. The low level of the PDE solvers in common use in the finance industry then explains the prevalence and longevity of the ad hoc approaches to variable volatility. We provide some examples which show just how dangerous these approaches can be in the process of pricing or hedging even simple derivative positions.

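For concreteness, the maximal one-factor extension referred to above replaces the constant BSM volatility by a deterministic local volatility sigma(S, t), so that (with zero dividends assumed) option values V(S, t) satisfy the variable-coefficient BSM equation

    \frac{\partial V}{\partial t} + \tfrac{1}{2}\,\sigma^{2}(S,t)\,S^{2}\,\frac{\partial^{2} V}{\partial S^{2}} + r\,S\,\frac{\partial V}{\partial S} - r\,V = 0,

while the forced version referred to in the abstract carries an additional source term on the right-hand side. Except in special cases this equation must be solved numerically, which is the hurdle the abstract points to.
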
Juergen Topper, Andersen, Germany
Options on several underlyings are a common exotic product in the equity and FX derivatives market. The value of these kinds of options also depends on the correlation of the underlyings. We will present a model to compute a lower bound for the price of such an option. The model, represented by a non-linear parabolic PDE, is implemented with finite elements in order to compute accurate cross-derivatives. We will demonstrate the results with several derivatives from the European market.

Stanislav Uryasev, University of Florida
Value-at-Risk (VaR), a widely used performance measure, answers the question: what is the maximum loss with a specified confidence level?
An alternative measure of loss, with more attractive properties, is Conditional Value-at-Risk (CVaR); see [6,7,8]. CVaR coincides in many special cases with Upper CVaR, which is the conditional expectation of losses exceeding VaR (also called Mean Excess Loss and Expected Shortfall); see [7]. However, Acerbi et al. [1,2] recently redefined Expected Shortfall in a manner consistent with the CVaR definition. Acerbi et al. [1,2] proved several important mathematical results on properties of CVaR, including asymptotic convergence of sample estimates to CVaR. CVaR is a coherent measure of risk [5,7]: it is sub-additive and convex, and has other desirable mathematical properties. CVaR can be represented as a weighted average of VaR and Upper CVaR. This may seem surprising, since neither VaR nor Upper CVaR is coherent. The weights arise from the particular way that CVaR "splits the atom" of probability at the VaR value, when one exists. CVaR can be used in conjunction with VaR and is applicable to the estimation of risks with non-symmetric return-loss distributions.

Although CVaR has not yet become a standard in the finance industry, it is likely to play a major role. CVaR is able to quantify dangers beyond value-at-risk [1,6,7,8,10]. CVaR can be optimized using linear programming, which allows handling portfolios with very large numbers of instruments and scenarios. Numerical experiments indicate that for symmetric distributions the minimization of CVaR also leads to near-optimal solutions in VaR terms, because CVaR is always greater than or equal to VaR [6]. Moreover, when the return-loss distribution is normal, these two measures are equivalent [6,7], i.e., they provide the same optimal portfolio. However, for skewed distributions, VaR-optimal and CVaR-optimal portfolios may be very different [10]. As in the Markowitz mean-variance approach, CVaR can be used in return-risk analyses. For instance, we can calculate a portfolio with a specified return and minimal CVaR. Alternatively, we can constrain CVaR and find a portfolio with maximal return; see [4,7]. Also, we can specify several CVaR constraints simultaneously with various confidence levels (thereby shaping the loss distribution), which provides a flexible and powerful risk management tool.

Several case studies have shown that risk optimization with the CVaR performance function and constraints can be done for large portfolios and a large number of scenarios with relatively small computational resources. For instance, a problem with 1,000 instruments and 20,000 scenarios can be optimized on a 700 MHz PC in less than one minute using the CPLEX LP solver. A case study on the hedging of a portfolio of options using CVaR is included in [6]. Also, the CVaR minimization approach was applied to the credit risk management of a portfolio of bonds [3]. A case study on optimization of a portfolio of stocks with CVaR constraints is included in [4]. The numerical efficiency and stability of CVaR calculations are illustrated with an example of index tracking in [7]. Several related papers on probabilistic constrained optimization are included in [8].

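To illustrate the linear-programming formulation mentioned above (the Rockafellar-Uryasev representation CVaR_alpha = min over zeta of { zeta + E[(loss - zeta)^+] / (1 - alpha) }), the sketch below minimizes CVaR over long-only portfolio weights on a set of simulated return scenarios. The scenario data, asset count and constraints are placeholders, not taken from the case studies cited.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(7)
    n, J, alpha = 5, 1_000, 0.95
    R = rng.normal(0.0005, 0.01, size=(J, n))     # scenario returns (placeholder data)

    # Decision variables: x = [w_1..w_n, zeta, u_1..u_J].
    c = np.concatenate([np.zeros(n), [1.0], np.full(J, 1.0 / ((1 - alpha) * J))])

    # u_j >= loss_j - zeta with loss_j = -R_j.w  <=>  -R_j.w - zeta - u_j <= 0
    A_ub = np.hstack([-R, -np.ones((J, 1)), -np.eye(J)])
    b_ub = np.zeros(J)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(J)]).reshape(1, -1)  # sum w = 1
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)] + [(0.0, None)] * J

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    w, zeta = res.x[:n], res.x[n]
    print("optimal weights:", np.round(w, 3))
    print(f"zeta (plays the role of VaR) = {zeta:.4f}, minimized CVaR = {res.fun:.4f}")

At the optimum, zeta is an alpha-VaR of the loss distribution and the objective value is the minimized CVaR, which is what makes the problem linear in the weights and the auxiliary variables.
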
Yong Wang, RBC Financial Group, Toronto
Our presentation includes two parts. In the first part, I will discuss some problems which arise from our trading activities; in particular, exotic transactions such as variance swaps, volatility swaps, and correlation swaps. In the second part of the presentation, my colleague, Quan Zhao, will talk about risk decomposition using orthogonal arrays.

Tony Ware, University of Calgary
We consider the use of partial differential equation-based models for energy contracts. In particular, we describe a semi-Lagrangian finite-element method for solving the equations that arise in the context of one- and two-factor models. We demonstrate the application of this method to the pricing of various types of swing options, and make use of the results to explore the properties of swing contracts.

Petter Wiberg, University of Toronto
Value-at-risk is the maximum loss that a portfolio might suffer over a given holding period with a certain confidence level. In recent years, value-at-risk has become a benchmark for measuring financial risk, used by both practitioners and regulators. In this seminar, we discuss value-at-risk from a modeling and simulation perspective. We present a new efficient algorithm for computing value-at-risk and the value-at-risk gradient for portfolios of derivative securities. In particular, we discuss dimensional reduction of the model, and present some recent results on perturbation theory and applications to hedging of derivatives portfolios.

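To fix the definition used above, here is a minimal full-revaluation Monte Carlo computation of one-day value-at-risk for a toy position consisting of a single call option. The efficient algorithm, dimensional reduction and gradient computation of the talk are not reproduced here, and all parameters are illustrative.

    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, T, r, sig):
        d1 = (np.log(S / K) + (r + 0.5 * sig**2) * T) / (sig * np.sqrt(T))
        d2 = d1 - sig * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    rng = np.random.default_rng(42)
    S0, K, T, r, sigma = 100.0, 105.0, 0.5, 0.05, 0.25
    h = 1.0 / 252                                  # one-day horizon
    n_scen, alpha = 100_000, 0.99

    v0 = bs_call(S0, K, T, r, sigma)               # current value of the position
    Z = rng.standard_normal(n_scen)
    S1 = S0 * np.exp((r - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * Z)
    v1 = bs_call(S1, K, T - h, r, sigma)           # full revaluation at the horizon
    loss = v0 - v1
    var = np.quantile(loss, alpha)                 # empirical loss quantile
    print(f"{alpha:.0%} one-day VaR: {var:.3f}")
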