Markov Chain Monte Carlo Methods for Dynamic Stochastic and Robust Optimization
This talk will focus on the application of Markov Chain Monte Carlo methods to stochastic optimization models of the form:
$$\min_{x^{T+1}\in X} E\Bigl[\sum_{t=1}^T f_t(x_t,x_{t+1})\Bigr],$$
where $x^t=(x_1,\ldots,x_t)$, $X$ includes nonanticipativity constraints, and each $f_t$ is convex and extended-real-valued, so that
additional constraints can be enforced implicitly. The talk will describe conditions for asymptotic convergence of the Markov chain of a version of
particle filtering that maintains a fixed number $N$ of particles in each period. Extensions of the results will be described
for robust objectives of the form:
$$\min_{x^{T+1}\in X}[\max_{\xi_1\in\Xi_1}f_1(x_1,x_2(\xi_1),\xi_1) +\max_{\xi_2(x_2(\xi_1),\xi_1)\in\Xi_2} f_2(x_2(\xi_1),x_3(\xi^2),\xi^2)+\cdots +\max_{\xi_T(x^{T}(\xi^{T-1}),\xi^{T-1})\in\Xi_T} f_{T}(x_{T}(\xi^{T-1}),x_{T+1}(\xi^T),\xi^T)],$$
where $\xi^t=(\xi_1,\ldots,\xi_t)$ represents random observations up to period $t$ and $\Xi_t$ is a compact uncertainty set for
each period's realizations. We will give particular attention to cases with linear dynamics and to methods for identifying affine policies
from the states of the Markov chain.
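To make the fixed-$N$ particle scheme concrete, the following is a minimal sketch, not the talk's actual algorithm: each period, particles are perturbed by a random-walk proposal, reweighted by $\exp(-f)$ for a toy single-period cost $f$, and systematically resampled so the population size $N$ never changes. The cost function, noise scale, and particle count are illustrative assumptions.

```python
import numpy as np

def resample(particles, weights, rng):
    """Systematic resampling: draws N indices, keeping the count fixed at N."""
    N = len(particles)
    positions = (rng.random() + np.arange(N)) / N
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx]

def particle_stage(particles, cost, rng, noise_scale=0.1):
    """One period: perturb each particle (random-walk proposal), weight by
    exp(-cost), and resample so the population size N is unchanged."""
    proposals = particles + noise_scale * rng.standard_normal(particles.shape)
    w = np.exp(-np.array([cost(p) for p in proposals]))
    w /= w.sum()
    return resample(proposals, w, rng)

# Toy illustration (not from the talk): with cost f(x) = (x - 2)^2,
# the particle cloud should concentrate near the minimizer x = 2.
rng = np.random.default_rng(0)
particles = rng.standard_normal(200)  # N = 200, fixed across all periods
for t in range(100):
    particles = particle_stage(particles, lambda x: (x - 2.0) ** 2, rng)
```

In a multi-period setting each particle would instead carry a partial decision path $x^t$, with $f_t$ supplying the per-period weights; the resampling step is what keeps the chain's state dimension constant, which matters for the convergence conditions the talk analyzes.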