Time Consistency of Risk Averse Stochastic Programs
Time consistency in dynamic programming has been addressed by many authors in a variety of settings, going back to the original developments of Richard Bellman. The basic idea of the underlying dynamic principle is that a policy designed at the first stage, before observing realizations of the random data, should not be changed at the later stages of the decision process. This principle is rather vague, since it leaves open the choice of optimality criteria at every stage of the process, conditional on an observed realization of the random data. We discuss this from the point of view of the modern theory of risk averse stochastic programming. In particular, we discuss time consistent decision making by addressing risk measures which are recursive, nested, dynamically or time consistent.
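To fix ideas, a minimal sketch of the nested construction may be helpful; the notation below (conditional risk mappings $\rho_{t|\mathcal{F}_{t-1}}$, costs $Z_t$, states $x_t$, controls $u_t$) is generic illustrative notation and is not taken from this abstract. A nested risk measure composes one-step conditional risk mappings,
\[
\bar{\rho}(Z_1+\cdots+Z_T)
\;=\; Z_1 + \rho_{2|\mathcal{F}_1}\Big(Z_2 + \rho_{3|\mathcal{F}_2}\big(Z_3 + \cdots + \rho_{T|\mathcal{F}_{T-1}}(Z_T)\big)\Big),
\]
which leads to dynamic programming equations of the form
\[
V_t(x_t) \;=\; \inf_{u_t}\Big\{ c_t(x_t,u_t) + \rho_{t+1|\mathcal{F}_t}\big[\,V_{t+1}(x_{t+1})\,\big]\Big\},
\qquad V_{T+1}\equiv 0.
\]
When each $\rho_{t+1|\mathcal{F}_t}$ is the conditional expectation $\mathbb{E}[\,\cdot\,|\mathcal{F}_t]$, this sketch reduces to the classical risk neutral dynamic programming recursion.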