Optimal Control under Stochastic Target Constraints - Application to portfolio optimization under risk constraints
We study a class of Markovian optimal stochastic control problems in which the controlled process Z^ν is constrained to satisfy the terminal condition Z^ν(T) ∈ G ⊂ R^{d+1} P-a.s. at some fixed time horizon T > 0. When the target set is of the form G := {(x, y) ∈ R^d × R : g(x, y) ≥ 0}, with g non-decreasing in y, we provide a Hamilton-Jacobi-Bellman characterization of the associated value function. This leads to a state constraint problem in which the constraint can be expressed in terms of an auxiliary value function w that characterizes the viability set D := {(t, z) ∈ [0, T] × R^{d+1} : Z^ν_{t,z}(T) ∈ G a.s. for some admissible ν}. In contrast to standard state constraint problems, the domain D is not given a priori, and we do not need to impose conditions on its boundary: the boundary is naturally encoded in the auxiliary value function w, which is itself a viscosity solution of a non-linear parabolic PDE. Applying ideas recently developed in Bouchard, Elie and Touzi (2008), our general result also allows us to treat optimal control problems with moment constraints of the form E[g(Z^ν(T))] ≥ 0 or P[g(Z^ν(T)) ≥ 0] ≥ p.
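In illustrative notation (the objective f and the dynamics of Z^ν are not specified in the abstract, so these displays are only a sketch of the setting described above), the constrained problem and the viability set can be written as:

```latex
% Stochastic target-constrained control problem (illustrative sketch;
% f and the controlled dynamics of Z^nu are assumptions, not from the source).
V(t,z) \;=\; \sup_{\nu}\Big\{\, \mathbb{E}\!\left[f\!\big(Z^{\nu}_{t,z}(T)\big)\right]
  \;:\; Z^{\nu}_{t,z}(T) \in G \ \text{a.s.} \,\Big\},
\qquad
G := \big\{(x,y)\in\mathbb{R}^{d}\times\mathbb{R} : g(x,y)\ge 0\big\},

% Viability set characterized by the auxiliary value function w:
D := \big\{(t,z)\in[0,T]\times\mathbb{R}^{d+1}
  \;:\; Z^{\nu}_{t,z}(T)\in G \ \text{a.s. for some admissible } \nu\big\}.
```

The moment constraint P[g(Z^ν(T)) ≥ 0] ≥ p can then, in the spirit of Bouchard, Elie and Touzi (2008), be reduced to an a.s. target constraint by augmenting the state with an auxiliary controlled martingale started at p, though the precise construction is given in the body of the paper rather than the abstract.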