Exponential Expression Rates for Neural Operator Approximations to the Solution Operator of Certain FBSDEs
The numerical solution of forward-backward stochastic differential equations (FBSDEs) plays a central role in optimal control and its applications to game theory, economics, finance, and insurance. Most classical numerical and modern deep-learning schemes, however, have the disadvantage that they must be re-run every time the user specifies a new set of parameters and/or terminal conditions for an FBSDE, meaning that these methods cannot feasibly solve large families of FBSDEs. One possible solution is to consider a neural operator (NO) which ``learns to solve FBSDEs'': given, as inputs, a terminal condition and a generator of the backward process, the NO outputs the solution to the corresponding FBSDE. Though the existence of such NOs is not surprising, it is unclear whether they can be implemented using only a few parameters. We establish exponential expression rates for NO approximations of the solution operator to a broad class of fully coupled FBSDEs with random terminal time. Our result is based on new exponential approximation rates for a class of convolutional NOs, which can efficiently encode the Green's functions of the elliptic boundary-value problems associated with our FBSDEs. Joint work with Takashi Furuya.
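The role a Green's function plays as the kernel of a solution operator can be illustrated with a minimal numerical sketch (illustrative only, not from the talk): for the one-dimensional Dirichlet problem -u''(x) = f(x) on [0,1] with u(0) = u(1) = 0, the Green's function is G(x, y) = min(x, y)(1 - max(x, y)), and applying the discretized integral operator with kernel G to a source term f recovers the solution u. A convolutional NO of the kind referenced above would learn such a kernel rather than receive it in closed form.

```python
import numpy as np

# Green's function of -u''(x) = f(x) on [0,1] with u(0) = u(1) = 0:
#   G(x, y) = min(x, y) * (1 - max(x, y)).
n = 200
x = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior grid points
h = x[1] - x[0]                          # uniform mesh width

# Assemble the kernel matrix G[i, j] = G(x_i, x_j).
X, Y = np.meshgrid(x, x, indexing="ij")
G = np.minimum(X, Y) * (1.0 - np.maximum(X, Y))

# Source term f(x) = pi^2 sin(pi x), whose exact solution is u(x) = sin(pi x).
f = np.pi**2 * np.sin(np.pi * x)

# Apply the solution operator: u(x_i) ~ sum_j G(x_i, x_j) f(x_j) * h.
u = (G @ f) * h
u_exact = np.sin(np.pi * x)

# The quadrature error vanishes as the grid is refined.
print(np.max(np.abs(u - u_exact)))
```

The matrix-vector product above is exactly a discretized integral operator; approximating such kernels with structured (e.g. convolutional) layers is what allows a NO to represent the solution map with few parameters.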