Inverting generative models and applications to inverse problems
Obtaining accurate signal models is a fundamental problem when solving inverse problems. This is because the typical inverse problem, be it denoising, inpainting, compressed sensing, phase retrieval, or medical or seismic imaging, is ill-posed. To find an accurate solution tractably, we need a "good" model of the underlying signal class that can be "imposed" on the solution. Well-known, now classical, examples of such models include sparsity with respect to a known basis (compressed sensing), small total variation (image denoising), and low rank (matrix and tensor completion). The Bayesian framework provides an alternative approach: signals in a particular class are modeled as realizations of a random vector, and the objective is to model the (prior) distribution of this random vector and use it to identify the posterior distribution, which leads, for example, to the maximum a posteriori (MAP) estimator. Recent advances in training deep neural networks have shown that various signal classes can be modeled using generative models: generator networks that map a low-dimensional latent space to the high-dimensional signal space and provide a straightforward way of sampling the signal space according to the prior distribution. Such models have been successfully incorporated into the Bayesian framework and used to empirically solve certain inverse problems involving, e.g., handwritten digits and faces.
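For concreteness, a standard MAP-style formulation under a generative prior, a common setup in this line of work rather than necessarily the exact formulation of the talk, reads:

```latex
% MAP estimate under a generative prior x = G(z), z ~ N(0, I),
% with linear measurements y = A x + Gaussian noise (assumed setup):
\hat{z} \in \arg\min_{z} \; \tfrac{1}{2}\,\| A\,G(z) - y \|_2^2
  \;+\; \tfrac{\lambda}{2}\,\| z \|_2^2,
\qquad \hat{x} = G(\hat{z}).
% Here \lambda balances data fidelity against the Gaussian latent prior.
```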
In this talk, we present an algorithm, which we call PLUGIn (Partially Linearized Update for Generative Inversion), for recovering an unknown latent (code) vector from (noisy) observations of a signal under a known generative model, with provable guarantees. Furthermore, while requiring overall expansivity, our convergence guarantees hold for networks with some contractive layers. We will also show that PLUGIn is versatile and can be adapted to various inverse problems, such as compressed sensing and reconstruction from quantized measurements. A rough sketch of the partially linearized idea is given below.
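As a schematic illustration of what a "partially linearized" update might look like (a sketch under assumptions, not the talk's precise algorithm or the setting of its guarantees): the forward pass uses the full nonlinear generator, while the update pulls the residual back through the transposed product of the weight matrices alone, skipping the activations. The architecture, step size `eta`, and iteration count below are illustrative choices.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def generator(weights, z):
    """Forward pass of a fully connected ReLU generator G(z)."""
    h = z
    for W in weights:
        h = relu(W @ h)
    return h

def plugin_sketch(weights, y, eta=0.5, iters=200):
    """Schematic partially linearized inversion: the residual G(z) - y
    is pulled back through the linear parts of the network only
    (transposed weight products), not the true Jacobian."""
    # Transpose of the end-to-end linear map W_d ... W_1.
    W_lin = weights[0]
    for W in weights[1:]:
        W_lin = W @ W_lin
    z = np.zeros(weights[0].shape[1])
    for _ in range(iters):
        residual = generator(weights, z) - y
        z = z - eta * W_lin.T @ residual
    return z

# Toy usage: an expansive 2-layer network; observe y = G(z_true), recover z.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(50, 10)) / np.sqrt(50),
           rng.normal(size=(200, 50)) / np.sqrt(200)]
z_true = rng.normal(size=10)
y = generator(weights, z_true)
z_hat = plugin_sketch(weights, y)
```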
This is joint work with Babhru Joshi, Xiaowei Li, and Yaniv Plan.