Sparse Interpolation via Super-resolution, Compressed Sensing and Prony
We show that solving the sparse interpolation problem reduces to solving a discrete super-resolution problem on the $n$-dimensional torus. Therefore the semidefinite programming approach initiated by Candès and Fernández-Granda in the univariate case can be applied. In particular, exact recovery is guaranteed provided that a geometric spacing condition on the ``supports'' holds and the number of evaluations is sufficiently large (but not too many). It also turns out that exact recovery is guaranteed by the (compressed sensing) LP formulation of $\ell_1$-norm minimization provided that the evaluations are made in a certain manner, even though the RIP sufficient condition for exact recovery is not satisfied. A first set of numerical experiments suggests that the super-resolution technique for sparse interpolation copes with noise better than Prony's method, at the cost of an extra computational burden (i.e., solving a semidefinite program).
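For reference, the classical Prony baseline mentioned above can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's algorithm: it treats the univariate, noise-free case, recovering an $s$-sparse exponential sum $f(k)=\sum_j c_j a_j^k$ from $2s$ equally spaced samples by solving a Hankel system and rooting the Prony polynomial (the function name `prony` is our own).

```python
import numpy as np

def prony(samples, s):
    """Recover nodes a_j and weights c_j of f(k) = sum_j c_j * a_j**k
    from the 2*s samples f(0), ..., f(2*s - 1). Noise-free sketch."""
    f = np.asarray(samples, dtype=float)
    # Hankel system giving the coefficients of the Prony polynomial
    # p[0] + p[1] z + ... + p[s-1] z**(s-1) + z**s
    H = np.array([[f[i + k] for k in range(s)] for i in range(s)])
    p = np.linalg.solve(H, -f[s:2 * s])
    # The nodes a_j are the roots of the Prony polynomial
    nodes = np.roots(np.concatenate(([1.0], p[::-1])))
    # The weights c_j solve the Vandermonde system f(k) = sum_j c_j * a_j**k
    V = np.vander(nodes, 2 * s, increasing=True).T
    weights, *_ = np.linalg.lstsq(V, f, rcond=None)
    return nodes, weights

# Toy example: f(k) = 3 * 2**k + 5**k, i.e. s = 2 terms
nodes, weights = prony([3 * 2**k + 5**k for k in range(4)], 2)
```

On exact data this recovers the nodes $\{2, 5\}$ and weights $\{3, 1\}$; the abstract's point is that this root-finding step is fragile under noise, which is where the semidefinite super-resolution approach pays for its extra cost.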