Non-Separable Relaxations of a Class of Rank Penalties
Rank penalties can be hard to handle in optimization frameworks due to non-convexity and discontinuity. Strong approximations have been the subject of intense study, and numerous formulations have been proposed in recent years. Most of these can be described as separable, meaning that they apply a penalty to each singular value based on its size, without considering the joint distribution of the singular values.
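To make the notion of separability concrete, the following minimal sketch evaluates a separable rank penalty: a scalar function is applied to each singular value independently and the results are summed. The particular choice of per-singular-value penalty, min(mu, sigma^2), is an illustrative assumption, not the specific formulation discussed in the talk.

```python
import numpy as np

def separable_rank_penalty(X, mu):
    """Illustrative separable penalty: sum_i min(mu, sigma_i(X)^2).

    Each term depends on a single singular value, so the penalty
    ignores the joint distribution of the singular values.
    """
    sigma = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(np.minimum(mu, sigma ** 2)))
```

A non-separable penalty, by contrast, would be some function of the whole singular-value vector that does not decompose into such a sum.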
In this talk we present a class of non-separable penalties and give a recipe for computing strong relaxations suitable for optimization. In our analysis of this formulation we first give conditions ensuring that the globally optimal solution of the relaxation coincides with that of the original (unrelaxed) objective. We then show how a stationary point can be guaranteed to be unique under the RIP (restricted isometry property) assumption, despite the non-convexity of the framework.
In the final part of the talk we show that our relaxations can be reformulated by introducing a bilinear (over-)parametrization of the search space. This opens up the possibility of using second-order methods such as VarPro/Wiberg or Levenberg-Marquardt to solve a general class of rank-penalized problems. We evaluate the resulting algorithm on real computer vision problems such as structure from motion.
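The idea behind the bilinear over-parametrization can be sketched as follows: writing X = B C^T with B and C having k columns enforces rank(X) <= k by construction, and each factor enters the residual linearly. The snippet below is a simplified illustration using plain alternating least squares as a stand-in for the VarPro/Wiberg and Levenberg-Marquardt solvers discussed in the talk; the function name and the fitting objective (unpenalized low-rank approximation of a measurement matrix M) are assumptions for the example.

```python
import numpy as np

def low_rank_fit(M, k, iters=50, seed=0):
    """Fit X = B @ C.T (so rank(X) <= k) to M by alternating least squares.

    Each subproblem is linear least squares in one factor while the
    other is held fixed, mirroring how the bilinear parametrization
    exposes the problem to second-order solvers.
    """
    m, n = M.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((m, k))
    for _ in range(iters):
        C = np.linalg.lstsq(B, M, rcond=None)[0].T    # fix B, solve for C
        B = np.linalg.lstsq(C, M.T, rcond=None)[0].T  # fix C, solve for B
    return B @ C.T
```

In structure-from-motion terms, M would play the role of the measurement matrix and the rank bound encodes the camera/scene model; the actual method additionally carries the rank penalty through the relaxation.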