On a smaller SDP relaxation for polynomial optimization
A polynomial optimization problem (POP) asks for the minimum of a polynomial objective function over a set defined by polynomial equalities and/or inequalities. It is well known that finding the global optimal value of a POP is NP-hard. Lasserre and Parrilo independently proposed SDP relaxation approaches that yield tight lower bounds on the global optimal value. However, the resulting SDP relaxation problems often become too large to solve in practice. To overcome this difficulty, we propose a smaller SDP relaxation. We show that our SDP relaxation converges to the global optimal value when the objective function is given a small higher-order perturbation. In our SDP relaxation, we do not add such a perturbation explicitly; instead, we exploit the numerical errors that arise in practical computations of primal-dual interior-point methods as an implicit perturbation of the objective function. We also present numerical results demonstrating the computational efficiency of the approach. This is joint work with Masakazu Muramatsu.