Neural Integer Optimization: Learning to Satisfy Generic Constraints
The design of algorithms for hard discrete optimization problems typically follows one of two paradigms: general-purpose algorithms, heuristic or exact, that operate on a wide range of problems, e.g. branch-and-bound or local search; or specialized heuristics that exploit the structure of a given problem. Both paradigms rely on the ingenuity of the algorithm designer, coupled with trial-and-error on a set of instances of interest. Recently, there has been a surge of research on learning heuristics for combinatorial optimization problems over a distribution of instances, a departure from the two classical paradigms above. Despite promising results on problems such as the TSP, existing learning methods can only handle simple constraints, e.g. the subtour constraints enforced by insertion heuristics for the TSP. Our work aims to learn heuristics for discrete optimization problems subject to generic constraints, i.e. constraints that may be hard to satisfy, such as general linear inequalities. The key contribution of this work is the incorporation of a projection step into a recurrent neural network model that generates solutions to a discrete optimization problem with intricate constraints. We apply our framework to 0-1 linear optimization problems, and show promising results on instances from Two-Stage Stochastic Programming, the Generalized Assignment Problem, and the Satisfiability Problem.
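To make the idea of generating solutions with a recurrent model under generic linear constraints concrete, the following is a minimal sketch and not the authors' implementation: a GRU-based decoder emits 0-1 decisions one variable at a time for an assumed problem of the form maximize c^T x subject to Ax <= b, x in {0,1}^n, and a simple greedy feasibility check stands in for the projection step described above. All class and variable names are hypothetical.

```python
# Minimal sketch (assumption, not the paper's method): sequential 0-1 decoding
# with a feasibility check standing in for projection onto Ax <= b.
import torch
import torch.nn as nn


class SequentialBinaryDecoder(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        # Input at each step: [previous decision, objective coefficient c_i,
        # a crude summary of how much of the resources column i consumes].
        self.cell = nn.GRUCell(3, hidden_size)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, c, A, b):
        n = c.numel()
        h = torch.zeros(self.cell.hidden_size)
        slack = b.clone()              # remaining right-hand side
        x = torch.zeros(n)
        prev = torch.tensor(0.0)
        for i in range(n):
            col_use = A[:, i].sum()
            inp = torch.stack([prev, c[i], col_use])
            h = self.cell(inp.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
            p = torch.sigmoid(self.head(h)).squeeze()
            xi = (p > 0.5).float()
            # "Projection" stand-in: force x_i = 0 whenever setting it to 1
            # would violate some constraint given the decisions already made.
            if xi.item() == 1.0 and torch.any(A[:, i] > slack):
                xi = torch.tensor(0.0)
            slack = slack - A[:, i] * xi
            x[i] = xi
            prev = xi
        return x


# Toy usage on a random 0-1 instance (illustrative only).
torch.manual_seed(0)
n, m = 8, 3
c = torch.rand(n)
A = torch.rand(m, n)
b = A.sum(dim=1) * 0.5                # roughly half the items fit
x = SequentialBinaryDecoder()(c, A, b)
print("solution:", x.tolist(), "objective:", float(c @ x))
```

The greedy feasibility repair keeps every emitted solution inside the constraint set by construction; the paper's projection operator may differ (e.g. a Euclidean projection onto the relaxed polytope), so this sketch should be read only as one plausible instantiation of constraint-aware decoding.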