Lipschitz Regularized Deep Neural Networks Converge and Are Robust to Adversarial Perturbations
Talk Abstract: Deep Neural Networks perform much better than traditional Machine Learning methods on a number of tasks. However, they lack performance guarantees, which limits the use of the technology in real-world and real-time applications where errors can be costly. The first step towards such guarantees is a proof of generalization. Yet a recent Machine Learning paper, "Understanding deep learning requires rethinking generalization," showed that traditional machine learning generalization theory does not apply to deep neural networks.
We will prove that Lipschitz regularized DNNs converge, with a rate of convergence, which implies generalization. We use a theory based on variational methods for inverse problems. The regularization also leads to robust networks that are more resistant to adversarial examples.
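For concreteness, a Lipschitz-regularized training objective typically takes a form like the following (a sketch in our own notation; the symbols ℓ, λ, L_0, and the exact functional used in the talk are assumptions here and may differ):

\[
\min_{f}\; J_{\lambda}[f] \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\big(f(x_i), y_i\big) \;+\; \lambda\, \max\big(\mathrm{Lip}(f) - L_0,\; 0\big)^{2},
\qquad
\mathrm{Lip}(f) \;=\; \sup_{x \neq x'} \frac{\|f(x) - f(x')\|}{\|x - x'\|},
\]

where ℓ is the per-example loss, λ > 0 is the regularization weight, and L_0 is a target Lipschitz constant. Penalizing Lip(f) limits how much the network's output can change under a small input perturbation, which is the mechanism behind the robustness to adversarial examples.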