Efficient Algorithms for Robust Optimization with Applications to Machine Learning

Principal Investigator: Khaled Elbassioni
Department: Electrical & Computer Engineering
Focus Area: Robotics, AI, & Data Science

Standard optimization algorithms, such as those arising in machine learning, assume precise knowledge of their inputs and optimize their performance under this assumption. In real-life applications, however, the collected data (either training data or problem parameters) is contaminated by some form of noise, and this uncertainty must be modeled explicitly. For example, in classification, the training data points may be known only up to limited precision, with errors introduced by inaccurate measurements or by privacy-preserving data perturbation. An algorithm that optimizes a given objective function over such distorted data, without any special consideration of the uncertainty, clearly cannot be expected to yield reliable results. Several previous works have considered incorporating such noise into machine learning and other optimization algorithms. A general approach, within the framework of robust optimization, is to characterize the noise by some generic assumption, e.g., that it lies in a (small) bounded convex region around a nominal value, and to optimize the objective function against the worst-case realization within this region.
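As a concrete illustration of this worst-case approach (a standard textbook instance, not a formulation taken from this project), consider a linear classifier trained with the hinge loss when each data point x_i may be perturbed within an l2-ball of radius rho. Since the inner maximization of the perturbation term has the closed form max over ||delta|| <= rho of -y_i w^T delta = rho ||w||_2, the robust problem simplifies to

```latex
\min_{w}\;\frac{1}{n}\sum_{i=1}^{n}
  \max_{\|\delta_i\|_2\le\rho}
  \bigl[\,1 - y_i\,w^\top(x_i+\delta_i)\,\bigr]_+
\;=\;
\min_{w}\;\frac{1}{n}\sum_{i=1}^{n}
  \bigl[\,1 - y_i\,w^\top x_i + \rho\,\|w\|_2\,\bigr]_+
```

The uncertainty thus surfaces as an additional regularization-like term rho ||w||_2, and the resulting problem remains convex.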

In many cases, the resulting convex robust optimization problem can be cast as a second-order cone program (SOCP) or, more generally, as a semidefinite program (SDP). Most (if not all) previous work essentially provides reductions from the underlying machine learning problems under uncertainty to robust convex optimization, but stops at this point, since one can then rely on available general convex programming (CP) solvers to tackle the problem. However, general CP solvers are not computationally practical for very large data sets. On the other hand, most problems arising in machine learning have special structure that can be exploited to obtain faster algorithms. This motivates the exploration of specialized CP methods, such as first-order methods. Such techniques have been extensively studied for the non-robust versions of machine learning algorithms, but we are not aware of any extensions to the robust versions.
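To illustrate what a structure-exploiting first-order method could look like, the following sketch applies a plain subgradient scheme directly to the robust hinge-loss objective derived above, avoiding a general-purpose SOCP solver altogether. This is a minimal sketch under assumed data and step sizes; the function name and all parameters are hypothetical and not part of the project.

```python
import numpy as np

def robust_hinge_subgradient(X, y, rho=0.1, lam=1e-2, steps=2000):
    """Subgradient sketch for a robust SVM (hypothetical helper).

    Minimizes (1/n) * sum_i max(0, 1 - y_i <w, x_i> + rho*||w||_2)
              + (lam/2) * ||w||_2^2,
    i.e., the worst-case hinge loss when each x_i may be perturbed
    within an l2-ball of radius rho.
    """
    n, dim = X.shape
    w = np.zeros(dim)
    for t in range(1, steps + 1):
        # Points that violate the robust margin contribute a subgradient.
        margin = 1.0 - y * (X @ w) + rho * np.linalg.norm(w)
        active = margin > 0
        g = lam * w
        if active.any():
            g = g - (y[active, None] * X[active]).sum(axis=0) / n
            if np.linalg.norm(w) > 0:
                g = g + (active.mean() * rho) * w / np.linalg.norm(w)
        w -= g / (lam * t)  # standard 1/(lam*t) step size
    return w

# Toy usage on synthetic, roughly separable data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+1, 1, (50, 2)), rng.normal(-1, 1, (50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])
w = robust_hinge_subgradient(X, y, rho=0.2)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Each iteration costs only O(n * dim), in contrast to the per-iteration cost of a general interior-point SOCP solver, which is the kind of gap the project aims to exploit.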

The first part of this project aims to deliver practical algorithms for robust versions of common machine learning optimization problems, together with theoretical and empirical analyses of their performance on real-world data. The situation becomes even more challenging when the solution space is discrete, as in clustering problems, where the problem is computationally hard. In the non-robust case, a standard approach is to formulate and solve a linear programming (LP) or SDP relaxation of the problem, and then use careful rounding techniques to map the resulting continuous solution into the discrete solution space without losing much in the objective value; a toy example of this relax-and-round pipeline is sketched below. Extending these techniques to the robust setting constitutes the second main part of this project. Based on the obtained results, a software tool will be built that provides efficient algorithms for robust versions of a wide range of machine learning algorithms.
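The sketch below illustrates the non-robust relax-and-round recipe on a toy k-median instance: it solves the standard LP relaxation with scipy.optimize.linprog and then applies a naive rounding rule (open the k centers with the largest fractional values). The data sizes, the rounding rule, and all names are assumptions made for the example; the project would employ more careful rounding schemes with provable approximation guarantees.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
pts = rng.normal(size=(12, 2))   # toy data set; sizes are illustrative
n, k = len(pts), 3
d = cdist(pts, pts)              # candidate centers = the points themselves

# Variables: [y_0..y_{n-1}, x_00, x_01, ..., x_{n-1,n-1}], where y_j means
# "open center j" and x_ij means "assign point i to center j".
num = n + n * n
c = np.concatenate([np.zeros(n), d.ravel()])   # minimize total assignment cost

# Equalities: each point is fully assigned; exactly k centers are opened.
A_eq = np.zeros((n + 1, num))
for i in range(n):
    A_eq[i, n + i * n : n + (i + 1) * n] = 1.0
A_eq[n, :n] = 1.0
b_eq = np.concatenate([np.ones(n), [k]])

# Inequalities: x_ij <= y_j (no assignment to a closed center).
A_ub = np.zeros((n * n, num))
for i in range(n):
    for j in range(n):
        A_ub[i * n + j, n + i * n + j] = 1.0
        A_ub[i * n + j, j] = -1.0
b_ub = np.zeros(n * n)

lp = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
             bounds=[(0, 1)] * num)

# Naive rounding: open the k centers with the largest fractional y_j,
# then assign every point to its nearest opened center.
centers = np.argsort(lp.x[:n])[-k:]
assign = centers[np.argmin(d[:, centers], axis=1)]
print("LP lower bound:", lp.fun)
print("rounded cost:  ", d[np.arange(n), assign].sum())
```

The LP optimum lower-bounds the best discrete clustering, so comparing the two printed values gives an immediate (if crude) certificate of how much the rounding step loses.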
