In Preparation
A. A. Ahmadi, M. Curmei, and G. Hall, “Monotone polynomials and sums of squares,” In Preparation.
A. A. Ahmadi and G. Hall, “On the construction of converging hierarchies for polynomial optimization based on certificates of global positivity,” Submitted.
A. A. Ahmadi, E. de Klerk, and G. Hall, “Polynomial Norms,” Submitted.
A. A. Ahmadi, G. Hall, A. Papachristodoulou, J. Saunderson, and Y. Zheng, “Improving efficiency and scalability of sum of squares optimization: recent advances and limitations,” CDC 2017, Forthcoming.
G. Hall, “Optimization over Nonnegative and Convex Polynomials with and without Semidefinite Programming,” Ph.D. thesis, 2018.

The problem of optimizing over the cone of nonnegative polynomials is a fundamental problem in computational mathematics, with applications to polynomial optimization, control, machine learning, game theory, and combinatorics, among others. A number of breakthrough papers in the early 2000s showed that this problem, long thought to be out of reach, could be tackled by using sum of squares programming. This technique, however, has proved to be expensive for large-scale problems, as it involves solving large semidefinite programs (SDPs).

In the first part of this thesis, we present two methods for approximately solving large-scale sum of squares programs that dispense altogether with semidefinite programming and only involve solving a sequence of linear or second order cone programs generated in an adaptive fashion. We then focus on the problem of finding tight lower bounds on polynomial optimization problems (POPs), a fundamental task in this area that is most commonly handled through the use of SDP-based sum of squares hierarchies (e.g., due to Lasserre and Parrilo). In contrast to previous approaches, we provide the first theoretical framework for constructing converging hierarchies of lower bounds on POPs whose computation simply requires the ability to multiply certain fixed polynomials together and to check nonnegativity of the coefficients of their product.

In the second part of this thesis, we focus on the theory and applications of the problem of optimizing over convex polynomials, a subcase of the problem of optimizing over nonnegative polynomials. On the theoretical side, we show that the problem of testing whether a cubic polynomial is convex over a box is NP-hard. This result is minimal in the degree of the polynomial and complements previously-known results on checking convexity of a polynomial globally. We also study norms generated by convex forms and provide an SDP hierarchy for optimizing over them. This requires an extension of a result of Reznick on sum of squares representation of positive definite forms to positive definite biforms. On the application side, we study a problem of interest to robotics and motion planning, which involves modeling complex environments with simpler representations. In this setup, we are interested in containing 3D-point clouds within polynomial sublevel sets of minimum volume. We also study two applications in machine learning: the first is multivariate monotone regression, which is motivated by some applications in pricing; the second concerns a specific subclass of optimization problems called difference of convex (DC) programs, which appear naturally in machine learning problems. We show how our techniques can be used to optimally reformulate DC programs in order to speed up some of the best-known algorithms used for solving them.

A. A. Ahmadi, G. Hall, A. Makadia, and V. Sindhwani, “Geometry of 3D Environments and Sum of Squares Polynomials,” in RSS 2017.

Motivated by applications in robotics and computer vision, we study problems related to spatial reasoning of a 3D environment using sublevel sets of polynomials. These include: tightly containing a cloud of points (e.g., representing an obstacle) with convex or nearly-convex basic semialgebraic sets, computation of Euclidean distances between two such sets, separation of two convex basic semialgebraic sets that overlap, and tight containment of the union of several basic semialgebraic sets with a single convex one. We use algebraic techniques from sum of squares optimization that reduce all these tasks to semidefinite programs of small size and present numerical experiments in realistic scenarios.
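The containment task above asks that every point of the cloud satisfy a polynomial inequality. As a minimal sketch (not the paper's SDP machinery, which searches for the polynomial itself), the following checks whether a 2D point cloud lies inside the 1-sublevel set of a hypothetical, fixed convex quartic:

```python
# Minimal sketch: verify that a 2D point cloud is contained in the
# 1-sublevel set {(x, y) : p(x, y) <= 1} of a fixed convex quartic.
# The quartic below is a hypothetical choice for illustration only.

def p(x, y):
    """A convex quartic; its sublevel sets are convex basic semialgebraic sets."""
    return x**4 + y**4 + x**2 + y**2

def contains(points, level=1.0):
    """Check that every point of the cloud satisfies p(x, y) <= level."""
    return all(p(x, y) <= level for (x, y) in points)

cloud = [(0.0, 0.0), (0.5, 0.3), (-0.4, -0.4), (0.6, 0.0)]
print(contains(cloud))          # every point lies in the sublevel set -> True
print(contains([(1.0, 1.0)]))   # p(1, 1) = 4 > 1, outside -> False
```

In the paper's setting, the coefficients of p are decision variables and a semidefinite program finds a sublevel set of small volume containing the cloud; here p is fixed and only membership is tested.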

A. A. Ahmadi and G. Hall, “DC Decomposition of Nonconvex Polynomials with Algebraic Techniques,” Mathematical Programming, Series B, 2017.

We consider the problem of decomposing a multivariate polynomial as the difference of two convex polynomials. We introduce algebraic techniques which reduce this task to linear, second order cone, and semidefinite programming. This allows us to optimize over subsets of valid difference of convex decompositions (dcds) and find ones that speed up the convex-concave procedure (CCP). We prove, however, that optimizing over the entire set of dcds is NP-hard.
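A one-variable example makes the decomposition concrete. The sketch below (a hypothetical univariate instance, not the paper's multivariate LP/SOCP/SDP machinery) writes f(x) = x⁴ − 3x² as g − h with both g and h convex, and verifies this numerically:

```python
# Minimal univariate dc decomposition sketch: f = g - h with g, h convex,
# obtained by adding the same convex quadratic to both sides.

def f(x): return x**4 - 3*x**2
def g(x): return x**4 + x**2     # convex: g''(x) = 12x^2 + 2 > 0
def h(x): return 4*x**2          # convex: h''(x) = 8 > 0

def second_derivative(func, x, eps=1e-4):
    """Central finite-difference approximation of the second derivative."""
    return (func(x + eps) - 2*func(x) + func(x - eps)) / eps**2

# Sanity checks on a grid: f = g - h, and both second derivatives are >= 0.
grid = [i / 10 for i in range(-30, 31)]
assert all(abs(f(x) - (g(x) - h(x))) < 1e-9 for x in grid)
assert all(second_derivative(g, x) >= 0 for x in grid)
assert all(second_derivative(h, x) >= 0 for x in grid)
print("f = g - h with g, h convex on the grid")
```

Note that the decomposition is far from unique (any convex polynomial can be added to both g and h); the paper optimizes over such decompositions to pick ones that accelerate the convex-concave procedure.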

A. A. Ahmadi and G. Hall, “Sum of Squares Basis Pursuit with Linear and Second Order Cone Programming,” in Algebraic and Geometric Methods in Discrete Mathematics, Contemporary Mathematics, 2017.

We devise a scheme for solving an iterative sequence of linear programs (LPs) or second order cone programs (SOCPs) to approximate the optimal value of semidefinite and sum of squares (SOS) programs. The first LP and SOCP-based bounds in the sequence come from the recent work of Ahmadi and Majumdar on diagonally dominant sum of squares (DSOS) and scaled diagonally dominant sum of squares (SDSOS) polynomials. We then iteratively improve on these bounds by pursuing better bases in which more relevant SOS polynomials admit a DSOS or SDSOS representation. Different interpretations of the procedure from primal and dual perspectives are given. While the approach is applicable to semidefinite relaxations of general polynomial programs, we apply it to two problems of discrete optimization: the maximum independent set problem and the partition problem. We further show that some completely trivial instances of the partition problem lead to strictly positive polynomials on the boundary of the sum of squares cone and hence make the SOS relaxation fail.
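The DSOS condition underlying the first bound in the sequence replaces the semidefinite constraint on the Gram matrix with diagonal dominance, which is checkable by linear programming. A minimal sketch of that check (illustrative only; the paper then improves the basis iteratively):

```python
# Minimal sketch of the diagonal-dominance idea behind DSOS: a symmetric
# matrix with nonnegative diagonal that is diagonally dominant is positive
# semidefinite, so requiring a diagonally dominant Gram matrix (an LP
# constraint) gives an inner approximation of the SOS condition.

def is_diagonally_dominant(M):
    """Check a_ii >= sum_{j != i} |a_ij| for every row i."""
    n = len(M)
    return all(
        M[i][i] >= sum(abs(M[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

# Gram matrix of p(x) = 2*x1^2 + 2*x2^2 + 2*x1*x2 in the basis (x1, x2):
gram = [[2.0, 1.0],
        [1.0, 2.0]]
print(is_diagonally_dominant(gram))   # True: p admits a DSOS certificate
```

The basis pursuit of the paper can be read as a change of coordinates M ↦ Uᵀ M U that makes more Gram matrices diagonally dominant in the new basis, tightening the bound at each iteration.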

A. A. Ahmadi, S. Dash, and G. Hall, “Optimization over Structured Subsets of Positive Semidefinite Matrices via Column Generation,” Discrete Optimization, 2016.

We develop algorithms for inner approximating the cone of positive semidefinite matrices via linear programming and second order cone programming. Starting with an initial linear algebraic approximation suggested recently by Ahmadi and Majumdar, we describe an iterative process through which our approximation is improved at every step. This is done using ideas from column generation in large-scale linear and integer programming. We then apply these techniques to approximate the sum of squares cone in a nonconvex polynomial optimization setting, and the copositive cone for a discrete optimization problem.

E. Abbe, A. S. Bandeira, and G. Hall, “Exact Recovery in the Stochastic Block Model,” IEEE Transactions on Information Theory, vol. 62, no. 1, 2016.

The stochastic block model (SBM) with two communities, or equivalently the planted bisection model, is a popular random graph model exhibiting cluster behaviour. In the symmetric case, the graph has two equally sized clusters and vertices connect with probability p within clusters and q across clusters. In the past two decades, a large body of literature in statistics and computer science has focused on providing lower bounds on the scaling of |p − q| to ensure exact recovery. In this paper, we identify a sharp threshold phenomenon for exact recovery: if α = pn/log(n) and β = qn/log(n) are constant (with α > β), recovering the communities with high probability is possible if (α + β)/2 − √(αβ) > 1 and impossible if (α + β)/2 − √(αβ) < 1. In particular, this improves the existing bounds. This also sets a new line of sight for efficient clustering algorithms. While maximum likelihood (ML) achieves the optimal threshold (by definition), it is in the worst case NP-hard. This paper proposes an efficient algorithm based on a semidefinite programming relaxation of ML, which is proved to succeed in recovering the communities close to the threshold, while numerical experiments suggest it may achieve the threshold. An efficient algorithm which succeeds all the way down to the threshold is also obtained using a partial recovery algorithm combined with a local improvement procedure.
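The threshold condition is a closed-form inequality in the two constants, so it is easy to evaluate. A minimal sketch (illustrative arithmetic only, using hypothetical parameter values):

```python
# Minimal sketch of the exact-recovery threshold from the paper: with
# p = a*log(n)/n and q = b*log(n)/n (a > b), exact recovery is possible
# when (a + b)/2 - sqrt(a*b) > 1 and impossible when it is < 1.

import math

def recovery_margin(a, b):
    """(a + b)/2 - sqrt(a*b); a value > 1 means exact recovery is possible."""
    return (a + b) / 2 - math.sqrt(a * b)

print(recovery_margin(5.0, 1.0) > 1)   # 3 - sqrt(5) ~ 0.76 -> False
print(recovery_margin(9.0, 1.0) > 1)   # 5 - 3 = 2 -> True
```

Note that the margin is (√α − √β)²/2, so it grows with the separation between the within-cluster and across-cluster connection rates.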
