Projective Splitting Methods for Sums of Monotone Operators
This talk presents a new class of decomposition methods for finding a root of a sum of maximal monotone operators. We begin with some example applications of this very general problem class (which essentially subsumes convex optimization) and a brief review of prior decomposition methods for it. The new, "projective" algorithm class is constructed by defining an "extended" solution set in a product space, and applying a classical separating-hyperplane projection method to obtain a point in the set. The key innovation is that one can construct the separating hyperplanes by a procedure that performs calculations on only one operator at a time. The resulting family of decomposition methods has an unprecedented amount of flexibility and a large number of parameters. Preliminary computational experiments on a class of simple problems suggest that there are ways to set these parameters that achieve much faster convergence than earlier monotone operator splitting methods. Our approach can also deal with sums of more than two operators without first reducing them to the two-operator case.
Joint work with B.F. Svaiter, IMPA, Rio de Janeiro
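For orientation (the precise construction is given in the talk, and notational conventions may differ), in the two-operator case 0 \in A(z) + B(z) an extended solution set of the kind described above can be taken to be

    S_e(A,B) = \{ (z, w) : w \in B(z), \ -w \in A(z) \},

so that z solves the inclusion exactly when (z, w) \in S_e(A,B) for some w. This set is closed and convex in the product space, and each iteration projects the current pair onto a hyperplane separating it from S_e(A,B), the hyperplane being generated by a proximal-type calculation that involves only one of the operators at a time.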
Finding Low-rank Matrices via Nuclear Norm Minimization
In many engineering applications, notions such as order or dimension of a model can be expressed as the rank of an appropriate matrix. To choose simple models, we seek low-rank matrices. For example, a low-rank matrix could correspond to a low-degree statistical model for a random process, or an embedding in a low-dimensional space.
The rank minimization problem is known to be NP-hard. This talk discusses a convex relaxation, minimizing the nuclear norm of the matrix. We present recent results on guaranteed recovery of the lowest rank matrix when the constraints are linear equalities and the linear mapping satisfies a certain isometry condition. This is a generalization of a result by Candes and Tao (2004) in compressed sensing, from the case of sparse vectors to that of low-rank matrices. The recovery result can also be extended to the case of approximately low-rank matrices and noisy measurements.
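Schematically (the exact statement and constants are in the talk), the problem and its relaxation read

    minimize  rank(X)                       subject to  \mathcal{A}(X) = b,
    minimize  \|X\|_* = \sum_i \sigma_i(X)  subject to  \mathcal{A}(X) = b,

where the \sigma_i(X) are the singular values of X, and one version of the isometry condition on the linear map \mathcal{A} asks that

    (1 - \delta_r) \|X\|_F \le \|\mathcal{A}(X)\| \le (1 + \delta_r) \|X\|_F

hold for all matrices X of rank at most r, with \delta_r sufficiently small.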
Variational convergence of bivariate functions: lopsided convergence
We explore convergence notions for bivariate functions that yield convergence and stability results for their maxinf (or minsup) points. This lays the foundations for studying the stability of solutions to variational inequalities and inclusions, of Nash equilibrium points of non-cooperative games, of Walras economic equilibrium points, of fixed points, and of the primal and dual solutions of convex optimization problems and zero-sum games. These applications will be dealt with in a couple of accompanying papers.
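In the notation commonly used for this topic (the abstract leaves it implicit), \bar{x} is a maxinf point of a bivariate function F : C \times D \to \overline{\mathbb{R}} if

    \bar{x} \in \operatorname{argmax}_{x \in C} \; \inf_{y \in D} F(x, y),

and the convergence notions studied here are designed so that maxinf points of approximating bivariate functions converge to maxinf points of the limit function; minsup points are treated symmetrically.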
Basis Reduction and Integer Programming Approaches to Number Partitioning
The number partitioning problem (NPP) is to divide a set of positive integers a_1, ..., a_n into two disjoint subsets such that the difference of the subset sums, called the discrepancy (D), is minimized. In the balanced version of NPP, the two subsets must have the same cardinality. NPP is NP-complete, has a well-characterized phase transition, and finds applications in VLSI design, multiprocessor scheduling, cryptography, etc. When the a_j's are chosen uniformly from [1,R] for some number R, it is known that the expected optimal discrepancy is O(\sqrt{n}\, 2^{-n} R). The best known polynomial time approximation algorithm was proposed by Karmarkar and Karp (KK), and gives discrepancies that are O(n^{-0.72 \log n} R). When R >> 2^n, the optimal discrepancy is much bigger than zero (i.e., there is no perfect partition), and the corresponding optimal partition is also unique. Instances with such large values of R constitute the hard phase of NPP. We will discuss several methods for tackling NPP instances belonging to the hard phase. First, we propose a mixed integer program (MIP) model for solving NPP. We consider a basis reduction-based reformulation of the MIP model in order to handle the typically huge a_j's encountered in the hard phase. Second, we describe transformations of NPP and balanced NPP to instances of the shortest vector problem (SVP), which we try to solve using direct application of basis reduction algorithms. Finally, we propose a heuristic called truncated NPP, where we solve an NPP instance with a'_j = a_j / T for some number T in place of the original instance. We show that the discrepancy of the partition thus obtained is O(D^* + nT), where D^* is the optimal discrepancy.
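As a concrete reference point, here is a minimal Python sketch of the Karmarkar-Karp differencing heuristic mentioned above, together with a naive version of the truncation idea. The index-tracking scheme, the integer truncation a'_j = a_j // T, and the use of KK as a stand-in for an exact solver of the truncated instance are illustrative assumptions, not details taken from the talk.

```python
import heapq

def karmarkar_karp_partition(numbers):
    """Karmarkar-Karp differencing heuristic.

    Returns (side_a, side_b, discrepancy), where side_a and side_b are
    lists of indices into `numbers`.  Python's heapq is a min-heap, so
    values are stored negated in order to pop the largest element first.
    """
    heap = [(-a, [i], []) for i, a in enumerate(numbers)]
    heapq.heapify(heap)
    while len(heap) > 1:
        v1, plus1, minus1 = heapq.heappop(heap)  # largest remaining value
        v2, plus2, minus2 = heapq.heappop(heap)  # second largest
        # Placing the two items on opposite sides replaces them by their
        # difference; merge the index sets accordingly.
        heapq.heappush(heap, (v1 - v2, plus1 + minus2, minus1 + plus2))
    value, side_a, side_b = heap[0]
    return side_a, side_b, -value


def truncated_npp(numbers, T):
    """Truncated-NPP heuristic (sketch): partition the scaled instance
    a'_j = a_j // T (integer truncation is an assumption made here), with
    KK standing in for an exact solver, then evaluate the resulting
    partition on the original numbers."""
    side_a, side_b, _ = karmarkar_karp_partition([a // T for a in numbers])
    return abs(sum(numbers[i] for i in side_a) - sum(numbers[i] for i in side_b))


if __name__ == "__main__":
    a = [4283512991, 3911004727, 2059338211, 1874556403, 977112359]
    print(karmarkar_karp_partition(a)[2])  # KK discrepancy on the original data
    print(truncated_npp(a, 10 ** 4))       # discrepancy after truncation by T
```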
Applications of Computational Convex Analysis
Computational Convex Analysis focuses on the practical computation of fundamental transforms arising in Convex Analysis. After recalling the current state of the art, I will give examples illustrating the usefulness of these algorithms in a wide variety of fields: thermodynamics, robot navigation, medical imaging, image processing, and network communications.
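One of the fundamental transforms alluded to above is the Legendre-Fenchel conjugate; the following naive quadratic-time discretization (a sketch only, not one of the fast algorithms surveyed in the talk) shows what its practical computation means at its simplest. The function and grids are illustrative choices.

```python
import numpy as np

def discrete_conjugate(x, fx, s):
    """Naive discrete Legendre-Fenchel conjugate:
    f*(s_k) = max_j ( s_k * x_j - f(x_j) ) for each slope s_k.
    This brute-force version costs O(len(s) * len(x)); the algorithms
    discussed in the talk compute such transforms far more efficiently."""
    return np.max(np.outer(s, x) - fx[None, :], axis=1)

# Example: the conjugate of f(x) = x**2 / 2 is f*(s) = s**2 / 2.
x = np.linspace(-5.0, 5.0, 2001)
s = np.linspace(-2.0, 2.0, 9)
print(discrete_conjugate(x, x ** 2 / 2, s))
print(s ** 2 / 2)  # analytic values for comparison
```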
Variational Inequality Modeling of Equilibrium
A wide variety of equilibrium problems concern multi-agent optimization, where the objective and constraints of an agent are affected by the decisions of the other agents faced with their own objectives and constraints. Quasi-variational inequalities can be used to model such problems, but the technical assumptions needed to make this work and provide existence are quite restrictive, and support for computing solutions is meager at best.
By drawing on methodology in optimization theory and various tricks of reformulation, it is possible to model the same equilibrium problems by true variational inequalities instead of quasi-variational inequalities. The assumptions then are looser and the results are much more satisfactory. Moreover, this opens the way to a substantial range of numerical approaches.
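One standard instance of such a reformulation, stated here only for orientation: if agent i chooses x_i in a fixed closed convex set C_i so as to minimize a cost \theta_i(x_i, x_{-i}) that is convex and differentiable in x_i, then with C = C_1 \times \cdots \times C_N a Nash equilibrium is precisely a solution of the variational inequality

    find x \in C such that \langle F(x), y - x \rangle \ge 0 for all y \in C,
    where F(x) = ( \nabla_{x_1} \theta_1(x), \ldots, \nabla_{x_N} \theta_N(x) ).

When the sets C_i themselves depend on the other agents' decisions, the naive model becomes a quasi-variational inequality, which is exactly the situation the reformulation techniques above are designed to avoid.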
Exact sparse reconstruction and its geometry
The problem of finding the sparsest solution of an underdetermined linear system is combinatorial and generally NP-hard. Recent results have shown that, under certain conditions, the minimum one-norm solution coincides with the sparsest solution. The problem can then be phrased as a linear program.
In this talk I will review and explain the geometrical interpretation of the one-norm minimization problem and the conditions under which the sparsest solution is obtained.
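For concreteness, a standard reformulation (not specific to this talk) of the one-norm problem

    minimize \|x\|_1  subject to  Ax = b

as a linear program splits x = u - v with u, v \ge 0:

    minimize 1^T (u + v)  subject to  A(u - v) = b, \ u \ge 0, \ v \ge 0.

Geometrically, one inflates the one-norm ball (a cross-polytope) until it first touches the affine set \{x : Ax = b\}; recovery of the sparsest solution corresponds, roughly, to this first contact occurring at a low-dimensional face containing it, which is the picture the conditions discussed in the talk make precise.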
Bregman distances and Klee sets
In 1960, Klee showed that a subset of a Euclidean space must be a singleton provided that each point in the space has a unique farthest point in the set. This classical result has received much attention; in fact, the Hilbert space version is a famous open problem. In this paper, we consider Klee sets from a new perspective. Rather than measuring distance induced by a norm, we focus on the case when distance is meant in the sense of Bregman, i.e., induced by a convex function. When the convex function has sufficiently nice properties, then --- analogously to the Euclidean distance case --- every Klee set must be a singleton. We provide two proofs of this result, based on Monotone Operator Theory and on Nonsmooth Analysis. The latter approach leads to results that complement work by Hiriart-Urruty on the Euclidean case. Joint work with Bauschke, Ye, and Yuan.
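For reference (the definition is standard, though conventions on the order of arguments vary), the Bregman distance induced by a suitably differentiable convex function f is

    D_f(x, y) = f(x) - f(y) - \langle \nabla f(y), x - y \rangle,

which reduces to \tfrac{1}{2}\|x - y\|^2 when f = \tfrac{1}{2}\|\cdot\|^2, so the Euclidean case is recovered as a special case; a Klee set in this setting is one in which every point of the space has a unique farthest point in the set, with farthest measured by D_f.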