Publications


Featured research published by Stephen P. Boyd.


Foundations and Trends® in Machine Learning | 2011

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

Stephen P. Boyd; Neal Parikh; Eric Chu; Borja Peleato; Jonathan Eckstein

Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
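
As a concrete illustration of the iteration the review builds on, here is a minimal NumPy sketch of scaled-form ADMM applied to the lasso; the penalty parameter rho, the fixed iteration count, and all names are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Elementwise soft-thresholding: the proximal operator of kappa * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via scaled-form ADMM.
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Factor A^T A + rho*I once; every x-update reuses the factorization.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)   # z-update: prox of the l1 term
        u = u + x - z                          # scaled dual update
    return z
```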


SIAM Review | 1996

Semidefinite programming

Lieven Vandenberghe; Stephen P. Boyd

In semidefinite programming, one minimizes a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite. Such a constraint is nonlinear and nonsmooth, but convex, so semidefinite programs are convex optimization problems. Semidefinite programming unifies several standard problems (e.g., linear and quadratic programming) and finds many applications in engineering and combinatorial optimization. Although semidefinite programs are much more general than linear programs, they are not much harder to solve. Most interior-point methods for linear programming have been generalized to semidefinite programs. As in linear programming, these methods have polynomial worst-case complexity and perform very well in practice. This paper gives a survey of the theory and applications of semidefinite programs and an introduction to primal-dual interior-point methods for their solution.
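
For readers who want to try a semidefinite program directly, the following sketch states a small SDP in CVXPY, a modeling tool in the same lineage as the authors' later software but not something described in this paper; the cost matrix and trace normalization are arbitrary illustrative choices.

```python
import cvxpy as cp
import numpy as np

# A small SDP: minimize a linear function of a symmetric matrix variable
# subject to an affine constraint and X positive semidefinite.
n = 3
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                      # symmetric cost matrix (placeholder data)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()                           # handled by an interior-point SDP solver
print(prob.value)                      # here: the minimum eigenvalue of C
```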


Linear Algebra and its Applications | 1998

Applications of second-order cone programming

Miguel Sousa Lobo; Lieven Vandenberghe; Stephen P. Boyd; Hervé Lebret

In a second-order cone program (SOCP) a linear function is minimized over the intersection of an affine set and the product of second-order (quadratic) cones. SOCPs are nonlinear convex problems that include linear and (convex) quadratic programs as special cases, but are less general than semidefinite programs (SDPs). Several efficient primal-dual interior-point methods for SOCP have been developed in the last few years. After reviewing the basic theory of SOCPs, we describe general families of problems that can be recast as SOCPs. These include robust linear programming and robust least-squares problems, problems involving sums or maxima of norms, or with convex hyperbolic constraints. We discuss a variety of engineering applications, such as filter design, antenna array weight design, truss design, and grasping force optimization in robotics. We describe an efficient primal-dual interior-point method for solving SOCPs, which shares many of the features of primal-dual interior-point methods for linear programming.
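
One of the SOCP-representable families mentioned above, a sum of Euclidean norms, can be stated in a few lines with a generic modeling tool; this CVXPY sketch is illustrative and the data are random placeholders.

```python
import cvxpy as cp
import numpy as np

# Minimizing a sum of Euclidean norms, one of the SOCP-representable
# families surveyed in the paper; the data here are random placeholders.
rng = np.random.default_rng(0)
A1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
A2, b2 = rng.standard_normal((4, 3)), rng.standard_normal(4)

x = cp.Variable(3)
objective = cp.Minimize(cp.norm(A1 @ x - b1, 2) + cp.norm(A2 @ x - b2, 2))
cp.Problem(objective).solve()          # solved internally as an SOCP
```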


Lecture Notes in Control and Information Sciences | 2008

Graph Implementations for Nonsmooth Convex Programs

Michael C. Grant; Stephen P. Boyd

We describe graph implementations, a generic method for representing a convex function via its epigraph, described in a disciplined convex programming framework. This simple and natural idea allows a very wide variety of smooth and nonsmooth convex programs to be easily specified and efficiently solved, using interior-point methods for smooth or cone convex programs.
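
The graph-implementation idea is what powers the authors' CVX system; the sketch below uses CVXPY, a later Python relative, to show the user-facing effect: a nonsmooth objective is written directly and the modeling layer supplies the epigraph reformulation.

```python
import cvxpy as cp
import numpy as np

# A nonsmooth objective -- the maximum of affine functions -- written directly.
# The modeling layer replaces cp.max with its epigraph (t >= a_i^T x + b_i),
# producing a cone program an interior-point solver can handle.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((5, 3)), rng.standard_normal(5)

x = cp.Variable(3)
prob = cp.Problem(cp.Minimize(cp.max(A @ x + b)))
prob.solve()
```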


IEEE Journal of Selected Topics in Signal Processing | 2007

An Interior-Point Method for Large-Scale ℓ1-Regularized Least Squares

Seung-Jean Kim; Kwangmoo Koh; Michael Lustig; Stephen P. Boyd; Dimitry Gorinevsky

Recently, a lot of attention has been paid to ℓ1-regularization-based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as ℓ1-regularized least-squares programs (LSPs), which can be reformulated as convex quadratic programs, and then solved by several standard methods such as interior-point methods, at least for small and medium size problems. In this paper, we describe a specialized interior-point method for solving large-scale ℓ1-regularized LSPs that uses the preconditioned conjugate gradients algorithm to compute the search direction. The interior-point method can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC. It can efficiently solve large dense problems, that arise in sparse signal recovery with orthogonal transforms, by exploiting fast algorithms for these transforms. The method is illustrated on a magnetic resonance imaging data set.
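
The problem the paper targets can be stated compactly with a generic modeling tool, as in the CVXPY sketch below; note this is only the problem statement, whereas the paper's contribution is a specialized interior-point solver with PCG search directions that scales far beyond generic formulations.

```python
import cvxpy as cp
import numpy as np

# The l1-regularized least-squares problem in generic modeling-tool form.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 500))          # wide: more features than samples
x_true = np.zeros(500)
x_true[:5] = 1.0                             # sparse ground truth (placeholder)
b = A @ x_true + 0.01 * rng.standard_normal(100)

x = cp.Variable(500)
lam = 0.1
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.norm1(x)))
prob.solve()
```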


Foundations and Trends in Optimization | 2014

Proximal Algorithms

Neal Parikh; Stephen P. Boyd

This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
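
A minimal sketch of perhaps the simplest proximal algorithm, the proximal gradient method applied to ℓ1-regularized least squares; the step size must be at most 1/L, where L is the largest eigenvalue of A^T A, and the fixed iteration count is an illustrative choice.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1: elementwise soft-thresholding,
    # one of the closed-form proximal operators cataloged in the monograph.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1: alternate a gradient step on
    # the smooth term with a prox step on the nonsmooth term (ISTA).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = prox_l1(x - step * grad, step * lam)
    return x
```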


International Symposium on Information Theory | 2001

Iterative water-filling for Gaussian vector multiple-access channels

Wei Yu; Wonjong Rhee; Stephen P. Boyd; John M. Cioffi

This paper proposes an efficient numerical algorithm to compute the optimal input distribution that maximizes the sum capacity of a Gaussian multiple-access channel with vector inputs and a vector output. The numerical algorithm has an iterative water-filling interpretation. The algorithm converges from any starting point, and it reaches within 1/2 nats per user per output dimension from the sum capacity after just one iteration. The characterization of sum capacity also allows an upper bound and a lower bound for the entire capacity region to be derived.
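
The building block of the algorithm is a water-filling step. The sketch below shows the scalar-channel version with the water level found by bisection; the paper's setting is vector channels, where the analogous step is a matrix water-filling, so this is only a simplified illustration.

```python
import numpy as np

def water_fill(gains, power):
    # Allocate p_i = max(level - 1/gains_i, 0), with the water level chosen
    # by bisection so the allocations sum to the power budget.
    inv = 1.0 / gains
    lo, hi = inv.min(), inv.max() + power
    for _ in range(100):
        level = 0.5 * (lo + hi)
        if np.maximum(level - inv, 0.0).sum() > power:
            hi = level                 # water level too high
        else:
            lo = level
    return np.maximum(level - inv, 0.0)
```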


American Control Conference | 2001

A rank minimization heuristic with application to minimum order system approximation

Maryam Fazel; H. Hindi; Stephen P. Boyd

We describe a generalization of the trace heuristic that applies to general nonsymmetric, even non-square, matrices, and reduces to the trace heuristic when the matrix is positive semidefinite. The heuristic is to replace the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm. We show that this problem can be reduced to a semidefinite program, hence efficiently solved. To motivate the heuristic, we show that the dual spectral norm is the convex envelope of the rank on the set of matrices with norm less than one. We demonstrate the method on the problem of minimum-order system approximation.
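
A sketch of the heuristic in a modeling tool: minimize the nuclear norm (sum of singular values) subject to convex constraints. The matrix-completion constraints here are an illustrative stand-in for the paper's system-approximation setting.

```python
import cvxpy as cp
import numpy as np

# Replace the rank objective by the nuclear norm, subject to convex
# constraints; here the constraints fix a subset of entries (placeholder).
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))   # rank-2 data
W = (rng.random((8, 8)) < 0.6).astype(float)                    # observation mask

X = cp.Variable((8, 8))
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.multiply(W, X) == W * M])
prob.solve()
print(np.linalg.matrix_rank(X.value, tol=1e-6))   # often recovers rank 2
```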


IEEE Transactions on Circuits and Systems | 1985

Fading memory and the problem of approximating nonlinear operators with Volterra series

Stephen P. Boyd; Leon O. Chua

Using the notion of fading memory we prove very strong versions of two folk theorems. The first is that any time-invariant (TI) continuous nonlinear operator can be approximated by a Volterra series operator, and the second is that the approximating operator can be realized as a finite-dimensional linear dynamical system with a nonlinear readout map. While previous approximation results are valid over finite time intervals and for signals in compact sets, the approximations presented here hold for all time and for signals in useful (noncompact) sets. The discrete-time analog of the second theorem asserts that any TI operator with fading memory can be approximated (in our strong sense) by a nonlinear moving-average operator. Some further discussion of the notion of fading memory is given.
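
To make the discrete-time statement concrete, the sketch below implements a nonlinear moving-average operator, a static nonlinear readout applied to a finite window of past inputs; the window length, the fading weights, and the tanh readout are all illustrative choices, not quantities from the paper.

```python
import numpy as np

def nonlinear_moving_average(u, window, readout):
    # Apply a static nonlinear readout to each length-`window` slice of
    # past inputs (zero-padded before t = 0).
    u = np.concatenate([np.zeros(window - 1), u])
    return np.array([readout(u[t:t + window])
                     for t in range(len(u) - window + 1)])

# Hypothetical example: a tanh readout of exponentially faded past inputs.
fade = 0.7 ** np.arange(8)[::-1]       # illustrative fading-memory weights
u = np.random.default_rng(0).standard_normal(50)
y = nonlinear_moving_average(u, 8, lambda w: np.tanh(fade @ w))
```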


IEEE Transactions on Signal Processing | 2009

Sensor Selection via Convex Optimization

Siddharth Joshi; Stephen P. Boyd

We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m choose k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
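
A sketch of the relaxation underlying the heuristic, under the assumption of a linear measurement model: the 0/1 selection variables are relaxed to the interval [0, 1] and the log-determinant of the resulting information matrix is maximized; simple top-k rounding then gives a subset, while the relaxed optimum bounds the best achievable performance.

```python
import cvxpy as cp
import numpy as np

# Relaxed sensor selection: weights z in [0, 1] summing to k, maximizing the
# log-determinant of the information matrix sum_i z_i * a_i a_i^T
# (assuming a linear measurement model; data are placeholders).
rng = np.random.default_rng(0)
m, n, k = 30, 4, 8
a = rng.standard_normal((m, n))              # a[i]: i-th candidate sensor

z = cp.Variable(m)
info = sum(z[i] * np.outer(a[i], a[i]) for i in range(m))
prob = cp.Problem(cp.Maximize(cp.log_det(info)),
                  [cp.sum(z) == k, z >= 0, z <= 1])
prob.solve()
chosen = np.argsort(z.value)[-k:]            # simple rounding: top-k weights
```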
