Michel Baes
ETH Zurich
Publications
Featured research published by Michel Baes.
SIAM Journal on Optimization | 2013
Michel Baes; Michael Bürgisser; Arkadi Nemirovski
In this paper, we derive a randomized version of the mirror-prox method for solving structured matrix saddle-point problems, such as the maximal eigenvalue minimization problem. Deterministic first-order schemes, such as Nesterov's smoothing techniques or standard mirror-prox methods, require the exact computation of a matrix exponential at every iteration, limiting the size of the problems they can solve. Our method allows us to use stochastic approximations of matrix exponentials. We prove that our randomized scheme significantly decreases the complexity of its deterministic counterpart for large-scale matrix saddle-point problems. Numerical experiments illustrate and confirm our theoretical results.
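The idea of replacing exact matrix exponentials with stochastic approximations can be illustrated with a minimal sketch (not the paper's scheme): a Hutchinson-type randomized estimate of Tr(exp(A)) that uses only matrix-vector products with a truncated Taylor series, so exp(A) is never formed explicitly. The function names and the truncation order are illustrative choices, not taken from the paper.

```python
import numpy as np

def expm_action(A, v, k=20):
    """Apply exp(A) to v via a truncated Taylor series: avoids forming exp(A)."""
    term, out = v.copy(), v.copy()
    for j in range(1, k + 1):
        term = A @ term / j
        out = out + term
    return out

def hutchinson_trace_expm(A, samples=100, seed=None):
    """Randomized (Hutchinson) estimate of Tr(exp(A)) using Rademacher probe
    vectors and matrix-vector products only."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    est = 0.0
    for _ in range(samples):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ expm_action(A, z)
    return est / samples
```

For a diagonal test matrix every Rademacher probe returns the exact trace, so the estimator's accuracy is then limited only by the Taylor truncation; for general symmetric matrices the estimate is unbiased with variance decreasing in the number of samples.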
arXiv: Optimization and Control | 2013
Michel Baes; Timm Oertel; Christian Wagner; Robert Weismantel
In this paper, we address the problem of minimizing a convex function f over a convex set, with the extra constraint that some variables must be integer. This problem, even when f is a piecewise linear function, is NP-hard. We study an algorithmic approach to this problem, postponing its hardness to the realization of an oracle. If this oracle can be realized in polynomial time, then the problem can be solved in polynomial time as well. For problems with two integer variables, we show with a novel geometric construction how to implement the oracle efficiently, that is, in \(\mathcal {O}(\ln(B))\) approximate minimizations of f over the continuous variables, where B is a known bound on the absolute value of the integer variables. Our algorithm can be adapted to find the second best point of a purely integer convex optimization problem in two dimensions, and more generally its k-th best point. This observation allows us to formulate a finite-time algorithm for mixed-integer convex optimization.
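The O(ln(B)) oracle complexity for a single integer variable can be sketched with a standard ternary search over the integer range (the paper's two-variable case relies on a geometric construction not reproduced here; the function name is illustrative):

```python
def argmin_integer_convex(f, B):
    """Minimize a convex function f over the integers {-B, ..., B} using
    ternary search: O(log B) evaluations of f."""
    lo, hi = -B, B
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) <= f(m2):
            hi = m2   # by convexity, a minimizer lies in [lo, m2]
        else:
            lo = m1   # by convexity, a minimizer lies in [m1, hi]
    # at most three candidates remain: check them directly
    return min(range(lo, hi + 1), key=f)
```

Here each "evaluation" stands in for one approximate minimization of f over the continuous variables at a fixed integer value, matching the oracle cost model described above.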
Optimization Methods & Software | 2014
Michel Baes; Michael Bürgisser
We introduce an optimal first-order method that allows an easy and cheap evaluation of the local Lipschitz constant of the objective's gradient. This constant should ideally be chosen as small as possible at every iteration, while still yielding a valid upper bound on the value of the objective function. In the previously existing variants of optimal first-order methods, this upper-bound inequality was constructed from points computed during the current iteration. It was thus not possible to select the optimal value for this Lipschitz constant at the beginning of the iteration. In our variant, the upper-bound inequality is constructed from points available before the current iteration, offering us the possibility to set the Lipschitz constant to its optimal value at once. This procedure, albeit efficient in practice, has a higher worst-case complexity than standard optimal first-order methods. We propose an alternative strategy that retains the practical efficiency of this procedure while having an optimal worst-case complexity. Our generic scheme can be adapted for smoothing techniques. We perform numerical experiments on large-scale eigenvalue minimization problems, reducing computation times by two to three orders of magnitude over standard optimal methods for the largest problems we considered.
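The role of the upper-bound inequality f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖y − x‖² can be seen in a plain (non-optimal) gradient method with a standard backtracking estimate of the local Lipschitz constant; this is a textbook baseline, not the adaptive scheme of the paper, and the function name is illustrative:

```python
import numpy as np

def gd_backtracking(f, grad, x0, L0=1.0, iters=100):
    """Gradient descent where the Lipschitz estimate L is adapted at each
    iteration: halved optimistically, then doubled until the quadratic
    upper-bound inequality
        f(y) <= f(x) + <g, y - x> + (L/2) ||y - x||^2
    holds at the candidate point y = x - g / L."""
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(iters):
        g = grad(x)
        L = max(L / 2.0, 1e-12)   # try a smaller (cheaper) constant first
        while True:
            y = x - g / L
            if f(y) <= f(x) + g @ (y - x) + 0.5 * L * np.sum((y - x) ** 2):
                break             # inequality holds: L is locally valid
            L *= 2.0              # inequality violated: increase L
        x = y
    return x
```

Halving L before each backtracking loop is what lets the step size track the local, rather than the global, curvature of the objective.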
International Conference on Acoustics, Speech, and Signal Processing | 2012
Graeme Pope; Christoph Studer; Michel Baes
This paper deals with the recovery of signals that admit an approximately sparse representation in some known (possibly over-complete) dictionary and are corrupted by additive noise. In particular, we consider additive measurement noise with bounded ℓp-norm for p ≥ 2, and we minimize the ℓq quasi-norm (with q ∈ (0, 1]) of the signal vector. We develop coherence-based recovery guarantees for which stable recovery via generalized basis-pursuit de-quantizing (BPDQp,q) is possible. We finally show that, depending on the measurement-noise model and the choice of the ℓp-norm used in the constraint, (BPDQp,q) significantly outperforms classical basis pursuit de-noising (BPDN).
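The classical BPDN baseline that the paper compares against can be sketched in its Lagrangian (LASSO) form, solved here by the standard ISTA iteration; the generalized ℓp/ℓq formulation of BPDQ is not reproduced, and the function name is illustrative:

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """ISTA for the LASSO form of basis pursuit de-noising:
        minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1
    Each iteration is a gradient step on the smooth term followed by
    soft-thresholding, the proximal operator of the l1 norm."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - (A.T @ (A @ x - y)) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

With A equal to the identity, the iteration reduces to a single soft-thresholding of y, which makes the sparsifying effect of the ℓ1 penalty easy to verify by hand.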
Mathematical Programming | 2012
Michel Baes; Alberto Del Pia; Yurii Nesterov; Shmuel Onn; Robert Weismantel
This paper is about the minimization of Lipschitz-continuous and strongly convex functions over integer points in polytopes. Our results are related to the rate of convergence of a black-box algorithm that iteratively solves special quadratic integer problems with a constant approximation factor. Despite the generality of the underlying problem, we prove that we can find efficiently, with respect to our assumptions regarding the encoding of the problem, a feasible solution whose objective function value is close to the optimal value. We also show that this proximity result is the best possible up to a factor polynomial in the encoding length of the problem.
Mathematical Programming | 2016
Michel Baes; Timm Oertel; Robert Weismantel
We extend in two ways the standard Karush–Kuhn–Tucker optimality conditions to problems with a convex objective, convex functional constraints, and the extra requirement that some of the variables must be integral. While the standard Karush–Kuhn–Tucker conditions involve separating hyperplanes, our extension is based on mixed-integer-free polyhedra. Our optimality conditions allow us to define an exact dual of our original mixed-integer convex problem.
Mathematical Methods of Operations Research | 2013
Michel Baes; Michael Bürgisser
We show that the Hedge algorithm, a method that is widely used in Machine Learning, can be interpreted as a particular instance of the Dual Averaging schemes recently introduced by Nesterov for regret minimization. Based on this interpretation, we develop three variants of the Hedge algorithm: one in the form of the original method but with optimal parameters, one that requires less a priori information, and one that is better adapted to the context of the Hedge algorithm. All our modified methods have convergence guarantees that are at least as good as those of the vanilla method. In numerical experiments, our methods significantly outperform the original scheme.
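For reference, the vanilla Hedge algorithm mentioned above is a simple multiplicative-weights update over experts (this is the textbook method, not any of the paper's modified variants; the function name is illustrative):

```python
import numpy as np

def hedge(losses, eta):
    """Vanilla Hedge: play the normalized weights each round, then penalize
    every expert multiplicatively by exp(-eta * loss).
    `losses` is a (T, n) array of per-round, per-expert losses in [0, 1].
    Returns the algorithm's cumulative expected loss and final weights."""
    T, n = losses.shape
    w = np.ones(n) / n
    total = 0.0
    for t in range(T):
        p = w / w.sum()                 # distribution over experts
        total += p @ losses[t]          # expected loss this round
        w *= np.exp(-eta * losses[t])   # multiplicative update
    return total, w / w.sum()
```

When one expert consistently outperforms the rest, the weights concentrate on it exponentially fast, which is the behaviour the regret bounds quantify.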
Optimization | 2015
Michel Baes; Huiling Lin
We first discuss some properties of the solution set of a monotone symmetric cone linear complementarity problem (SCLCP), and then consider the limiting behaviour of a sequence of strictly feasible solutions within a wide neighbourhood of the central trajectory for the monotone SCLCP. Under strict complementarity and Slater's condition, we provide four different characterizations of a Lipschitzian error bound for the monotone SCLCP in general Euclidean Jordan algebras. Thanks to the observation that a pair of primal-dual convex quadratic symmetric cone programming (CQSCP) problems can be exactly formulated as a monotone SCLCP, we obtain the same error bound results for CQSCP as a by-product.
International Congress on Mathematical Software | 2010
David Adjiashvili; Michel Baes; Philipp Rostalski
Determining whether an ellipsoid contains the intersection of many concentric ellipsoids is an NP-hard problem. In this paper, we study various convex relaxations of this problem, namely two semidefinite relaxations and a second-order cone relaxation. We establish some links between these relaxations and perform extensive numerical testings to verify their exactness, their computational load and their stability. As an application of this problem, we study an issue emerging from an aircraft wing design problem: how can we simplify the description of a feasible loads region?
Archive | 2009
Michel Baes