
Publications

Featured research published by Andrei Patrascu.


Computational Optimization and Applications | 2014

A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints

Ion Necoara; Andrei Patrascu

In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has Lipschitz continuous gradient, then we prove that our method obtains an ϵ-optimal solution in $\mathcal{O}(n^{2}/\epsilon)$ iterations, where n is the number of blocks. For the class of problems with cheap coordinate derivatives we show that the new method is faster than methods based on full-gradient information. Analysis for the rate of convergence in probability is also provided. For strongly convex functions our method converges linearly. Extensive numerical tests confirm that on very large problems, our method is much more numerically efficient than methods based on full gradient information.
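
To make the flavor of the method concrete, here is a minimal sketch of the pairwise coordinate step, under simplifying assumptions not made in the paper: the smooth part is a strongly convex quadratic, the composite term is absent, and the coupling constraint is sum(x) = b with unit weights. All function names and constants are illustrative.

```python
import numpy as np

# Moving along e_i - e_j keeps the coupling constraint sum(x) = b satisfied.
# For a quadratic objective the minimizing step along that direction is
# available in closed form, which keeps the sketch short.

def rcd_linearly_coupled(Q, c, x0, iters=20_000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()                          # x0 must already satisfy sum(x) = b
    n = len(x)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        g = Q @ x - c                      # gradient of f(x) = 0.5 x'Qx - c'x
        curv = Q[i, i] + Q[j, j] - 2.0 * Q[i, j]  # curvature along e_i - e_j
        t = -(g[i] - g[j]) / curv          # exact minimizer along the direction
        x[i] += t
        x[j] -= t                          # sum(x) is unchanged
    return x

# usage: minimize 0.5 x'Qx - c'x subject to sum(x) = 1
rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                # positive definite
c = np.ones(n)
x = rcd_linearly_coupled(Q, c, np.full(n, 1.0 / n))
print(abs(x.sum() - 1.0) < 1e-10)          # constraint preserved to round-off
```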


Journal of Global Optimization | 2015

Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization

Andrei Patrascu; Ion Necoara

In this paper we analyze several new methods for solving nonconvex optimization problems with the objective function consisting of a sum of two terms: one is nonconvex and smooth, and another is convex but simple and its structure is known. Further, we consider both cases: unconstrained and linearly constrained nonconvex problems. For optimization problems of the above structure, we propose random coordinate descent algorithms and analyze their convergence properties. For the general case, when the objective function is nonconvex and composite we prove asymptotic convergence for the sequences generated by our algorithms to stationary points and sublinear rate of convergence in expectation for some optimality measure. Additionally, if the objective function satisfies an error bound condition we derive a local linear rate of convergence for the expected values of the objective function. We also present extensive numerical experiments for evaluating the performance of our algorithms in comparison with state-of-the-art methods.
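
For intuition, a coordinate step in this composite setting combines a partial gradient step with the proximal operator of the simple convex term. The sketch below assumes an ℓ1 term (so the prox is soft-thresholding) and demonstrates it on a least-squares f; that is a convex special case, chosen only for compactness, and all names and iteration counts are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * |.|
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def random_prox_cd(grad_i, L, x0, lam, iters=50_000, seed=0):
    """Random proximal coordinate descent for f(x) + lam * ||x||_1, where f is
    smooth (possibly nonconvex) with per-coordinate Lipschitz constants L[i]
    and grad_i(x, i) returns its i-th partial derivative."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = len(x)
    for _ in range(iters):
        i = rng.integers(n)
        # coordinate gradient step followed by the prox of the simple term
        x[i] = soft_threshold(x[i] - grad_i(x, i) / L[i], lam / L[i])
    return x

# demo: f(x) = 0.5 * ||A x - b||^2 (a convex special case of the setting above)
rng = np.random.default_rng(2)
A, b = rng.standard_normal((100, 50)), rng.standard_normal(100)
grad_i = lambda x, i: A[:, i] @ (A @ x - b)
L = (A ** 2).sum(axis=0)                   # ||A_i||^2, the coordinate constants
x = random_prox_cd(grad_i, L, np.zeros(50), lam=0.5)
```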


Optimization Methods & Software | 2016

Iteration complexity analysis of dual first-order methods for conic convex programming

Ion Necoara; Andrei Patrascu

In this paper we provide a detailed analysis of the iteration complexity of dual first-order methods for solving conic convex problems. When it is difficult to project on the primal feasible set described by conic and convex constraints, we use the Lagrangian relaxation to handle the conic constraints and then we apply dual first-order algorithms for solving the corresponding dual problem. We give convergence analysis for dual first-order algorithms (dual gradient and fast gradient algorithms): we provide sublinear or linear estimates on the primal suboptimality and feasibility violation of the generated approximate primal solutions. Our analysis relies on the Lipschitz property of the gradient of the dual function or an error bound property of the dual. Furthermore, the iteration complexity analysis is based on two types of approximate primal solutions: the last primal iterate or an average primal sequence.
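
As a concrete instance of the dual approach, the sketch below runs the plain dual gradient method on an equality-constrained strongly convex quadratic, where the inner minimizer has a closed form, and tracks both approximate primal solutions mentioned above: the last primal iterate and the average primal sequence. The fast gradient variant and genuinely conic constraints are omitted, and all constants are illustrative.

```python
import numpy as np

def dual_gradient(Q, c, A, b, iters=5_000):
    """Dual gradient method for min 0.5 x'Qx - c'x subject to A x = b.
    Returns the last primal iterate and the running primal average."""
    Qinv = np.linalg.inv(Q)
    lam = np.zeros(A.shape[0])
    # the dual gradient A x(lam) - b is Lipschitz with constant ||A||^2 / mu
    Ld = np.linalg.norm(A, 2) ** 2 / np.linalg.eigvalsh(Q)[0]
    x_avg = np.zeros(Q.shape[0])
    for k in range(1, iters + 1):
        x = Qinv @ (c - A.T @ lam)         # inner minimizer, closed form here
        lam += (A @ x - b) / Ld            # ascent step on the dual function
        x_avg += (x - x_avg) / k           # average primal sequence
    return x, x_avg

# usage: feasibility violation shrinks for both primal candidates
rng = np.random.default_rng(3)
n, m = 30, 5
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)
A, b, c = rng.standard_normal((m, n)), rng.standard_normal(m), rng.standard_normal(n)
x_last, x_avg = dual_gradient(Q, c, A, b)
print(np.linalg.norm(A @ x_last - b), np.linalg.norm(A @ x_avg - b))
```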


IEEE Transactions on Automatic Control | 2015

Random Coordinate Descent Methods for ℓ0 Regularized Convex Optimization

Andrei Patrascu; Ion Necoara

In this paper, we study the minimization of ℓ0 regularized optimization problems, where the objective function is composed of a smooth convex function and the ℓ0 regularization. We analyze optimality conditions for this nonconvex problem, which lead to the separation of local minima into two restricted classes that are nested and lie around the set of global minima. Based on these restricted classes of local minima, we devise two new random coordinate descent type methods for solving these problems. In particular, we analyze the convergence properties of an iterative hard thresholding based random coordinate descent algorithm, for which we prove that any limit point is a local minimum from the first restricted class of local minimizers. Then, we analyze the convergence of a random proximal alternating minimization method and show that any limit point of this algorithm is a local minimum from the second restricted class of local minimizers. We also provide numerical experiments which show the superior behavior of our methods in comparison with the usual iterative hard thresholding algorithm.
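
The hard-thresholding coordinate step admits a compact description: take a coordinate gradient step and keep the result only if the decrease it brings pays for the ℓ0 penalty. The sketch below is one natural reading of such a step, not a reproduction of the paper's two algorithms or their restricted-class guarantees; names and constants are illustrative.

```python
import numpy as np

def hard_threshold(u, lam, L):
    # prox of (lam / L) * |.|_0: keep u only if the decrease pays the penalty,
    # i.e. if (L / 2) * u^2 exceeds lam
    return u if u * u > 2.0 * lam / L else 0.0

def random_iht_cd(grad_i, L, x0, lam, iters=50_000, seed=0):
    """Random coordinate descent with per-coordinate hard thresholding for
    min f(x) + lam * ||x||_0, f smooth convex with coordinate constants L."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = len(x)
    for _ in range(iters):
        i = rng.integers(n)
        u = x[i] - grad_i(x, i) / L[i]     # plain coordinate gradient step
        x[i] = hard_threshold(u, lam, L[i])
    return x
```

Reusing a setup like the least-squares demo above, the only change from the ℓ1 sketch is the thresholding rule: coordinates whose gradient step cannot offset the penalty λ are set exactly to zero, which is what makes the limit points sparse.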


Optimization Methods & Software | 2017

Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming

Ion Necoara; Andrei Patrascu; François Glineur

In this paper we present a complete iteration complexity analysis of inexact first-order Lagrangian and penalty methods for solving cone-constrained convex problems that may or may not have optimal Lagrange multipliers that close the duality gap. We first assume the existence of optimal Lagrange multipliers and study primal–dual first-order methods based on inexact information and augmented Lagrangian smoothing or Nesterov-type smoothing. For inexact (fast) gradient augmented Lagrangian methods, we derive an overall computational complexity, measured in projections onto a simple primal set, needed to attain an ε-optimal solution of the conic convex problem. For the inexact fast gradient method combined with Nesterov-type smoothing, we derive the corresponding computational complexity in projections onto the same set. Then, we assume that optimal Lagrange multipliers might not exist for the cone-constrained convex problem, and analyse the fast gradient method for solving penalty reformulations of the problem. For the fast gradient method combined with the penalty framework, we also derive an overall computational complexity, in projections onto a simple primal set, needed to attain an ε-optimal solution for the original problem.
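
The augmented Lagrangian branch of this analysis can be caricatured as an outer multiplier update wrapped around an inner smooth problem that is deliberately solved only approximately. In the sketch below, equality constraints stand in for general conic ones and plain gradient descent stands in for the (fast) gradient inner solver; rho, the tolerances, and the iteration caps are illustrative.

```python
import numpy as np

def inexact_augmented_lagrangian(f_grad, Lf, A, b, x0, rho=10.0,
                                 outer=50, inner_tol=1e-3, inner_max=10_000):
    """Inexact AL for min f(x) s.t. A x = b: each inner problem is solved only
    until the gradient of the augmented Lagrangian drops below inner_tol."""
    x, lam = x0.copy(), np.zeros(A.shape[0])
    # the inner objective has (Lf + rho * ||A||^2)-Lipschitz gradient
    step = 1.0 / (Lf + rho * np.linalg.norm(A, 2) ** 2)
    for _ in range(outer):
        for _ in range(inner_max):
            g = f_grad(x) + A.T @ (lam + rho * (A @ x - b))
            if np.linalg.norm(g) <= inner_tol:
                break                      # an inexact inner solution suffices
            x -= step * g
        lam += rho * (A @ x - b)           # multiplier update
    return x, lam

# demo: projection-like problem min 0.5 ||x - c||^2 subject to A x = b
rng = np.random.default_rng(4)
A, b, c = rng.standard_normal((4, 12)), rng.standard_normal(4), rng.standard_normal(12)
x, lam = inexact_augmented_lagrangian(lambda x: x - c, 1.0, A, b, np.zeros(12))
print(np.linalg.norm(A @ x - b))           # small feasibility violation
```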


Optimization Letters | 2017

Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization

Andrei Patrascu; Ion Necoara; Quoc Tran-Dinh

In this paper we study two inexact fast augmented Lagrangian algorithms for solving linearly constrained convex optimization problems. Our methods rely on a combination of the excessive-gap-like smoothing technique introduced in Nesterov (SIAM J Optim 16(1):235–249, 2005) and the general inexact oracle framework studied in Devolder (Math Program 146:37–75, 2014). We develop and analyze two augmented Lagrangian based algorithmic instances with constant and adaptive smoothness parameters, and derive a total computational complexity estimate in terms of projections on a simple primal feasible set for each algorithm. For the constant parameter algorithm we obtain an overall computational complexity of order $\mathcal{O}(1/\epsilon^{5/4})$.
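
Schematically, the "fast" and "adaptive" ingredients amount to a Nesterov-type momentum step on the multipliers and a smoothness (penalty) parameter tightened across outer iterations together with the inner accuracy. The loop below is only a schematic of that combination, not the excessive-gap scheme analyzed in the paper; every schedule and constant in it is an illustrative choice.

```python
import numpy as np

def fast_al_adaptive(f_grad, Lf, A, b, x0, outer=100, rho0=1.0):
    """Schematic fast augmented Lagrangian loop: momentum on the multipliers,
    a penalty parameter that grows with k, and inner problems solved to an
    accuracy that tightens at the same time."""
    x = x0.copy()
    lam = lam_prev = np.zeros(A.shape[0])
    nA2 = np.linalg.norm(A, 2) ** 2
    for k in range(1, outer + 1):
        rho = rho0 * k                     # adaptive parameter schedule
        tol = 1.0 / k ** 2                 # inner accuracy tightens with k
        y = lam + (k - 1.0) / (k + 2.0) * (lam - lam_prev)  # momentum step
        step = 1.0 / (Lf + rho * nA2)      # inner problem's Lipschitz constant
        for _ in range(20_000):            # inexact inner solve in x
            g = f_grad(x) + A.T @ (y + rho * (A @ x - b))
            if np.linalg.norm(g) <= tol:
                break
            x -= step * g
        lam_prev, lam = lam, y + rho * (A @ x - b)
    return x, lam
```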


Mediterranean Conference on Control and Automation | 2016

Complexity certifications of inexact projection primal gradient method for convex problems: Application to embedded MPC

Andrei Patrascu; Ion Necoara



International Conference on System Theory, Control and Computing | 2015

Implementable fast augmented Lagrangian optimization algorithm with application in embedded MPC

Andrei Patrascu; Ion Necoara; Marian Barbu; Sergiu Caraman



Conference on Decision and Control | 2015

Ion Necoara; Andrei Patrascu


International Conference on System Theory, Control and Computing | 2017

Ion Necoara; Andrei Patrascu; Dragos Clipici; Marian Barbu


Collaboration

Dive into Andrei Patrascu's collaborations.

Top Co-Authors

Ion Necoara
Politehnica University of Bucharest

Dragos Clipici
Politehnica University of Bucharest

Angelia Nedic
Arizona State University

Marian Barbu
Autonomous University of Barcelona

François Glineur
Université catholique de Louvain

Quoc Tran-Dinh
University of North Carolina at Chapel Hill

Panagiotis Patrinos
Katholieke Universiteit Leuven

Rolf Findeisen
Otto-von-Guericke University Magdeburg

Florin Stoican
Norwegian University of Science and Technology