Publication


Featured research published by Ion Necoara.


IEEE Transactions on Automatic Control | 2008

Application of a Smoothing Technique to Decomposition in Convex Optimization

Ion Necoara; Johan A. K. Suykens

Dual decomposition is a powerful technique for deriving decomposition schemes for convex optimization problems with separable structure. Although the augmented Lagrangian is computationally more stable than the ordinary Lagrangian, the "prox-term" destroys the separability of the given problem. In this technical note we use another approach to obtain a smooth Lagrangian, based on a smoothing technique developed by Nesterov, which preserves separability of the problem. With this approach we derive a new decomposition method, called the "proximal center algorithm," which, from the viewpoint of efficiency estimates, improves the bounds on the number of iterations of the classical dual gradient scheme by an order of magnitude.
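The smoothing idea can be illustrated on a toy problem. The sketch below applies plain dual gradient ascent to a two-block separable problem whose subproblems carry an added prox-term; the quadratic objectives, smoothing weight mu, and step size are illustrative assumptions, not taken from the paper (which uses Nesterov's fast gradient scheme on the smoothed dual).

```python
# Toy separable problem: minimize (x1 - 3)^2 + (x2 + 1)^2  subject to x1 = x2.
# Each smoothed subproblem adds a prox-term (mu/2) * xi^2, which makes the
# dual function smooth, so plain dual gradient ascent converges quickly.
mu = 0.01      # smoothing parameter (illustrative choice)
step = 1.0     # dual step size (safe for this toy problem)
lam = 0.0      # Lagrange multiplier for the coupling constraint x1 - x2 = 0

for _ in range(200):
    # Closed-form minimizers of the two smoothed subproblems:
    #   min (x1 - 3)^2 + lam * x1 + (mu/2) * x1^2  ->  x1 = (6 - lam) / (2 + mu)
    #   min (x2 + 1)^2 - lam * x2 + (mu/2) * x2^2  ->  x2 = (lam - 2) / (2 + mu)
    x1 = (6.0 - lam) / (2.0 + mu)
    x2 = (lam - 2.0) / (2.0 + mu)
    lam += step * (x1 - x2)   # dual gradient ascent on the coupling constraint

print(x1, x2, lam)  # both xi approach 1 and lam approaches 4 as mu -> 0
```

Because the subproblems are strongly convex after smoothing, the dual gradient x1 - x2 is Lipschitz, and the multiplier update is stable without any step-size tuning; the smoothing weight mu trades off dual smoothness against bias in the recovered primal point.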


Computational Optimization and Applications | 2014

A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints

Ion Necoara; Andrei Patrascu

In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has Lipschitz continuous gradient, then we prove that our method obtains an ϵ-optimal solution in O(n^2/ϵ) iterations, where n is the number of blocks. For the class of problems with cheap coordinate derivatives we show that the new method is faster than methods based on full-gradient information. Analysis for the rate of convergence in probability is also provided. For strongly convex functions our method converges linearly. Extensive numerical tests confirm that on very large problems, our method is much more numerically efficient than methods based on full-gradient information.


SIAM Journal on Control and Optimization | 2014

Computational Complexity of Inexact Gradient Augmented Lagrangian Methods: Application to Constrained MPC

Valentin Nedelcu; Ion Necoara; Quoc Tran-Dinh

We study the computational complexity certification of inexact gradient augmented Lagrangian methods for solving convex optimization problems with complicated constraints. We solve the augmented Lagrangian dual problem that arises from the relaxation of complicating constraints with gradient and fast gradient methods based on inexact first-order information. Moreover, since the exact solution of the augmented Lagrangian primal problem is hard to compute in practice, we solve this problem up to some given inner accuracy. We derive relations between the inner and the outer accuracy of the primal and dual problems and we give a full convergence rate analysis for both gradient and fast gradient algorithms. We provide estimates on the primal and dual suboptimality and on primal feasibility violation of the generated approximate primal and dual solutions. Our analysis relies on the Lipschitz property of the dual function and on inexact dual gradients. We also discuss implementation aspects of the proposed algorithms.


Conference on Decision and Control | 2008

Application of the proximal center decomposition method to distributed model predictive control

Ion Necoara; Dang Doan; Johan A. K. Suykens

In this paper we present a dual-based decomposition method, called here the proximal center method, to solve distributed model predictive control (MPC) problems for coupled dynamical systems but with decoupled cost and constraints. We show that the centralized MPC problem can be recast as a separable convex problem to which our method can be applied. In (I. Necoara et al., 2008) we have provided convergence proofs and efficiency estimates for the proximal center method, which improves by one order of magnitude the bounds on the number of iterations of the classical dual subgradient method. The new method is suitable for application to distributed MPC since it is highly parallelizable: each subsystem uses local information, and the coordination between the local MPC controllers is performed via the Lagrange multipliers corresponding to the coupled dynamics. Simulation results are also included.


IEEE Transactions on Automatic Control | 2014

Rate Analysis of Inexact Dual First-Order Methods: Application to Dual Decomposition

Ion Necoara; Valentin Nedelcu

We propose and analyze two dual methods based on inexact gradient information and averaging that generate approximate primal solutions for smooth convex problems. The complicating constraints are moved into the cost using the Lagrange multipliers. The dual problem is solved by inexact first-order methods based on approximate gradients, for which we prove sublinear rate of convergence. In particular, we provide a complete rate analysis and estimates on the primal feasibility violation and primal and dual suboptimality of the generated approximate primal and dual solutions. Moreover, we solve approximately the inner problems with a linearly convergent parallel coordinate descent algorithm. Our analysis relies on the Lipschitz property of the dual function and inexact dual gradients. Further, we combine these methods with dual decomposition and constraint tightening and apply this framework to linear model predictive control, obtaining a suboptimal and feasible control scheme.


Journal of Global Optimization | 2015

Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization

Andrei Patrascu; Ion Necoara

In this paper we analyze several new methods for solving nonconvex optimization problems with an objective function consisting of a sum of two terms: one is nonconvex and smooth, and the other is convex but simple, with known structure. Further, we consider both cases: unconstrained and linearly constrained nonconvex problems. For optimization problems of the above structure, we propose random coordinate descent algorithms and analyze their convergence properties. For the general case, when the objective function is nonconvex and composite, we prove asymptotic convergence of the sequences generated by our algorithms to stationary points and sublinear rate of convergence in expectation for some optimality measure. Additionally, if the objective function satisfies an error bound condition, we derive a local linear rate of convergence for the expected values of the objective function. We also present extensive numerical experiments evaluating the performance of our algorithms in comparison with state-of-the-art methods.


IEEE Transactions on Automatic Control | 2013

Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks

Ion Necoara

In this paper, we develop randomized block-coordinate descent methods for minimizing multi-agent convex optimization problems with linearly coupled constraints over networks and prove that they obtain in expectation an ϵ-accurate solution in at most O(1/(λ2(Q)ϵ)) iterations, where λ2(Q) is the second smallest eigenvalue of a matrix Q that is defined in terms of the probabilities and the number of blocks. However, the computational complexity per iteration of our methods is much lower than that of methods based on full-gradient information, and each iteration can be computed in a completely distributed way. We focus on how to choose the probabilities so that these randomized algorithms converge as fast as possible, which leads to solving a sparse SDP. Analysis for the rate of convergence in probability is also provided. For strongly convex functions our distributed algorithms converge linearly. We also extend the main algorithm to a more general random coordinate descent method and to problems with more general linearly coupled constraints. Preliminary numerical tests confirm that on very large optimization problems our method is much more numerically efficient than methods based on full gradient.


IEEE Transactions on Signal Processing | 2010

Improved Dual Decomposition Based Optimization for DSL Dynamic Spectrum Management

Paschalis Tsiaflakis; Ion Necoara; Johan A. K. Suykens; Marc Moonen

Dynamic spectrum management (DSM) has been recognized as a key technology for significantly improving the performance of digital subscriber line (DSL) broadband access networks. The basic concept of DSM is to coordinate transmission over multiple DSL lines so as to mitigate the impact of crosstalk interference amongst them. Many algorithms have been proposed to tackle the nonconvex optimization problems appearing in DSM, many of them relying on a standard subgradient-based dual decomposition approach. In practice, however, this approach is often found to lead to extremely slow convergence, or even no convergence at all, one of the reasons being the very difficult tuning of the step-size parameters. In this paper we propose a novel improved dual decomposition approach inspired by recent advances in mathematical programming. It uses a smoothing technique for the Lagrangian combined with an optimal gradient-based scheme for updating the Lagrange multipliers. The step-size parameters are furthermore selected optimally, removing the need for a tuning strategy. With this approach we show how the convergence of current state-of-the-art DSM algorithms based on iterative convex approximations (SCALE and CA-DSB) can be improved by one order of magnitude. Furthermore, we apply the improved dual decomposition approach to other DSM algorithms (OSB, ISB, ASB, (MS)-DSB, and MIW) and propose further improvements to obtain fast and robust DSM algorithms. Finally, we demonstrate the effectiveness of the improved dual decomposition approach for a number of realistic multiuser DSL scenarios.


IEEE Transactions on Automatic Control | 2008

Every Continuous Nonlinear Control System Can be Obtained by Parametric Convex Programming

Michel Baes; Moritz Diehl; Ion Necoara


SIAM Journal on Optimization | 2013

An inexact perturbed path-following method for Lagrangian decomposition in large-scale separable convex optimization

Quoc Tran Dinh; Ion Necoara; Carlo Savorgnan; Moritz Diehl
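Several of the publications above study random (block-)coordinate descent. As a rough illustration of the basic iteration only (not the constrained or composite variants analyzed in these papers), here is a minimal sketch on a toy least-squares problem; the matrix, seed, and iteration count are illustrative assumptions.

```python
import random

# Toy least-squares problem: minimize f(x) = ||A x - b||^2.
# A and x_true are illustrative; b = A @ x_true, so the optimal value is 0.
A = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5],
     [0.5, 0.0, 1.0],
     [1.0, 1.0, 1.0]]
x_true = [1.0, -2.0, 3.0]
b = [sum(A[r][j] * x_true[j] for j in range(3)) for r in range(4)]

def residual(x):
    return [sum(A[r][j] * x[j] for j in range(3)) - b[r] for r in range(4)]

def f(x):
    return sum(r * r for r in residual(x))

# Per-coordinate Lipschitz constants L_i = 2 * ||A[:, i]||^2; a 1/L_i step
# along coordinate i exactly minimizes this quadratic in that coordinate.
L = [2.0 * sum(A[r][i] ** 2 for r in range(4)) for i in range(3)]

random.seed(0)
x = [0.0, 0.0, 0.0]
for _ in range(2000):
    i = random.randrange(3)                                   # sample a coordinate uniformly
    r = residual(x)
    g_i = 2.0 * sum(A[row][i] * r[row] for row in range(4))   # partial derivative
    x[i] -= g_i / L[i]                                        # 1/L_i coordinate step

print(f(x))  # objective value after 2000 random coordinate steps
```

Each step touches a single column of A, which is the source of the "cheap coordinate derivatives" advantage over full-gradient methods mentioned in the abstracts above.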

Collaboration


Dive into Ion Necoara's collaborations.

Top Co-Authors

Andrei Patrascu (Politehnica University of Bucharest)
Valentin Nedelcu (Politehnica University of Bucharest)
Dragos Clipici (Politehnica University of Bucharest)
Johan A. K. Suykens (Katholieke Universiteit Leuven)
Ioan Dumitrache (Politehnica University of Bucharest)
Tamás Keviczky (Delft University of Technology)
Morten Hovd (Norwegian University of Science and Technology)
François Glineur (Université catholique de Louvain)