
Publication


Featured research published by Fernando Agustin Pazos.


Neurocomputing | 2009

Control Liapunov function design of neural networks that solve convex optimization and variational inequality problems

Fernando Agustin Pazos; Amit Bhaya

This paper presents two neural networks to find the optimal point in convex optimization problems and variational inequality problems, respectively. The domain of the functions that define the problems is a convex set, which is determined by convex inequality constraints and affine equality constraints. The neural networks are based on gradient descent and exact penalization and the convergence analysis is based on a control Liapunov function analysis, since the dynamical system corresponding to each neural network may be viewed as a so-called variable structure closed loop control system.
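As an illustration of a gradient-plus-exact-penalty dynamical system of the general kind described in the abstract, the following is a minimal Euler-discretized sketch under simplifying assumptions (a single inequality constraint, a fixed penalty weight); it is not the paper's construction, and all names are illustrative:

```python
import numpy as np

def penalty_gradient_flow(grad_f, g, grad_g, x0, mu=10.0, dt=1e-3, steps=20000):
    """Euler discretization of the gradient flow
        x' = -grad f(x) - mu * grad max(0, g(x))
    for min f(x) subject to g(x) <= 0 (one inequality constraint).
    A sketch of an exact-penalty gradient network, not the paper's design."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # the penalty term is active only when the constraint is violated
        penalty = grad_g(x) if g(x) > 0 else np.zeros_like(x)
        x = x - dt * (grad_f(x) + mu * penalty)
    return x
```

With mu larger than the magnitude of the objective gradient at the constrained optimum, the penalty is exact and the flow slides onto the constraint boundary.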


Conference on Decision and Control | 2009

Control-theoretic design of iterative methods for symmetric linear systems of equations

Amit Bhaya; Pierre-Alexandre Bliman; Fernando Agustin Pazos

Iterative methods for linear systems with a symmetric positive definite coefficient matrix are designed from a control-theoretic viewpoint. In particular, it is shown that a control-theoretic approach, loosely based on m-step dead-beat control of the error or residual system with a suitable definition of the error norm, can be used to design new iterative methods that are competitive with the popular Barzilai-Borwein method, which is well known to be efficient and of low computational cost. Numerical experiments are reported that confirm the claimed results.
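For reference, the Barzilai-Borwein method used as the benchmark above can be sketched as follows. This is a minimal Python illustration of the standard BB1 stepsize, not the paper's control-theoretic design, and all names are illustrative:

```python
import numpy as np

def barzilai_borwein(A, b, x0, tol=1e-8, max_iter=1000):
    """Solve A x = b (A symmetric positive definite) with BB1 stepsizes."""
    x = x0.astype(float)
    r = b - A @ x              # residual = negative gradient of 0.5 x'Ax - b'x
    alpha = 1.0                # initial stepsize (plain gradient step)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        x_new = x + alpha * r
        r_new = b - A @ x_new
        s = x_new - x          # change in the iterate
        y = r - r_new          # change in the gradient (note r = -grad)
        alpha = (s @ s) / (s @ y)   # BB1 stepsize; s'y = s'As > 0 for SPD A
        x, r = x_new, r_new
    return x
```

The method is nonmonotone in the residual norm but typically far faster than steepest descent, which is why it serves as the baseline in these papers.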


Conference on Decision and Control | 2012

A cooperative conjugate gradient method for linear systems permitting multithread implementation of low complexity

Amit Bhaya; Pierre-Alexandre Bliman; Guilherme Niedu; Fernando Agustin Pazos

This paper proposes a generalization of the conjugate gradient (CG) method used to solve the equation Ax = b for a symmetric positive definite matrix A of large size n. The generalization consists of permitting the scalar control parameters (i.e., the stepsizes in the gradient and conjugate gradient directions) to be replaced by matrices, so that multiple descent and conjugate directions are updated simultaneously. The implementation involves the use of multiple agents or threads and is referred to as cooperative CG (cCG); the cooperation between agents resides in the fact that the calculation of each entry of the control parameter matrix now involves information that comes from the other agents. The multithread implementation is shown to have low worst-case complexity in exact arithmetic. Numerical experiments that illustrate the theoretical results are carried out on a multicore computer.
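The classical CG iteration that cCG generalizes can be sketched as follows; this is a minimal Python version showing the scalar stepsizes alpha and beta that, per the abstract, cCG replaces by matrices (names are illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Classical CG for A x = b with A symmetric positive definite.
    cCG (per the abstract) replaces the scalar stepsizes below by matrices."""
    max_iter = max_iter or len(b)
    x = x0.astype(float)
    r = b - A @ x          # residual
    d = r.copy()           # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # stepsize along d (scalar in classical CG)
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)  # conjugacy (direction-update) parameter
        d = r_new + beta * d
        r = r_new
    return x
```

In exact arithmetic CG terminates in at most n iterations; the cooperative variant aims to exploit multiple threads to reduce this further.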


Conference on Decision and Control | 2010

Cooperative parallel asynchronous computation of the solution of symmetric linear systems

Amit Bhaya; Pierre-Alexandre Bliman; Fernando Agustin Pazos

This paper introduces a new paradigm, called cooperative computation, for the solution of systems of linear equations with symmetric coefficient matrices. The simplest version of the algorithm consists of two agents, each one computing the solution of the whole system using an iterative method. Infrequent unidirectional communication occurs from one agent to the other, either periodically or probabilistically, thus characterizing the computation as parallel and asynchronous. Every time one agent communicates its current approximation of the solution to the other, the receiving agent carries out a least squares computation to replace its current value by an affine combination of the current approximations, and the algorithm continues until a stopping criterion is met. Deterministic and probabilistic variants of this algorithm are introduced and shown to be efficient in comparison with the popular Barzilai-Borwein algorithm, particularly for ill-conditioned matrices.
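The two-agent scheme described above can be sketched roughly as follows, assuming plain gradient iterations for each agent and a periodic (rather than probabilistic) one-parameter least-squares combination step; this is an illustrative reading of the abstract, not the paper's exact algorithm, and all names are mine:

```python
import numpy as np

def cooperative_solve(A, b, x1, x2, exchange_every=10, tol=1e-8, max_iter=500):
    """Two agents run gradient (Richardson) iterations on A x = b.
    Periodically agent 2 replaces its estimate by the least-squares
    affine combination of both estimates (a sketch of the paper's idea)."""
    step = 1.0 / np.linalg.norm(A, 2)   # safe stepsize for gradient descent
    for k in range(max_iter):
        r1, r2 = b - A @ x1, b - A @ x2
        if min(np.linalg.norm(r1), np.linalg.norm(r2)) <= tol:
            break
        x1 = x1 + step * r1             # each agent iterates independently
        x2 = x2 + step * r2
        if k % exchange_every == 0:
            # agent 2 forms z = x2 + theta*(x1 - x2) minimizing ||b - A z||;
            # the residual of z is r2 + theta*(r1 - r2), a 1-D least squares
            r1, r2 = b - A @ x1, b - A @ x2
            d = r1 - r2
            if d @ d > 0:
                theta = -(r2 @ d) / (d @ d)
                x2 = x2 + theta * (x1 - x2)
    return x1 if np.linalg.norm(b - A @ x1) < np.linalg.norm(b - A @ x2) else x2
```

The combination step costs only a few inner products, which is why infrequent communication suffices to accelerate both agents.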


International Symposium on Neural Networks | 2009

Unified control Liapunov function based design of neural networks that aim at global minimization of nonconvex functions

Fernando Agustin Pazos; Amit Bhaya; Eugenius Kaszkurewicz

This paper presents a unified approach to the design of neural networks that aim to minimize scalar nonconvex functions that have continuous first- and second-order derivatives and a unique global minimum. The approach is based on interpreting the function as a controlled object, namely one that has an output (the function value) that has to be driven to its smallest value by suitable manipulation of its inputs: this is achieved by the use of the control Liapunov function (CLF) technique, well known in systems and control theory. This approach leads naturally to the design of second-order differential equations which are the mathematical models of the corresponding implementations as neural networks. Preliminary numerical simulations indicate that, on a small suite of benchmark test problems, a continuous version of the well known conjugate gradient algorithm, designed by the proposed CLF method, has better performance than its competitors, such as the heavy ball with friction method or the more recent dynamic inertial Newton-like method.
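The heavy ball with friction (HBF) ODE mentioned above, x'' + gamma x' + grad f(x) = 0, can be illustrated by a minimal explicit-Euler integration; the CLF-based design itself is not reproduced here, and the parameter values and names are illustrative:

```python
import numpy as np

def heavy_ball_trajectory(grad_f, x0, gamma=1.0, dt=0.01, steps=5000):
    """Integrate the heavy-ball-with-friction ODE
        x'' + gamma * x' + grad f(x) = 0
    with explicit Euler. The inertia (velocity state v) lets the trajectory
    roll through shallow features that trap plain gradient descent."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                 # velocity x'
    for _ in range(steps):
        a = -gamma * v - grad_f(x)       # acceleration prescribed by the ODE
        x = x + dt * v
        v = v + dt * a
    return x
```

For the double-well function f(x) = (x^2 - 1)^2, a trajectory started inside the right-hand basin is damped into the minimizer at x = 1.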


International Conference on Artificial Neural Networks | 2009

Comparative Study of the CG and HBF ODEs Used in the Global Minimization of Nonconvex Functions

Amit Bhaya; Fernando Agustin Pazos; Eugenius Kaszkurewicz

This paper presents a unified control Liapunov function (CLF) approach to the design of heavy ball with friction (HBF) and conjugate gradient (CG) neural networks that aim to minimize scalar nonconvex functions that have continuous first- and second-order derivatives and a unique global minimum. This approach leads naturally to the design of second-order differential equations which are the mathematical models of the corresponding implementations as neural networks. Preliminary numerical simulations indicate that, on a small suite of benchmark test problems, a continuous version of the well known conjugate gradient algorithm, designed by the proposed CLF method, has better performance than its HBF competitor.


Neurocomputing | 2012

Design of second order neural networks as dynamical control systems that aim to minimize nonconvex scalar functions

Fernando Agustin Pazos; Amit Bhaya; Eugenius Kaszkurewicz

This paper presents a unified way to design neural networks characterized as second order ordinary differential equations (ODEs) that aim to find the global minimum of nonconvex scalar functions. These neural networks, alternatively referred to as continuous time algorithms, are interpreted as dynamical closed loop control systems. The design is based on the control Liapunov function (CLF) method. For nonconvex scalar functions, the goal of these algorithms is to produce trajectories, starting from an arbitrarily chosen initial guess, that do not get stuck in local minima, thereby increasing the chances of converging to the global minimum.


SBA: Controle & Automação (Sociedade Brasileira de Automática) | 2003

Controle de robôs manipuladores em modo dual adaptativo/robusto (Control of robot manipulators in dual adaptive/robust mode)

Fernando Agustin Pazos; Liu Hsu

We consider the trajectory tracking problem for robot manipulators with unknown physical parameters and affected by bounded external disturbances. We present a nonlinear dual-mode adaptive tracking controller that guarantees exponential transient performance with respect to an arbitrarily small residual set, resulting in arbitrary disturbance attenuation. If the disturbances vanish, the controller achieves asymptotic tracking of the desired trajectory.


Numerical Algorithms | 2017

Cooperative concurrent asynchronous computation of the solution of symmetric linear systems

Amit Bhaya; Pierre-Alexandre Bliman; Fernando Agustin Pazos

This paper extends the idea of Brezinski’s hybrid acceleration procedure, for the solution of a system of linear equations with a symmetric coefficient matrix of dimension n, to a new context called cooperative computation, involving m agents (m ≪ n), each one concurrently computing the solution of the whole system, using an iterative method. Cooperation occurs between the agents through the communication, periodic or probabilistic, of the estimate of each agent to one randomly chosen agent, thus characterizing the computation as concurrent and asynchronous. Every time one agent receives solution estimates from the others, it carries out a least squares computation, involving a small linear system of dimension m, in order to replace its current solution estimate by an affine combination of the received estimates, and the algorithm continues until a stopping criterion is met. In addition, an autocooperative algorithm, in which estimates are updated using affine combinations of current and past estimates, is also proposed. The proposed algorithms are shown to be efficient for certain matrices, specifically in relation to the popular Barzilai–Borwein algorithm, through numerical experiments.
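The small m-dimensional least-squares step described above can be sketched as follows, assuming the affine-combination formulation in the abstract; the reduction to an unconstrained (m-1)-dimensional problem and all variable names are mine:

```python
import numpy as np

def affine_combine(A, b, X):
    """Given m estimates (columns of X) of the solution of A x = b, return
    the affine combination sum_i theta_i X[:, i] with sum_i theta_i = 1 that
    minimizes the residual norm ||b - A z||.  This is the small least-squares
    problem of dimension m that each agent solves on communication."""
    # write z = X[:, -1] + D @ t with D the differences to the last estimate,
    # which enforces the affine constraint and leaves an unconstrained
    # (m-1)-dimensional least-squares problem in t
    D = X[:, :-1] - X[:, [-1]]
    r_last = b - A @ X[:, -1]
    M = A @ D                      # change in residual per unit of t
    t, *_ = np.linalg.lstsq(M, r_last, rcond=None)
    return X[:, -1] + D @ t
```

Because the least-squares system has only m unknowns (m much smaller than n), the combination step is cheap relative to the iterations it accelerates.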


Hybrid Intelligent Systems | 2014

A control Liapunov function approach to generalized and regularized descent methods for zero finding

Fernando Agustin Pazos; Amit Bhaya

This paper revisits a class of recently proposed so-called invariant manifold methods for zero finding in ill-posed problems, showing that they can be profitably viewed as homotopy methods in which the homotopy parameter is interpreted as a learning parameter. Moreover, it is shown that the choice of this learning parameter can be made in a natural manner from a control Liapunov function (CLF) approach. From this viewpoint, maintaining manifold invariance is equivalent to ensuring that the CLF satisfies a certain ordinary differential equation, involving the learning parameter, that allows an estimate of the rate of convergence. In order to illustrate this approach, algorithms recently proposed using the invariant manifold approach are rederived, via CLFs, in a unified manner. Adaptive regularization parameters for solving linear algebraic ill-posed problems are also proposed. This paper also shows that discretizations of the ODEs used to solve the zero finding problem, as well as the different adaptive choices of the regularization parameter, yield iterative methods for linear systems, which are also derived using the Liapunov optimizing control (LOC) method.

Collaboration


Dive into Fernando Agustin Pazos's collaborations.

Top Co-Authors

Amit Bhaya (Federal University of Rio de Janeiro)
Eugenius Kaszkurewicz (Federal University of Rio de Janeiro)
Guilherme Niedu (Federal University of Rio de Janeiro)
Liu Hsu (Federal University of Rio de Janeiro)
Pierre-Alexandre Bliman (French Institute for Research in Computer Science and Automation)