
Publication


Featured research published by Mauro Forti.


IEEE Transactions on Circuits and Systems I: Regular Papers | 2003

Global convergence of neural networks with discontinuous neuron activations

Mauro Forti; Paolo Nistri

The paper introduces a general class of neural networks where the neuron activations are modeled by discontinuous functions. The neural networks have an additive interconnecting structure and they include as particular cases the Hopfield neural networks (HNNs), and the standard cellular neural networks (CNNs), in the limiting situation where the HNNs and CNNs possess neurons with infinite gain. Conditions are derived which ensure the existence of a unique equilibrium point, and a unique output equilibrium point, which are globally attractive for the state and the output trajectories of the neural network, respectively. These conditions, which are applicable to general nonsymmetric neural networks, are based on the concept of Lyapunov diagonally-stable neuron interconnection matrices, and they can be thought of as a generalization to the discontinuous case of previous results established for neural networks possessing smooth neuron activations. Moreover, by suitably exploiting the presence of sliding modes, entirely new conditions are obtained which ensure global convergence in finite time, where the convergence time can be easily estimated on the basis of the relevant neural-network parameters. The analysis in the paper employs results from the theory of differential equations with discontinuous right-hand side as introduced by Filippov. In particular, global convergence is addressed by using a Lyapunov-like approach based on the concept of monotone trajectories of a differential inclusion.
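The dynamics described above can be illustrated with a small numerical sketch. The model class matches the paper (additive network with discontinuous activations), but the specific matrix, bias, and initial states below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative sketch: a 2-neuron additive network
#   dx/dt = -x + T*sign(x) + I
# with discontinuous (hard-sign) neuron activations, integrated by
# forward Euler. The symmetric part of T is negative definite, so a
# unique globally attractive equilibrium is expected.
T = np.array([[-1.0,  0.5],
              [-0.5, -1.0]])
I = np.array([2.0, 2.0])

def simulate(x0, dt=0.01, steps=2000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + T @ np.sign(x) + I)
    return x

# For this choice of T and I the unique equilibrium solves
# x* = T*sign(x*) + I, giving x* = (1.5, 0.5).
print(simulate([-1.0, 3.0]))
```

Running the simulation from widely different initial states settles at the same point, consistent with the global attractivity result.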


IEEE Transactions on Neural Networks | 2005

Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain

Mauro Forti; Paolo Nistri; Duccio Papini

This paper introduces a general class of neural networks with arbitrary constant delays in the neuron interconnections, and neuron activations belonging to the set of discontinuous monotone increasing and (possibly) unbounded functions. The discontinuities in the activations are an ideal model of the situation where the gain of the neuron amplifiers is very high and tends to infinity, while the delay accounts for the finite switching speed of the neuron amplifiers, or the finite signal propagation speed. It is known that the delay in combination with high-gain nonlinearities is a particularly harmful source of potential instability. The goal of this paper is to single out a subclass of the considered discontinuous neural networks for which stability is instead insensitive to the presence of a delay. More precisely, conditions are given under which there is a unique equilibrium point of the neural network, which is globally exponentially stable for the states, with a known convergence rate. The conditions are easily testable and independent of the delay. Moreover, global convergence in finite time of the state and output is investigated. In doing so, new interesting dynamical phenomena are highlighted with respect to the case without delay, which make the study of convergence in finite time significantly more difficult. The obtained results extend previous work on global stability of delayed neural networks with Lipschitz continuous neuron activations, and neural networks with discontinuous neuron activations but without delays.
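Delay-independent stability of this kind can be sketched numerically. The one-neuron model and all parameters below are illustrative assumptions, not the paper's system:

```python
import numpy as np

# Illustrative sketch: a single neuron with delayed discontinuous feedback,
#   dx/dt = -x(t) - 0.5*sign(x(t - tau)) + 1,
# integrated by forward Euler with a history buffer. The small-gain
# negative delayed feedback keeps the equilibrium stable for any delay tau.
def simulate(tau=1.0, dt=0.01, t_end=15.0, x_hist=0.0):
    n_delay = int(round(tau / dt))
    buf = [x_hist] * n_delay          # constant initial history on [-tau, 0]
    x = x_hist
    for _ in range(int(t_end / dt)):
        x_delayed = buf.pop(0)
        buf.append(x)
        x += dt * (-x - 0.5 * np.sign(x_delayed) + 1.0)
    return x

# The unique equilibrium solves x* = 1 - 0.5*sign(x*), i.e. x* = 0.5,
# and the simulated state approaches it regardless of tau.
print(simulate(tau=1.0), simulate(tau=3.0))
```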


IEEE Transactions on Circuits and Systems I: Regular Papers | 1994

Necessary and sufficient condition for absolute stability of neural networks

Mauro Forti; Stefano Manetti; Mauro Marini

The main result in this paper is that for a neural circuit of the Hopfield type with a symmetric connection matrix T, the negative semidefiniteness of T is a necessary and sufficient condition for absolute stability. The most significant theoretical implication is that the class of neural circuits with a negative semidefinite T is the largest class of circuits that can be employed for embedding and solving optimization problems without the risk of spurious responses.
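Since the criterion is a spectral condition on a symmetric matrix, it is straightforward to check numerically. The helper function below is illustrative, not from the paper:

```python
import numpy as np

def is_absolutely_stable(T, tol=1e-10):
    """Check the paper's criterion: for a symmetric Hopfield-type
    connection matrix T, absolute stability holds if and only if
    T is negative semidefinite (all eigenvalues <= 0)."""
    T = np.asarray(T, dtype=float)
    if not np.allclose(T, T.T):
        raise ValueError("the criterion applies to symmetric T only")
    return bool(np.max(np.linalg.eigvalsh(T)) <= tol)

print(is_absolutely_stable([[-2.0, 1.0], [1.0, -2.0]]))  # eigenvalues -1, -3
print(is_absolutely_stable([[0.0, 1.0], [1.0, 0.0]]))    # eigenvalues +1, -1
```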


IEEE Transactions on Circuits and Systems I: Regular Papers | 2004

Generalized neural network for nonsmooth nonlinear programming problems

Mauro Forti; Paolo Nistri; Marc Quincampoix

In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes, trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.
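The finite-time convergence of gradient systems of differential inclusions can be sketched on a toy nonsmooth objective. This is a generic subgradient flow, not the actual G-NPC circuit, and the objective and initial state are assumptions:

```python
import numpy as np

# Illustrative sketch of a gradient differential inclusion
#   dx/dt in -grad f(x)   for   f(x) = |x1 - 1| + |x2 + 2|,
# integrated by forward Euler. Away from the minimizer the subgradient
# has constant magnitude, so the trajectory reaches the minimizer
# (1, -2) in FINITE time and then slides there (up to Euler chattering).
target = np.array([1.0, -2.0])

def simulate(x0, dt=0.01, steps=1000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-np.sign(x - target))   # one element of -grad f(x)
    return x

print(simulate([3.0, 0.0]))
```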


IEEE Transactions on Neural Networks | 2006

Convergence of Neural Networks for Programming Problems via a Nonsmooth Łojasiewicz Inequality

Mauro Forti; Paolo Nistri; Marc Quincampoix

This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which enable the implementation of an exact penalty method. A new method is exploited to address convergence of trajectories, which is based on a nonsmooth Łojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length and, as a consequence, converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Łojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: 1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; 2) for convex QP problems, the same result as in 1) holds; moreover, the singleton belongs to the set of global minimizers; and 3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points, and in particular they hold even when the NN possesses infinitely many nonisolated equilibrium points.
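The exact-penalty mechanism behind the constraint neurons can be sketched on a one-dimensional QP. This is an illustrative exact-penalty subgradient flow, not the authors' NN, and the problem data and penalty weight are assumptions:

```python
# Illustrative sketch: minimize 0.5*x^2 subject to x >= 1 via the
# exact-penalty objective
#   F(x) = 0.5*x^2 + mu*max(0, 1 - x),   mu = 5,
# whose penalty term plays the role of an ideal diode with a vertical
# segment. For mu large enough, the subgradient flow dx/dt in -grad F(x)
# reaches the exact constrained minimizer x* = 1 in finite time and
# then slides there (up to Euler chattering).
mu = 5.0

def subgrad_F(x):
    return x - (mu if x < 1.0 else 0.0)   # one element of the Clarke gradient

def simulate(x0, dt=0.005, steps=4000):
    x = float(x0)
    for _ in range(steps):
        x -= dt * subgrad_F(x)
    return x

print(simulate(-2.0), simulate(4.0))
```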


IEEE Transactions on Circuits and Systems | 1991

On a class of nonsymmetrical neural networks with application to ADC

G. Avitabile; Mauro Forti; Stefano Manetti; Mauro Marini

For nonzero initial conditions a neural network may stop in a spurious state, that is, in an equilibrium point that does not correspond to the correct digital representation of the input signal. A method based on a particular class of nonsymmetrical neural networks is proposed for eliminating the problem of stopping in spurious states. The dynamical behavior of these structures is studied to prove that they are characterized by a unique equilibrium point which is globally attractive, that is, the system will converge toward this point for every choice of initial conditions and for every choice of (continuous) nonlinearities. The explicit expression obtained for the unique equilibrium point permits one to design the connection strengths between neurons so that the equilibrium coincides with the desired output for a given input signal. The proposed design procedure is applied to the classical example of A/D conversion, showing that this A/D converter structure has no spurious states. The A/D converter was simulated using SPICE, and experimental results obtained with a discrete-component prototype of the converter are presented.
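The design idea, an explicit expression for a unique, globally attractive equilibrium, can be sketched for one plausible nonsymmetric structure: a cascade (strictly lower-triangular) interconnection. The paper's actual A/D topology is not reproduced here; the matrix, bias, and activation are assumptions:

```python
import numpy as np

# Illustrative sketch: for dx/dt = -x + W*g(x) + b with W strictly lower
# triangular (a cascade), the equilibrium is unique for ANY continuous
# activation g and can be written explicitly by forward substitution --
# so b and W can be chosen to place it at a desired output.
g = np.tanh
W = np.array([[0.0, 0.0],
              [0.8, 0.0]])
b = np.array([0.5, -0.3])

def equilibrium():
    # forward substitution: x1* = b1, then x2* = b2 + w21*g(x1*)
    x1 = b[0]
    x2 = b[1] + W[1, 0] * g(x1)
    return np.array([x1, x2])

def simulate(x0, dt=0.01, steps=2000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + W @ g(x) + b)
    return x

print(equilibrium(), simulate([3.0, -3.0]))
```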


IEEE Transactions on Circuits and Systems I: Regular Papers | 1992

A condition for global convergence of a class of symmetric neural circuits

Mauro Forti; Stefano Manetti; Mauro Marini

A sufficient condition is proved guaranteeing that a class of neural circuits that includes the Hopfield model as a special case is globally convergent toward a unique stable equilibrium. The condition only requires symmetry and negative semidefiniteness of the neuron connection matrix T and is extremely simple to check and apply in practice. The consequences of the above result are discussed in the context of neural circuits for optimization of quadratic cost functions.
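Global convergence under this condition can be observed directly in simulation. The Hopfield-type model class matches the paper, but the specific matrix, bias, and activation below are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch: Hopfield-type dynamics
#   dx/dt = -x + T*tanh(x) + I
# with T symmetric and negative semidefinite (eigenvalues -0.5, -1.5).
# The sufficient condition then guarantees a unique stable equilibrium,
# so trajectories from very different initial states should coincide.
T = np.array([[-1.0, 0.5],
              [ 0.5, -1.0]])
I = np.array([1.0, -1.0])

def simulate(x0, dt=0.01, steps=3000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + T @ np.tanh(x) + I)
    return x

a = simulate([10.0, 10.0])
b = simulate([-10.0, -10.0])
print(a, b)   # both trajectories end at the same unique equilibrium
```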


IEEE Transactions on Neural Networks | 2002

Some extensions of a new method to analyze complete stability of neural networks

Mauro Forti

In a recent work, a new method has been introduced to analyze complete stability of the standard symmetric cellular neural networks (CNNs), which are characterized by local interconnections and neuron activations modeled by a three-segment piecewise-linear (PWL) function. By complete stability it is meant that each trajectory of the neural network converges toward an equilibrium point. The goal of this paper is to extend that method in order to address complete stability of the much wider class of symmetric neural networks with an additive interconnecting structure where the neuron activations are general PWL functions with an arbitrary number of straight segments. The main result obtained is that complete stability holds for any choice of the parameters within the class of symmetric additive neural networks with PWL neuron activations, i.e., such a class of neural networks enjoys the important property of absolute stability of global pattern formation. It is worth pointing out that complete stability is proved for generic situations where the neural network has finitely many (isolated) equilibrium points, as well as for degenerate situations where there are infinitely many (nonisolated) equilibrium points. The extension in this paper is of practical importance since it includes neural networks useful to solve significant signal processing tasks (e.g., neural networks with multilevel neuron activations). It is of theoretical interest too, due to the possibility of approximating any continuous function (e.g., a sigmoidal function) using PWL functions. The results in this paper confirm the advantages of the method of Forti and Tesi, with respect to the LaSalle approach, to address complete stability of PWL neural networks.


International Journal of Bifurcation and Chaos | 2001

A NEW METHOD TO ANALYZE COMPLETE STABILITY OF PWL CELLULAR NEURAL NETWORKS

Mauro Forti; Alberto Tesi

In recent years, the standard Cellular Neural Networks (CNNs) introduced by Chua and Yang [1988] have been one of the most investigated paradigms for neural information processing. In a wide range of applications, the CNNs are required to be completely stable, i.e., each trajectory should converge toward some stationary state. However, a rigorous proof of complete stability, even in the simplest original setting of piecewise-linear (PWL) neuron activations and symmetric interconnections [Chua & Yang, 1988], is still lacking. This paper aims primarily at filling this gap, in order to give a sound analytical foundation to the CNN paradigm. To this end, a novel approach for studying complete stability is proposed, based on a fundamental limit theorem for the length of the CNN trajectories. The method differs substantially from the classic approach using the LaSalle invariance principle, and it makes it possible to overcome difficulties encountered when the LaSalle approach is used to analyze complete stability of PWL CNNs. The main result obtained is that a symmetric PWL CNN is completely stable for any choice of the network parameters, i.e., it possesses the absolute stability property of global pattern formation. This result is quite general and shows that complete stability holds under hypotheses weaker than those considered in [Chua & Yang, 1988]. The result does not require, for example, that the CNN has binary stable equilibrium points only. It is valid even in degenerate situations where the CNN has infinitely many nonisolated equilibrium points. These features significantly extend the potential application fields of the standard CNNs.
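Complete stability of a symmetric PWL CNN can be observed in a toy simulation. The cell equation below is the standard CNN form with the three-segment saturation, but the template values, zero input, and initial states are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of complete stability: a 2-cell CNN
#   dx/dt = -x + A*sat(x)
# with the three-segment PWL activation sat(x) = clip(x, -1, 1) and a
# symmetric template A with self-feedback > 1. Every trajectory should
# converge toward some equilibrium; here the state settles at a
# saturated binary pattern.
sat = lambda x: np.clip(x, -1.0, 1.0)
A = np.array([[1.5, 0.2],
              [0.2, 1.5]])       # symmetric template

def simulate(x0, dt=0.01, steps=3000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + A @ sat(x))
    return x

# With both outputs saturated at +1 the equilibrium is A @ (1, 1) = (1.7, 1.7).
print(simulate([0.3, -0.1]))
```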


IEEE Transactions on Circuits and Systems I: Regular Papers | 1999

Cellular neural network approach to a class of communication problems

Romano Fantacci; Mauro Forti; Mauro Marini; Luca Pancani

In this paper we discuss the design of a cellular neural network (CNN) to solve a class of optimization problems of importance for communication networks. The CNN optimization capabilities are exploited to implement an efficient cell scheduling algorithm in a fast packet switching fabric. The neural-based switching fabric maximizes the cell throughput and, at the same time, is able to meet a variety of quality-of-service (QoS) requirements by optimizing a suitable function of the switching delay and priority of the cells. We also show that the CNN approach has advantages over the approach based on Hopfield neural networks (HNNs) for solving the considered class of optimization problems. In particular, we exploit existing techniques to design CNNs with a prescribed set of stable binary equilibrium points as a basic tool to suppress spurious responses and, hence, to optimize the performance of the neural switching fabric.
