Publications


Featured research published by R. Zoppoli.


Automatica | 1995

A receding-horizon regulator for nonlinear systems and a neural approximation

Thomas Parisini; R. Zoppoli

A receding-horizon (RH) optimal control scheme for a discrete-time nonlinear dynamic system is presented. A nonquadratic cost function is considered, and constraints are imposed on both the state and control vectors. Two main contributions are reported. The first consists in deriving a stabilizing regulator by adding a proper terminal penalty function to the process cost. The control vector is generated by means of a feedback control law computed off-line, instead of on-line as is done for existing RH regulators. The off-line computation is performed by approximating the RH regulator by means of a multilayer feedforward neural network (this is the second contribution of the paper). Bounds on this approximation are established. Simulation results show the effectiveness of the proposed approach.
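
As a rough illustration of the scheme summarized above, the following Python sketch (the scalar dynamics, horizon, weights, and bounds are assumptions chosen here for illustration, not the paper's) solves the finite-horizon problem with a terminal penalty from many sampled states off-line, keeps only the first optimal control of each solution in receding-horizon fashion, and fits a feedforward network that then replaces the on-line optimization.

```python
# Hypothetical sketch (not the authors' code): approximate a receding-horizon
# regulator off-line with a feedforward neural network, for a toy scalar system.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

N = 10                       # prediction horizon (assumed)
Q, R, P = 1.0, 0.1, 10.0     # stage weights and terminal penalty (assumed)

def f(x, u):
    # toy nonlinear dynamics, chosen only for illustration
    return 0.9 * x + 0.2 * np.sin(x) + u

def horizon_cost(u_seq, x0):
    # finite-horizon cost with terminal penalty P * x_N^2
    x, cost = x0, 0.0
    for u in u_seq:
        cost += Q * x**2 + R * u**2
        x = f(x, u)
    return cost + P * x**2

# 1) Off-line phase: solve the open-loop problem from many sampled states.
rng = np.random.default_rng(0)
states = rng.uniform(-2.0, 2.0, size=200)
first_controls = []
for x0 in states:
    res = minimize(horizon_cost, np.zeros(N), args=(x0,), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * N)     # input constraint |u| <= 1
    first_controls.append(res.x[0])              # RH: keep only the first move

# 2) Fit a feedforward network mapping state -> first optimal control.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(states.reshape(-1, 1), np.array(first_controls))

# 3) On-line use: the stored network replaces repeated optimization.
x = 1.5
for _ in range(30):
    u = float(np.clip(net.predict([[x]])[0], -1.0, 1.0))
    x = f(x, u)
print("final state:", x)
```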


Journal of Optimization Theory and Applications | 2002

Approximating networks and extended Ritz method for the solution of functional optimization problems

R. Zoppoli; Marcello Sanguineti; Thomas Parisini

Functional optimization problems can be solved analytically only if special assumptions are verified; otherwise, approximations are needed. The approximate method that we propose is based on two steps. First, the decision functions are constrained to take on the structure of linear combinations of basis functions containing free parameters to be optimized (hence, this step can be considered as an extension of the Ritz method, for which fixed basis functions are used). Then, the functional optimization problem can be approximated by nonlinear programming problems. Linear combinations of basis functions are called approximating networks when they enjoy suitable density properties. We term such networks nonlinear (linear) approximating networks if their basis functions contain (do not contain) free parameters. For certain classes of d-variable functions to be approximated, nonlinear approximating networks may require a number of parameters increasing moderately with d, whereas linear approximating networks may be ruled out by the curse of dimensionality. Since the cost functions of the resulting nonlinear programming problems include complex averaging operations, we minimize such functions by stochastic approximation algorithms. As important special cases, we consider stochastic optimal control and estimation problems. Numerical examples show the effectiveness of the method in solving optimization problems stated in high-dimensional settings, involving, for instance, several tens of state variables.
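
The following toy Python sketch (the cost, the tanh parameterization, and the step-size rule are assumptions made here, not the paper's examples) shows the two steps in miniature: the decision function u(x) is constrained to a linear combination of parameterized basis functions, and the free parameters of the resulting nonlinear programming problem are tuned by a stochastic approximation (sampled-gradient) iteration.

```python
# Illustrative sketch of the "extended Ritz" idea on an assumed toy problem:
# minimize E[(u(x) - sin(x))^2], whose optimal decision function is sin(x).
import numpy as np

rng = np.random.default_rng(1)
n = 10                                     # number of parameterized basis functions
a, b, c = rng.normal(size=n), rng.normal(size=n), np.zeros(n)

def u(x):
    # nonlinear approximating network: linear combination of parameterized tanh units
    return float(c @ np.tanh(a * x + b))

step0 = 0.5
for k in range(1, 20001):
    x = rng.uniform(-np.pi, np.pi)         # Monte Carlo sample of the random variable
    h = np.tanh(a * x + b)
    err = float(c @ h) - np.sin(x)         # error of the sampled cost
    grad_c = 2 * err * h                   # gradient of the sampled cost w.r.t. c
    grad_a = 2 * err * c * (1 - h**2) * x  # ... w.r.t. a
    grad_b = 2 * err * c * (1 - h**2)      # ... w.r.t. b
    lr = step0 / np.sqrt(k)                # Robbins-Monro-type decreasing step size
    c, a, b = c - lr * grad_c, a - lr * grad_a, b - lr * grad_b

xs = np.linspace(-np.pi, np.pi, 5)
print([round(u(x) - np.sin(x), 3) for x in xs])   # residual approximation error
```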


IEEE Transactions on Automatic Control | 1999

A neural state estimator with bounded errors for nonlinear systems

A. Alessandri; Marco Baglietto; Thomas Parisini; R. Zoppoli

A neural state estimator is described, acting on discrete-time nonlinear systems with noisy measurement channels. A sliding-window quadratic estimation cost function is considered, and the measurement noise is assumed to be additive. No probabilistic assumptions are made on either the measurement noise or the initial state. Novel theoretical convergence results are developed for the error bounds of both the optimal and the neural approximate estimators. To ensure the convergence properties of the neural estimator, a minimax tuning technique is used. The approximate estimator can be designed off-line in such a way as to enable it to process on-line any possible measurement pattern almost instantaneously.
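
The Python sketch below loosely mirrors this idea on a toy problem (the scalar system, window length, and the plain least-squares training used here in place of the paper's minimax tuning are all assumptions): a sliding-window quadratic estimation problem is solved off-line for many simulated measurement windows, and a feedforward network is fitted to map each window to the corresponding estimate, so that on-line estimation reduces to a network evaluation.

```python
# Hypothetical sketch: approximate a sliding-window least-squares state
# estimator with a neural network, for an assumed toy scalar system.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.neural_network import MLPRegressor

M = 5                                      # window length (assumed)

def f(x):
    return 0.8 * x + 0.3 * np.sin(2 * x)   # toy nonlinear dynamics

def window_cost(x0, y_win):
    # quadratic window cost: propagate the candidate initial state and
    # compare the predicted states with the measurements (here y = x + noise)
    x, cost = x0, 0.0
    for y in y_win:
        cost += (y - x) ** 2
        x = f(x)
    return cost

def window_estimate(y_win):
    # optimal sliding-window estimate of the *current* state
    res = minimize_scalar(window_cost, args=(y_win,), bounds=(-5, 5), method="bounded")
    x = res.x
    for _ in range(M - 1):                 # propagate to the end of the window
        x = f(x)
    return x

# Off-line phase: simulate noisy windows, solve each window problem, fit a net.
rng = np.random.default_rng(2)
X, Y = [], []
for _ in range(300):
    x = rng.uniform(-2, 2)
    ys = []
    for _ in range(M):
        ys.append(x + 0.1 * rng.standard_normal())
        x = f(x)
    X.append(ys)
    Y.append(window_estimate(np.array(ys)))
net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=5000, random_state=0)
net.fit(np.array(X), np.array(Y))

# On-line phase: the stored network processes each new measurement window instantly.
print(net.predict(np.array(X[:3])))
```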


IEEE Transactions on Neural Networks | 2001

Distributed-information neural control: the case of dynamic routing in traffic networks

Marco Baglietto; Thomas Parisini; R. Zoppoli

Large-scale traffic networks can be modeled as graphs in which a set of nodes are connected through a set of links that cannot be loaded above their traffic capacities. Traffic flows may vary over time; the nodes may then be requested to modify the traffic flows to be sent to their neighboring nodes. In this case, a dynamic routing problem arises. The decision makers are realistically assumed 1) to generate their routing decisions on the basis of local information and possibly of some data received from other nodes, typically the neighboring ones, and 2) to cooperate on the accomplishment of a common goal, that is, the minimization of the total traffic cost. Therefore, they can be regarded as the cooperating members of informationally distributed organizations, which, in control engineering and economics, are called team organizations. Team optimal control problems cannot be solved analytically unless special assumptions on the team model are verified. In general, this is not the case with traffic networks. An approximate solution method is then proposed, in which each decision maker is assigned a fixed-structure routing function in which some parameters have to be optimized. Among the various possible fixed-structure functions, feedforward neural networks have been chosen for their powerful approximation capabilities. The routing functions can also be computed (or adapted) locally at each node. Concerning traffic networks, we focus attention on store-and-forward packet switching networks, which exhibit the essential peculiarities and difficulties of other traffic networks. Simulations performed on complex communication networks show the effectiveness of the proposed method.
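
As a minimal structural sketch (the network topology, the local information vector, and the weights are assumptions, and the team-wide optimization of the parameters is omitted), the Python fragment below shows the kind of fixed-structure routing function assigned to each decision maker: a small feedforward network that maps locally available information to routing fractions over the node's outgoing links.

```python
# Structural sketch only: one neural routing function per node, acting on local
# information. The parameters would be optimized against the total traffic cost.
import numpy as np

class NeuralRouter:
    """One decision maker of the team: a one-hidden-layer routing function."""
    def __init__(self, n_local, n_neighbors, rng):
        self.W1 = rng.normal(scale=0.5, size=(8, n_local))
        self.b1 = np.zeros(8)
        self.W2 = rng.normal(scale=0.5, size=(n_neighbors, 8))
        self.b2 = np.zeros(n_neighbors)

    def split(self, local_info):
        # local_info: e.g. the node's queue lengths plus data exchanged with neighbors
        h = np.tanh(self.W1 @ local_info + self.b1)
        z = self.W2 @ h + self.b2
        p = np.exp(z - z.max())
        return p / p.sum()                 # routing fractions toward each neighbor

rng = np.random.default_rng(3)
# toy 3-node ring: each node sees its own queue plus its two neighbors' queues
routers = [NeuralRouter(n_local=3, n_neighbors=2, rng=rng) for _ in range(3)]
queues = np.array([5.0, 1.0, 2.0])
for i, r in enumerate(routers):
    print(f"node {i} split:", r.split(np.roll(queues, -i)))
```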


IEEE Transactions on Neural Networks | 2012

Feedback Optimal Control of Distributed Parameter Systems by Using Finite-Dimensional Approximation Schemes

Angelo Alessandri; Mauro Gaggero; R. Zoppoli

Optimal control for systems described by partial differential equations is investigated by proposing a methodology to design feedback controllers in approximate form. The approximation stems from constraining the control law to take on a fixed structure, where a finite number of free parameters can be suitably chosen. The original infinite-dimensional optimization problem is then reduced to a mathematical programming one of finite dimension that consists in optimizing the parameters. The solution of such a problem is performed by using sequential quadratic programming. Linear combinations of fixed and parameterized basis functions are used as the structure for the control law, thus giving rise to two different finite-dimensional approximation schemes. The proposed paradigm is general since it allows one to treat problems with distributed and boundary controls within the same approximation framework. It can be applied to systems described by either linear or nonlinear elliptic, parabolic, and hyperbolic equations in arbitrary multidimensional domains. Simulation results obtained in two case studies show the potential of the proposed approach in comparison with dynamic programming.
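
A hedged Python sketch of the approach follows (the one-dimensional heat equation, its explicit finite-difference discretization, and the two-term fixed basis are assumptions made for illustration): the boundary control is constrained to a parameterized feedback law of two measured temperatures, and the free parameters are tuned by sequential quadratic programming (SciPy's SLSQP routine here).

```python
# Assumed toy case: boundary control of a 1-D heat equation with a
# fixed-structure feedback law whose two parameters are tuned by SQP (SLSQP).
import numpy as np
from scipy.optimize import minimize

nx, dt, steps = 20, 0.002, 500             # grid size, time step, horizon (assumed)
alpha = 0.5                                # diffusivity
x_grid = np.linspace(0, 1, nx)

def closed_loop_cost(theta):
    """Simulate the closed loop and accumulate a quadratic tracking + effort cost."""
    T = np.sin(np.pi * x_grid)             # hot initial temperature profile
    cost = 0.0
    for _ in range(steps):
        # feedback law: linear combination of two measured temperatures (fixed basis)
        u = theta[0] * T[-2] + theta[1] * T[nx // 2]
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt * (nx - 1)**2 * (T[2:] - 2*T[1:-1] + T[:-2])
        Tn[0] = 0.0                        # fixed temperature at the left boundary
        Tn[-1] = u                         # controlled right boundary
        T = Tn
        cost += dt * (np.sum(T**2) / nx + 0.1 * u**2)
    return cost

res = minimize(closed_loop_cost, x0=np.zeros(2), method="SLSQP")
print("optimized parameters:", res.x, "cost:", res.fun)
```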


International Workshop on Robot Motion and Control | 2002

Information-based multi-agent exploration

Marco Baglietto; Massimo Paolucci; Luca Scardovi; R. Zoppoli

This paper deals with the problem of mapping an unknown environment with a team of robots. A discrete grid map of the environment is considered, in which each cell is marked as free or occupied depending on the possible presence of obstacles. The multi-agent exploration is performed by a team of autonomous robots that can communicate with each other and coordinate their actions. A new information-based exploration heuristic that exploits the concepts of both information gain and frontier is proposed. Experimental results show the effectiveness of the approach.
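
A minimal Python sketch of such a heuristic is given below (the grid, sensor radius, and the weighting between information gain and travel distance are assumptions): frontier cells are the free cells bordering unknown space, each is scored by the number of unknown cells a sensor placed there could reveal minus a travel-cost term, and a robot picks the best-scoring frontier as its next target.

```python
# Illustrative frontier / information-gain target selection on an assumed grid map.
import numpy as np

UNKNOWN, FREE, OCC = -1, 0, 1
grid = np.full((10, 10), UNKNOWN)
grid[0:4, 0:4] = FREE                      # area already explored by the team
grid[2, 2] = OCC

def frontiers(g):
    # free cells with at least one 4-connected unknown neighbor
    cells = []
    for r in range(g.shape[0]):
        for c in range(g.shape[1]):
            if g[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < g.shape[0] and 0 <= cc < g.shape[1] and g[rr, cc] == UNKNOWN:
                    cells.append((r, c))
                    break
    return cells

def info_gain(g, cell, radius=2):
    # expected information gain: unknown cells a sensor at `cell` could reveal
    r0, c0 = cell
    return sum(g[r, c] == UNKNOWN
               for r in range(max(0, r0 - radius), min(g.shape[0], r0 + radius + 1))
               for c in range(max(0, c0 - radius), min(g.shape[1], c0 + radius + 1)))

def choose_target(g, robot_pos):
    # trade off information gain against travel distance (weight is assumed)
    best, best_score = None, -np.inf
    for cell in frontiers(g):
        dist = abs(cell[0] - robot_pos[0]) + abs(cell[1] - robot_pos[1])
        score = info_gain(g, cell) - 0.5 * dist
        if score > best_score:
            best, best_score = cell, score
    return best

print(choose_target(grid, robot_pos=(0, 0)))
```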


Conference on Decision and Control | 1999

Neural approximators and team theory for dynamic routing: a receding-horizon approach

Marco Baglietto; Thomas Parisini; R. Zoppoli

The problem of optimal dynamic routing of messages in a store-and-forward packet switching network is addressed by a receding-horizon approach. The nodes of the network must make routing decisions on the basis of local information and possibly of some data received from other nodes; they compute their routing strategies by measuring local variables and exchanging a small amount of data with other nodes. These tasks lead one to regard the nodes as the cooperating decision makers of a team organization and call for a computationally distributed algorithm. The well-known impossibility of solving team optimal control problems under general conditions suggests two main approximating assumptions: 1) the team optimal control problem is stated in a receding-horizon framework; and 2) each decision maker acting at a node is assigned a given structure in which a finite number of parameters have to be determined in order to minimize the cost function. This makes it possible to approximate the original functional optimization problem by a nonlinear programming one and to compute the routing control strategies off-line.


IEEE Transactions on Automatic Control | 1992

On the existence of stationary optimal receding-horizon strategies for dynamic teams with common past information structures

M. Aicardi; Giuseppe Casalino; Riccardo Minciardi; R. Zoppoli

For a linear-quadratic-Gaussian (LQG) team control problem characterized by a partial nestedness of the information structure and by the existence of a common past information set, previous results have established the existence of a sufficient statistic. Since, even under the assumptions made, the determination of optimal strategies over an infinite control horizon remains a difficult problem, the use of a receding-horizon control scheme is considered. Unfortunately, the use of such a scheme within a decentralized control framework still yields, in general, time-varying strategies. A condition for the existence of stationary team-optimal receding-horizon strategies is provided.


IEEE Transactions on Automatic Control | 1984

Partially nested information structures with a common past

Giuseppe Casalino; Franco Davoli; Riccardo Minciardi; P. P. Puliafito; R. Zoppoli

A team control problem is considered whose information structure is partially nested and is characterized by the existence of a common past information set shared by the team members after a finite delay. Under LQG assumptions, it is shown that the optimal control strategy can take on a time-invariant recursive form based on suitable sufficient statistics.


Conference on Decision and Control | 1995

Nonlinear stabilization by receding-horizon neural regulators

Thomas Parisini; Marcello Sanguineti; R. Zoppoli

A receding-horizon (RH) optimal control scheme for a discrete-time nonlinear dynamic system is presented. A nonquadratic cost function is considered, and constraints are imposed on both the state and control vectors. A stabilizing regulator is derived by adding a proper terminal penalty function to the process cost. The control vector is generated by means of a feedback control law computed off-line, instead of on-line as is done for existing RH regulators. The off-line computation is performed by approximating the RH regulator by a multilayer feedforward neural network. Bounds on this approximation are established. Algorithms are presented to determine some essential parameters for the design of the neural regulator, i.e., the parameters characterizing the terminal cost function and the number of neural units in the networks implementing the regulator. Simulation results show the effectiveness of the proposed approach.
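
As a naive illustration of the model-selection step only (a plain validation split is used here in place of the algorithms presented in the paper, and the data are placeholders), the Python sketch below picks the number of neural units for the regulator network by comparing validation errors over a small set of candidate sizes.

```python
# Naive sketch, not the paper's procedure: choose the number of neural units by
# validation error on a data set of (state, optimal control) pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# placeholder data: in practice, (state, first optimal control) pairs from the
# off-line solution of the finite-horizon problems
X = rng.uniform(-2, 2, size=(200, 1))
y = np.tanh(X[:, 0]) + 0.05 * rng.standard_normal(200)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
best_size, best_err = None, np.inf
for size in (2, 5, 10, 20, 40):
    net = MLPRegressor(hidden_layer_sizes=(size,), max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    err = np.mean((net.predict(X_val) - y_val) ** 2)
    if err < best_err:
        best_size, best_err = size, err
print("selected number of neural units:", best_size)
```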

Collaboration


Dive into R. Zoppoli's collaborations.

Top Co-Authors

A. Alessandri

National Research Council
