Greg Foderaro
Duke University
Publications
Featured research published by Greg Foderaro.
Automatica | 2014
Greg Foderaro; Silvia Ferrari; Thomas A. Wettergren
This paper presents a novel optimal control problem, referred to as distributed optimal control, that is applicable to multiscale dynamical systems comprised of numerous interacting agents. The system performance is represented by an integral cost function of the macroscopic state that is optimized subject to a hyperbolic partial differential equation known as the advection equation. The microscopic control laws are derived from the optimal macroscopic description using a potential function approach. The optimality conditions of the distributed optimal control problem are first derived analytically and, then, demonstrated numerically through a multi-agent trajectory optimization problem.
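To illustrate the idea of deriving microscopic control laws from a macroscopic description, the sketch below steers a one-dimensional swarm toward a desired density by letting each agent descend the gradient of a potential built from the density error. The grid, potential shape, and gains are assumptions for illustration, not the paper's formulation; the macroscopic density then evolves under the advection equation ∂ρ/∂t + ∂(ρu)/∂x = 0.

```python
# Illustrative sketch (assumed grid, potential, and gains): each agent follows the
# negative gradient of a potential built from the macroscopic density error, so
# the swarm's density is steered toward a desired profile.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_bins, dt = 500, 50, 0.05
x = rng.uniform(0.0, 1.0, n_agents)               # microscopic states (1-D positions)
grid = np.linspace(0.0, 1.0, n_bins)

# Desired macroscopic density: a Gaussian bump centered at 0.7 (assumed target).
rho_star = np.exp(-0.5 * ((grid - 0.7) / 0.1) ** 2)
rho_star /= np.trapz(rho_star, grid)

for _ in range(200):
    # Current macroscopic state: normalized histogram of agent positions.
    rho, edges = np.histogram(x, bins=n_bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Potential proportional to the density error; agents descend its gradient.
    U = rho - np.interp(centers, grid, rho_star)
    dU = np.gradient(U, centers)
    u = -np.interp(x, centers, dU)                # microscopic control law u(x)
    x = np.clip(x + dt * u, 0.0, 1.0)             # agents advect with velocity u
```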
Advances in Artificial Neural Systems | 2012
Xu Zhang; Greg Foderaro; Craig S. Henriquez; Antonius M. J. VanDongen; Silvia Ferrari
This paper presents a deterministic and adaptive spike model derived from radial basis functions and a leaky integrate-and-fire sampler developed for training spiking neural networks without direct weight manipulation. Several algorithms have been proposed for training spiking neural networks through biologically-plausible learning mechanisms, such as spike-timing-dependent synaptic plasticity and Hebbian plasticity. These algorithms typically rely on the ability to update the synaptic strengths, or weights, directly, through a weight update rule in which the weight increment can be decided and implemented based on the training equations. However, in several potential applications of adaptive spiking neural networks, including neuroprosthetic devices and CMOS/memristor nanoscale neuromorphic chips, the weights cannot be manipulated directly and, instead, tend to change over time by virtue of the pre- and postsynaptic neural activity. This paper presents an indirect learning method that induces changes in the synaptic weights by modulating spike-timing-dependent plasticity by means of controlled input spike trains. In place of the weights, the algorithm manipulates the input spike trains used to stimulate the input neurons by determining a sequence of spike timings that minimizes a desired objective function and, indirectly, induces the desired synaptic plasticity in the network.
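As a rough illustration of indirect learning, the sketch below changes a synaptic weight only through the spike activity it induces: a pair-based STDP rule updates the weight of a leaky integrate-and-fire neuron, and the controller merely chooses, among candidate input spike trains, the one that drives the weight toward a target. All neuron and plasticity parameters are placeholders, not the model in the paper.

```python
# Illustrative sketch (placeholder neuron and STDP parameters, not the paper's
# model): the synaptic weight w is never written directly; it changes only via
# pair-based STDP induced by whichever candidate input spike train is applied.
import numpy as np

def run_trial(w, input_spike_times, T=200.0, dt=1.0, tau_m=20.0, v_th=1.0,
              a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    v, last_pre, last_post = 0.0, -np.inf, -np.inf
    for t in np.arange(0.0, T, dt):
        pre_spike = bool(np.any(np.isclose(input_spike_times, t)))
        v += dt * (-v / tau_m) + (w if pre_spike else 0.0)       # leaky integrate-and-fire
        if pre_spike:
            last_pre = t
            w -= a_minus * np.exp(-(t - last_post) / tau_stdp)   # post-before-pre: depress
        if v >= v_th:                                            # postsynaptic spike
            v, last_post = 0.0, t
            w += a_plus * np.exp(-(t - last_pre) / tau_stdp)     # pre-before-post: potentiate
    return w

# Indirect training: choose the input spike train whose induced plasticity moves
# the weight closest to a target value, without manipulating w directly.
w, w_target = 0.5, 0.9
candidates = [np.arange(5.0, 200.0, 5.0), np.arange(5.0, 200.0, 40.0)]
for _ in range(20):
    outcomes = [run_trial(w, c) for c in candidates]
    w = min(outcomes, key=lambda w_new: abs(w_new - w_target))
```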
Environmental Science & Technology | 2016
John D. Albertson; Tierney A. Harvey; Greg Foderaro; Pingping Zhu; Xiaochi Zhou; Silvia Ferrari; M. Shahrooz Amin; Mark Modrak; Halley L. Brantley; Eben D. Thoma
This paper addresses the need for surveillance of fugitive methane emissions over broad geographical regions. Most existing techniques suffer from being either extensive (but qualitative) or quantitative (but intensive with poor scalability). Two novel advancements are made here. First, a recursive Bayesian method is presented for probabilistically characterizing fugitive point-sources from mobile sensor data. This approach is made possible by a new cross-plume integrated dispersion formulation that overcomes much of the need for time-averaging concentration data. The method is tested here against a limited data set of controlled methane releases and shown to perform well. We then present an information-theoretic approach to plan the paths of the sensor-equipped vehicle, where the path is chosen so as to maximize the expected reduction in integrated target source rate uncertainty in the region, subject to given starting and ending positions and prevailing meteorological conditions. The information-driven sensor path planning algorithm is tested and shown to provide robust results across a wide range of conditions. An overall system concept is presented for optionally piggybacking these techniques onto normal industry maintenance operations using sensor-equipped work trucks.
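The sketch below illustrates the flavor of the recursive Bayesian step: a discrete posterior over the source emission rate is updated as concentration measurements arrive along a drive path. The simple Gaussian-plume-style forward model, noise level, and measurement values are placeholders, not the paper's cross-plume integrated dispersion formulation.

```python
# Illustrative sketch (placeholder plume model, noise level, and data, not the
# paper's cross-plume integrated formulation): a discrete posterior over the
# source rate q is updated recursively as mobile concentration measurements arrive.
import numpy as np

def predicted_concentration(q, downwind_x, crosswind_y, u_wind=3.0):
    sigma_y = 0.1 * downwind_x      # crude plume spread growing with downwind distance
    return q / (np.sqrt(2.0 * np.pi) * sigma_y * u_wind) * np.exp(-0.5 * (crosswind_y / sigma_y) ** 2)

q_grid = np.linspace(0.0, 10.0, 201)                 # candidate emission rates (assumed units)
posterior = np.ones_like(q_grid) / q_grid.size       # uniform prior

measurements = [                                     # (downwind m, crosswind m, concentration)
    (100.0, 5.0, 0.021), (120.0, -10.0, 0.017), (150.0, 0.0, 0.019),
]
noise_std = 0.005
for x, y, z in measurements:
    mean = predicted_concentration(q_grid, x, y)
    likelihood = np.exp(-0.5 * ((z - mean) / noise_std) ** 2)
    posterior *= likelihood                          # recursive Bayes update (unnormalized)
    posterior /= np.trapz(posterior, q_grid)         # renormalize over the rate grid

q_map = q_grid[np.argmax(posterior)]                 # maximum a posteriori rate estimate
```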
Computational Intelligence and Games | 2012
Greg Foderaro; Ashleigh Swingler; Silvia Ferrari
This paper presents an online approach for optimizing paths for a pursuit-evasion problem, in which an agent must visit several target positions within an environment while simultaneously avoiding one or more actively-pursuing adversaries. This problem is found in a variety of fields, such as robotic path planning, mobile-sensor applications, and path exposure. The methodology developed utilizes cell decomposition to construct a modified decision tree, which balances the reward associated with visiting target locations and the risk of capture by the adversaries. By computing paths online, the algorithm can quickly adapt to unexpected adversary behaviors and dynamic environments. The methodology developed in this paper is implemented as a controller for an artificial player in the Ms. Pac-Man arcade game and was entered into the IEEE CIG 2012 screen-capture Ms. Pac-Man competition, where it achieved a high score of 44,630 points.
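The risk/reward trade-off can be sketched roughly as below: candidate visit orders (the leaves of a small decision tree) are scored by the reward of targets visited minus a penalty for passing near pursuers. The toy grid, scoring weights, and Manhattan-distance risk term are assumptions for illustration, not the game controller described in the paper.

```python
# Illustrative sketch (toy grid, assumed scoring): evaluate candidate visit orders
# by the reward of targets visited minus a penalty for passing near pursuers, in
# the spirit of a risk/reward trade-off over a decision tree of routes.
from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def path_score(start, order, targets, adversaries, reward=10.0, risk_weight=25.0):
    score, pos = 0.0, start
    for t in order:
        score += reward                                   # reward for visiting a target
        # Penalize legs that pass close to any actively pursuing adversary.
        for adv in adversaries:
            closest = min(manhattan(adv, pos), manhattan(adv, targets[t]))
            score -= risk_weight / (1.0 + closest)
        pos = targets[t]
    return score

start = (0, 0)
targets = {"A": (5, 1), "B": (2, 6), "C": (7, 7)}
adversaries = [(4, 2), (6, 6)]

# Enumerate visit orders (tractable for a small problem) and pick the best.
best = max(permutations(targets), key=lambda o: path_score(start, o, targets, adversaries))
```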
International Conference on Robotics and Automation | 2010
Silvia Ferrari; Greg Foderaro
A novel artificial-potential approach is presented for planning the minimum-exposure paths of multiple vehicles in a dynamic environment containing multiple mobile sensors and multiple fixed obstacles. This approach presents several advantages over existing techniques, such as the ability to compute multiple minimum-exposure paths online while avoiding mutual collisions, as well as collisions with obstacles sensed during the motion. Other important advantages include the ability to utilize heterogeneous sensor models and to meet multiple objectives, such as minimizing the power required and reaching a set of goal configurations. The approach is demonstrated through numerical simulations involving autonomous underwater vehicles (AUVs) deployed in a region of interest near the New Jersey coast, with ocean currents simulated using real coastal ocean dynamics applications radar (CODAR) data.
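A minimal sketch of the artificial-potential idea, with assumed potential shapes and gains: the vehicle descends the gradient of an attractive goal potential plus repulsive potentials centered on sensors and obstacles, yielding a low-exposure, collision-free path. The AUV dynamics, CODAR currents, and heterogeneous sensor models of the paper are omitted.

```python
# Illustrative sketch (assumed potential shapes and gains): a vehicle descends the
# gradient of an attractive goal potential plus repulsive potentials centered on
# mobile sensors and fixed obstacles.
import numpy as np

def potential_gradient(p, goal, repulsors, k_att=1.0, k_rep=4.0, influence=2.0):
    grad = k_att * (p - goal)                       # attractive: quadratic well at the goal
    for c in repulsors:
        d = np.linalg.norm(p - c)
        if 1e-6 < d < influence:                    # repulsive only inside an influence radius
            grad += k_rep * (1.0 / influence - 1.0 / d) / d**3 * (p - c)
    return grad

p = np.array([0.0, 0.0])
goal = np.array([10.0, 8.0])
repulsors = [np.array([4.0, 3.0]), np.array([7.0, 6.0])]   # sensor/obstacle positions (assumed)

path = [p.copy()]
for _ in range(500):
    p = p - 0.05 * potential_gradient(p, goal, repulsors)  # gradient-descent step
    path.append(p.copy())
    if np.linalg.norm(p - goal) < 0.1:
        break
```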
Conference on Decision and Control | 2010
Greg Foderaro; Craig S. Henriquez; Silvia Ferrari
Recently, spiking neural networks (SNNs) have been shown capable of approximating the dynamics of biological neuronal networks, and of being trainable by biologically-plausible learning mechanisms, such as spike-timing-dependent synaptic plasticity. Numerical simulations also support the possibility that they may possess universal function approximation abilities. However, the effectiveness of training algorithms to date is far inferior to that of other artificial neural networks. Moreover, they rely on directly manipulating the SNN weights, which may not be feasible in a number of their potential applications. This paper presents a novel indirect training approach to modulate spike-timing-dependent plasticity (STDP) in an action SNN that serves as a flight controller without directly manipulating its weights. A critic SNN is directly trained with a reward-based Hebbian approach to send spike trains to the action SNN, which in turn controls the aircraft and learns via STDP. The approach is demonstrated by training the action SNN to act as a flight controller for stability augmentation. Its performance and dynamics are analyzed before and after training through numerical simulations and Poincaré maps.
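As a loose, rate-based stand-in for the critic's reward-driven Hebbian training (the paper's networks are spiking), the sketch below correlates exploratory output fluctuations with reward gains to adapt a small weight matrix toward a set-point. The network size, reward shape, and learning rate are assumptions.

```python
# Illustrative sketch (rate-based stand-in with assumed parameters, not the paper's
# spiking implementation): a reward-modulated Hebbian rule correlates exploratory
# output fluctuations with reward gains to adapt weights toward a set-point.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, lr = 8, 2, 0.1
W = 0.1 * rng.standard_normal((n_out, n_in))     # critic weights
desired = np.array([0.5, -0.2])                  # assumed output set-point
baseline = 0.0                                   # running average of the reward

for episode in range(2000):
    pre = rng.random(n_in)                                  # presynaptic activity (placeholder)
    mean_post = np.tanh(W @ pre)                            # noiseless postsynaptic response
    post = mean_post + 0.1 * rng.standard_normal(n_out)     # exploratory fluctuation
    r = np.exp(-np.sum((desired - post) ** 2))              # reward: high near the set-point
    # Hebbian term (pre x post deviation) gated by the reward relative to its baseline.
    W += lr * (r - baseline) * np.outer(post - mean_post, pre)
    baseline = 0.9 * baseline + 0.1 * r
```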
IEEE Control Systems Magazine | 2016
Silvia Ferrari; Greg Foderaro; Pingping Zhu; Thomas A. Wettergren
Many complex systems, ranging from renewable resources [1] to very-large-scale robotic systems (VLRS) [2], can be described as multiscale dynamical systems comprising many interacting agents. In recent years, significant progress has been made in the formation control and stability analysis of teams of agents, such as robots or autonomous vehicles. In these systems, the mutual goals of the agents are, for example, to maintain a desired configuration, such as a triangle or a star formation, or to perform a desired behavior, such as translating as a group (schooling) or maintaining the center of mass of the group (flocking) [2]-[7]. While this literature has successfully illustrated that the behavior of large networks of interacting agents can be conveniently described and controlled by density functions, it has yet to provide an approach for optimizing the agent density functions such that the agents' mutual goals are best achieved.
Conference on Decision and Control | 2013
Keith Rudd; Greg Foderaro; Silvia Ferrari
This paper considers the problem of computing optimal state and control trajectories for a multiscale dynamical system comprised of many interacting dynamical systems, or agents. A generalized reduced gradient (GRG) approach is presented for distributed optimal control (DOC) problems in which the agent dynamics are described by a small system of stochastic differential equations (SDEs). A new set of optimality conditions is derived using calculus of variations, and used to compute the optimal macroscopic state and microscopic control laws. An indirect GRG approach is used to solve the optimality conditions numerically for large systems of agents. By assuming a parametric control law obtained from the superposition of linear basis functions, the agent control laws can be determined via set-point regulation, such that the macroscopic behavior of the agents is optimized over time, based on multiple, interactive navigation objectives.
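The parametric control law can be illustrated as follows: the velocity field is written as a superposition of Gaussian basis functions, whose coefficients are fit here by least squares to a desired macroscopic velocity field, and each agent then regulates to that field at its own state. The basis, fit, and target field are placeholders; the paper instead solves the derived optimality conditions with an indirect GRG method.

```python
# Illustrative sketch (assumed basis and least-squares fit, not the paper's GRG
# solver): the control law is parameterized as a superposition of Gaussian basis
# functions fit to a desired macroscopic velocity field; each agent applies the
# shared law u(x) at its own microscopic state.
import numpy as np

centers = np.linspace(0.0, 1.0, 10)              # basis-function centers (assumed)
width = 0.08

def basis(x):
    # Gaussian radial basis functions evaluated at state(s) x.
    return np.exp(-0.5 * ((np.atleast_1d(x)[:, None] - centers) / width) ** 2)

# Desired macroscopic velocity field (placeholder): push the density toward x = 0.7.
x_grid = np.linspace(0.0, 1.0, 100)
u_desired = 0.7 - x_grid

# Fit coefficients c so that u(x) = sum_k c_k * phi_k(x) matches u_desired.
Phi = basis(x_grid)
c, *_ = np.linalg.lstsq(Phi, u_desired, rcond=None)

def control(x):
    return basis(x) @ c                           # parametric control law u(x)

# Each agent regulates to the fitted field at its own state (one Euler step shown).
agents = np.random.default_rng(2).uniform(0.0, 1.0, 200)
agents = agents + 0.05 * control(agents)
```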
IEEE Transactions on Control of Network Systems | 2018
Greg Foderaro; Pingping Zhu; Hongchuan Wei; Thomas A. Wettergren; Silvia Ferrari
This paper presents a distributed optimal control approach for managing omnidirectional sensor networks deployed to cooperatively track moving targets in a region of interest. Several authors have shown that under proper assumptions, the performance of mobile sensors is a function of the sensor distribution. In particular, the probability of cooperative track detection, also known as track coverage, can be shown to be an integral function of a probability density function representing the macroscopic sensor network state. Thus, a mobile sensor network deployed to detect moving targets can be viewed as a multiscale dynamical system in which a time-varying probability density function can be identified as a restriction operator, and optimized subject to macroscopic dynamics represented by the advection equation. Simulation results show that the distributed control approach is capable of planning the motion of hundreds of cooperative sensors, such that their effectiveness is significantly increased compared to that of existing uniform, grid, random, and stochastic gradient methods.
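A toy stand-in for the track-coverage idea: the sensor network is summarized by the distribution of sensor positions, and a candidate distribution is scored by the fraction of randomly sampled straight-line target tracks that pass within detection range of at least one sensor. The detection model, radius, and sampling below are illustrative, not the paper's integral track-coverage functional or its advection-constrained optimization.

```python
# Illustrative sketch (toy coverage functional, not the paper's track-coverage
# integral): treat the sensor network as a distribution over the region and score
# it by the expected detection of sampled straight-line target tracks.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, detect_radius = 200, 0.05

def coverage_score(sensor_positions, n_tracks=300, n_waypoints=20):
    score = 0.0
    for _ in range(n_tracks):
        # Sample a straight-line target track across the unit square.
        a, b = rng.uniform(0.0, 1.0, 2), rng.uniform(0.0, 1.0, 2)
        track = a + np.linspace(0.0, 1.0, n_waypoints)[:, None] * (b - a)
        # The track counts as detected if any waypoint falls within a sensor's radius.
        d = np.linalg.norm(track[:, None, :] - sensor_positions[None, :, :], axis=-1)
        score += float(np.any(d < detect_radius))
    return score / n_tracks

uniform = rng.uniform(0.0, 1.0, (n_sensors, 2))              # uniform sensor distribution
clustered = 0.5 + 0.1 * rng.standard_normal((n_sensors, 2))  # density concentrated mid-region
print(coverage_score(uniform), coverage_score(clustered))
```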
International Symposium on Intelligent Control | 2011
Greg Foderaro; Vikram Raju; Silvia Ferrari
This paper presents an approach for optimizing paths online for a pursuit-evasion problem where an agent must visit several target positions within a region of interest while simultaneously avoiding one or more actively-pursuing adversaries. This problem is relevant to fields such as robotic path planning, mobile-sensor applications, and path exposure. The methodology described utilizes cell decomposition to construct a modified decision tree that minimizes the risk of being caught by an adversary while maximizing a reward associated with visiting the target locations. By computing paths online, the algorithm can quickly adapt to unexpected movements by the adversaries or to dynamic environments. The approach is illustrated through a modified version of the video game Ms. Pac-Man, which is shown to be a benchmark example of the pursuit-evasion problem. The results show that the approach presented in this paper runs in real time and outperforms several other methods as well as most human players.
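The cell-decomposition step can be sketched on a toy maze: free space is decomposed into cells, a connectivity graph is built over adjacent free cells, and candidate routes for the decision tree are paths in that graph. The maze layout and the breadth-first route search below are illustrative stand-ins for the game's actual layout and the modified decision tree.

```python
# Illustrative sketch (toy maze, not the game's actual layout): decompose the free
# space into cells and build a connectivity graph; candidate routes are sequences
# of adjacent free cells.
from collections import deque

maze = [            # '#' = wall, '.' = corridor cell
    "#######",
    "#.....#",
    "#.###.#",
    "#.....#",
    "#######",
]
free = {(r, c) for r, row in enumerate(maze) for c, ch in enumerate(row) if ch == "."}
adjacency = {
    cell: [n for n in ((cell[0] + dr, cell[1] + dc)
                       for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
           if n in free]
    for cell in free
}

def shortest_path(start, goal):
    # Breadth-first search over the cell connectivity graph.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for n in adjacency[path[-1]]:
            if n not in seen:
                seen.add(n)
                queue.append(path + [n])
    return None

route = shortest_path((1, 1), (3, 5))
```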