Publication


Featured research published by Joel A. Rosenfeld.


Advances in Computing and Communications | 2014

Decentralized formation control with connectivity maintenance and collision avoidance under limited and intermittent sensing

Teng-Hu Cheng; Zhen Kan; Joel A. Rosenfeld; Warren E. Dixon

A decentralized switched controller is developed that drives dynamic agents to a global formation configuration while maintaining network connectivity and avoiding collisions among agents and with stationary obstacles, using only local feedback under limited and intermittent sensing. Because sensing is intermittent, continuous position feedback may not be available to the agents, and intermittent sensing can also lead to a disconnected network or to collisions between agents. Using a navigation function framework, the controller navigates the agents to their desired positions while ensuring network maintenance and collision avoidance. Simulation results illustrate the performance of the developed controller.
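
To make the navigation-function framework concrete, here is a minimal single-agent sketch, assuming a Rimon-Koditschek-style navigation function; the obstacle location, radius, tuning parameter, and step size are illustrative placeholders rather than values from the paper.

```python
# Hedged sketch: single-agent goal seeking with obstacle avoidance via the
# negative gradient of a navigation function. All constants are assumptions.
import numpy as np

KAPPA = 3.0                                  # navigation-function parameter
OBSTACLES = [np.array([1.0, 0.8])]           # hypothetical obstacle center
R_OBS = 0.3                                  # obstacle radius (assumed)

def nav_fn(q, q_goal):
    """0 at the goal; approaches 1 near an obstacle boundary."""
    gamma = np.sum((q - q_goal) ** 2)
    beta = np.prod([np.sum((q - c) ** 2) - R_OBS ** 2 for c in OBSTACLES])
    return gamma / (gamma ** KAPPA + max(beta, 1e-9)) ** (1.0 / KAPPA)

def control(q, q_goal, h=1e-5):
    """Normalized negative gradient of the navigation function."""
    grad = np.zeros_like(q)
    for i in range(len(q)):
        e = np.zeros_like(q); e[i] = h
        grad[i] = (nav_fn(q + e, q_goal) - nav_fn(q - e, q_goal)) / (2 * h)
    return -grad / (np.linalg.norm(grad) + 1e-12)

q, goal = np.array([0.0, 0.0]), np.array([2.0, 2.0])
for _ in range(2000):                        # single-integrator rollout
    if nav_fn(q, goal) < 1e-3:
        break
    q = q + 0.01 * control(q, goal)
print(q)   # ends near the goal after steering around the obstacle
```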


Advances in Computing and Communications | 2015

State following (StaF) kernel functions for function approximation Part II: Adaptive dynamic programming

Rushikesh Kamalapurkar; Joel A. Rosenfeld; Warren E. Dixon

An infinite horizon optimal regulation problem is solved online for a deterministic control-affine nonlinear dynamical system using a state following (StaF) kernel method to approximate the value function. Unlike traditional methods that aim to approximate a function over a large compact set, the StaF kernel method aims to approximate a function in a small neighborhood of a state that travels within a compact set. Simulation results demonstrate that stability and approximate optimality of the control system can be achieved with significantly fewer basis functions than may be required for global approximation methods.
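
As a rough illustration of the StaF idea in this ADP setting, the sketch below uses Gaussian kernels whose centers ride at fixed offsets from the current state, so the value function is only ever modeled locally; the planar system, cost weights, and gains are assumptions for illustration, not the paper's simulation study.

```python
# Hedged sketch: a StaF-style critic with three moving Gaussian kernels.
import numpy as np

D = np.array([[0.3, 0.0], [-0.3, 0.0], [0.0, 0.3]])   # center offsets (assumed)

def phi_grad():
    """Gradient at y = x of each kernel exp(-||y - (x + d)||^2); because the
    centers ride at fixed offsets from x, these gradients are constant."""
    return np.array([2.0 * d * np.exp(-d @ d) for d in D])      # shape (3, 2)

def f(x): return np.array([-x[0] + x[1], -0.5 * x[1]])  # drift (placeholder)
g = np.array([0.0, 1.0])                                # input map (placeholder)

w = np.zeros(len(D))                 # critic weights of the local value model
x = np.array([1.0, -1.0])
dt, kc = 0.005, 5.0                  # step size and critic gain (assumed)
G = phi_grad()
for _ in range(20000):
    u = -0.5 * (w @ G) @ g                   # approximate optimal control, R = 1
    xdot = f(x) + g * u
    delta = (w @ G) @ xdot + x @ x + u * u   # Bellman error at the current state
    w -= dt * kc * delta * (G @ xdot)        # gradient step on the squared error
    x = x + dt * xdot
print(x, w)   # the state settles near the origin with only three kernels
```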


Automatica | 2016

Efficient model-based reinforcement learning for approximate online optimal control

Rushikesh Kamalapurkar; Joel A. Rosenfeld; Warren E. Dixon



Advances in Computing and Communications | 2015

State following (StaF) kernel functions for function approximation Part I: Theory and motivation

Joel A. Rosenfeld; Rushikesh Kamalapurkar; Warren E. Dixon

Unlike traditional methods that aim to approximate a function over a large compact set, the function approximation method developed in this paper aims to approximate a function in a small neighborhood of a state that travels within a compact set. The development is based on universal reproducing kernel Hilbert spaces over the n-dimensional Euclidean space. Three theorems are introduced that support the development of this state following (StaF) method. In particular, an explicit uniform bound on the number of StaF kernel functions can be calculated to ensure good approximation as the state moves through a large compact set. A gradient descent algorithm is demonstrated in which a good approximation of a function can be achieved, provided the algorithm is applied at a high enough frequency.
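
As a toy illustration of approximation in a moving neighborhood, the sketch below repeatedly refits a handful of kernels centered near the current state as that state traces a path; the Gaussian kernels, target function, trajectory, sampling, and rates are all assumptions (the paper treats universal RKHS kernels in general).

```python
# Hedged sketch: gradient-descent refitting of state-following kernels.
import numpy as np

rng = np.random.default_rng(0)
target = lambda y: np.sin(y[..., 0]) * np.cos(y[..., 1])    # function to track
D = np.array([[0.2, 0.0], [-0.2, 0.0], [0.0, 0.2], [0.0, -0.2]])   # offsets

def feats(y, x):
    """Gaussian kernels centered at x + d_i, evaluated at query points y."""
    return np.exp(-np.sum((y[:, None, :] - (x + D)[None, :, :]) ** 2, axis=2))

w = np.zeros(len(D))
for t in range(5000):
    x = np.array([np.cos(0.002 * t), np.sin(0.002 * t)])    # moving state
    y = x + 0.1 * rng.standard_normal((8, 2))               # samples near x
    Phi = feats(y, x)                                       # (8, 4) features
    w -= 0.1 * Phi.T @ (Phi @ w - target(y)) / len(y)       # gradient step
print(abs(feats(x[None, :], x) @ w - target(x[None, :])))   # small local error
```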


SIAM Journal on Numerical Analysis | 2017

Approximating the Caputo Fractional Derivative through the Mittag-Leffler Reproducing Kernel Hilbert Space and the Kernelized Adams–Bashforth–Moulton Method

Joel A. Rosenfeld; Warren E. Dixon

This paper introduces techniques for the estimation of solutions to fractional order differential equations (FODEs) and the approximation of a function's Caputo fractional derivative. These techniques...
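
For context, the block below sketches the classical (non-kernelized) fractional Adams–Bashforth–Moulton predictor-corrector of Diethelm et al. for D^α y = f(t, y) with 0 < α < 1; the paper's kernelized variant built on the Mittag-Leffler reproducing kernel Hilbert space is not reproduced here.

```python
# Hedged sketch: classical fractional ABM predictor-corrector (not the
# kernelized method of the paper).
import math

def fabm(f, y0, alpha, T, N):
    h = T / N
    ts = [i * h for i in range(N + 1)]
    ys, fs = [y0], [f(ts[0], y0)]
    c_corr = h ** alpha / math.gamma(alpha + 2)          # corrector scale
    c_pred = h ** alpha / (alpha * math.gamma(alpha))    # predictor scale
    for n in range(N):
        # predictor (fractional Adams-Bashforth)
        pred = y0 + c_pred * sum(
            ((n + 1 - j) ** alpha - (n - j) ** alpha) * fs[j]
            for j in range(n + 1))
        # corrector (fractional Adams-Moulton) weights a_{j,n+1}
        a = [n ** (alpha + 1) - (n - alpha) * (n + 1) ** alpha]
        a += [(n - j + 2) ** (alpha + 1) + (n - j) ** (alpha + 1)
              - 2 * (n - j + 1) ** (alpha + 1) for j in range(1, n + 1)]
        corr = y0 + c_corr * (f(ts[n + 1], pred)
                              + sum(a[j] * fs[j] for j in range(n + 1)))
        ys.append(corr)
        fs.append(f(ts[n + 1], corr))
    return ts, ys

# D^0.5 y = -y, y(0) = 1 has solution E_{1/2}(-t^{1/2}), a Mittag-Leffler fn.
ts, ys = fabm(lambda t, y: -y, 1.0, 0.5, 1.0, 200)
print(ys[-1])   # ~0.4276 = e * erfc(1)
```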


Archive | 2018

Model-Based Reinforcement Learning for Approximate Optimal Control

Rushikesh Kamalapurkar; Patrick Walters; Joel A. Rosenfeld; Warren E. Dixon

This chapter develops a data-driven implementation of model-based reinforcement learning to solve approximate optimal control problems online under a persistence of excitation-like rank condition. The development is based on the observation that, given a model of the system, reinforcement learning can be implemented by evaluating the Bellman error at any number of desired points in the state-space. In this result, a parametric system model is considered, and a data-driven parameter identifier is developed to compensate for uncertainty in the parameters. Uniformly ultimately bounded regulation of the system states to a neighborhood of the origin, and convergence of the developed policy to a neighborhood of the optimal policy, are established using a Lyapunov-based analysis. Simulation results indicate that the developed controller can be implemented to achieve fast online learning without the addition of ad hoc probing signals as in Chap. 3. The developed model-based reinforcement learning method is extended to solve trajectory tracking problems for uncertain nonlinear systems, and to generate approximate feedback-Nash equilibrium solutions to N-player nonzero-sum differential games.
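
A minimal sketch of the central observation follows: given an identified model, the Bellman error can be evaluated at arbitrary user-selected points in the state space rather than only along the measured trajectory. The linear model, quadratic value features, and sample grid are illustrative assumptions, not the chapter's examples.

```python
# Hedged sketch: Bellman-error extrapolation with an identified linear model.
import numpy as np

rng = np.random.default_rng(1)
A_hat = np.array([[-1.0, 1.0], [0.0, -0.5]])   # identified drift (placeholder)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def sigma_grad(x):
    """Gradient of the quadratic value features [x1^2, x1*x2, x2^2]."""
    return np.array([[2 * x[0], 0.0], [x[1], x[0]], [0.0, 2 * x[1]]])

w = np.zeros(3)                             # critic weights
pts = rng.uniform(-2, 2, size=(25, 2))      # user-selected extrapolation points
for _ in range(4000):
    grad_w = np.zeros(3)
    for x in pts:                           # Bellman error away from the trajectory
        G = sigma_grad(x)                                # (3, 2)
        u = -0.5 * np.linalg.solve(R, B.T @ (G.T @ w))   # minimizing control
        xdot = A_hat @ x + B @ u
        delta = w @ (G @ xdot) + x @ Q @ x + u @ R @ u   # Bellman error
        grad_w += delta * (G @ xdot)
    w -= 0.005 * grad_w / len(pts)          # batch gradient step
print(w)   # should approach the optimal quadratic value function's coefficients
```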


Archive | 2018

Differential Graphical Games

Rushikesh Kamalapurkar; Patrick Walters; Joel A. Rosenfeld; Warren E. Dixon

This chapter deals with the formulation and online approximate feedback-Nash equilibrium solution of an optimal network formation tracking problem. A relative control error minimization technique is introduced to facilitate the formulation of a feasible infinite-horizon total-cost differential graphical game. A dynamic programming-based feedback-Nash equilibrium solution to the differential graphical game is obtained via the development of a set of coupled Hamilton–Jacobi equations. The developed approximate feedback-Nash equilibrium solution is analyzed using a Lyapunov-based stability analysis to demonstrate ultimately bounded formation tracking in the presence of uncertainties. In addition to control, this chapter also explores applications of differential graphical games to monitoring the behavior of neighboring agents in a network.
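
To make the graphical structure concrete, the sketch below evaluates a standard neighborhood formation-tracking error on an assumed communication graph; the adjacency matrix, pinning vector, desired offsets, and positions are placeholders, not the chapter's formulation verbatim.

```python
# Hedged sketch: neighborhood formation-tracking error on a small graph.
import numpy as np

A = np.array([[0, 1, 0],        # adjacency: A[i, j] = 1 if agent i senses j
              [1, 0, 1],
              [0, 1, 0]])
pin = np.array([1, 0, 0])       # agent 0 also senses the leader (assumed)
d = np.array([[1.0, 0.0], [0.0, 0.0], [-1.0, 0.0]])   # desired leader offsets

def neighborhood_error(x, x_leader):
    """e_i = sum_j A[i,j]((x_i - x_j) - (d_i - d_j)) + pin_i((x_i - x_leader) - d_i)."""
    e = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            e[i] += A[i, j] * ((x[i] - x[j]) - (d[i] - d[j]))
        e[i] += pin[i] * ((x[i] - x_leader) - d[i])
    return e

x = np.array([[0.9, 0.1], [0.0, -0.1], [-1.2, 0.0]])   # current positions
print(neighborhood_error(x, x_leader=np.zeros(2)))     # near zero in formation
```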


Archive | 2018

Approximate Dynamic Programming

Rushikesh Kamalapurkar; Patrick Walters; Joel A. Rosenfeld; Warren E. Dixon

This chapter contains a brief review of dynamic programming in continuous time and space. In particular, traditional dynamic programming algorithms such as policy iteration, value iteration, and actor-critic methods are presented in the context of continuous-time optimal control. The role of the optimal value function as a Lyapunov function is explained to facilitate online closed-loop optimal control. This chapter also highlights the problems and the limitations of existing techniques, thereby motivating the development in this book. The chapter concludes with some historical remarks and a brief classification of the available dynamic programming techniques.
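
As a concrete instance of policy iteration in continuous time, the sketch below implements Kleinman's algorithm for the LQR special case, where each policy-evaluation step reduces to a Lyapunov equation; the system matrices and initial stabilizing gain are assumptions for illustration.

```python
# Hedged sketch: continuous-time policy iteration (Kleinman) for LQR.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

K = np.array([[1.0, 1.0]])      # initial stabilizing gain (assumed)
for _ in range(10):
    Ak = A - B @ K
    # policy evaluation: Ak' P + P Ak + Q + K' R K = 0
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # policy improvement: minimize the Hamiltonian for this P
    K = np.linalg.solve(R, B.T @ P)
print(P)   # converges to the stabilizing solution of the Riccati equation
```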


Archive | 2018

Excitation-Based Online Approximate Optimal Control

Rushikesh Kamalapurkar; Patrick Walters; Joel A. Rosenfeld; Warren E. Dixon

In this chapter, online adaptive reinforcement learning-based solutions are developed for infinite-horizon optimal control problems for continuous-time uncertain nonlinear systems. An actor-critic-identifier structure is developed to approximate the solution to the Hamilton–Jacobi–Bellman equation using three neural network structures. The actor and the critic neural networks approximate the optimal control and the optimal value function, respectively, and a robust dynamic neural network identifier asymptotically approximates the uncertain system dynamics. An advantage of using the actor-critic-identifier architecture is that learning by the actor, critic, and identifier is continuous and concurrent, without requiring knowledge of system drift dynamics. Convergence of the algorithm is analyzed using Lyapunov-based adaptive control methods. A persistence of excitation condition is required to guarantee exponential convergence to a bounded region in the neighborhood of the optimal control and uniformly ultimately bounded stability of the closed-loop system. The developed actor-critic method is extended to solve trajectory tracking problems under the assumption that the system dynamics are completely known. The actor-critic-identifier architecture is also extended to generate approximate feedback-Nash equilibrium solutions to N-player nonzero-sum differential games. Simulation results are provided to demonstrate the performance of the developed actor-critic-identifier method.
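
The sketch below isolates the actor-critic piece of this architecture on an assumed linear system, with a decaying probing signal standing in for the persistence of excitation condition; the identifier is replaced by known dynamics for brevity, and the features, gains, and probing signal are illustrative.

```python
# Hedged sketch: concurrent actor and critic weight updates along the trajectory.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # dynamics assumed known here
B = np.array([0.0, 1.0])

def sg(x):
    """Gradient of the quadratic value features [x1^2, x1*x2, x2^2]."""
    return np.array([[2 * x[0], 0.0], [x[1], x[0]], [0.0, 2 * x[1]]])

wc, wa = np.ones(3), np.ones(3)            # critic and actor weights
x = np.array([2.0, -1.0])
dt = 0.005
for k in range(40000):
    G = sg(x)
    u = -0.5 * B @ (G.T @ wa)                          # actor's control (R = 1)
    u += 0.3 * np.exp(-1e-4 * k) * np.sin(0.1 * k)     # decaying probing signal
    xdot = A @ x + B * u
    delta = wc @ (G @ xdot) + x @ x + u * u            # Bellman error
    wc -= dt * 2.0 * delta * (G @ xdot) / (1.0 + (G @ xdot) @ (G @ xdot))
    wa -= dt * (wa - wc)                   # pull the actor toward the critic
    x = x + dt * xdot
print(wc, wa)   # both drift toward the optimal value function's coefficients
```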


Integral Equations and Operator Theory | 2015

Introducing the Polylogarithmic Hardy Space

Joel A. Rosenfeld

In this paper we investigate the reproducing kernel Hilbert space in which the polylogarithm appears as the kernel function. The investigation begins with the properties of functions in this space, where a connection to the classical Hardy space is established through the Bose–Einstein integral equation. Next we consider function theoretic operators over the polylogarithmic Hardy space, such as multiplication and Toeplitz operators. It is shown that only trivial densely defined multiplication operators (and therefore only trivial bounded multipliers) exist over this space, making it the first space for which this has been found to be true. In the case of Toeplitz operators, a connection between a certain subset of these operators and the number theoretic divisor function is established. Finally, the paper concludes with an operator theoretic proof of the multiplicativity of the divisor function.
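
For orientation, the display below records the kernel presumably in play when the polylogarithm is taken as the kernel function in the usual reproducing-kernel fashion; the exponent convention and the norm identity are stated as assumptions, not quoted from the paper.

```latex
% Hedged sketch of the polylogarithmic kernel and the induced norm.
\[
  \operatorname{Li}_s(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^s},
  \qquad
  K_s(z, w) = \operatorname{Li}_s(z\bar{w})
            = \sum_{n=1}^{\infty} \frac{(z\bar{w})^n}{n^s}.
\]
```

Under this convention, f(z) = Σ_{n≥1} a_n z^n belongs to the space precisely when Σ_{n≥1} n^s |a_n|² < ∞, and s = 0 recovers, up to the constant term, the classical Hardy space H², consistent with the connection to the Hardy space noted above.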

Collaboration


Dive into Joel A. Rosenfeld's collaborations.

Top Co-Authors

Andrew R. Teel

University of California
