Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nilesh V. Kulkarni is active.

Publication


Featured research published by Nilesh V. Kulkarni.


Journal of Propulsion and Power | 2005

Performance Optimization of a Magnetohydrodynamic Generator at the Scramjet Inlet

Nilesh V. Kulkarni; Minh Q. Phan

The performance optimization problem of a magnetohydrodynamic (MHD) generator at the inlet of a scramjet engine is studied. The generator performance is characterized in terms of both the net energy extracted from the flow and the flow characteristics through the channel before it enters the combustion chamber of the scramjet engine. The analysis assumes a steady-state one-dimensional flow in which the position along the channel is treated as an independent coordinate. The performance optimization problem can then be handled by optimal control theory, in which position now plays the role of time as in a standard control problem. In this work, an optimal neural-networks-based controller design technique developed in our previous work is used. The technique is well suited for this application, because the MHD system is highly nonlinear. Furthermore, the technique is data-based, in that experimental data obtained from the system can be used to design the optimal controller without requiring an explicit model of the system.
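
As a rough, hedged illustration of the position-as-time idea, the sketch below integrates a toy constant-density channel model (not the paper's quasi-1D MHD formulation) in which the per-segment load factor plays the role of the control profile, and numerically optimizes that profile to trade extracted power against exit-flow velocity; all symbols, numerical values, and the exit-velocity penalty are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy constant-density channel model; a stand-in for the paper's quasi-1D MHD
# flow equations.  Position along the channel plays the role of "time".
L, N = 1.0, 20                     # channel length [m], number of segments (assumed)
dx = L / N
rho, sigma, B = 0.05, 10.0, 2.0    # density, conductivity, magnetic field (assumed)
u_in, u_min = 2500.0, 1500.0       # inlet velocity and required exit velocity [m/s]

def simulate(K):
    """March along the channel with a per-segment load factor K[k] in [0, 1]."""
    u, power = u_in, 0.0
    for k in range(N):
        power += sigma * K[k] * (1.0 - K[k]) * u**2 * B**2 * dx  # extracted power (per unit area)
        u     -= sigma * (1.0 - K[k]) * B**2 / rho * dx          # Lorentz-force deceleration
    return u, power

def neg_objective(K):
    # Maximize extracted power while penalizing an exit velocity below u_min,
    # standing in for the paper's flow-quality requirement at the combustor.
    u_exit, power = simulate(K)
    return -(power * 1e-6 - 10.0 * max(0.0, u_min - u_exit)**2)

res = minimize(neg_objective, x0=0.5 * np.ones(N),
               bounds=[(0.0, 1.0)] * N, method="L-BFGS-B")
K_opt = res.x                      # optimized load-factor profile along the channel
```

In the paper the optimization is carried out with the neural-network-based controller design technique rather than a generic numerical optimizer as used in this sketch.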


Journal of Guidance Control and Dynamics | 2002

Neural-Network-Based Design of Optimal Controllers for Nonlinear Systems

Nilesh V. Kulkarni; Minh Q. Phan

A neural-network-based methodology for the design of optimal controllers for nonlinear systems is presented. The overall architecture consists of two neural networks. The first neural network is a cost-to-go function approximator (CTGA), which is trained to predict the cost to go from the present state of the system. The second neural network converges to an optimal controller as it is trained to minimize the output of the first network. The CTGA can be trained using available simulation or experimental data. Hence an explicit analytical model of the system is not required. The key to the success of the approach is giving the CTGA a special decentralized structure that makes its training relatively straightforward and its prediction quality carefully controlled. The specific structure eliminates many of the uncertainties often involved in using artificial neural networks for this type of application. Validity of the approach is illustrated for the optimal attitude control of a spacecraft with reaction wheels.
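
A minimal sketch of the two-network training idea, under assumptions of a toy scalar nonlinear plant, a short horizon, and generic fully connected networks (the paper's example is spacecraft attitude control, and its CTGA has a special decentralized structure not reproduced here): the cost-to-go approximator is first regressed on rollout data, then a controller network is trained to minimize the frozen approximator's output.

```python
import torch
import torch.nn as nn

H = 10                                                     # horizon (assumed)

def step(x, u):
    # toy nonlinear plant used only to generate data (assumption)
    return torch.stack([x[:, 0] + 0.1 * x[:, 1],
                        x[:, 1] + 0.1 * (-torch.sin(x[:, 0]) + u[:, 0])], dim=1)

def rollout_cost(x0, useq):
    x, cost = x0, torch.zeros(x0.shape[0])
    for k in range(H):
        u = useq[:, k:k + 1]
        cost = cost + (x**2).sum(dim=1) + 0.1 * u[:, 0]**2
        x = step(x, u)
    return cost

# 1) cost-to-go approximator: (initial state, control sequence) -> accumulated cost
ctga = nn.Sequential(nn.Linear(2 + H, 64), nn.Tanh(), nn.Linear(64, 1))
opt_c = torch.optim.Adam(ctga.parameters(), lr=1e-3)
for _ in range(2000):
    x0 = 4 * torch.rand(256, 2) - 2
    useq = 2 * torch.rand(256, H) - 1
    pred = ctga(torch.cat([x0, useq], dim=1)).squeeze(1)
    loss = ((pred - rollout_cost(x0, useq))**2).mean()
    opt_c.zero_grad(); loss.backward(); opt_c.step()

# 2) controller network trained only to minimize the frozen cost-to-go network
for p in ctga.parameters():
    p.requires_grad_(False)
ctrl = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, H), nn.Tanh())
opt_u = torch.optim.Adam(ctrl.parameters(), lr=1e-3)
for _ in range(2000):
    x0 = 4 * torch.rand(256, 2) - 2
    loss = ctga(torch.cat([x0, ctrl(x0)], dim=1)).mean()
    opt_u.zero_grad(); loss.backward(); opt_u.step()
```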


AIAA Guidance, Navigation, and Control Conference and Exhibit | 2002

Data-Based Cost-To-Go Design for Optimal Control

Nilesh V. Kulkarni; Minh Q. Phan

An optimal controller design technique is presented in this paper. The regular parametric optimization approach leads to a nonlinear optimization problem even for a linear system. The presented approach converts the nonlinear optimization into a linear one, thus eliminating typical problems associated with nonlinear optimization. The approach relies on direct identification of the cost-to-go function from input-output or input-state data. The optimal controller is then obtained by minimizing this cost-to-go function. The optimal controller can be designed directly from data obtained from a simulation code or from the physical system. The need to perform system identification first to obtain an explicit model of the system in standard form is therefore eliminated. The derived optimal controllers can be either in state feedback form or dynamic output feedback form. Comparison of the proposed data-based optimal controller to the model-based linear quadratic regulator (LQR) is shown in a model aircraft example. This paper is focused on linear systems. Extension of the technique to the nonlinear case is reported in a companion paper.
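
A sketch of the linear-in-the-parameters idea for a toy discrete-time plant (the matrices, weights, and horizon are assumptions; the paper's example is a model aircraft): the quadratic cost-to-go is identified from input-state data by ordinary least squares, minimized in closed form, and the resulting first-step feedback gain is compared with the model-based LQR gain.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# toy linear plant and quadratic cost (assumptions)
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Q, R, H = np.eye(2), np.array([[0.1]]), 30

def rollout_cost(x0, u):
    x, J = x0, 0.0
    for k in range(H):
        J += x @ Q @ x + R[0, 0] * u[k]**2
        x = A @ x + B[:, 0] * u[k]
    return J

# 1) identify J = z' M z with z = [x0; u0..u_{H-1}] by LINEAR least squares on
#    input-state data; no explicit model of A, B is used at this stage
rng = np.random.default_rng(0)
n = 2 + H
idx = np.triu_indices(n)
scale = np.where(idx[0] == idx[1], 1.0, 2.0)
Z, y = [], []
for _ in range(4000):
    x0, u = rng.normal(size=2), rng.normal(size=H)
    z = np.concatenate([x0, u])
    Z.append(np.outer(z, z)[idx] * scale)
    y.append(rollout_cost(x0, u))
theta, *_ = np.linalg.lstsq(np.array(Z), np.array(y), rcond=None)
M = np.zeros((n, n)); M[idx] = theta
M = M + M.T - np.diag(np.diag(M))

# 2) minimize the identified cost-to-go analytically: u* = -Muu^{-1} Mux x0
Mux, Muu = M[2:, :2], M[2:, 2:]
K_data = np.linalg.solve(Muu, Mux)[0]            # first-step feedback gain

# 3) model-based LQR gain for comparison (should be close for a long horizon)
P = solve_discrete_are(A, B, Q, R)
K_lqr = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)[0]
print(K_data, K_lqr)
```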


AIAA 1st Intelligent Systems Technical Conference | 2004

Data-based Adaptive Predictive Control with Application to In-Flight MHD Power Generation

Nilesh V. Kulkarni; Minh Q. Phan

In this work, we design a control approach that combines the best features of model predictive control and adaptive critic designs. We first design a data-based predictive controller for the optimal control of nonlinear systems. Input-state data obtained from the physical system or a simulation code can be used to design the controller. This constitutes a nominal control design based on all the available knowledge of the system. When used online, the nominal control design may have to be adapted for several reasons. The predictive control design provides the initial values of the critic and the controller in the adaptive critic approach. The adaptive critic architecture then adapts the controller and the critic on-line to maximize the performance of the system. Possible application of magneto-hydrodynamics (MHD) in hypersonic systems has generated considerable interest in recent years. One of the interesting ideas for making air-breathing hypersonic flight possible is the MHD energy-bypass engine concept. An important problem is the performance optimization of the MHD power generator that is part of the energy-bypass engine concept. We illustrate the application of the proposed control approach to this problem using the action-dependent heuristic dynamic programming critic architecture.
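
A minimal action-dependent heuristic dynamic programming loop in the spirit of the on-line adaptation described here, with a toy scalar plant standing in for the MHD simulation and randomly initialized networks (in the paper, both the critic and the controller would instead be initialized from the off-line data-based predictive design); network sizes, gains, and the discount factor are assumptions.

```python
import torch
import torch.nn as nn

def plant(x, u):
    # toy scalar plant standing in for the MHD channel simulation (assumption)
    return 0.9 * x + 0.1 * torch.tanh(u) + 0.02 * torch.randn_like(x)

def stage_cost(x, u):
    return (x**2).sum(-1) + 0.1 * (u**2).sum(-1)

critic = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))   # Q(x, u)
actor  = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # u = pi(x)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
gamma = 0.95                                                             # discount (assumed)

x = torch.randn(1, 1)
for k in range(5000):
    u = actor(x).detach()
    x_next = plant(x, u).detach()
    c = stage_cost(x, u)

    # critic update: Q(x, u) should match the stage cost plus the discounted
    # cost-to-go of the next state under the current policy
    with torch.no_grad():
        target = c + gamma * critic(torch.cat([x_next, actor(x_next)], -1)).squeeze(-1)
    q = critic(torch.cat([x, u], -1)).squeeze(-1)
    loss_c = ((q - target)**2).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # actor update: move the policy toward actions the (frozen) critic rates as cheapest
    loss_a = critic(torch.cat([x, actor(x)], -1)).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    x = x_next
```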


AIAA Infotech@Aerospace Conference | 2009

Modeling Error Driven Robot Control

Abraham K. Ishihara; Khalid Al-Ali; Tony M. Adami; Nilesh V. Kulkarni; Nhan Nguyen

In this paper, we present a modeling-error-driven adaptive controller for control of a robot with unknown dynamics. In general, the modeling error is not used directly, since the ideal parameters are not known. However, using a feedback linearization approach, we show that the modeling error can be obtained from a measured quantity representing the error dynamics under ideal conditions, that is, the case in which the robot parameters are known a priori. We show that using this approach, the learning dynamics and plant dynamics are effectively decoupled and can then be analyzed separately. We present simulation examples of a two-link manipulator that illustrate the algorithm.


AIAA Guidance, Navigation and Control Conference and Exhibit | 2008

Modeling-Error-Driven Performance-Seeking Direct Adaptive Control

Nilesh V. Kulkarni; John Kaneshige; Kalmanje Krishnakumar; John J. Burken

This paper presents a stable discrete-time adaptive law that targets modeling errors in a direct adaptive control framework. The update law was developed in our previous work for the adaptive disturbance rejection application. The approach is based on the philosophy that, without modeling errors, the original control design has been tuned to achieve the desired performance. The adaptive control should, therefore, work toward recovering this performance even in the face of modeling uncertainties and errors. In this work, the baseline controller uses dynamic inversion with proportional-integral augmentation. Dynamic inversion is carried out using the assumed system model. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. Contrary to typical Lyapunov-based adaptive approaches that guarantee only stability, the current approach investigates conditions for stability as well as performance. A high-fidelity F-15 model is used to illustrate the overall approach.
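
A scalar sketch of the modeling-error-driven idea (the plant, assumed model, features, and gains are all assumptions; the paper works with a stable discrete-time update law and a high-fidelity F-15 model): the difference between the achieved state rate and the commanded pseudo-control is a measurable modeling-error signal, and it directly drives the update of the adaptive augmentation.

```python
import numpy as np

dt, kp, ki, gamma = 0.02, 4.0, 1.0, 2.0                  # gains and adaptation rate (assumed)
centers = np.linspace(-2.0, 2.0, 9)
phi = lambda x: np.exp(-(x - centers)**2)                # RBF features for the augmentation

f_true = lambda x: -x + 0.8 * np.sin(2 * x)              # actual dynamics (unknown to the controller)
f_hat  = lambda x: -x                                    # assumed model used for dynamic inversion

W, e_int, x = np.zeros_like(centers), 0.0, 0.0
for k in range(3000):
    t = k * dt
    x_ref, xdot_ref = np.sin(t), np.cos(t)

    # PI-augmented pseudo-control, then dynamic inversion through the assumed model
    e = x_ref - x
    e_int += e * dt
    nu = xdot_ref + kp * e + ki * e_int
    u_ad = W @ phi(x)                                    # parameterized augmentation signal
    u = nu - f_hat(x) - u_ad

    # plant step; the achieved rate minus the commanded rate is the modeling error
    x_next = x + dt * (f_true(x) + u)
    e_mod = (x_next - x) / dt - nu                       # equals (f_true - f_hat)(x) - u_ad
    W += gamma * phi(x) * e_mod * dt                     # update drives the modeling error to zero
    x = x_next
```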


41st AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit | 2005

Data-based predictive combustion control

Nilesh V. Kulkarni; Dawn M. McIntosh; Kalmanje Krishnakumar; George Kopasakis

An optimal predictive control approach for the suppression of thermo-acoustic instabilities in the combustor is investigated. The approach uses the combustor pressure fluctuations and fuel-modulation data for building internal dynamic input-output models and designing the controller. Knowledge of the physical model of the system is not required. This data-based nature of the approach is therefore useful for rapid prototyping to different geometries and new configurations without having to spend time developing the physical models. The control approach is capable of rejecting unknown periodic disturbances to the system in an implicit manner. This capability is critical in stabilizing the pressure fluctuations in the combustor system, which is characterized by a very high noise-to-signal ratio. Simulation results illustrating the success of the control approach in stabilizing the pressure fluctuations are presented.
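
A sketch of the data-based ingredients, with a lightly damped discrete oscillator standing in for the combustor pressure dynamics (all coefficients are assumptions): an ARX input-output model is identified by least squares from pressure and fuel-modulation data, and a simple one-step predictive law is then derived from the identified model in place of the paper's receding-horizon controller.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2, b1 = 1.95, -0.98, 0.2           # lightly damped oscillator coefficients (assumed)

def plant(y1, y2, u):
    # y: pressure fluctuation, u: fuel modulation; small additive sensor noise
    return a1 * y1 + a2 * y2 + b1 * u + 0.01 * rng.normal()

# 1) collect input-output data under fuel-modulation excitation
N = 2000
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = plant(y[k - 1], y[k - 2], u[k - 1])

# 2) identify an ARX model  y[k] ~ theta . [y[k-1], y[k-2], u[k-1]]  by least squares
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)

# 3) one-step predictive control: choose u so the predicted pressure is driven to zero
def control(y1, y2):
    return -(theta[0] * y1 + theta[1] * y2) / theta[2]

y1, y2 = 0.5, 0.4                        # initial pressure fluctuation
for k in range(50):
    y1, y2 = plant(y1, y2, control(y1, y2)), y1
print(abs(y1))                           # residual fluctuation after closed-loop control
```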


2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning | 2007

Reinforcement-Learning-based Magneto-hydrodynamic Control of Hypersonic Flows

Nilesh V. Kulkarni; Minh Q. Phan

In this work, we design a policy-iteration-based Q-learning approach for on-line optimal control of ionized hypersonic flow at the inlet of a scramjet engine. Magneto-hydrodynamics (MHD) has recently been proposed as a means of flow control in various aerospace problems. This mechanism corresponds to applying external magnetic fields to ionized flows to achieve desired flow behavior. The applications range from external flow control for producing forces and moments on the air vehicle to internal flow control designs, which compress and extract electrical energy from the flow. The current work looks at the latter problem of internal flow control. The baseline controller and Q-function parameterizations are derived from an off-line mixed predictive-control and dynamic-programming-based design. The nominal optimal neural network Q-function and controller are updated on-line to handle modeling errors in the off-line design. The on-line implementation investigates key concerns regarding the conservativeness of the update methods. Value-iteration-based update methods have been shown to converge in a probabilistic sense. However, simulation results illustrate that realistic implementations of these methods face significant training difficulties, often failing to learn the optimal controller on-line. The present approach, therefore, uses a policy-iteration-based update, which has time-based convergence guarantees. Given the special finite-horizon nature of the problem, three novel on-line update algorithms are proposed. These algorithms incorporate different mixes of concepts, including bootstrapping and forward and backward dynamic programming update rules. Simulation results illustrate the success of the proposed update algorithms in re-optimizing the performance of the MHD generator during system operation.
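
A simplified, grid-based illustration of finite-horizon policy iteration with a backward cost-to-go recursion (the dynamics, stage cost, horizon, and discretization are assumptions; the paper's implementation is neural-network-based, runs on-line, and addresses the MHD generator rather than this toy problem).

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 81)          # state grid (assumed)
us = np.linspace(-1.0, 1.0, 11)          # discrete control set (assumed)
H = 20                                   # finite horizon

def f(x, u):
    # stage dynamics; a simulation stand-in for the along-channel model
    return np.clip(0.9 * x + 0.3 * u + 0.05 * np.sin(3 * x), xs[0], xs[-1])

def cost(x, u):
    return x**2 + 0.1 * u**2

policy = np.full((H, len(xs)), len(us) // 2, dtype=int)   # initial policy: mid-range control

for sweep in range(5):
    # policy evaluation: backward recursion of the cost-to-go under the current policy
    V = np.zeros((H + 1, len(xs)))
    for k in range(H - 1, -1, -1):
        u = us[policy[k]]
        V[k] = cost(xs, u) + np.interp(f(xs, u), xs, V[k + 1])
    # policy improvement: greedy with respect to the stage-wise Q-values
    for k in range(H):
        Qk = np.array([cost(xs, u) + np.interp(f(xs, u), xs, V[k + 1]) for u in us])
        policy[k] = Qk.argmin(axis=0)
```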


ieee international conference on space mission challenges for information technology | 2006

Adaptive inner-loop rover control

Nilesh V. Kulkarni; Corey Ippolito; Kalmanje Krishnakumar; Khalid Al-Ali

Adaptive control technology is developed for the inner-loop speed and steering control of the MAX Rover. MAX, a CMU-developed rover, is a compact, low-cost, four-wheel-drive, four-wheel-steer (double Ackermann) vehicle with a high-clearance, agile, and durable chassis. It is outfitted with sensors and electronics that make it ideally suited for supporting research relevant to intelligent teleoperation and for serving as a low-cost autonomous robotic test bed and appliance. The control design consists of a feedback-linearization-based controller with proportional-integral (PI) feedback that is augmented by an online adaptive neural network. The adaptation law has guaranteed stability properties for safe operation. The control design is retrofit in nature, so that it fits below the outer-loop path-planning algorithms. Successful hardware implementation of the controller is illustrated for several scenarios involving actuator failures and modeling errors in the nominal design.


AIAA Guidance, Navigation, and Control Conference and Exhibit | 2006

Dynamic Inversion based Control of a Docking Mechanism

Nilesh V. Kulkarni; Corey Ippolito; Kalmanje Krishnakumar

The problem of position and attitude control of a Stewart-platform-based docking mechanism is considered, motivated by its future application in space missions requiring autonomous docking capability. The control design is initiated based on the framework of the intelligent flight control architecture being developed at NASA Ames Research Center. In this paper, the baseline position and attitude control system is designed using dynamic inversion with proportional-integral augmentation. The inverse dynamics uses a Newton-Euler formulation that includes the platform dynamics and the dynamics of the individual legs, along with viscous friction in the joints. Simulation results are presented using forward dynamics simulated by a commercial physics engine that builds the system as individual elements with appropriate joints and uses constrained numerical integration.

Collaboration


Dive into Nilesh V. Kulkarni's collaborations.

Top Co-Authors

Khalid Al-Ali

Carnegie Mellon University
