Ramachandra J. Sattigeri
Georgia Institute of Technology
Publication
Featured research published by Ramachandra J. Sattigeri.
Conference on Decision and Control | 2004
Eric N. Johnson; Anthony J. Calise; Ramachandra J. Sattigeri; Yoko Watanabe; Venkatesh Madyastha
This paper implements several methods for performing vision-based formation flight control of multiple aircraft in the presence of obstacles. No information is communicated between aircraft, and only passive 2-D vision information is available to maintain formation. The methods for formation control rely either on estimating the range from 2-D vision information by using extended Kalman filters (EKFs) or on directly regulating the size of the image subtended by a leader aircraft on the image plane. When the image size is not a reliable measurement, especially at large ranges, we consider the use of bearing-only information. In this case, observability with respect to the relative distance between vehicles is accomplished by the design of a time-dependent formation geometry. To improve the robustness of the estimation process with respect to unknown leader aircraft acceleration, we augment the EKF with an adaptive neural network. 2-D and 3-D simulation results are presented that illustrate the various approaches.
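The bearing-only estimation path described above lends itself to a compact sketch. The Python fragment below is a minimal illustration, not the authors' implementation: it propagates a relative position/velocity state with an assumed constant-velocity model and updates it with passive azimuth/elevation bearings; the state layout, noise matrices, and axis conventions are assumptions for the example. As the abstract notes, with bearings alone the range only becomes observable when the formation geometry varies over time, and an adaptive NN correction to the prediction would handle unknown leader acceleration.

import numpy as np

def propagate(x, dt):
    """Constant-velocity relative-motion model; x = [rel_pos (3), rel_vel (3)]."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)
    return F @ x, F

def bearing_measurement(x):
    """Azimuth/elevation of the leader seen by the follower, plus the Jacobian."""
    px, py, pz = x[:3]
    r2 = px**2 + py**2            # squared horizontal range
    r = np.sqrt(r2)
    rho2 = r2 + pz**2             # squared slant range
    az = np.arctan2(py, px)
    el = np.arctan2(-pz, r)       # positive elevation for targets above (z down)
    H = np.zeros((2, 6))
    H[0, 0], H[0, 1] = -py / r2, px / r2
    H[1, 0] = px * pz / (rho2 * r)
    H[1, 1] = py * pz / (rho2 * r)
    H[1, 2] = -r / rho2
    return np.array([az, el]), H

def ekf_step(x, P, z, dt, Q, R):
    """One predict/update cycle of the bearing-only EKF (illustrative tuning)."""
    x_pred, F = propagate(x, dt)
    P_pred = F @ P @ F.T + Q
    z_pred, H = bearing_measurement(x_pred)
    innov = z - z_pred
    innov[0] = (innov[0] + np.pi) % (2 * np.pi) - np.pi   # wrap azimuth residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new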
Journal of Guidance, Control, and Dynamics | 2006
Byoung Soo Kim; Anthony J. Calise; Ramachandra J. Sattigeri
Presented at the AIAA Guidance, Navigation, and Control Conference and Exhibit, 21-24 August 2006, Keystone, Colorado.
AIAA Guidance, Navigation, and Control Conference and Exhibit | 2003
Ramachandra J. Sattigeri; Anthony J. Calise; Johnny H. Evers
Presented at the AIAA Guidance, Navigation, and Control Conference and Exhibit, 11-14 August 2003, Austin, Texas.
Journal of Aerospace Computing, Information, and Communication | 2004
Ramachandra J. Sattigeri; Anthony J. Calise; Johnny H. Evers
Presented at the AIAA Guidance, Navigation, and Control Conference and Exhibit, 16-19 August 2004, Providence, Rhode Island.
American Control Conference | 2005
Anthony J. Calise; Eric N. Johnson; Ramachandra J. Sattigeri; Yoko Watanabe; Venkatesh Madyastha
This paper discusses estimation and guidance strategies for vision-based target tracking. Specific applications include formation control of multiple unmanned aerial vehicles (UAVs) and air-to-air refueling. We assume that no information is communicated between the aircraft and that only passive 2-D vision information is available to maintain formation. To improve the robustness of the estimation process with respect to unknown target aircraft acceleration, the nonlinear estimator, an extended Kalman filter (EKF), is augmented with an adaptive neural network (NN). The guidance strategy involves augmenting the inverting solution of the nonlinear line-of-sight (LOS) range kinematics with the output of an adaptive NN to compensate for the target aircraft's LOS velocity. Simulation results are presented that illustrate the various approaches.
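The inversion-plus-adaptation structure named above can be sketched in a few lines. This is a minimal illustration assuming a single scalar line-of-sight range channel; the gain k, the first-order desired range dynamics, and the name nu_ad (standing in for the NN output that compensates target LOS velocity) are not taken from the paper.

def los_velocity_command(r, r_des, nu_ad, k=0.5):
    """Commanded follower velocity along the LOS (sketch, not the paper's exact law).

    LOS range kinematics: r_dot = v_target_los - v_follower_los.
    Choosing the desired range rate r_dot_des = -k * (r - r_des) and inverting
    the kinematics for the follower velocity gives the command below; nu_ad is
    the adaptive (NN) estimate of the unknown target velocity along the LOS.
    """
    r_dot_des = -k * (r - r_des)
    return nu_ad - r_dot_des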
AIAA Guidance, Navigation, and Control Conference and Exhibit | 2005
Ramachandra J. Sattigeri; Anthony J. Calise; Byoung Soo Kim; Konstantin Y. Volyanskyy; Nakwan Kim
This paper presents an adaptive guidance and control algorithm for implementation on a pair of unmanned aerial vehicles (UAVs) in a 6-DOF leader-follower formation flight simulation. The objective of the simulation study is to prepare for a flight test involving a pair of UAVs in formation flight, where the follower aircraft will be equipped with an onboard camera to estimate the relative distance and orientation to the leader aircraft. The follower guidance law is an adaptive, acceleration-based guidance law designed to track a maneuvering leader aircraft. We also discuss the limitations of a preceding version of the guidance algorithm presented in a previous paper. Finally, we discuss the design of an adaptive controller (autopilot) to track the commands from the guidance algorithm. Simulation results for different leader maneuvers are presented and analyzed.
AIAA Guidance, Navigation, and Control Conference and Exhibit | 2007
Ramachandra J. Sattigeri; Eric N. Johnson; Anthony J. Calise; Jincheol Ha
This paper presents an approach to vision-based target tracking with a neural network (NN) augmented Kalman filter as the adaptive target state estimator. The vision sensor onboard the follower (tracker) aircraft is a single camera. Real-time image processing implemented in the onboard flight computer is used to derive measurements of relative bearing (azimuth and elevation angles) and the maximum angle subtended by the target aircraft on the image plane. These measurements are used to update the NN-augmented Kalman filter. This filter generates estimates of the target aircraft's position, velocity, and acceleration in inertial 3-D space that are used in the guidance and flight control law to guide the follower aircraft relative to the target aircraft. Applications of the presented approach include vision-based autonomous formation flight, pursuit, and autonomous aerial refueling. The NN augmenting the Kalman filter estimates the target acceleration and hence provides robust state estimation in the presence of unmodeled target maneuvers. Vision-in-the-loop simulation results obtained in a 6-DOF real-time simulation of vision-based autonomous formation flight are presented to illustrate the efficacy of the adaptive target state estimator design.
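A minimal sketch of how the three camera-derived quantities named above (azimuth, elevation, and maximum subtended angle) can be turned into a relative-position pseudo-measurement for the filter. The known target span b, the small-angle range approximation, the axis convention (x forward, y right, z down), and the variable names are assumptions made for illustration, not the flight code.

import numpy as np

def pseudo_position(azimuth, elevation, subtended_angle, span_b):
    """Relative position of the target from single-camera measurements (sketch)."""
    # Small-angle range estimate: subtended_angle ~ span_b / range
    rng = span_b / max(subtended_angle, 1e-6)
    # Unit line-of-sight vector from the bearing angles (assumed axes: x forward,
    # y right, z down; elevation positive for targets above the follower)
    los = np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        -np.sin(elevation),
    ])
    return rng * los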
AIAA Guidance, Navigation, and Control Conference and Exhibit | 2006
Ramachandra J. Sattigeri; Anthony J. Calise
Presented at the AIAA Guidance, Navigation, and Control Conference and Exhibit, 21-24 August 2006, Keystone, Colorado.
International Journal of Control | 2006
Naira Hovakimyan; Eugene Lavretsky; Anthony J. Calise; Ramachandra J. Sattigeri
A decentralized adaptive output feedback control design method is presented for control of large-scale interconnected systems. It is assumed that all the controllers share prior information about the subsystem reference models. Based on that information, a linear dynamic output feedback compensator and a linearly parameterized neural network (NN) are introduced for each subsystem to partially cancel the effect of the interconnections on the tracking performance. Boundedness of the error signals is shown through Lyapunov's direct method.
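The per-subsystem structure can be sketched as follows. This is an assumption-laden illustration and not the paper's design: the total control is the linear compensator output corrected by a linearly parameterized NN term W^T phi(y), and the sigma-modification-style weight update, the scalar error signal, and the gains are illustrative choices only.

import numpy as np

class AdaptiveAugmentedController:
    """One subsystem: linear output-feedback control plus an adaptive NN correction."""

    def __init__(self, n_basis, gamma=1.0, sigma=0.1):
        self.W = np.zeros(n_basis)   # NN weights (linear in the parameters)
        self.gamma = gamma           # adaptation gain (assumed value)
        self.sigma = sigma           # sigma-modification gain (assumed value)

    def control(self, u_lin, phi):
        """Total control = compensator output minus the adaptive interconnection estimate."""
        return u_lin - self.W @ phi

    def adapt(self, e, phi, dt):
        """Gradient-style weight update driven by the scalar tracking error e."""
        W_dot = self.gamma * (phi * e - self.sigma * abs(e) * self.W)
        self.W += dt * W_dot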
International Journal of Aerospace Engineering | 2011
Suresh K. Kannan; Eric N. Johnson; Yoko Watanabe; Ramachandra J. Sattigeri
This paper presents a summary of a subset of the extensive vision-based tracking methods developed at Georgia Tech. The problem of a follower aircraft tracking an uncooperative leader, using vision information only, is addressed. In all the results presented, a single monocular camera is the sole source of information used to maintain formation with the leader. A Kalman filter formulation is provided for the case where image processing may be used to estimate leader motion in the image plane. An additional piece of information, the subtended angle, also available from the computer vision algorithm, is used to improve range estimation accuracy. In situations where subtended angle information is not available, an optimal trajectory is generated that improves range estimation accuracy. Finally, assumptions on the target acceleration are relaxed by augmenting a Kalman filter with an adaptive element.
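The last idea in the abstract, relaxing the target-acceleration assumption with an adaptive element, can be sketched as a modified prediction step. In this minimal illustration (state layout, names, and noise handling are assumptions), the adaptive acceleration estimate a_hat, produced elsewhere (for example by a neural network trained online on the filter residuals), replaces the usual zero-acceleration guess when the state is propagated.

import numpy as np

def predict_with_adaptive_accel(x, P, a_hat, dt, Q):
    """Propagate x = [rel_pos (3), rel_vel (3)] using a_hat as the target acceleration."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)
    B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
    x_pred = F @ x + B @ a_hat       # adaptive term in place of the zero-acceleration model
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred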