
Publication


Featured research published by Vilas K. Chitrakaran.


Automatica | 2005

Identification of a moving object's velocity with a fixed camera

Vilas K. Chitrakaran; Darren M. Dawson; Warren E. Dixon; Jian Chen

In this paper, a continuous estimator strategy is utilized to asymptotically identify the six degree-of-freedom velocity of a moving object using a single fixed camera. The design of the estimator is facilitated by the fusion of homography-based techniques with Lyapunov design methods. Similar to the stereo vision paradigm, the proposed estimator utilizes different views of the object from a single camera to calculate 3D information from 2D images. In contrast to some of the previous work in this area, no explicit model is used to describe the movement of the object; rather, the estimator is constructed based on bounds on the object's velocity, acceleration, and jerk.
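The stereo-like recovery of 3D information from a single moving view rests on planar homographies: given four or more correspondences of coplanar feature points between two images, the homography relating them can be estimated and subsequently decomposed into rotation and scaled translation. A minimal sketch of the estimation step (the direct linear transform in plain NumPy; an illustration, not the paper's estimator):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H (dst ~ H @ src, up to scale) from
    point correspondences via the direct linear transform (DLT).
    src, dst: (N, 2) arrays of image coordinates, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence gives two linear constraints on the
        # 9 entries of H (stacked row-major into a vector h).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h spans the null space of A: the last right singular vector.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale ambiguity

# Synthetic check: planar points seen in two views related by a
# known homography (a rotation plus an image-plane shift).
theta = 0.1
H_true = np.array([[np.cos(theta), -np.sin(theta), 5.0],
                   [np.sin(theta),  np.cos(theta), -2.0],
                   [0.0,            0.0,            1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 2.0]], dtype=float)
src_h = np.hstack([src, np.ones((5, 1))])
dst_h = (H_true @ src_h.T).T
dst = dst_h[:, :2] / dst_h[:, 2:]

H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```

In practice the estimated homography is then decomposed (e.g., via the Faugeras or Zhang methods) to extract the rotation and depth-scaled translation that the velocity estimator operates on.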


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Design and experimental testing of the OctArm soft robot manipulator

Michael D. Grissom; Vilas K. Chitrakaran; Dustin Dienno; Matthew Csencits; Michael B. Pritts; Bryan A. Jones; William McMahan; Darren M. Dawson; Christopher D. Rahn; Ian D. Walker

This paper describes the development of the octopus biology inspired OctArm series of soft robot manipulators. Each OctArm is constructed using air muscle extensors with three control channels per section that provide two-axis bending and extension. Within each section, mesh and plastic coupler constraints prevent extensor buckling. OctArm IV comprises four sections connected by endplates, providing twelve degrees of freedom. Performance of OctArm IV is characterized in a lab environment. Using only 4.13 bar of air pressure, the dexterous distal section provides 66% extension and 380° of rotation in less than 0.5 seconds. OctArm V has three sections and, using 8.27 bar of air pressure, the strong proximal section provides 890 N and 250 N of vertical and transverse load capacity, respectively. In addition to the in-lab testing, OctArm V underwent a series of field trials including open-air and in-water field tests. Outcomes of the trials, in which the manipulator demonstrated the ability for adaptive and novel manipulation in challenging environments, are described. OctArm VI is designed and constructed based on the in-lab performance and field testing of its predecessors. Implications for the deployment of soft robots in military environments are discussed.


Automatica | 2007

Navigation function-based visual servo control

Jian Chen; Darren M. Dawson; Warren E. Dixon; Vilas K. Chitrakaran

In this paper, the mapping between the desired camera feature vector and the desired camera pose (i.e., the position and orientation) is investigated to develop a measurable image Jacobian-like matrix. An image-space path planner is then proposed to generate a desired image trajectory based on this measurable image Jacobian-like matrix and an image-space navigation function (NF) (i.e., a special potential field function) while satisfying rigid body constraints. An adaptive, homography-based visual servo tracking controller is then developed to navigate the position and orientation of a camera held by the end-effector of a robot manipulator to a goal position and orientation along the desired image-space trajectory while ensuring the target points remain visible (i.e., the target points avoid self-occlusion and remain in the field-of-view (FOV)) under certain technical restrictions. Due to the inherent nonlinear nature of the problem and the lack of depth information from a monocular system, a Lyapunov-based analysis is used to analyze the path planner and the adaptive controller.
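A navigation function is a refined potential field: it combines an attractive term toward the goal with repulsive terms around constraints, with the extra guarantee of a unique minimum at the goal. The basic descend-the-potential idea behind such planners can be sketched with a plain attractive-plus-repulsive field (which, unlike a true NF, does not rule out local minima; gains and geometry below are illustrative):

```python
import numpy as np

def potential_grad(q, goal, obstacle, k_att=1.0, k_rep=0.5, rho0=1.5):
    """Gradient of a simple attractive + repulsive potential field.
    Illustrative only: a true navigation function additionally
    guarantees a unique minimum at the goal."""
    grad = k_att * (q - goal)                   # attractive term
    d = np.linalg.norm(q - obstacle)
    if d < rho0:                                # repulsion within range rho0
        grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**3 * (q - obstacle)
    return grad

goal = np.array([5.0, 0.0])
obstacle = np.array([2.5, 1.0])
q = np.array([0.0, 0.0])
step = 0.02
for _ in range(5000):                           # gradient descent on the field
    q = q - step * potential_grad(q, goal, obstacle)

print(np.linalg.norm(q - goal) < 1e-3)  # True: converged to the goal
```

In the paper's setting the descent runs in image space (on the feature vector), with the image Jacobian-like matrix mapping the image-space gradient into camera motion.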


Conference on Decision and Control | 2005

Vision Assisted Autonomous Landing of an Unmanned Aerial Vehicle

Vilas K. Chitrakaran; Darren M. Dawson; Jian Chen; M. Feemster

In this paper, a strategy for an autonomous landing maneuver for an underactuated, unmanned aerial vehicle (UAV) using position information obtained from a single monocular on-board camera is presented. Although the UAV is underactuated in translational control inputs (i.e., only a lift force can be produced), the proposed controller is shown to achieve global uniform ultimate boundedness (GUUB) of the position regulation error during the landing approach. The proposed vision-based control algorithm is built upon homography-based techniques and Lyapunov design methods.


Intelligent Robots and Systems | 2007

OctArm - A soft robotic manipulator

Srinivas Neppalli; Bryan A. Jones; William McMahan; Vilas K. Chitrakaran; Ian D. Walker; Michael B. Pritts; Matthew A. Csencsits; Christopher D. Rahn; Michael D. Grissom

Summary form only given. Continuum robots are biologically inspired by invertebrate structures such as octopus arms and elephant trunks. These backbone-less robots offer superior performance in unstructured and cluttered environments such as collapsed buildings, unknown geographical terrain, holes, and tunnels. This video features OctArm, a continuum robot that demonstrates its capabilities in whole-arm manipulation, biologically-inspired maneuvering, and grasping. The video also depicts a 3D graphical model of OctArm that can be rendered in real time using MATLAB's Real-Time Workshop.


SoutheastCon | 2007

Velocity control for a quad-rotor UAV fly-by-camera interface

Andrew Neff; DongBin Lee; Vilas K. Chitrakaran; Darren M. Dawson; Timothy C. Burg

A quad-rotor unmanned aerial vehicle (UAV) and a two-degree-of-freedom (DOF) camera unit are combined to achieve a fully-actuated fly-by-camera positioning system. The flight control interface allows the user to command motions in the camera frame of reference, a natural perspective for surveillance and inspection tasks. A nonlinear velocity controller, derived using Lyapunov stability arguments, produces simultaneous complementary motion of the quad-rotor vehicle and the camera positioning unit. The controller achieves global uniform ultimate boundedness (GUUB) of all velocity error terms.
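The core of a fly-by-camera interface is a frame transformation: a velocity commanded in the camera frame must be re-expressed in a frame the vehicle can act in. A minimal sketch, assuming the camera sits on a pan (z-axis) / tilt (y-axis) gimbal, a level vehicle (zero roll and pitch), and the camera optical axis along x; all of these frame conventions are hypothetical illustrations, not the paper's controller:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def camera_to_world_velocity(v_cam, yaw, pan, tilt):
    """Express a camera-frame velocity command in the world frame.
    Assumes a pan (z) / tilt (y) gimbal on a level vehicle; these
    frame conventions are illustrative assumptions."""
    R = rot_z(yaw) @ rot_z(pan) @ rot_y(tilt)   # world <- camera
    return R @ v_cam

# "Fly forward along the optical axis" with the camera tilted 30 deg
# down becomes a mix of horizontal and vertical world-frame motion.
v = camera_to_world_velocity(np.array([1.0, 0.0, 0.0]),
                             yaw=0.0, pan=0.0, tilt=np.radians(-30))
print(v)  # roughly [0.866, 0, 0.5]
```

The paper's contribution is the nonlinear controller that realizes such commands with the underactuated quad-rotor and the gimbal moving in concert; the rotation above only shows the kinematic mapping the interface relies on.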


American Control Conference | 2005

Navigation function based visual servo control

Jian Chen; Darren M. Dawson; Warren E. Dixon; Vilas K. Chitrakaran

In this paper, the mapping between the desired camera feature vector and the desired camera pose (i.e., the position and orientation) is investigated to develop a measurable image Jacobian-like matrix. An image-space path planner is then proposed to generate a desired image trajectory based on this measurable image Jacobian-like matrix and an image-space navigation function (NF) (i.e., a special potential field function) while satisfying rigid body constraints. An adaptive, homography-based visual servo tracking controller is then developed to navigate the position and orientation of a camera held by the end-effector of a robot manipulator to a goal position and orientation along the desired image-space trajectory while ensuring the target points remain visible (i.e., the target points avoid self-occlusion and remain in the field-of-view (FOV)) under certain technical restrictions. Due to the inherent nonlinear nature of the problem and the lack of depth information from a monocular system, a Lyapunov-based analysis is used to analyze the path planner and the adaptive controller.


International Conference on Control Applications | 2001

Design and implementation of the Robotic Platform

Markus S. Loffler; Vilas K. Chitrakaran; Darren M. Dawson

This paper describes the design and implementation of the Robotic Platform, an object-oriented development platform for robotic applications. The Robotic Platform includes servo control, trajectory generation, 3D simulation, a graphical user interface, and a math library. As opposed to distributed solutions, the Robotic Platform implements all these components on a single hardware platform (a standard PC), with a single programming language (C++), and on a single operating system (the QNX Real-Time Platform) while guaranteeing deterministic real-time performance. This design leads to an open architecture that is less complex, easier to use, and easier to extend.


Journal of Intelligent and Robotic Systems | 2004

Design and Implementation of the Robotic Platform

Markus S. Loffler; Vilas K. Chitrakaran; Darren M. Dawson

The diversity of robotic research areas along with the complex requirements of hardware and software for robotic systems have always presented a challenge for system developers. Many past robot control platforms were complex, expensive, and not very user friendly. Even though several of the previous platforms were designed to provide an open architecture system, very few of them have been reused. To address these disadvantages, this paper describes the design and implementation of the Robotic Platform, an object-oriented development platform for robotic applications. The Robotic Platform includes hardware interfacing, servo control, trajectory generation, 3D simulation, a graphical user interface, and a math library. As opposed to distributed solutions, the Robotic Platform implements all these components in a homogeneous architecture that utilizes a single hardware platform (a standard PC), a single programming language (C++), and a single operating system (the QNX Real-Time Platform) while guaranteeing deterministic real-time performance. This design leads to an open architecture that is less complex, easier to use, and easier to extend. In particular, the area of multiple cooperating robots benefits from this kind of architecture, since the Robotic Platform achieves a high integration of its components and provides a simple and flexible means of communication. The architecture of the Robotic Platform builds on the following state-of-the-art technologies and general-purpose components to further increase simplicity and reliability: (i) PC technology, (ii) the QNX Real-Time Platform, (iii) the Open Inventor library, (iv) object-oriented design, and (v) the QMotor control environment.
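One of the components listed, trajectory generation, is commonly implemented as a point-to-point profile with bounded velocity and acceleration. A standard example is the trapezoidal velocity profile (accelerate, cruise, decelerate); the sketch below shows the technique in general, not the Robotic Platform's C++ API:

```python
import numpy as np

def trapezoidal_profile(q0, q1, v_max, a_max, dt=0.001):
    """Sample a point-to-point joint trajectory with a trapezoidal
    velocity profile. Falls back to a triangular profile when the
    move is too short to reach v_max. Illustrative sketch only."""
    d = abs(q1 - q0)
    sign = 1.0 if q1 >= q0 else -1.0
    t_acc = v_max / a_max                 # time to reach cruise speed
    d_acc = 0.5 * a_max * t_acc**2        # distance covered accelerating
    if 2 * d_acc > d:                     # triangular: never reaches v_max
        t_acc = np.sqrt(d / a_max)
        t_cruise = 0.0
    else:
        t_cruise = (d - 2 * d_acc) / v_max
    T = 2 * t_acc + t_cruise
    ts = np.arange(0.0, T + dt, dt)
    qs = []
    for t in ts:
        if t < t_acc:                     # acceleration phase
            s = 0.5 * a_max * t**2
        elif t < t_acc + t_cruise:        # constant-velocity cruise
            s = d_acc + v_max * (t - t_acc)
        else:                             # symmetric deceleration phase
            tr = T - t
            s = d - 0.5 * a_max * tr**2
        qs.append(q0 + sign * min(s, d))
    return ts, np.array(qs)

ts, qs = trapezoidal_profile(0.0, 1.0, v_max=0.5, a_max=2.0)
print(qs[0], qs[-1])  # starts at 0.0, ends at 1.0
```

In a platform like the one described, such a generator would feed position setpoints to the servo control loop at the real-time rate.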


American Control Conference | 2007

Vision-Based Leader/Follower Tracking for Nonholonomic Mobile Robots

Hariprasad Kannan; Vilas K. Chitrakaran; Darren M. Dawson; Timothy C. Burg

This paper presents a strategy for a nonholonomic mobile robot to autonomously follow a target based on vision information from an onboard pan camera unit (PCU). Homography-based techniques are used to obtain relative position and orientation information from the monocular camera images. The proposed kinematic controller, based on the Lyapunov method, achieves uniformly ultimately bounded (UUB) tracking.
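The nonholonomic constraint means the follower can only command a forward speed and a turn rate, never slide sideways. A minimal kinematic pursuit law for a unicycle-model follower maintaining a standoff distance from a leader can be sketched as below; gains, standoff, and control structure are illustrative assumptions, not the paper's controller:

```python
import numpy as np

def follower_step(pose, target, dt=0.02, k_v=1.0, k_w=2.0, d_des=0.5):
    """One Euler step of a simple kinematic pursuit law for a
    unicycle follower: steer toward the leader's position and
    regulate range to the standoff d_des. Illustrative sketch."""
    x, y, th = pose
    dx, dy = target[0] - x, target[1] - y
    dist = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - th
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]
    v = k_v * (dist - d_des) * np.cos(bearing)  # forward speed command
    w = k_w * bearing                           # turn-rate command
    return np.array([x + v * np.cos(th) * dt,   # unicycle kinematics
                     y + v * np.sin(th) * dt,
                     th + w * dt])

pose = np.array([0.0, 0.0, 0.0])   # follower starts at origin, facing +x
leader = np.array([3.0, 2.0])      # stationary leader position
for _ in range(3000):
    pose = follower_step(pose, leader)

standoff = np.hypot(*(leader - pose[:2]))
print(abs(standoff - 0.5) < 1e-2)  # True: settled at the desired standoff
```

In the paper, the relative pose fed to such a controller comes from homography decomposition of the PCU images rather than from ground-truth positions as above.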

Collaboration


Dive into Vilas K. Chitrakaran's collaboration.

Top Co-Authors

Bryan A. Jones (Mississippi State University)

Christopher D. Rahn (Pennsylvania State University)

Michael B. Pritts (Pennsylvania State University)

Michael D. Grissom (Pennsylvania State University)