
Publication


Featured research published by Deepak R. Karuppiah.


International Conference on Computer Vision Systems | 2001

A Fault-Tolerant Distributed Vision System Architecture for Object Tracking in a Smart Room

Deepak R. Karuppiah; Zhigang Zhu; Prashant J. Shenoy; Edward M. Riseman

In recent years, distributed computer vision has gained considerable attention within the computer vision community for applications such as video surveillance and object tracking. The information gathered collectively by multiple strategically placed cameras has many advantages. For example, aggregating information from multiple viewpoints reduces uncertainty about the scene. Further, there is no single point of failure, so the system as a whole can continue to perform the task at hand. These advantages, however, can be realized only if the cameras share information in a timely manner. This paper discusses the design of a distributed vision system that enables several heterogeneous sensors with different processing rates to exchange information in time to achieve a common goal, such as tracking multiple human subjects and mobile robots in an indoor smart environment.

In our fault-tolerant distributed vision system, a resource manager manages the individual cameras and buffers the time-stamped object candidates received from them. A user agent with a given task specification approaches the resource manager, first to learn which resources (cameras) are available and later to receive object candidates from the resources of interest. The resource manager thus acts as a proxy between user agents and cameras, freeing the cameras to perform only dedicated feature detection and extraction. In such a scenario, many failures are possible: a camera may suffer a hardware fault, or it may lose a target that has moved out of its field of view. In this context, the paper discusses failure detection and handling, synchronization of data from multiple sensors, and sensor reconfiguration by view planning. Experimental results with real scene images are presented.
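The proxy arrangement the abstract describes, in which cameras push time-stamped detections to a resource manager that user agents then query, could be sketched roughly as follows. All class and method names here are illustrative, not taken from the paper:

```python
import time
from collections import defaultdict, deque

class ResourceManager:
    """Buffers time-stamped object candidates from cameras and serves
    them to user agents, so cameras only do detection and extraction."""

    def __init__(self, buffer_size=100):
        self.buffers = defaultdict(lambda: deque(maxlen=buffer_size))
        self.cameras = set()

    def register_camera(self, camera_id):
        self.cameras.add(camera_id)

    def unregister_camera(self, camera_id):
        # e.g. on a hardware fault detected by a missed heartbeat
        self.cameras.discard(camera_id)
        self.buffers.pop(camera_id, None)

    def push_candidate(self, camera_id, candidate, timestamp=None):
        # Cameras push detections; the manager time-stamps and buffers them.
        if camera_id in self.cameras:
            self.buffers[camera_id].append((timestamp or time.time(), candidate))

    def available_resources(self):
        # A user agent first asks which cameras are up...
        return sorted(self.cameras)

    def candidates_since(self, camera_id, t0):
        # ...then pulls buffered candidates from the cameras it cares about.
        return [(t, c) for t, c in self.buffers[camera_id] if t >= t0]

rm = ResourceManager()
rm.register_camera("cam1")
rm.push_candidate("cam1", {"bbox": (10, 20, 50, 80)}, timestamp=1.0)
rm.push_candidate("cam1", {"bbox": (12, 21, 52, 81)}, timestamp=2.0)
print(rm.available_resources())               # ['cam1']
print(len(rm.candidates_since("cam1", 1.5)))  # 1
```

The bounded deque stands in for the paper's buffering of candidates from sensors with different processing rates; a stale camera's data simply ages out.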


Intelligent Robots and Systems | 2005

Smart resource reconfiguration by exploiting dynamics in perceptual tasks

Deepak R. Karuppiah; Roderic A. Grupen; Allen R. Hanson; Edward M. Riseman

In robot and sensor networks, one of the key challenges is to decide when and where to deploy sensory resources to gather information of optimal value. The problem is essentially one of planning, scheduling and controlling the sensors in the network to acquire data from an environment that is constantly varying. The dynamic nature of the problem precludes the use of traditional rule-based strategies that can handle only quasi-static context changes. Automatic context derivation procedures are thus essential for providing fault recovery and fault pre-emption in such systems. We posit that the quality of a sensor network configuration depends on sensor coverage and geometry, sensor allocation policies, and the dynamic processes in the environment. In this paper, we show how these factors can be manipulated in an adaptive framework for robust run-time resource management. We demonstrate our ideas in a people tracking application using a network of multiple cameras. The task specification for our multi-camera network is one of allocating a camera pair that can best localize a human subject given the current context. The system automatically derives policies for switching between camera pairs that enable robust tracking while being attentive to performance measures. Our approach is unique in that we do not make any a priori assumptions about the scene or the activities that take place in the scene. Models of motion dynamics in the scene and the camera network configuration steer the policies to provide robust tracking.
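The camera-pair allocation task the abstract mentions can be illustrated with a toy scorer: triangulation uncertainty is lowest when the two viewing rays meet near 90 degrees, so a pair can be ranked by that angle. This is a generic geometric heuristic, not the paper's actual policy-derivation machinery, and all names are hypothetical:

```python
import math
from itertools import combinations

def pair_score(cam_a, cam_b, subject):
    """Heuristic quality of a camera pair for localizing `subject`:
    favor viewing rays that intersect close to 90 degrees."""
    def bearing(cam):
        return math.atan2(subject[1] - cam[1], subject[0] - cam[0])
    diff = abs(bearing(cam_a) - bearing(cam_b)) % math.pi
    return math.sin(diff)  # 1.0 at 90 degrees, 0.0 for collinear rays

def best_pair(cameras, subject):
    # Pick the pair of camera ids whose rays give the best geometry.
    return max(combinations(sorted(cameras), 2),
               key=lambda p: pair_score(cameras[p[0]], cameras[p[1]], subject))

# c1 and c3 sit almost on top of each other, so pairing either with c2 wins.
cams = {"c1": (0.0, 0.0), "c2": (10.0, 0.0), "c3": (0.1, 0.05)}
print(best_pair(cams, (5.0, 5.0)))  # ('c1', 'c2')
```

A run-time version of this idea would re-evaluate the scores as the subject moves, which is what makes switching policies between pairs necessary.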


International Conference on Robotics and Automation | 2004

Cascaded filter approach to multi-objective control

Bryan J. Thibodeau; Stephen Hart; Deepak R. Karuppiah; John Sweeney; Oliver Brock

In this paper we propose a new approach to multi-objective control using a cascade of filters that progressively removes candidate commands that do not satisfy task constraints. The approach is motivated by other control methods that prevent destructive control interactions through null space projections. We apply this approach to a practical leader/follower task in which a mobile robot must address the conflicting objectives of moving to a goal position while avoiding obstacles and keeping a region of the workspace within the field of view of a fixed camera mounted on the platform. We experimentally verify our approach using the Segway Robotic Mobility Platform (RMP), a dynamically stable, differential drive mobile robot.
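The cascade idea, each filter pruning the candidate command set in priority order, admits a very small sketch. The specific constraints and thresholds below are invented for illustration, not taken from the paper:

```python
def cascade(candidates, filters):
    """Pass candidate commands through filters in priority order; each
    filter removes commands violating its constraint. If a filter would
    eliminate everything, stop so higher-priority constraints still hold."""
    for keep in filters:
        remaining = [c for c in candidates if keep(c)]
        if not remaining:
            break
        candidates = remaining
    return candidates

# Candidate (v, w) velocity commands for a differential-drive base.
candidates = [(v * 0.1, w * 0.1) for v in range(0, 11) for w in range(-5, 6)]

avoid_obstacles = lambda c: c[0] < 0.8        # cap speed near obstacles
keep_in_view    = lambda c: abs(c[1]) <= 0.3  # limit turn rate for the camera FOV
toward_goal     = lambda c: c[0] > 0.5        # prefer forward progress

safe = cascade(candidates, [avoid_obstacles, keep_in_view, toward_goal])
print(all(c[0] < 0.8 and abs(c[1]) <= 0.3 and c[0] > 0.5 for c in safe))  # True
```

Ordering the filters by priority gives the same flavor of protection as a null-space projection: a low-priority objective can only choose among commands the higher-priority objectives already accept.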


Machine Vision Applications | 2010

Automatic resource allocation in a distributed camera network

Deepak R. Karuppiah; Roderic A. Grupen; Zhigang Zhu; Allen R. Hanson

In this paper, we present a hierarchical smart resource coordination and reconfiguration framework for distributed systems. We view the coordination problem as one of context-aware resource reconfiguration. The fundamental unit in this hierarchy is a Fault Containment Unit (FCU) that provides run-time fault tolerance by deciding on the best alternative course of action when a failure occurs. FCUs are composed hierarchically and are responsible for dynamically reconfiguring failing FCUs at lower levels. When such a reconfiguration is not possible, FCUs propagate the failure upward for resolution. We evaluate the effectiveness of our framework in a people tracking application using a network of cameras. The task for our multi-camera network is to allocate pairs of cameras that localize a subject optimally given the current run-time context. The system automatically derives policies for switching between camera pairs that enable robust tracking while being attentive to certain performance measures. Our approach is unique in that we make no a priori assumptions about the scene; models of the scene dynamics and the camera network configuration steer the policies to provide robust tracking.
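The containment-then-propagation behavior of FCUs reads like the sketch below: each unit first tries its own alternatives, and only raises the failure to its parent when local recovery fails. This is a rough rendering with hypothetical names, not the paper's code:

```python
class FCUFailure(Exception):
    pass

class FCU:
    """Fault Containment Unit: run a task; on failure, try alternative
    configurations locally, and propagate upward only if none succeed."""
    def __init__(self, name, task, alternatives=()):
        self.name = name
        self.task = task                        # callable; raises FCUFailure on fault
        self.alternatives = list(alternatives)  # callables returning True on success

    def run(self):
        try:
            self.task()
        except FCUFailure:
            for alt in self.alternatives:
                if alt():
                    return                   # failure contained at this level
            raise FCUFailure(self.name)      # propagate upward for resolution

log = []

def broken_pair():
    raise FCUFailure("camera c1 down")       # e.g. a hardware fault in one camera

pair = FCU("pair(c1,c2)", broken_pair)       # no local alternative available

def tracking_task():
    pair.run()                               # the child FCU fails and propagates

tracker = FCU("tracker", tracking_task,
              alternatives=[lambda: log.append("switched to pair(c2,c3)") or True])
tracker.run()
print(log)  # ['switched to pair(c2,c3)']
```

The camera-pair FCU cannot recover on its own, so the failure rises one level, where the tracker FCU contains it by reallocating a different pair.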


IEEE Robotics & Automation Magazine | 2004

Keeping smart, omnidirectional eyes on you [adaptive panoramic stereovision]

Zhigang Zhu; Deepak R. Karuppiah; Edward M. Riseman; Allen R. Hanson

An adaptive panoramic stereo approach for two cooperative mobile platforms is presented. There are four key features in the approach: 1) omnidirectional stereo vision with an appropriate vertical FOV and a simple camera calibration method; 2) cooperative mobile platforms for mutual dynamic calibration and best view planning; 3) 3D matching after meaningful object (human subject) extraction; and 4) real-time performance. The integration of omnidirectional vision with mutual awareness and dynamic calibration strategies allows intelligent cooperation between visual agents. This provides an effective way to solve the problems of limited resources, view planning, occlusion, and motion detection for movable robotic platforms. Experiments have shown that this approach is quite promising.
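Once each platform has extracted the subject and measured a bearing to it, the "virtual stereo" estimate reduces, in a ground-plane simplification, to intersecting two rays. A minimal sketch under that simplifying assumption (not the paper's full 3D matching pipeline):

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two ground-plane bearing rays from sensors at world
    positions p1, p2 with absolute bearings theta1, theta2 (radians).
    Returns the subject position, or None for near-parallel rays."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # cross product of ray directions
    if abs(denom) < 1e-9:
        return None                          # rays (nearly) parallel: no fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom    # distance along ray 1
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Sensors at the origin and (10, 0), both sighting a subject at (5, 5).
print(triangulate((0.0, 0.0), math.atan2(5, 5),
                  (10.0, 0.0), math.atan2(5, -5)))  # ≈ (5.0, 5.0)
```

The degenerate near-parallel case is exactly where view planning matters: the platforms reposition so their rays cross at a wider angle.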


International Conference on Robotics and Automation | 2002

On viewpoint control

S. Uppala; Deepak R. Karuppiah; M. Brewer; Sai Ravela; Roderic A. Grupen

A reactive and concurrent control framework for viewpoint control is developed. The viewpoint control task is decomposed into three control objectives, namely obstacle avoidance, visibility, and precision. A moving object is tracked in two panoramic sensors using color, and a scalar uncertainty metric for the object position estimate is introduced. The individual control objectives are accomplished by planning paths using harmonic functions, and the overall task is accomplished using a new subject-to composition operator. It is shown that the system is stable under this composition. The system is demonstrated by tracking a human subject using a fixed panoramic sensor and another panoramic sensor mounted on a mobile platform.
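The flavor of a subject-to composition can be sketched with a 2-D null-space projection: the subordinate objective is allowed to act only in directions that do not disturb the primary one. This is a generic illustration of that style of composition, not the paper's operator, and the objective names are invented:

```python
def project_null(u_primary, u_subordinate):
    """Let the subordinate controller act only in the null space of the
    primary's command direction, so it cannot undo the primary's progress
    (a 2-D analogue of null-space projection)."""
    nx, ny = u_primary
    norm2 = nx * nx + ny * ny
    if norm2 == 0:
        return u_subordinate            # primary satisfied: subordinate acts freely
    sx, sy = u_subordinate
    a = (sx * nx + sy * ny) / norm2     # subordinate component along the primary
    return (nx + sx - a * nx, ny + sy - a * ny)

# Obstacle avoidance (primary) pushes +x; precision (subordinate) wants (+x, +y).
# The composed command keeps the subordinate's +y motion but drops its +x
# component, which lay along the primary's direction.
print(project_null((1.0, 0.0), (0.5, 0.5)))  # (1.0, 0.5)
```

Composing objectives this way is what lets obstacle avoidance, visibility, and precision run concurrently without destructive interaction.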


Archive | 2005

Feature Selection Using Adaboost for Face Expression Recognition

Piyanuch Silapachote; Deepak R. Karuppiah; Allen R. Hanson


Lecture Notes in Computer Science | 2001

Software mode changes for continuous motion tracking

Deepak R. Karuppiah; Patrick Deegan; Elizeth Araujo; Yunlei Yang; Gary Holness; Zhigang Zhu; Barbara Staudt Lerner; Roderic A. Grupen; Edward M. Riseman


Pervasive Computing and Communications | 2004

An augmented virtual reality interface for assistive monitoring of smart spaces

Shichao Ou; Deepak R. Karuppiah; Andrew H. Fagg; Edward M. Riseman


Computer Vision and Image Understanding | 2004

Dynamic mutual calibration and view planning for cooperative mobile robots with panoramic virtual stereo vision

Zhigang Zhu; Deepak R. Karuppiah; Edward M. Riseman; Allen R. Hanson

Collaboration


Dive into Deepak R. Karuppiah's collaborations.

Top Co-Authors

Edward M. Riseman (University of Massachusetts Amherst)
Zhigang Zhu (City College of New York)
Roderic A. Grupen (University of Massachusetts Amherst)
Gary Holness (University of Massachusetts Amherst)
Oliver Brock (Technical University of Berlin)
Barbara Staudt Lerner (University of Massachusetts Amherst)
Bryan J. Thibodeau (University of Massachusetts Amherst)
David S. Wheeler (University of Massachusetts Amherst)