Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Gaurav S. Sukhatme is active.

Publication


Featured research published by Gaurav S. Sukhatme.


Distributed Autonomous Robotic Systems | 2002

Mobile Sensor Network Deployment using Potential Fields: A Distributed, Scalable Solution to the Area Coverage Problem

Andrew Howard; Maja J. Matarić; Gaurav S. Sukhatme

This paper considers the problem of deploying a mobile sensor network in an unknown environment. A mobile sensor network is composed of a distributed collection of nodes, each of which has sensing, computation, communication and locomotion capabilities. Such networks are capable of self-deployment; i.e., starting from some compact initial configuration, the nodes in the network can spread out such that the area ‘covered’ by the network is maximized. In this paper, we present a potential-field-based approach to deployment. The fields are constructed such that each node is repelled by both obstacles and by other nodes, thereby forcing the network to spread itself throughout the environment. The approach is both distributed and scalable.
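The repulsion-only scheme described above can be illustrated with a small simulation. This is a sketch, not the authors' implementation: the arena size, force constant, gain, and per-step motion cap are all invented for the example, and obstacle repulsion is reduced to clamping at the arena walls.

```python
import math
import random

def repulsive_step(nodes, bounds=10.0, k=1.0, gain=0.05, max_move=0.2):
    """One synchronous update: each node moves along the sum of
    inverse-square repulsions from every other node; obstacle forces
    are reduced here to clamping at the arena walls."""
    out = []
    for i, (xi, yi) in enumerate(nodes):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(nodes):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy) + 1e-9
            f = k / d ** 2                # inverse-square repulsion
            fx += f * dx / d
            fy += f * dy / d
        mx, my = gain * fx, gain * fy
        m = math.hypot(mx, my)
        if m > max_move:                  # cap per-step motion for stability
            mx, my = mx * max_move / m, my * max_move / m
        out.append((min(max(xi + mx, 0.0), bounds),
                    min(max(yi + my, 0.0), bounds)))
    return out

def mean_nearest(nodes):
    """Mean distance to the nearest neighbor, a crude spread measure."""
    return sum(min(math.hypot(xi - xj, yi - yj)
                   for j, (xj, yj) in enumerate(nodes) if j != i)
               for i, (xi, yi) in enumerate(nodes)) / len(nodes)

random.seed(0)
# Compact initial cluster in one corner of a 10x10 arena.
nodes = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(15)]
before = mean_nearest(nodes)
for _ in range(300):
    nodes = repulsive_step(nodes)
after = mean_nearest(nodes)
```

Starting from the compact cluster, the mutual repulsion drives the nodes apart until they are spread across the arena, which is the self-deployment behavior the abstract describes.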


IEEE Pervasive Computing | 2002

Connecting the physical world with pervasive networks

Deborah Estrin; David E. Culler; Kris Pister; Gaurav S. Sukhatme

This article addresses the challenges and opportunities of instrumenting the physical world with pervasive networks of sensor-rich, embedded computation. The authors present a taxonomy of emerging systems and outline the enabling technological developments.


Autonomous Robots | 2002

An Incremental Self-Deployment Algorithm for Mobile Sensor Networks

Andrew Howard; Maja J. Matarić; Gaurav S. Sukhatme

This paper describes an incremental deployment algorithm for mobile sensor networks. A mobile sensor network is a distributed collection of nodes, each of which has sensing, computation, communication and locomotion capabilities. The algorithm described in this paper will deploy such nodes one-at-a-time into an unknown environment, with each node making use of information gathered by previously deployed nodes to determine its deployment location. The algorithm is designed to maximize network ‘coverage’ while simultaneously ensuring that nodes retain line-of-sight relationships with one another. This latter constraint arises from the need to localize the nodes in an unknown environment: in our previous work on team localization (A. Howard, M.J. Matarić, and G.S. Sukhatme, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, EPFL, Switzerland, 2002; IEEE Transactions on Robotics and Autonomous Systems, 2002) we have shown how nodes can localize themselves by using other nodes as landmarks. This paper describes the incremental deployment algorithm and presents the results from an extensive series of simulation experiments. These experiments serve to both validate the algorithm and illuminate its empirical properties.
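A toy version of the one-at-a-time greedy idea can make the mechanism concrete. The simplifications here are our own, not the paper's: the environment is an obstacle-free grid (so the line-of-sight constraint collapses to a plain range check), and the sensing and communication radii are invented.

```python
import itertools

GRID = 12    # 12x12 obstacle-free arena, so "visibility" to the
SENSE = 2    # deployed network reduces to a simple range check
RANGE = 4    # max Chebyshev distance to an already-deployed node

def covered(nodes):
    """Cells within sensing range of any node (Chebyshev metric)."""
    cells = set()
    for nx, ny in nodes:
        for x, y in itertools.product(range(GRID), repeat=2):
            if max(abs(x - nx), abs(y - ny)) <= SENSE:
                cells.add((x, y))
    return cells

def deploy(n_nodes):
    """Place nodes one at a time, each at the reachable cell that adds
    the most new coverage, using only information from prior nodes."""
    nodes = [(0, 0)]                       # first node at the entry point
    for _ in range(n_nodes - 1):
        seen = covered(nodes)
        best, gain = None, -1
        for cand in itertools.product(range(GRID), repeat=2):
            if cand in nodes:
                continue
            if all(max(abs(cand[0] - nx), abs(cand[1] - ny)) > RANGE
                   for nx, ny in nodes):
                continue                   # unreachable from the network
            g = len(covered(nodes + [cand]) - seen)
            if g > gain:
                best, gain = cand, g
        nodes.append(best)
    return nodes

net = deploy(6)
coverage = len(covered(net)) / GRID ** 2
```

Each new node lands just inside the frontier of the existing network, so coverage grows while every node stays reachable from its predecessors — a caricature of the incremental behavior the paper analyzes in simulation.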


International Conference on Robotics and Automation | 2004

Constrained coverage for mobile sensor networks

Sameera Poduri; Gaurav S. Sukhatme

We consider the problem of self-deployment of a mobile sensor network. We are interested in a deployment strategy that maximizes the area coverage of the network with the constraint that each of the nodes has at least K neighbors, where K is a user-specified parameter. We propose an algorithm based on artificial potential fields, which is distributed, scalable, and does not require a prior map of the environment. Simulations establish that the resulting networks have the required degree with a high probability, are well connected and achieve good coverage. We present analytical results for the coverage achievable by uniform random and symmetrically tiled network configurations and use these to evaluate the performance of our algorithm.
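A minimal sketch of degree-constrained spreading in the spirit of this abstract. It is not the authors' algorithm: the force law, the gains, and the global "every node keeps K neighbors" acceptance check are simplifications invented for illustration (the paper's method is distributed).

```python
import math
import random

K = 3        # required minimum node degree (user-specified parameter)
R = 3.0      # communication radius (invented for the example)
N = 12

def degree(i, pts):
    xi, yi = pts[i]
    return sum(1 for j, (xj, yj) in enumerate(pts)
               if j != i and math.hypot(xi - xj, yi - yj) <= R)

def step(pts, gain=0.2, max_move=0.1):
    out = list(pts)
    for i in range(N):
        xi, yi = out[i]
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(out):
            if j == i:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy) + 1e-9
            fx += dx / d ** 3              # inverse-square repulsion
            fy += dy / d ** 3
        m = math.hypot(fx, fy)
        if m < 1e-12:
            continue
        s = min(gain * m, max_move) / m    # cap per-step motion
        cand = (xi + s * fx, yi + s * fy)
        trial = list(out)
        trial[i] = cand
        # accept the move only if every node keeps at least K neighbors
        if all(degree(n, trial) >= K for n in range(N)):
            out[i] = cand
    return out

def mean_pairwise(pts):
    ds = [math.hypot(a[0] - b[0], a[1] - b[1])
          for i, a in enumerate(pts) for b in pts[i + 1:]]
    return sum(ds) / len(ds)

random.seed(1)
pts = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(N)]
spread0 = mean_pairwise(pts)
for _ in range(200):
    pts = step(pts)
spread1 = mean_pairwise(pts)
min_degree = min(degree(i, pts) for i in range(N))
```

Because every accepted move preserves the constraint and the initial cluster is fully connected, the network spreads out while the minimum degree never drops below K.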


Intelligent Robots and Systems | 2001

Most valuable player: a robot device server for distributed control

Brian P. Gerkey; Richard T. Vaughan; Kasper Stoy; Andrew Howard; Gaurav S. Sukhatme; Maja J. Matarić

Successful distributed sensing and control require data to flow effectively between sensors, processors and actuators on single robots, in groups and across the Internet. We propose a mechanism for achieving this flow that we have found to be powerful and easy to use; we call it Player. Player combines an efficient message protocol with a simple device model. It is implemented as a multithreaded TCP socket server that provides transparent network access to a collection of sensors and actuators, often comprising a robot. The socket abstraction enables platform- and language-independent control of these devices, allowing the system designer to use the best tool for the task at hand. Player is freely available from http://robotics.usc.edu/player.
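The device-server pattern behind Player can be illustrated with a toy TCP server. The `READ`/`QUIT` text protocol and the `FakeRangeSensor` device below are invented for this example and bear no relation to Player's actual message format; the point is only that a socket boundary makes a device reachable from any platform or language.

```python
import socket
import threading

class FakeRangeSensor:
    """Toy stand-in for a robot device behind the server."""
    def read(self):
        return "RANGE 1.25"

def serve(device, ports, ready):
    """Accept one client and answer READ requests until told to quit."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))             # OS picks a free port
    srv.listen(1)
    ports.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    with conn:
        while True:
            cmd = conn.recv(64).decode().strip()
            if cmd == "READ":
                conn.sendall((device.read() + "\n").encode())
            else:
                break
    srv.close()

ports, ready = [], threading.Event()
t = threading.Thread(target=serve, args=(FakeRangeSensor(), ports, ready),
                     daemon=True)
t.start()
ready.wait()

# Any client, in any language, can speak this byte protocol.
cli = socket.create_connection(("127.0.0.1", ports[0]))
cli.sendall(b"READ\n")
reply = cli.recv(64).decode().strip()
cli.sendall(b"QUIT\n")
cli.close()
```

In Player itself the server multiplexes many devices and clients; here a single threaded connection is enough to show the client/server split the abstract describes.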


Information Processing in Sensor Networks | 2005

Robomote: enabling mobility in sensor networks

Karthik Dantu; Mohammad H. Rahimi; Hardik Shah; Sandeep Babel; Amit Dhariwal; Gaurav S. Sukhatme

Severe energy limitations and a paucity of computation pose a set of difficult design challenges for sensor networks. Recent progress in two seemingly disparate research areas, namely distributed robotics and low-power embedded systems, has led to the creation of mobile (or robotic) sensor networks. Autonomous node mobility brings with it its own challenges, but also alleviates some of the traditional problems associated with static sensor networks. We illustrate this by presenting the design of the Robomote, a robot platform that functions as a single mobile node in a mobile sensor network. We briefly describe two case studies in which the Robomote has been used for tabletop experiments with a mobile sensor network.


International Conference on Robotics and Automation | 2003

Visually guided landing of an unmanned aerial vehicle

Srikanth Saripalli; James F. Montgomery; Gaurav S. Sukhatme

We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The landing algorithm is integrated with algorithms for visual acquisition of the target (a helipad) and navigation to the target, from an arbitrary initial position and orientation. We use vision for precise target detection and recognition, and a combination of vision and Global Positioning System for navigation. The helicopter updates its landing target parameters based on vision and uses an onboard behavior-based controller to follow a path to the landing site. We present significant results from flight trials in the field which demonstrate that our detection, recognition, and control algorithms are accurate, robust, and repeatable.
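The detection step can be caricatured in a few lines: threshold the image, take the centroid of the target pixels, and command a lateral motion proportional to the offset from the image center. The toy image and threshold below are invented; the paper's actual pipeline (helipad shape recognition, GPS fusion, behavior-based control) is far richer.

```python
# 8x8 toy intensity image; the dark 2x2 patch stands in for the helipad.
IMG = [
    [9, 9, 9, 9, 9, 9, 9, 9],
    [9, 9, 9, 9, 9, 9, 9, 9],
    [9, 9, 9, 9, 1, 1, 9, 9],
    [9, 9, 9, 9, 1, 1, 9, 9],
    [9, 9, 9, 9, 9, 9, 9, 9],
    [9, 9, 9, 9, 9, 9, 9, 9],
    [9, 9, 9, 9, 9, 9, 9, 9],
    [9, 9, 9, 9, 9, 9, 9, 9],
]

def detect_target(img, thresh=5):
    """Centroid of all below-threshold (dark) pixels, or None."""
    pts = [(x, y) for y, row in enumerate(img)
           for x, v in enumerate(row) if v < thresh]
    if not pts:
        return None
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return cx, cy

def landing_command(img):
    """Lateral command proportional to pixel offset from image center."""
    c = detect_target(img)
    if c is None:
        return None            # no target in view: keep searching
    cx, cy = c
    h, w = len(img), len(img[0])
    return cx - (w - 1) / 2, cy - (h - 1) / 2

offset = landing_command(IMG)
```

Driving this offset to zero while descending is the essence of the visual servoing loop; the controller in the paper tracks the estimated target parameters rather than raw centroids.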


International Conference on Robotics and Automation | 2003

Studying the feasibility of energy harvesting in a mobile sensor network

Mohammad H. Rahimi; Hardik Shah; Gaurav S. Sukhatme; J. Heidemann; Deborah Estrin

We study the feasibility of extending the lifetime of a wireless sensor network by exploiting mobility. In our system, a small percentage of network nodes are autonomously mobile, allowing them to move in search of energy, recharge, and deliver energy to immobile, energy-depleted nodes. We term this approach energy harvesting. We characterize the problem of uneven energy consumption, suggest energy harvesting as a possible solution, and provide a simple analytical framework to evaluate energy consumption and our scheme. Data from initial feasibility experiments using energy harvesting show promising results.
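A back-of-the-envelope version of the uneven-consumption argument: nodes that relay more traffic drain faster, so network lifetime (time to first node death) is set by the hottest node, and a mobile harvester that keeps topping it up stretches that lifetime. All numbers below are illustrative, not from the paper.

```python
battery = 100.0                        # J per node (illustrative)
drain = [5.0, 3.0, 2.0, 1.0, 1.0]      # J/hour; nodes near the sink relay
                                       # more traffic and drain faster

def lifetime(batt, drains, topup_per_hour=0.0):
    """Hours until the first node is depleted, optionally with a mobile
    harvester delivering energy to the neediest node each hour."""
    levels = [batt] * len(drains)
    hours = 0
    while min(levels) > 0:
        hours += 1
        for i, d in enumerate(drains):
            levels[i] -= d
        if topup_per_hour > 0:
            k = levels.index(min(levels))      # visit the hottest node
            levels[k] += topup_per_hour
    return hours

static_life = lifetime(battery, drain)
mobile_life = lifetime(battery, drain, topup_per_hour=2.0)
```

With these numbers the static network dies with the 5 J/hour node after 20 hours, while redirecting a modest 2 J/hour to the neediest node slows the effective worst-case drain and extends the time to first death — the qualitative effect the paper's analytical framework quantifies.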


The International Journal of Robotics Research | 2011

Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration

Jonathan Kelly; Gaurav S. Sukhatme

Visual and inertial sensors, in combination, are able to provide accurate motion estimates and are well suited for use in many robot navigation tasks. However, correct data fusion, and hence overall performance, depends on careful calibration of the rigid body transform between the sensors. Obtaining this calibration information is typically difficult and time-consuming, and normally requires additional equipment. In this paper we describe an algorithm, based on the unscented Kalman filter, for self-calibration of the transform between a camera and an inertial measurement unit (IMU). Our formulation rests on a differential geometric analysis of the observability of the camera-IMU system; this analysis shows that the sensor-to-sensor transform, the IMU gyroscope and accelerometer biases, the local gravity vector, and the metric scene structure can be recovered from camera and IMU measurements alone. While calibrating the transform we simultaneously localize the IMU and build a map of the surroundings, all without additional hardware or prior knowledge about the environment in which a robot is operating. We present results from simulation studies and from experiments with a monocular camera and a low-cost IMU, which demonstrate accurate estimation of both the calibration parameters and the local scene structure.
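The sigma-point machinery at the heart of the unscented Kalman filter can be shown on a one-dimensional toy problem. This is a generic unscented transform, not the paper's camera-IMU formulation; the state, nonlinearity, and scaling parameter are invented for the example.

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a 1D Gaussian (mean, var) through a nonlinearity f by
    transforming deterministically chosen sigma points and reweighting."""
    n = 1
    s = math.sqrt((n + kappa) * var)
    sigma = [mean, mean + s, mean - s]         # 2n+1 sigma points
    w = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(x) for x in sigma]
    y_mean = sum(wi * yi for wi, yi in zip(w, ys))
    y_var = sum(wi * (yi - y_mean) ** 2 for wi, yi in zip(w, ys))
    return y_mean, y_var

# Sanity check: for a linear map the transform is exact.
# f(x) = 2x + 1 applied to N(1, 0.25) gives N(3, 1).
m, v = unscented_transform(1.0, 0.25, lambda x: 2 * x + 1)
```

In the paper's filter the same idea runs over a much larger state (pose, biases, gravity, landmarks, and the camera-IMU transform), avoiding the Jacobians an extended Kalman filter would need.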


International Conference on Robotics and Automation | 2002

Vision-based autonomous landing of an unmanned aerial vehicle

Srikanth Saripalli; James F. Montgomery; Gaurav S. Sukhatme

We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The helicopter is required to navigate from an initial position to a final position in a partially known environment based on GPS and vision, locate a landing target (a helipad of a known shape) and land on it. We use vision for precise target detection and recognition. The helicopter updates its landing target parameters based on vision and uses an on-board behavior-based controller to follow a path to the landing site. We present results from flight trials in the field which demonstrate that our detection, recognition and control algorithms are accurate and repeatable.

Collaboration


Dive into Gaurav S. Sukhatme's collaborations.

Top Co-Authors

Maja J. Matarić
University of Southern California

David A. Caron
University of Southern California

Carl Oberg
University of Southern California

Beth Stauffer
University of Southern California

Amit Dhariwal
University of Southern California

Karol Hausman
University of Southern California