Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Varun Ganapathi is active.

Publications


Featured research published by Varun Ganapathi.


Computer Vision and Pattern Recognition | 2010

Real time motion capture using a single time-of-flight camera

Varun Ganapathi; Christian Plagemann; Daphne Koller; Sebastian Thrun

Markerless tracking of human pose is a hard yet relevant problem. In this paper, we derive an efficient filtering algorithm for tracking human pose using a stream of monocular depth images. The key idea is to combine an accurate generative model (achievable in this setting using programmable graphics hardware) with a discriminative model that provides data-driven evidence about body part locations. In each filter iteration, we apply a form of local model-based search that exploits the nature of the kinematic chain. As fast movements and occlusion can disrupt the local search, we utilize a set of discriminatively trained patch classifiers to detect body parts. We describe a novel algorithm for propagating this noisy evidence about body part locations up the kinematic chain using the unscented transform. The resulting distribution of body configurations allows us to reinitialize the model-based search. We provide extensive experimental results on 28 real-world sequences using automatic ground-truth annotations from a commercial motion capture system.
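The unscented transform used in this propagation step can be illustrated in isolation. Below is a minimal, generic sigma-point sketch, not the paper's implementation: the choice of `kappa`, the function being propagated, and the linear sanity check are all illustrative.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate a Gaussian N(mean, cov) through a nonlinear function f
    by pushing 2n+1 sigma points through f and re-estimating the moments."""
    n = mean.shape[0]
    # Columns of the scaled Cholesky factor give the symmetric spreads.
    sqrt_cov = np.linalg.cholesky((n + kappa) * cov)
    sigma_pts = [mean]
    for i in range(n):
        sigma_pts.append(mean + sqrt_cov[:, i])
        sigma_pts.append(mean - sqrt_cov[:, i])
    # Standard weights: kappa/(n+kappa) for the mean point, 1/(2(n+kappa)) otherwise.
    weights = [kappa / (n + kappa)] + [1.0 / (2.0 * (n + kappa))] * (2 * n)

    # Push each sigma point through f and recombine into a Gaussian.
    ys = [f(p) for p in sigma_pts]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(weights, ys))
    return y_mean, y_cov

# Sanity check on a linear map, where the transform is exact.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
mean = np.array([1.0, 2.0])
cov = np.array([[0.3, 0.1], [0.1, 0.2]])
y_mean, y_cov = unscented_transform(mean, cov, lambda x: A @ x)
```

In the paper's setting, `f` would be the nonlinear map along the kinematic chain; the point of the transform is that it needs only function evaluations, no Jacobians.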


International Conference on Robotics and Automation | 2010

Real-time identification and localization of body parts from depth images

Christian Plagemann; Varun Ganapathi; Daphne Koller; Sebastian Thrun

We deal with the problem of detecting and identifying body parts in depth images at video frame rates. Our solution involves a novel interest point detector for mesh and range data that is particularly well suited for analyzing human shape. The interest points, which are based on identifying geodesic extrema on the surface mesh, coincide with salient points of the body, which can be classified as, e.g., hand, foot or head using local shape descriptors. Our approach also provides a natural way of estimating a 3D orientation vector for a given interest point. This can be used to normalize the local shape descriptors to simplify the classification problem as well as to directly estimate the orientation of body parts in space. Experiments involving ground truth labels acquired via an active motion capture system show that our interest points in conjunction with a boosted patch classifier are significantly better in detecting body parts in depth images than state-of-the-art sliding-window based detectors.
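The geodesic-extrema idea can be sketched on a graph: treat the surface mesh as a weighted graph and greedily pick vertices farthest, in shortest-path distance, from everything picked so far. Below is a toy Python sketch under that assumption; the hypothetical stick-figure graph stands in for a real surface mesh.

```python
import heapq

def dijkstra(graph, source):
    """Geodesic (shortest-path) distances from source over a weighted graph
    given as {node: [(neighbor, edge_length), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def geodesic_extrema(graph, root, k):
    """Greedily pick k vertices, each maximizing its geodesic distance to the
    root and all previously selected extrema (an accumulative max-min scheme)."""
    min_dist = dijkstra(graph, root)
    extrema = []
    for _ in range(k):
        best = max(min_dist, key=min_dist.get)
        extrema.append(best)
        # Fold in distances from the new extremum: keep the minimum per vertex.
        d_new = dijkstra(graph, best)
        min_dist = {v: min(min_dist[v], d_new[v]) for v in min_dist}
    return extrema

# Hypothetical stick figure: limbs of length 2 and a head at distance 2.
graph = {
    "torso":  [("neck", 1.0), ("l_arm", 1.0), ("r_arm", 1.0), ("l_leg", 1.0), ("r_leg", 1.0)],
    "neck":   [("torso", 1.0), ("head", 1.0)],
    "head":   [("neck", 1.0)],
    "l_arm":  [("torso", 1.0), ("l_hand", 2.0)],
    "l_hand": [("l_arm", 2.0)],
    "r_arm":  [("torso", 1.0), ("r_hand", 2.0)],
    "r_hand": [("r_arm", 2.0)],
    "l_leg":  [("torso", 1.0), ("l_foot", 2.0)],
    "l_foot": [("l_leg", 2.0)],
    "r_leg":  [("torso", 1.0), ("r_foot", 2.0)],
    "r_foot": [("r_leg", 2.0)],
}
tips = geodesic_extrema(graph, "torso", 5)
```

On this toy body the five extrema are exactly the hands, feet, and head, matching the abstract's observation that geodesic extrema coincide with salient body parts.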


International Symposium on Experimental Robotics | 2006

Autonomous Inverted Helicopter Flight via Reinforcement Learning

Andrew Y. Ng; Adam Coates; Mark Diel; Varun Ganapathi; Jamie Schulte; Ben Tse; Eric Berger; Eric Liang

Helicopters have highly stochastic, nonlinear dynamics, and autonomous helicopter flight is widely regarded as a challenging control problem. As helicopters are highly unstable at low speeds, it is particularly difficult to design controllers for low-speed aerobatic maneuvers. In this paper, we describe a successful application of reinforcement learning to designing a controller for sustained inverted flight on an autonomous helicopter. Using data collected from the helicopter in flight, we began by learning a stochastic, nonlinear model of the helicopter's dynamics. Then, a reinforcement learning algorithm was applied to automatically learn a controller for autonomous inverted hovering. Finally, the resulting controller was successfully tested on our autonomous helicopter platform.
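The two-stage recipe (fit a dynamics model from flight data, then search for a controller inside that learned model) can be sketched on a toy system. The 1-D linear plant, the gain grid, and the quadratic hover cost below are illustrative stand-ins, far simpler than the paper's stochastic nonlinear helicopter model.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Collect (state, action, next_state) data from an unknown plant.
#    Hypothetical 1-D plant: x' = a*x + b*u + noise, with a, b unknown to the learner.
a_true, b_true = 0.9, 0.5
X = rng.normal(size=(500, 1))
U = rng.normal(size=(500, 1))
X_next = a_true * X + b_true * U + 0.01 * rng.normal(size=(500, 1))

# 2. Fit a dynamics model by least squares (the paper fits a richer
#    stochastic, nonlinear model; a linear fit keeps the sketch short).
Phi = np.hstack([X, U])
theta, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)
a_hat, b_hat = theta[0, 0], theta[1, 0]

# 3. Policy search in the learned model: pick the linear gain k that keeps
#    the simulated state closest to the hover target x = 0.
def rollout_cost(k, a, b, x0=1.0, steps=50):
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -k * x            # candidate controller
        x = a * x + b * u     # simulate with the LEARNED model
        cost += x * x
    return cost

gains = np.linspace(0.0, 3.0, 61)
k_best = min(gains, key=lambda k: rollout_cost(k, a_hat, b_hat))
```

The key design point mirrored here is that the controller is never tuned on the real plant: all search happens in the fitted model, and only the final policy is deployed.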


European Conference on Computer Vision | 2012

Real-time human pose tracking from range data

Varun Ganapathi; Christian Plagemann; Daphne Koller; Sebastian Thrun

Tracking human pose in real-time is a difficult problem with many interesting applications. Existing solutions suffer from a variety of problems, especially when confronted with unusual human poses. In this paper, we derive an algorithm for tracking human pose in real-time from depth sequences based on MAP inference in a probabilistic temporal model. The key idea is to extend the iterative closest points (ICP) objective by modeling the constraint that the observed subject cannot enter free space, the area of space in front of the true range measurements. Our primary contribution is an extension to the articulated ICP algorithm that can efficiently enforce this constraint. The resulting filter runs at 125 frames per second using a single desktop CPU core. We provide extensive experimental results on challenging real-world data, which show that the algorithm outperforms the previous state-of-the-art trackers both in computational efficiency and accuracy.
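The free-space idea can be illustrated in one dimension: a model point that lands closer to the camera than the measured range would occupy space the sensor observed as empty, so the ICP-style objective gains a one-sided hinge penalty. The per-pixel setup, penalty weight, and grid search below are illustrative, not the paper's articulated solver.

```python
import numpy as np

def energy(offset, model_depths, obs_depths, penalty=10.0):
    """Squared ICP-style data term plus a one-sided free-space penalty:
    a model point closer to the camera than the observed range (d < obs)
    would sit in space the sensor saw as empty, so it is penalized extra."""
    d = model_depths + offset
    data = np.sum((d - obs_depths) ** 2)                       # closest-point term
    free_space = np.sum(np.maximum(obs_depths - d, 0.0) ** 2)  # hinge penalty
    return data + penalty * free_space

# Toy problem: recover a camera-axis translation of 0.3 by grid search.
model = np.array([2.0, 2.1, 1.9])
obs = model + 0.3
offsets = np.linspace(-1.0, 1.0, 201)
best = min(offsets, key=lambda o: energy(o, model, obs))
```

The asymmetry is the point: underestimating depth (entering free space) costs more than overestimating it, which is what lets the constraint disambiguate poses a symmetric ICP term cannot.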


International Conference on Robotics and Automation | 2011

Grasping with application to an autonomous checkout robot

Ellen Klingbeil; Deepak Rao; Blake Carpenter; Varun Ganapathi; Andrew Y. Ng; Oussama Khatib

In this paper, we present a novel grasp selection algorithm to enable a robot with a two-fingered end-effector to autonomously grasp unknown objects. Our approach requires as input only the raw depth data obtained from a single frame of a 3D sensor. Additionally, our approach uses no explicit models of the objects and does not require a training phase. We use the grasping capability to demonstrate the application of a robot as an autonomous checkout clerk. To perform this task, the robot must identify how to grasp an object, locate the barcode on the object, and read the numeric code. We evaluate our grasping algorithm in experiments where the robot was required to autonomously grasp unknown objects. The robot achieved a success rate of 91.6% in grasping novel objects. We performed two sets of experiments to evaluate the checkout robot application. In the first set, the objects were placed in many orientations in front of the robot, one at a time. In the second set, the objects were placed several at a time with varying amounts of clutter. The robot was able to autonomously grasp and scan the objects in 49/50 of the single-object trials and 46/50 of the cluttered trials.
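The grasp-search idea can be caricatured in one dimension: look for an object segment that protrudes above the supporting surface and is narrow enough to fit between the two fingers. The paper searches real 3-D depth data; the row-scan below and all of its parameters are purely illustrative.

```python
import numpy as np

def find_grasp(depth_row, table_depth, max_width, margin=0.02):
    """Scan one depth-image row for the narrowest object segment that still
    fits between the gripper fingers (widths measured in pixels)."""
    is_object = depth_row < table_depth - margin  # protrudes above the table
    best, start = None, None
    # Append a sentinel False so a segment touching the row's end is closed.
    for i, flag in enumerate(list(is_object) + [False]):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            width = i - start
            if width <= max_width and (best is None or width < best[2]):
                best = (start, i, width)  # candidate grasp spans [start, i)
            start = None
    return best

row = np.full(20, 1.0)   # flat table at depth 1.0
row[3:6] = 0.80          # narrow object, 3 pixels wide: graspable
row[10:16] = 0.85        # wide object, 6 pixels wide: too wide for the gripper
grasp = find_grasp(row, table_depth=1.0, max_width=4)
```

As in the abstract, nothing here is object-specific: the search uses only raw depth against the gripper's geometric constraints, with no object model or training phase.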


Neural Information Processing Systems | 2006

Efficient Structure Learning of Markov Networks using L1-Regularization

Su-In Lee; Varun Ganapathi; Daphne Koller


International Symposium on Experimental Robotics | 2004

Inverted Autonomous Helicopter Flight via Reinforcement Learning

Andrew Y. Ng; Adam Coates; Mark Diel; Varun Ganapathi; Ben Tse; Eric Berger; Eric Liang


Neural Information Processing Systems | 2005

Learning vehicular dynamics, with application to modeling helicopters

Pieter Abbeel; Varun Ganapathi; Andrew Y. Ng


Uncertainty in Artificial Intelligence | 2008

Constrained approximate maximum entropy learning of Markov random fields

Varun Ganapathi; David Vickrey; John C. Duchi; Daphne Koller


Archive | 2009

A Dynamic Navigation Guide for Webpages

Jawed Karim; Ioannis Antonellis; Varun Ganapathi; Hector Garcia-Molina

Collaboration


An overview of Varun Ganapathi's collaborations.
