Publication


Featured research published by Tucker Hermans.


Intelligent Robots and Systems (IROS) | 2011

Push planning for object placement on cluttered table surfaces

Akansel Cosgun; Tucker Hermans; Victor Emeli; Mike Stilman

We present a novel planning algorithm for the problem of placing objects on a cluttered surface such as a table, counter or floor. The planner (1) selects a placement for the target object and (2) constructs a sequence of manipulation actions that create space for the object. When no continuous space is large enough for direct placement, the planner leverages means-end analysis and dynamic simulation to find a sequence of linear pushes that clears the necessary space. Our heuristic for determining candidate placement poses for the target object is used to guide the manipulation search. We show successful results for our algorithm in simulation.
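As a hedged illustration of the placement step, here is a minimal 1-D sketch (the function name and interval representation are hypothetical, not the authors' implementation): scan the free gaps along a table edge and report a direct placement if one fits, otherwise signal that pushing actions would be needed to clear space.

```python
# Hypothetical 1-D sketch of the placement idea: if no free gap fits the
# object, the planner would fall back to a sequence of pushes (not shown).
def find_placement(obstacles, obj_width, table_len):
    """obstacles: sorted list of (start, end) occupied intervals.

    Returns the left edge of a free gap wide enough for the object,
    or None if pushing would be required to create space.
    """
    free_start = 0.0
    for s, e in obstacles:
        if s - free_start >= obj_width:
            return free_start          # direct placement possible
        free_start = max(free_start, e)
    if table_len - free_start >= obj_width:
        return free_start
    return None                         # no gap: pushing actions needed
```

In the paper the fallback is a search over simulated linear pushes guided by means-end analysis; this sketch only covers the direct-placement test.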


ACM Multimedia | 2010

Movie genre classification via scene categorization

Howard Zhou; Tucker Hermans; Asmita V. Karandikar; James M. Rehg

This paper presents a method for movie genre categorization of movie trailers, based on scene categorization. We view our approach as a step forward from using only low-level visual feature cues, towards the eventual goal of high-level semantic understanding of feature films. Our approach decomposes each trailer into a collection of keyframes through shot boundary analysis. From these keyframes, we use state-of-the-art scene detectors and descriptors to extract features, which are then used for shot categorization via unsupervised learning. This allows us to represent trailers using a bag-of-visual-words (bovw) model with shot classes as vocabularies. We approach the genre classification task by mapping bovw temporally structured trailer features to four high-level movie genres: action, comedy, drama, or horror films. We have conducted experiments on 1239 annotated trailers. Our experimental results demonstrate that exploiting scene structures improves film genre classification compared to using only low-level visual features.
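The bag-of-visual-words representation described above can be sketched as follows, assuming k-means clustering of keyframe descriptors (a common choice; the paper's exact scene detectors and descriptors are not reproduced here):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Cluster descriptors X (n x d) into k 'shot class' centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bovw_histogram(descriptors, centers):
    """Represent one trailer as a normalized histogram over shot classes."""
    labels = np.argmin(((descriptors[:, None] - centers) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

Each trailer's histogram would then be fed to a genre classifier over the four target classes.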


British Machine Vision Conference | 2013

An In Depth View of Saliency.

Arridhana Ciptadi; Tucker Hermans; James M. Rehg

Presented at the 24th British Machine Vision Conference (BMVC 2013), 9-13 September 2013, Bristol, UK.


Intelligent Robots and Systems (IROS) | 2012

Guided pushing for object singulation

Tucker Hermans; James M. Rehg; Aaron F. Bobick

We propose a novel method for a robot to separate and segment objects in a cluttered tabletop environment. The method leverages the fact that external object boundaries produce visible edges within an object cluster. We achieve this singulation of objects by using the robot arm to perform pushing actions specifically selected to test whether particular visible edges correspond to object boundaries. We verify the separation of objects after a push by examining the clusters formed by geometric segmentation of regions residing on the table surface. To avoid explicitly representing and tracking edges across push behaviors we aggregate over all edges in a given orientation by representing the push-history as an orientation histogram. By tracking the history of directions pushed for each object cluster we can build evidence that a cluster cannot be further separated. We present quantitative and qualitative experimental results performed in a real home environment by a mobile manipulator using input from an RGB-D camera mounted on the robot's head. We show that our pushing strategy can more reliably obtain singulation in fewer pushes than an approach that does not explicitly reason about boundary information.
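The orientation-histogram bookkeeping can be sketched like this; the bin count and the singulation threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

class PushHistory:
    """Per-cluster record of push directions, aggregated modulo pi
    (an edge at angle a is the same orientation as a + pi)."""

    def __init__(self, n_bins=8):
        self.n_bins = n_bins
        self.counts = np.zeros(n_bins, dtype=int)

    def record_push(self, angle):
        """angle in radians; map onto [0, pi) orientation bins."""
        b = int((angle % np.pi) / np.pi * self.n_bins) % self.n_bins
        self.counts[b] += 1

    def likely_singulated(self, min_pushes=1):
        # Once every orientation has been tested without the cluster
        # splitting, we accumulate evidence it cannot be separated further.
        return bool(np.all(self.counts >= min_pushes))
```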


IEEE-RAS International Conference on Humanoid Robots | 2015

Evaluation of tactile feature extraction for interactive object recognition

Janine Hoelscher; Jan Peters; Tucker Hermans

Tactile sensing stands to improve the manipulation and perception skills of autonomous robots. Object and material recognition stand as two important tasks where tactile sensing can aid robotics. While much work has been done on showing the applicability of specific sensors to recognition tasks, a comprehensive examination of the features used has not been performed. In this paper we thoroughly examine the different components of performing interactive object recognition with tactile sensing. We use a state-of-the-art multimodal tactile sensor, allowing us to compare features previously presented for a number of different platforms. We examine the statistical features, robot motions, and classification approaches used for performing object and material recognition. We show that by combining simple statistical features captured from five robot motions our robot can reliably differentiate between a diverse set of 49 objects with an average classification accuracy of 97.6 ± 2.12%.
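A minimal sketch of the kind of simple statistical features the abstract mentions, computed over a multichannel tactile time series (channels × timesteps) and concatenated across motions; the paper's exact feature set is not reproduced here:

```python
import numpy as np

def statistical_features(signal):
    """Per-channel summary statistics of one tactile recording
    (signal: channels x timesteps array)."""
    return np.concatenate([
        signal.mean(axis=1),
        signal.std(axis=1),
        signal.max(axis=1),
        signal.min(axis=1),
    ])

def combine_motions(signals):
    """Concatenate features from several exploratory motions
    (e.g. the five motions mentioned in the abstract)."""
    return np.concatenate([statistical_features(s) for s in signals])
```

The combined vector would then be passed to an object or material classifier.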


IEEE-RAS International Conference on Humanoid Robots | 2013

Learning contact locations for pushing and orienting unknown objects

Tucker Hermans; Fuxin Li; James M. Rehg; Aaron F. Bobick

We present a method by which a robot learns to predict effective contact locations for pushing as a function of object shape. The robot performs push experiments at many contact locations on multiple objects and records local and global shape features at each point of contact. Each trial attempts to either push the object in a straight line or to rotate the object to a new orientation. The robot observes the outcome trajectories of the manipulations and computes either a push-stability or rotate-push score for each trial. The robot then learns a regression function for each score in order to predict push effectiveness as a function of object shape. With this mapping, the robot can infer effective push locations for subsequent objects from their shapes, regardless of whether they belong to a previously encountered object class. These results are demonstrated on a mobile manipulator robot pushing a variety of household objects on a tabletop surface.
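The learned mapping from shape features to push scores can be illustrated with a ridge-regression stand-in (the abstract only says "regression function"; ridge regression and the bias handling here are assumptions):

```python
import numpy as np

def fit_push_score(features, scores, lam=1e-3):
    """Fit a regularized linear map from contact-point shape features
    (n x d) to observed push-effectiveness scores (n,)."""
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ scores)
    return w

def predict_push_score(w, feature):
    """Predicted effectiveness of pushing at a candidate contact point."""
    return float(np.append(feature, 1.0) @ w)
```

Given a novel object, the robot would score candidate contact locations with this predictor and push at the best-scoring one.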


International Conference on Robotics and Automation (ICRA) | 2013

Decoupling behavior, perception, and control for autonomous learning of affordances

Tucker Hermans; James M. Rehg; Aaron F. Bobick

A novel behavior representation is introduced that permits a robot to systematically explore the best methods by which to successfully execute an affordance-based behavior for a particular object. The approach decomposes affordance-based behaviors into three components. We first define controllers that specify how to achieve a desired change in object state through changes in the agent's state. For each controller we develop at least one behavior primitive that determines how the controller outputs translate to specific movements of the agent. Additionally we provide multiple perceptual proxies that define the representation of the object that is to be computed as input to the controller during execution. A variety of proxies may be selected for a given controller and a given proxy may provide input for more than one controller. When developing an appropriate affordance-based behavior strategy for a given object, the robot can systematically vary these elements as well as note the impact of additional task variables such as location in the workspace. We demonstrate the approach using a PR2 robot that explores different combinations of controller, behavior primitive, and proxy to perform a push or pull positioning behavior on a selection of household objects, learning which methods best work for each object.
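The systematic exploration over the three components might be sketched as a product over candidate controllers, behavior primitives, and perceptual proxies (all names and the scoring callback here are hypothetical):

```python
from itertools import product

def explore(controllers, primitives, proxies, evaluate):
    """Try every (controller, primitive, proxy) combination on an object
    and return the combination that scored best under `evaluate`."""
    scores = {}
    for combo in product(controllers, primitives, proxies):
        scores[combo] = evaluate(*combo)
    return max(scores, key=scores.get)
```

In the paper the evaluation is an actual manipulation trial on the robot; here it is a stand-in callback.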


IEEE-RAS International Conference on Humanoid Robots | 2015

Learning robot in-hand manipulation with tactile features

Herke van Hoof; Tucker Hermans; Gerhard Neumann; Jan Peters

Dexterous manipulation enables repositioning of objects and tools within a robot's hand. When applying dexterous manipulation to unknown objects, exact object models are not available. Instead of relying on models, compliance and tactile feedback can be exploited to adapt to unknown objects. However, compliant hands and tactile sensors add complexity and are themselves difficult to model. Hence, we propose acquiring in-hand manipulation skills through reinforcement learning, which does not require analytic dynamics or kinematics models. In this paper, we show that this approach successfully acquires a tactile manipulation skill using a passively compliant hand. Additionally, we show that the learned tactile skill generalizes to novel objects.


Intelligent Robots and Systems (IROS) | 2015

Stabilizing novel objects by learning to predict tactile slip

Filipe Veiga; Herke van Hoof; Jan Peters; Tucker Hermans

During grasping and other in-hand manipulation tasks maintaining a stable grip on the object is crucial for the task's outcome. Inherently connected to grip stability is the concept of slip. Slip occurs when the contact between the fingertip and the object is partially lost, resulting in sudden undesired changes to the object's state. While several approaches for slip detection have been proposed in the literature, they frequently rely on previous knowledge of the manipulated object. This previous knowledge may be unavailable, since robots operating in real-world scenarios often must interact with previously unseen objects. In our work we explore the generalization capabilities of well known supervised learning methods, using random forest classifiers to create generalizable slip predictors. We utilize these classifiers in the feedback loop of an object stabilization controller. We show that the controller can successfully stabilize previously unknown objects by predicting and counteracting slip events.
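The feedback loop around the slip predictor can be sketched as follows; the grip-force step size, the force cap, and the predictor interface are illustrative assumptions (the paper's predictor is a trained random forest):

```python
def stabilize_step(force, tactile_features, predict_slip,
                   force_step=0.5, force_max=10.0):
    """One control step: if the learned classifier predicts slip from the
    current tactile features, increase grip force (up to a safety cap)."""
    if predict_slip(tactile_features):
        force = min(force + force_step, force_max)
    return force
```

Running this at the controller rate lets the hand counteract slip before the object's state changes appreciably.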


Intelligent Robots and Systems (IROS) | 2016

Active tactile object exploration with Gaussian processes

Zhengkun Yi; Roberto Calandra; Filipe Veiga; Herke van Hoof; Tucker Hermans; Yilei Zhang; Jan Peters

Accurate object shape knowledge provides important information for performing stable grasping and dexterous manipulation. When modeling an object using tactile sensors, touching the object surface at a fixed grid of points can be sample inefficient. In this paper, we present an active touch strategy to efficiently reduce the surface geometry uncertainty by leveraging a probabilistic representation of the object surface. In particular, we model the object surface using a Gaussian process and use the associated uncertainty information to efficiently determine the next point to explore. We validate the resulting method for tactile object surface modeling using a real robot to reconstruct multiple, complex object surfaces.
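A minimal 1-D sketch of the uncertainty-driven selection, assuming an RBF-kernel Gaussian process (the kernel, length scale, and function names are assumptions): probe next wherever the GP posterior variance over candidate surface points is largest.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def next_touch_point(x_touched, candidates, noise=1e-4):
    """Return the candidate with maximal GP posterior variance.
    Note the posterior variance depends only on where we touched,
    not on the measured surface heights."""
    K = rbf(x_touched, x_touched) + noise * np.eye(len(x_touched))
    Ks = rbf(candidates, x_touched)
    Kss = rbf(candidates, candidates)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return candidates[int(np.argmax(var))]
```

Iterating touch, refit, and select gives the active exploration loop: each new contact shrinks the variance around it, steering the next probe to the least-known part of the surface.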

Collaboration


Tucker Hermans's top co-authors.

Top Co-Authors

James M. Rehg
Georgia Institute of Technology

Aaron F. Bobick
Georgia Institute of Technology

Filipe Veiga
Technische Universität Darmstadt

Herke van Hoof
Technische Universität Darmstadt

Fuxin Li
Georgia Institute of Technology