Justus H. Piater
University of Innsbruck
Publications
Featured research published by Justus H. Piater.
Computer Vision and Pattern Recognition | 2005
Pierre Geurts; Justus H. Piater; Louis Wehenkel
We present a novel, generic image classification method based on a recent machine learning algorithm (ensembles of extremely randomized decision trees). Images are classified using randomly extracted subwindows that are suitably normalized to yield robustness to certain image transformations. Our method is evaluated on four very different, publicly available datasets (COIL-100, ZuBuD, ETH-80, WANG). Our results show that our automatic approach is generic and robust to illumination, scale, and viewpoint changes. An extension of the method is proposed to improve its robustness with respect to rotation changes.
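A minimal sketch of the subwindow-plus-ensemble idea described above, using scikit-learn's ExtraTreesClassifier as a stand-in for the extremely randomized trees. The patch size, the number of subwindows per image, and the probability-averaging vote are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch: classify images by voting over randomly extracted subwindows,
# each scored by an ensemble of extremely randomized trees (ExtraTrees).
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

PATCH, N_PATCHES = 16, 50          # assumed subwindow size and count per image
rng = np.random.default_rng(0)

def random_patches(img):
    """Extract N_PATCHES random PATCH x PATCH subwindows as flat pixel vectors."""
    h, w = img.shape
    ys = rng.integers(0, h - PATCH, N_PATCHES)
    xs = rng.integers(0, w - PATCH, N_PATCHES)
    return np.stack([img[y:y + PATCH, x:x + PATCH].ravel() for y, x in zip(ys, xs)])

def train(images, labels):
    X = np.concatenate([random_patches(im) for im in images])
    y = np.repeat(labels, N_PATCHES)            # each patch inherits its image label
    return ExtraTreesClassifier(n_estimators=100).fit(X, y)

def classify(clf, img):
    # Average per-patch class probabilities over the image's subwindows and vote.
    probs = clf.predict_proba(random_patches(img)).mean(axis=0)
    return clf.classes_[np.argmax(probs)]
```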
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013
Norbert Krüger; Peter Janssen; Sinan Kalkan; Markus Lappe; Aleš Leonardis; Justus H. Piater; Antonio Jose Rodríguez-Sánchez; Laurenz Wiskott
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system, taking into account recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy, in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Robotics and Autonomous Systems | 2001
Jefferson A. Coelho; Justus H. Piater; Roderic A. Grupen
Properties of the human embodiment (sensorimotor apparatus and neurological structure) participate directly in the growth and development of cognitive processes against enormous worst-case complexity. It is our position that relationships between morphology and perception over time lead to increasingly comprehensive models that describe the agent's relationship to the world. We are applying insight derived from neuroscience, neurology, and developmental psychology to the design of advanced robot architectures. To investigate developmental processes, we have begun to approximate the human sensorimotor configuration and to engage sensory and motor subsystems in developmental sequences. Many such sequences have been documented in studies of infant development, so we intend to bootstrap cognitive structures in robots by emulating some of these growth processes on platforms that bear an essential resemblance to the human morphology. In this paper, we will show two related examples in which a humanoid robot determines the models and representations that govern its behavior. The first is a model that captures the dynamics of haptic exploration of an object with a dextrous robot hand and supports skillful grasping. The second example constructs constellations of visual features to predict relative hand/object postures that lead reliably to haptic utility. The result is a first step in a trajectory toward associative visual-haptic categories that bounds the incremental complexity of each stage of development.
Robotics and Autonomous Systems | 2011
Norbert Krüger; Christopher W. Geib; Justus H. Piater; Ronald P. A. Petrick; Mark Steedman; Florentin Wörgötter; Ales Ude; Tamim Asfour; Dirk Kraft; Damir Omrcen; Alejandro Agostini; Rüdiger Dillmann
This paper formalises Object–Action Complexes (OACs) as a basis for symbolic representations of sensory–motor experience and behaviours. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. It gives a formal definition of OACs, provides examples of their use for autonomous cognitive robots, and enumerates a number of critical learning problems in terms of OACs.
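A hedged data-structure sketch of how an OAC's ingredients might be grouped in code: an identifier naming a low-level execution specification, a prediction function over a symbolic attribute space, and a long-term success statistic. The field names, the attribute-space representation, and the update rule are illustrative assumptions, not the paper's formal definition.

```python
# Sketch of one possible OAC container (field names are assumptions).
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, object]                     # assumed symbolic attribute space

@dataclass
class OAC:
    exec_id: str                              # names the low-level execution spec
    predict: Callable[[State], State]         # expected effect of executing the OAC
    successes: int = 0
    trials: int = 0

    def record(self, succeeded: bool) -> None:
        """Update the long-term success statistic after one execution."""
        self.trials += 1
        self.successes += int(succeeded)

    @property
    def reliability(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

# Example: a hypothetical "grasp" OAC predicting the object ends up in the hand.
grasp = OAC("grasp-top", lambda s: {**s, "in_hand": True})
grasp.record(True)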
Robotics and Autonomous Systems | 2010
Oliver Kroemer; Renaud Detry; Justus H. Piater; Jan Peters
Grasping an object is a task that inherently needs to be treated in a hybrid fashion. The system must decide both where and how to grasp the object. While selecting where to grasp requires learning about the object as a whole, the execution only needs to reactively adapt to the context close to the grasps location. We propose a hierarchical controller that reflects the structure of these two sub-problems, and attempts to learn solutions that work for both. A hybrid architecture is employed by the controller to make use of various machine learning methods that can cope with the large amount of uncertainty inherent to the task. The controllers upper level selects where to grasp the object using a reinforcement learner, while the lower level comprises an imitation learner and a vision-based reactive controller to determine appropriate grasping motions. The resulting system is able to quickly learn good grasps of a novel object in an unstructured environment, by executing smooth reaching motions and preshaping the hand depending on the objects geometry. The system was evaluated both in simulation and on a real robot.
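A rough sketch of the two-level structure described above: an upper level that chooses where to grasp (here a simple epsilon-greedy bandit over candidate grasp poses, standing in for the reinforcement learner) and a lower level that would execute the reach and preshape (stubbed out). Class names, the reward model, and the candidate grasp set are assumptions.

```python
# Sketch of a hierarchical grasp controller: upper level learns where to grasp,
# lower level (stub) would execute the imitation-learned, reactive grasp motion.
import random

class UpperLevel:
    def __init__(self, candidate_grasps, epsilon=0.1):
        self.values = {g: 0.0 for g in candidate_grasps}   # estimated success rate
        self.counts = {g: 0 for g in candidate_grasps}
        self.epsilon = epsilon

    def select(self):
        if random.random() < self.epsilon:                  # explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)        # exploit best grasp

    def update(self, grasp, reward):
        self.counts[grasp] += 1
        n = self.counts[grasp]
        self.values[grasp] += (reward - self.values[grasp]) / n   # running mean

def lower_level_execute(grasp):
    """Placeholder for the imitation-learned reaching motion and the
    vision-based reactive finger controller; would return 1.0 on a stable grasp."""
    raise NotImplementedError

# Usage sketch: the upper level learns from lower-level outcomes.
# controller = UpperLevel(["top", "side", "handle"])
# g = controller.select(); controller.update(g, lower_level_execute(g))
```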
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009
Renaud Detry; Nicolas Pugeault; Justus H. Piater
We present an object representation framework that encodes probabilistic spatial relations between 3D features and organizes these features in a hierarchy. Features at the bottom of the hierarchy are bound to local 3D descriptors. Higher level features recursively encode probabilistic spatial configurations of more elementary features. The hierarchy is implemented in a Markov network. Detection is carried out by a belief propagation algorithm, which infers the pose of high-level features from local evidence and reinforces local evidence from globally consistent knowledge, effectively producing a likelihood for the pose of the object in the detection scene. We also present a simple learning algorithm that autonomously builds hierarchies from local object descriptors. We explain how to use our framework to estimate the pose of a known object in an unknown scene. Experiments demonstrate the robustness of hierarchies to input noise, viewpoint changes, and occlusions.
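A much-simplified sketch of the bottom-up inference step implied above: each observed local feature votes for the parent (object) pose through a learned relative pose, and the votes from all children are fused multiplicatively, in the spirit of messages sent to the parent node of a Markov network. Poses are reduced here to 2D grid positions purely to keep the example short; that reduction, and all parameter values, are assumptions.

```python
# Sketch: kernel voting from child features to a parent pose, fused multiplicatively.
import numpy as np

GRID = 64   # discretized pose space (assumed)

def child_message(detections, rel_offset, sigma=2.0):
    """Vote for the parent position from detected child-feature positions."""
    msg = np.zeros((GRID, GRID))
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    for (fy, fx) in detections:                     # observed feature positions
        py, px = fy + rel_offset[0], fx + rel_offset[1]
        msg += np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2))
    return msg / msg.sum()

def fuse(messages):
    """Combine child messages into a belief over the parent pose."""
    belief = np.ones((GRID, GRID))
    for m in messages:
        belief *= m + 1e-12
    return belief / belief.sum()

# np.unravel_index(np.argmax(belief), belief.shape) gives the most likely pose.
```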
International Conference on Development and Learning | 2009
Renaud Detry; Emre Baseski; Mila Popovic; Younes Touati; Norbert Krüger; Oliver Kroemer; Jan Peters; Justus H. Piater
This paper addresses the issue of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store all the knowledge that an agent has about grasping an object, in order to facilitate reasoning about grasping solutions and their achievability. The affordance representation consists of a continuous probability density function defined on the 6D gripper pose space (3D position and orientation) within an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, leading to grasp hypothesis densities. Grasp densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples from a grasp hypothesis density under various object poses. Grasp outcomes are used to learn grasp empirical densities, i.e. grasps that have been confirmed through experience. We show the result of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned from physical experience by a robot.
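A sketch of what a grasp hypothesis density could look like in code: demonstrated or visually suggested gripper poses are turned into a continuous density by kernel density estimation, which can then be sampled under the current object pose. To keep it short, poses are reduced to 3D positions in the object frame (the orientation half of the 6D pose is omitted); that simplification and the function names are assumptions.

```python
# Sketch: grasp hypothesis density via kernel density estimation over grasp points.
import numpy as np
from scipy.stats import gaussian_kde

def build_grasp_density(grasp_positions):
    """grasp_positions: (N, 3) object-relative grasp points from imitation/vision."""
    return gaussian_kde(np.asarray(grasp_positions).T)

def sample_grasps(density, object_rotation, object_translation, n=10):
    """Draw object-relative samples and map them into the current scene pose."""
    samples = density.resample(n).T                          # (n, 3) in object frame
    return samples @ object_rotation.T + object_translation  # world-frame grasp points
```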
Paladyn: Journal of Behavioral Robotics | 2011
Renaud Detry; Dirk Kraft; Oliver Kroemer; Leon Bodenhagen; Jan Peters; Norbert Krüger; Justus H. Piater
We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) which link object-relative grasp poses to their success probability. The underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot “play” with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording their outcomes. When a satisfactory amount of grasp data is available, an importance-sampling algorithm turns it into a grasp density. We evaluate our method in a largely autonomous learning experiment run on three objects with distinct shapes. The experiment shows how learning increases success rates. It also measures the success rate of grasps chosen to maximize the probability of success, given reaching constraints.
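A sketch of the refinement step just described: grasps sampled from the hypothesis density are executed and labelled, the successful ones are re-weighted by the inverse of the proposal density (importance sampling), and the weighted samples are turned into a new kernel density. Poses are again reduced to 3D grasp positions, and the exact weighting scheme is an illustrative assumption rather than the paper's estimator.

```python
# Sketch: importance-sampled refinement of a grasp density from execution outcomes.
import numpy as np
from scipy.stats import gaussian_kde

def refine_density(hypothesis_kde, executed_positions, outcomes):
    executed = np.asarray(executed_positions)           # (N, 3) grasps that were tried
    success = executed[np.asarray(outcomes, bool)]      # keep confirmed grasps only
    weights = 1.0 / hypothesis_kde(success.T)           # importance weights (1/proposal)
    weights /= weights.sum()
    return gaussian_kde(success.T, weights=weights)     # empirical grasp density
```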
European Conference on Computer Vision | 2008
Wei Du; Justus H. Piater
This paper presents a novel probabilistic approach to integrating multiple cues in visual tracking. Tracking in the different cues is performed by interacting processes. Each process is represented by a Hidden Markov Model, and these parallel processes are arranged in a chain topology. The resulting Linked Hidden Markov Models naturally allow the use of particle filters and Belief Propagation in a unified framework. In particular, a target is tracked in each cue by a particle filter, and the particle filters in different cues interact via a message-passing scheme. The general framework of our approach allows a customized combination of different cues in different situations, which is desirable from an implementation point of view. Our examples selectively integrate four visual cues: color, edges, motion, and contours. We demonstrate empirically that the ordering of the cues is nearly inconsequential, and that our approach is superior to other approaches such as Independent Integration and Hierarchical Integration in terms of flexibility and robustness.
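A compact sketch of the per-cue particle filters and the chain-structured coupling described above: each cue keeps its own particle set, and a neighbouring cue's state estimate re-weights those particles before resampling, playing the role of the message passed along the chain. The likelihood functions, the random-walk motion model, and the coupling strength are all assumptions.

```python
# Sketch: one particle filter per cue, coupled by re-weighting with a neighbour's estimate.
import numpy as np

class CueFilter:
    def __init__(self, likelihood, n=200, motion_std=2.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.particles = self.rng.normal(0, 10, (n, 2))    # 2D target position
        self.weights = np.full(n, 1.0 / n)
        self.likelihood = likelihood                        # cue-specific p(z | x)
        self.motion_std = motion_std

    def predict(self):
        """Random-walk motion model (assumed)."""
        self.particles += self.rng.normal(0, self.motion_std, self.particles.shape)

    def update(self, observation, neighbour_estimate=None, coupling=0.05):
        w = self.likelihood(self.particles, observation)
        if neighbour_estimate is not None:                  # message from the linked cue
            d2 = ((self.particles - neighbour_estimate) ** 2).sum(axis=1)
            w *= np.exp(-coupling * d2)
        self.weights = w / w.sum()
        idx = self.rng.choice(len(w), len(w), p=self.weights)   # resample
        self.particles = self.particles[idx]

    def estimate(self):
        return self.particles.mean(axis=0)
```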
Advanced Video and Signal Based Surveillance | 2005
Pierre F. Gabriel; Jean-Bernard Hayet; Justus H. Piater; Jacques Verly
This paper presents a new approach for tracking objects in complex situations such as people in a crowd or players on a soccer field. Each object in the image is represented by several interest points (IPs). These IPs are obtained using a color version of the Harris IP detector. Each IP is characterized by the local appearance (chromatic first-order local jet) of the object and by geometric parameters. We track objects by matching IPs from image to image based on the Mahalanobis distance. The approach is robust to occlusion. Performance is illustrated with several examples.
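A sketch of the frame-to-frame matching step implied above: each interest point carries a descriptor (appearance plus geometry), and points are matched to the previous frame by minimum Mahalanobis distance under a covariance matrix, with a gating threshold to reject outliers. The descriptor layout, covariance source, and gate value are illustrative assumptions.

```python
# Sketch: nearest-neighbour matching of interest-point descriptors by Mahalanobis distance.
import numpy as np

def mahalanobis_match(prev_desc, curr_desc, cov, gate=9.0):
    """Return {prev_index: curr_index} matches below the gating distance.

    prev_desc, curr_desc: (N, d) and (M, d) descriptor arrays; cov: (d, d) covariance.
    """
    cov_inv = np.linalg.inv(cov)
    matches = {}
    for i, p in enumerate(prev_desc):
        diff = curr_desc - p                               # (M, d) descriptor differences
        d2 = np.einsum('md,dk,mk->m', diff, cov_inv, diff) # squared Mahalanobis distances
        j = int(np.argmin(d2))
        if d2[j] < gate:                                   # chi-square style gate
            matches[i] = j
    return matches
```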