
Publication


Featured research published by Johannes A. Stork.


International Conference on Robotics and Automation | 2010

People tracking with human motion predictions from social forces

Matthias Luber; Johannes A. Stork; Gian Diego Tipaldi; Kai Oliver Arras

For many tasks in populated environments, robots need to keep track of current and future motion states of people. Most approaches to people tracking make weak assumptions on human motion such as constant velocity or acceleration. But even over a short period, human behavior is more complex and influenced by factors such as the intended goal, other people, objects in the environment, and social rules. This motivates the use of more sophisticated motion models for people tracking especially since humans frequently undergo lengthy occlusion events. In this paper, we consider computational models developed in the cognitive and social science communities that describe individual and collective pedestrian dynamics for tasks such as crowd behavior analysis. In particular, we integrate a model based on a social force concept into a multi-hypothesis target tracker. We show how the refined motion predictions translate into more informed probability distributions over hypotheses and finally into a more robust tracking behavior and better occlusion handling. In experiments in indoor and outdoor environments with data from a laser range finder, the social force model leads to more accurate tracking with up to two times fewer data association errors.
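
The social-force prediction step can be sketched in a few lines of Python: each tracked person is attracted toward an assumed goal and repelled by nearby people, and integrating the resulting force gives the short-horizon motion prediction used by the tracker. All constants and helper names below are illustrative choices, not the paper's calibrated parameters.

```python
import numpy as np

def social_force(pos, vel, goal, others, v_des=1.3, tau=0.5, A=2.0, B=0.3):
    """Net force on one pedestrian: relaxation toward a desired velocity
    aimed at the goal, plus exponential repulsion from nearby people."""
    e_goal = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    force = (v_des * e_goal - vel) / tau          # goal attraction
    for other in others:                          # social repulsion
        d_vec = pos - other
        d = np.linalg.norm(d_vec) + 1e-9
        force += A * np.exp(-d / B) * (d_vec / d)
    return force

def predict(pos, vel, goal, others, dt=0.1, steps=10):
    """Euler-integrate the force model for a short-horizon prediction."""
    pos, vel = pos.astype(float).copy(), vel.astype(float).copy()
    for _ in range(steps):
        vel = vel + dt * social_force(pos, vel, goal, others)
        pos = pos + dt * vel
    return pos
```

In a multi-hypothesis tracker, a prediction of this kind replaces the constant-velocity motion model when scoring data associations, which is what yields the improved occlusion handling described above.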


Robot and Human Interactive Communication | 2012

Audio-based human activity recognition using Non-Markovian Ensemble Voting

Johannes A. Stork; Luciano Spinello; Jens Silva; Kai Oliver Arras

Human activity recognition is a key component for socially enabled robots to effectively and naturally interact with humans. In this paper we exploit the fact that many human activities produce characteristic sounds from which a robot can infer the corresponding actions. We propose a novel recognition approach called Non-Markovian Ensemble Voting (NEV) able to classify multiple human activities in an online fashion without the need for silence detection or audio stream segmentation. Moreover, the method can deal with activities that are extended over undefined periods in time. In a series of experiments in real reverberant environments, we are able to robustly recognize 22 different sounds that correspond to a number of human activities in a bathroom and kitchen context. Our method outperforms several established classification techniques.
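
A minimal sketch of the ensemble-voting idea, assuming per-frame classifiers that each map audio features to an activity label: votes are pooled over a sliding window of frames, so a decision is available at every frame without segmenting the stream. The class and parameter names are hypothetical; the paper's NEV method is more elaborate.

```python
from collections import Counter, deque

class EnsembleVoter:
    """Online ensemble voting over an audio frame stream. Per-frame
    classifier votes are pooled over a sliding window, so no explicit
    silence detection or stream segmentation is needed."""

    def __init__(self, frame_classifiers, window=20):
        self.classifiers = frame_classifiers   # each: features -> label
        self.votes = deque(maxlen=window)      # recent per-frame votes

    def update(self, frame_features):
        # Every classifier in the ensemble votes on the current frame.
        self.votes.append([c(frame_features) for c in self.classifiers])
        # Pool votes over the whole window (non-Markovian: the decision
        # depends on a history of frames, not just the last one).
        pooled = Counter(v for frame in self.votes for v in frame)
        label, count = pooled.most_common(1)[0]
        return label, count / sum(pooled.values())
```

Because the window retains past votes, an activity that extends over an undefined period keeps accumulating evidence instead of being cut at arbitrary segment boundaries.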


International Conference on Robotics and Automation | 2014

Combinatorial optimization for hierarchical contact-level grasping

Kaiyu Hang; Johannes A. Stork; Florian T. Pokorny; Danica Kragic

We address the problem of generating force-closed point contact grasps on complex surfaces and model it as a combinatorial optimization problem. Using a multilevel refinement metaheuristic, we maximize the quality of a grasp subject to a reachability constraint by recursively forming a hierarchy of increasingly coarser optimization problems. A grasp is initialized at the top of the hierarchy and then locally refined until convergence at each level. Our approach efficiently addresses the high dimensional problem of synthesizing stable point contact grasps while resulting in stable grasps from arbitrary initial configurations. Compared to a sampling-based approach, our method yields grasps with higher grasp quality. Empirical results are presented for a set of different objects. We investigate the number of levels in the hierarchy, the computational complexity, and the performance relative to a random sampling baseline approach.
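
The multilevel refinement pattern can be sketched for the simplest case of choosing a single contact point: coarsen the candidate set, solve the coarsest level exhaustively, then refine only locally on each finer level. The subsampling-based coarsening and the parameters below are hypothetical stand-ins for the paper's hierarchy, which optimizes full grasps under a reachability constraint.

```python
import numpy as np

def build_hierarchy(points, levels=3):
    """Coarsen a contact-candidate set by subsampling (a simple
    stand-in for the paper's coarsening step)."""
    hierarchy = [points]
    for _ in range(levels - 1):
        hierarchy.append(hierarchy[-1][::2])  # keep every other point
    return hierarchy[::-1]                    # coarsest level first

def multilevel_optimize(points, quality, k_neighbors=6):
    """Solve the coarsest level exhaustively, then at each finer level
    refine only among the candidates nearest the current solution."""
    hierarchy = build_hierarchy(points)
    best = max(hierarchy[0], key=quality)
    for level in hierarchy[1:]:
        dists = np.linalg.norm(level - best, axis=1)
        for i in np.argsort(dists)[:k_neighbors]:
            if quality(level[i]) > quality(best):
                best = level[i]
    return best
```

The payoff is that only the small coarsest problem is searched globally; every finer level touches a constant number of candidates, which is what keeps the high-dimensional grasp synthesis tractable.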


Intelligent Robots and Systems | 2014

Hierarchical Fingertip Space for Multi-fingered Precision Grasping

Kaiyu Hang; Johannes A. Stork; Danica Kragic

Dexterous in-hand manipulation of objects benefits from the ability of a robot system to generate precision grasps. In this paper, we propose the concept of Fingertip Space and its use for precision grasp synthesis. Fingertip Space is a representation that takes into account both the local geometry of the object surface and the fingertip geometry. As such, it is directly applicable to object point cloud data and establishes a basis for the grasp search space. We propose a model for hierarchical encoding of the Fingertip Space that enables multilevel refinement for efficient grasp synthesis. The proposed method works at the grasp contact level while neglecting neither object shape nor hand kinematics. Experimental evaluation is performed for the Barrett hand, also considering noisy and incomplete point cloud data.


Intelligent Robots and Systems | 2013

Integrated motion and clasp planning with virtual linking

Johannes A. Stork; Florian T. Pokorny; Danica Kragic

In this work, we address the problem of simultaneous clasp and motion planning on unknown objects with holes. Clasping an object enables a rich set of activities such as dragging, toting, pulling and hauling, which can be applied to both soft and rigid objects. To this end, we define a virtual linking measure which characterizes the spatial relation between the robot hand and the object. The measure utilizes a set of closed curves arising from an approximately shortest basis of the object's first homology group. We define task spaces to perform collision-free motion planning with respect to multiple prioritized objectives using a sampling-based planning method. The approach is tested in simulation using different robot hands and various real-world objects.
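
The idea of measuring how a hand curve "links" an object loop can be illustrated with the classical Gauss linking integral between two closed polygonal curves; a value near plus or minus one indicates the curves are linked. This is an illustrative numerical sketch, not the paper's virtual linking measure.

```python
import numpy as np

def gauss_linking(curve_a, curve_b):
    """Midpoint-rule approximation of the Gauss linking integral between
    two closed polygonal curves given as (N, 3) arrays of vertices."""
    def segments(c):
        d = np.roll(c, -1, axis=0) - c       # segment vectors
        m = c + 0.5 * d                      # segment midpoints
        return d, m
    da, ma = segments(curve_a)
    db, mb = segments(curve_b)
    total = 0.0
    for i in range(len(ma)):
        r = ma[i] - mb                       # midpoint differences
        cross = np.cross(da[i], db)
        total += np.sum(np.einsum('ij,ij->i', cross, r)
                        / np.linalg.norm(r, axis=1) ** 3)
    return total / (4 * np.pi)
```

A linking-style quantity of this kind is attractive for planning because it is a global, topological signal: it stays informative under noise and partial shape knowledge where local contact analysis does not.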


IEEE-RAS International Conference on Humanoid Robots | 2013

A topology-based object representation for clasping, latching and hooking

Johannes A. Stork; Florian T. Pokorny; Danica Kragic

We present a loop-based topological object representation for objects with holes. The representation is used to model object parts suitable for grasping, e.g. handles, and it incorporates local volume information about these parts. Furthermore, we present a grasp synthesis framework that utilizes this representation for synthesizing caging grasps that are robust under measurement noise. The approach is complementary to a local contact-based force-closure analysis as it depends on global topological features of the object. We perform an extensive evaluation with four robotic hands on synthetic data. Additionally, we provide real-world experiments using a Kinect sensor on two robotic platforms: a Schunk dexterous hand attached to a Kuka robot arm as well as a Nao humanoid robot. In the case of the Nao platform, we provide initial experiments showing that our approach can be used to plan whole-arm hooking as well as caging grasps involving only one hand.


International Conference on Robotics and Automation | 2017

A Framework for Optimal Grasp Contact Planning

Kaiyu Hang; Johannes A. Stork; Nancy S. Pollard; Danica Kragic

We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates a framework for contact-level grasping as a path finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps and in each step along a path grasps are removed. For this, we introduce and formally characterize search space structure and cost functions under which minimal cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for resulting grasps. We exemplify our formulation with point-contact grasping for which we define domain specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and resultant effective branching factor.
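
The supercontact search can be sketched as follows: the start state contains all candidate contacts, each move removes one contact, and path cost accumulates the loss in grasp quality; uniform-cost search then returns a k-contact grasp. The cost definition below is a simplified stand-in for the paper's cost and heuristic functions (the heuristic is omitted here, so this is the uniform-cost baseline).

```python
import heapq

def best_grasp(contacts, quality, k):
    """Best-first search in the space of supercontact grasps. States are
    sets of contact indices; the start state contains all candidates and
    each expansion removes one contact. Path cost accumulates the drop
    in the quality function; search stops at the first k-contact goal."""
    start = frozenset(range(len(contacts)))
    frontier = [(0.0, start)]                 # (accumulated loss, state)
    seen = set()
    while frontier:
        loss, state = heapq.heappop(frontier)
        if state in seen:
            continue
        seen.add(state)
        if len(state) == k:                   # goal: a k-contact grasp
            return [contacts[i] for i in sorted(state)]
        q_here = quality([contacts[i] for i in state])
        for i in state:                       # expand: remove one contact
            child = state - {i}
            q_child = quality([contacts[j] for j in child])
            heapq.heappush(frontier, (loss + max(0.0, q_here - q_child), child))
    return None
```

Against this baseline, an admissible heuristic on the remaining quality loss is what prunes the exponential state space and gives the bounded-suboptimality guarantees described above.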


International Conference on Robotics and Automation | 2016

Probabilistic consolidation of grasp experience

Yasemin Bekiroglu; Andreas C. Damianou; Renaud Detry; Johannes A. Stork; Danica Kragic; Carl Henrik Ek

We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot was trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.
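
The paper's model is non-linear, but the core operation it supports can be illustrated with a much simpler linear-Gaussian stand-in: condition a joint distribution over grasp variables (e.g. tactile features, gripper pose, stability) on whichever subset is observed, and read predictions for the rest off the conditional. The function below is that standard Gaussian conditioning, not the paper's method.

```python
import numpy as np

def condition_gaussian(mu, Sigma, obs_idx, obs_val):
    """Condition a joint Gaussian N(mu, Sigma) on observed components.
    obs_idx lists observed dimensions; returns the conditional mean and
    covariance of the remaining (free) dimensions."""
    idx = np.arange(len(mu))
    a = np.setdiff1d(idx, obs_idx)            # free variables
    b = np.asarray(obs_idx)                   # observed variables
    Saa = Sigma[np.ix_(a, a)]
    Sab = Sigma[np.ix_(a, b)]
    Sbb = Sigma[np.ix_(b, b)]
    K = Sab @ np.linalg.inv(Sbb)              # regression coefficients
    mu_cond = mu[a] + K @ (obs_val - mu[b])
    Sigma_cond = Saa - K @ Sab.T              # posterior uncertainty
    return mu_cond, Sigma_cond
```

The conditional covariance is what lets such a model "express confidence in estimates": a wide posterior flags a grasp correction the robot should distrust.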


Robot and Human Interactive Communication | 2017

Non-parametric spatial context structure learning for autonomous understanding of human environments

Akshaya Thippur; Johannes A. Stork; Patric Jensfelt

Autonomous scene understanding by object classification today crucially depends on the accuracy of appearance-based robotic perception, which is prone to difficulties in object detection arising from unfavourable lighting conditions and vision-unfriendly object properties. In our work, we propose a spatial-context-based system which infers object classes utilising solely structural information captured from the scenes, to aid traditional perception systems. Our system operates on novel spatial features (IFRC) that are robust to noisy object detections; it also caters to on-the-fly modification of learned knowledge, improving performance with practice. IFRC features are aligned with human expression of 3D space, thereby facilitating easy human-robot interaction and hence simpler supervised learning. We tested our spatial-context-based system and conclude that it can capture spatio-structural information to perform joint object classification, not only acting as a vision aid but sometimes even performing on par with appearance-based robotic vision.


Intelligent Robots and Systems | 2015

Learning Predictive State Representations for planning

Johannes A. Stork; Carl Henrik Ek; Danica Kragic

Predictive State Representations (PSRs) allow modeling of dynamical systems directly in observables and without relying on latent variable representations. A problem that arises from learning PSRs is that it is often hard to attribute semantic meaning to the learned representation. This makes generalization and planning in PSRs challenging. In this paper, we extend PSRs and introduce the notion of PSRs that include prior information (P-PSRs) to learn representations which are suitable for planning and interpretation. By learning a low-dimensional embedding of test features, we map belief points with similar semantics to the same region of a subspace. This facilitates better generalization for planning and semantic interpretation of the learned representation. Specifically, we show how to overcome the training sample bias and introduce feature selection such that the resulting representation emphasizes observables related to the planning task. We show that our P-PSRs result in qualitatively meaningful representations and present quantitative results that indicate improved suitability for planning.
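
The generic spectral step behind PSR learning can be sketched as follows: a matrix of test-given-history probabilities is factored with a truncated SVD, embedding each history as a low-dimensional predictive state. The P-PSR extensions (priors over tests, feature selection) are not shown, and the function name is a hypothetical illustration.

```python
import numpy as np

def learn_psr_embedding(P_TH, rank):
    """Spectral embedding step of PSR learning. P_TH[i, j] is the
    probability of test i succeeding given history j; a truncated SVD
    yields a rank-dim predictive state for every history column."""
    U, s, Vt = np.linalg.svd(P_TH, full_matrices=False)
    U_r = U[:, :rank]              # low-dim basis over test features
    states = U_r.T @ P_TH          # rank-dim predictive state per history
    return U_r, states
```

Shaping which test features enter `P_TH`, as P-PSRs do, is what makes the resulting subspace align with the planning task instead of with whatever the training distribution happened to emphasize.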

Collaboration


Dive into Johannes A. Stork's collaborations.

Top Co-Authors

Danica Kragic, Royal Institute of Technology
Kaiyu Hang, Royal Institute of Technology
Carl Henrik Ek, Royal Institute of Technology
Yasemin Bekiroglu, Royal Institute of Technology
Alejandro Marzinotto, Royal Institute of Technology
Mia Kokic, Royal Institute of Technology