
Publication


Featured research published by Alessandro Pieropan.


international conference on robotics and automation | 2013

Functional object descriptors for human activity modeling

Alessandro Pieropan; Carl Henrik Ek; Hedvig Kjellström

The ability to learn from human demonstration is essential for robots in human environments. The activity models that the robot builds from observation must take both the human motion and the objects involved into account. Object models designed for this purpose should reflect the role of the object in the activity - its function, or affordances. The main contribution of this paper is to represent objects directly in terms of their interaction with human hands, rather than in terms of appearance. This enables the direct representation of object affordances/function, while being robust to intra-class differences in appearance. Object hypotheses are first extracted from a video sequence as tracks of associated image segments. The object hypotheses are encoded as strings, where the vocabulary corresponds to different types of interaction with human hands. The similarity between two such object descriptors can be measured using a string kernel. Experiments show these functional descriptors to capture differences and similarities in object affordances/function that are not represented by appearance.
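The string-descriptor comparison described in this abstract can be illustrated with a minimal p-spectrum string kernel. The interaction alphabet and object strings below are invented for illustration; they are not the paper's actual vocabulary or kernel variant:

```python
from collections import Counter

def spectrum(s, p):
    """Count every contiguous substring of length p in s."""
    return Counter(s[i:i + p] for i in range(len(s) - p + 1))

def string_kernel(a, b, p=2):
    """p-spectrum string kernel: inner product of substring counts."""
    ca, cb = spectrum(a, p), spectrum(b, p)
    return sum(ca[sub] * cb[sub] for sub in ca)

# Hypothetical interaction alphabet: G = grasped, M = moved,
# R = released, I = idle (no hand contact).
cup = "IGMMRI"
bottle = "IGMRII"
plate = "IIIIII"

print(string_kernel(cup, bottle))  # 4: similar hand-interaction history
print(string_kernel(cup, plate))   # 0: no shared interaction pattern
```

Objects with similar hand-interaction histories score high against each other regardless of appearance, which is the affordance-driven notion of similarity the paper measures with a string kernel.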


intelligent robots and systems | 2014

Audio-Visual Classification and Detection of Human Manipulation Actions

Alessandro Pieropan; Giampiero Salvi; Karl Pauwels; Hedvig Kjellström

Humans are able to merge information from multiple perceptual modalities and formulate a coherent representation of the world. Our thesis is that robots need to do the same in order to operate robustly and autonomously in an unstructured environment. It has also been shown in several fields that multiple sources of information can complement each other, overcoming the limitations of a single perceptual modality. Hence, in this paper we introduce a data set of actions that includes both visual data (RGB-D video and 6DOF object pose estimation) and acoustic data. We also propose a method for recognizing and segmenting actions from continuous audio-visual data. The proposed method is employed for extensive evaluation of the descriptive power of the two modalities, and we discuss how they can be used jointly to infer a coherent interpretation of the recorded action.


robot and human interactive communication | 2014

Unsupervised object exploration using context

Alessandro Pieropan; Hedvig Kjellström

In order for robots to function in unstructured environments in interaction with humans, they must be able to reason about the world in a semantically meaningful way. An essential capability is to segment the world into semantically plausible object hypotheses. In this paper we propose a general framework which can be used for reasoning about objects and their functionality in manipulation activities. Our system employs a hierarchical segmentation framework that extracts object hypotheses from RGB-D video. Motivated by cognitive studies on humans, our work leverages contextual information, e.g., that objects obey the laws of physics, to formulate object hypotheses from regions in a mathematically principled manner.


international conference on robotics and automation | 2015

Robust 3D tracking of unknown objects

Alessandro Pieropan; Niklas Bergström; Masatoshi Ishikawa; Hedvig Kjellström

Visual tracking of unknown objects is an essential task in robotic perception, of importance to a wide range of applications. In the general scenario, the robot has no full 3D model of the object beforehand, just the partial view of the object visible in the first video frame. A tracker with only this information will inevitably lose track of the object after occlusions or large out-of-plane rotations. The way to overcome this is to incrementally learn the appearances of new views of the object. However, this bootstrapping approach is sensitive to drift caused by occasional inclusion of the background in the model.


Advanced Robotics | 2016

Robust and adaptive keypoint-based object tracking

Alessandro Pieropan; Niklas Bergström; Masatoshi Ishikawa; Hedvig Kjellström

Object tracking is a fundamental ability for a robot; both manipulation and activity recognition rely on the robot being able to follow objects in the scene. This paper presents a tracker that adapts to changes in object appearance and is able to re-discover an object that was lost. At its core is a keypoint-based method that exploits the rigidity assumption: pairs of keypoints maintain the same relations over similarity transforms. Using a structured approach to learning, it is able to incorporate new appearances in its model for increased robustness. We show through quantitative and qualitative experiments the benefits of the proposed approach compared to the state of the art, even for objects that do not strictly follow the rigidity assumption.
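The rigidity assumption the tracker exploits (pairs of keypoints on a rigid object keep the same inter-point distance ratio under a similarity transform) can be turned into a simple pairwise consistency check. This is a toy sketch of that idea with invented names, not the paper's algorithm:

```python
import numpy as np

def consistent_pairs(prev_pts, curr_pts, tol=0.1):
    """Keep keypoint pairs whose inter-point distance ratio between
    frames matches the object's global scale change, as rigid pairs
    should under a similarity transform (rotation + uniform scale)."""
    n = len(prev_pts)
    ratios = []
    for i in range(n):
        for j in range(i + 1, n):
            d_prev = np.linalg.norm(prev_pts[i] - prev_pts[j])
            d_curr = np.linalg.norm(curr_pts[i] - curr_pts[j])
            ratios.append((i, j, d_curr / d_prev))
    # The median ratio estimates the global scale change; pairs far
    # from it likely contain a mismatched or non-rigid keypoint.
    scale = np.median([r for _, _, r in ratios])
    return [(i, j) for i, j, r in ratios if abs(r - scale) <= tol * scale]
```

Pairs involving a badly matched keypoint deviate from the median scale and are rejected, which is one way such a tracker can stay robust to outlier matches.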


ieee-ras international conference on humanoid robots | 2015

Estimating the deformability of elastic materials using optical flow and position-based dynamics

Püren Güler; Karl Pauwels; Alessandro Pieropan; Hedvig Kjellström; Danica Kragic

Knowledge of the physical properties of objects is essential in a wide range of robotic manipulation scenarios. A robot may not always be aware of such properties prior to interaction. If an object is incorrectly assumed to be rigid, it may exhibit unpredictable behavior when grasped. In this paper, we visually observe the behavior of an object the robot is interacting with, and use this observation as the basis for estimating its elastic deformability. This is estimated in a local region around the interaction point using a physics simulator. We use optical flow to estimate the parameters of a position-based dynamics simulation using meshless shape matching (MSM). MSM has been widely used in computer graphics due to its computational efficiency, which is also important for closed-loop control in robotics. In a controlled experiment we demonstrate that our method can qualitatively estimate the physical properties of objects with different degrees of deformability.
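The core step of meshless shape matching, which underlies the position-based dynamics simulation mentioned above, fits the best rigid transform of the rest shape to the current particle positions. A minimal sketch, using an SVD-based polar decomposition and illustrative variable names:

```python
import numpy as np

def msm_goal_positions(rest, deformed):
    """One meshless shape matching (MSM) step: find the best rigid
    fit of the rest shape to the deformed particle positions and
    return the goal positions the particles are pulled toward."""
    c0 = rest.mean(axis=0)          # rest-shape center of mass
    c = deformed.mean(axis=0)       # current center of mass
    q = rest - c0
    p = deformed - c
    A = p.T @ q                     # moment (covariance) matrix A_pq
    # Polar decomposition A = R S: extract the rotation R via SVD.
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    return c + q @ R.T              # goal: rotated rest shape at c
```

In position-based dynamics each particle is then moved a fraction (a stiffness parameter) toward its goal position per step; a low stiffness corresponds to a highly deformable material, which is the kind of parameter the paper estimates from optical flow.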


international conference on robotics and automation | 2016

Robust tracking of unknown objects through adaptive size estimation and appearance learning

Alessandro Pieropan; Niklas Bergström; Masatoshi Ishikawa; Danica Kragic; Hedvig Kjellström

This work employs an adaptive learning mechanism to track an unknown object using RGB-D cameras. We extend our previous framework to robustly track a wider range of arbitrarily shaped objects by adapting the model to the measured object size. The size is estimated as the object undergoes motion, which is done by fitting an inscribed cuboid to the measurements. The region spanned by this cuboid is used during tracking to determine whether or not new measurements should be added to the object model. In our experiments we test our tracker on a set of arbitrarily shaped objects and show the benefit of the proposed model: its ability to adapt to the object's shape leads to more robust tracking.
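The gating idea above (fit a cuboid to the measured object and only admit measurements inside it into the model) can be caricatured with an axis-aligned box whose bounds come from robust percentiles. This is a loose sketch of the gating logic under those simplifying assumptions, not the paper's inscribed-cuboid fit:

```python
import numpy as np

class CuboidModel:
    """Toy size-adaptive gate: fit an axis-aligned cuboid to the
    accumulated 3-D measurements (percentile bounds stand in for an
    inscribed-cuboid fit) and use it to decide whether a new
    measurement should be added to the object model."""

    def __init__(self, margin=0.05):
        self.points = []
        self.margin = margin

    def update(self, pts):
        """Grow the model with new measurements and refit the box."""
        self.points.extend(pts)
        arr = np.asarray(self.points)
        # Percentile bounds discard stray background points.
        self.lo = np.percentile(arr, 5, axis=0) - self.margin
        self.hi = np.percentile(arr, 95, axis=0) + self.margin

    def inside(self, p):
        """Gate: only measurements inside the cuboid join the model."""
        return bool(np.all(p >= self.lo) and np.all(p <= self.hi))
```

As the object moves and more of it is seen, the box grows with the measurements, which mimics how adapting the model to the estimated size keeps background points out of the appearance model.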


arXiv: Computer Vision and Pattern Recognition | 2016

Feature Descriptors for Tracking by Detection: a Benchmark.

Alessandro Pieropan; Mårten Björkman; Niklas Bergström; Danica Kragic


intelligent robots and systems | 2017

Estimating deformability of objects using meshless shape matching

Püren Güler; Alessandro Pieropan; Masatoshi Ishikawa; Danica Kragic


international conference on robotics and automation | 2014

A dataset of human manipulation actions

Alessandro Pieropan; Giampiero Salvi; Karl Pauwels; Hedvig Kjellström

Collaboration


Dive into Alessandro Pieropan's collaborations.

Top Co-Authors

Hedvig Kjellström
Royal Institute of Technology

Danica Kragic
Royal Institute of Technology

Karl Pauwels
Royal Institute of Technology

Giampiero Salvi
Royal Institute of Technology

Carl Henrik Ek
Royal Institute of Technology

Mårten Björkman
Royal Institute of Technology

Püren Güler
Royal Institute of Technology