Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pietro Zanuttigh is active.

Publication


Featured research published by Pietro Zanuttigh.


International Conference on Image Processing | 2014

Hand gesture recognition with Leap Motion and Kinect devices

Giulio Marin; Fabio Dominio; Pietro Zanuttigh

The recent introduction of novel acquisition devices like the Leap Motion and the Kinect makes it possible to obtain a very informative description of the hand pose that can be exploited for accurate gesture recognition. This paper proposes a novel hand gesture recognition scheme explicitly targeted to Leap Motion data. An ad-hoc feature set based on the positions and orientations of the fingertips is computed and fed into a multi-class SVM classifier in order to recognize the performed gestures. A set of features is also extracted from the depth data computed from the Kinect and combined with the Leap Motion ones in order to improve the recognition performance. Experimental results present a comparison between the accuracy that can be obtained from the two devices on a subset of the American Manual Alphabet and show how, by combining the two feature sets, it is possible to achieve a very high accuracy in real time.
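
As a rough illustration of the kind of pipeline described above, the sketch below builds a simple fingertip-based descriptor and feeds it to a multi-class SVM. The specific features, dimensions and training data are hypothetical stand-ins, not the paper's actual feature set.

```python
# Minimal sketch of a fingertip-feature + multi-class SVM pipeline (illustrative only).
import numpy as np
from sklearn.svm import SVC

def fingertip_features(tips, palm_center, palm_normal):
    """Build a simple descriptor from 5 fingertip positions (3D) relative to the palm."""
    tips = np.asarray(tips, dtype=float)            # shape (5, 3)
    center = np.asarray(palm_center, dtype=float)   # shape (3,)
    normal = np.asarray(palm_normal, dtype=float)   # shape (3,)
    dists = np.linalg.norm(tips - center, axis=1)   # distances from the palm center
    elevations = (tips - center) @ normal           # signed distances from the palm plane
    return np.concatenate([dists, elevations])      # 10-D feature vector

# Hypothetical training data: one feature vector per gesture sample, 10 gesture classes.
X_train = np.random.rand(100, 10)
y_train = np.arange(100) % 10

clf = SVC(kernel="rbf", C=10.0, gamma="scale")      # multi-class SVM (one-vs-one by default)
clf.fit(X_train, y_train)

# Example single frame: 5 hypothetical fingertip positions, palm at origin, normal along z.
sample = fingertip_features(np.random.rand(5, 3), [0, 0, 0], [0, 0, 1])
print(clf.predict(sample.reshape(1, -1)))
```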


Archive | 2012

Time-of-Flight Cameras and Microsoft Kinect(TM)

Carlo Dal Mutto; Pietro Zanuttigh; Guido M. Cortelazzo

Time-of-Flight Cameras and Microsoft Kinect closely examines the technology and general characteristics of time-of-flight range cameras, and outlines the best methods for maximizing the data captured by these devices. This book also analyzes the calibration issues that some end-users may face when using this type of camera for research, and suggests methods for improving the real-time 3D reconstruction of dynamic and static scenes. Time-of-Flight Cameras and Microsoft Kinect is intended for researchers and advanced-level students as a reference guide for time-of-flight cameras. Practitioners working in a related field will also find the book valuable.


Pattern Recognition Letters | 2014

Combining multiple depth-based descriptors for hand gesture recognition

Fabio Dominio; Mauro Donadeo; Pietro Zanuttigh

Highlights: the hand is reliably extracted from the scene by jointly using color and depth data; features extracted from depth data allow reliable hand gesture recognition; multiple features capturing different properties of the gestures are combined; the proposed approach obtains a very high accuracy in real time.

Depth data acquired by current low-cost real-time depth cameras provide a more informative description of the hand pose that can be exploited for gesture recognition purposes. Following this rationale, this paper introduces a novel hand gesture recognition scheme based on depth information. The hand is firstly extracted from the acquired data and divided into palm and finger regions. Then four different sets of feature descriptors are extracted, accounting for different cues like the distances of the fingertips from the hand center and from the palm plane, the curvature of the hand contour and the geometry of the palm region. Finally, a multi-class SVM classifier is employed to recognize the performed gestures. Experimental results demonstrate the ability of the proposed scheme to achieve a very high accuracy on both standard datasets and on more complex ones acquired for experimental evaluation. The current implementation is also able to run in real time.
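
A minimal sketch of the multi-descriptor idea, assuming four stub extractors standing in for the distance, elevation, curvature and palm-geometry descriptors mentioned above; the real descriptors and their dimensions differ from these placeholders.

```python
# Hedged sketch: concatenate several per-gesture feature families and classify with an SVM.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def distance_features(depth_hand):   # fingertip distances from the hand center (stub)
    return np.random.rand(5)
def elevation_features(depth_hand):  # fingertip distances from the palm plane (stub)
    return np.random.rand(5)
def curvature_features(depth_hand):  # curvature histogram of the hand contour (stub)
    return np.random.rand(8)
def palm_features(depth_hand):       # geometry of the palm region (stub)
    return np.random.rand(4)

def combined_descriptor(depth_hand):
    # Concatenate the four descriptor families into a single feature vector.
    return np.concatenate([f(depth_hand) for f in
                           (distance_features, elevation_features,
                            curvature_features, palm_features)])

X = np.stack([combined_descriptor(None) for _ in range(60)])
y = np.arange(60) % 6                                   # 6 hypothetical gesture classes
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:3]))
```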


Multimedia Tools and Applications | 2016

Hand gesture recognition with jointly calibrated Leap Motion and depth sensor

Giulio Marin; Fabio Dominio; Pietro Zanuttigh

Novel 3D acquisition devices like depth cameras and the Leap Motion have recently reached the market. Depth cameras make it possible to obtain a complete 3D description of the framed scene, while the Leap Motion sensor is a device explicitly targeted at hand gesture recognition and provides only a limited set of relevant points. This paper shows how to jointly exploit the two types of sensors for accurate gesture recognition. An ad-hoc solution for the joint calibration of the two devices is firstly presented. Then a set of novel feature descriptors is introduced both for the Leap Motion and for depth data. Various schemes based on the distances of the hand samples from the centroid, on the curvature of the hand contour and on the convex hull of the hand shape are employed, and the use of Leap Motion data to aid feature extraction is also considered. The proposed feature sets are fed to two different classifiers, one based on multi-class SVMs and one exploiting Random Forests. Different feature selection algorithms have also been tested in order to reduce the complexity of the approach. Experimental results show that a very high accuracy can be obtained with the proposed method. The current implementation is also able to run in real time.
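
The following sketch illustrates, on synthetic data, the classifier-comparison step described above: a joint Leap Motion plus depth feature vector is reduced with a simple feature-selection step and evaluated with both a multi-class SVM and a Random Forest. Feature dimensions and the selection method are illustrative assumptions, not the paper's choices.

```python
# Compare two classifier families on a combined (synthetic) Leap Motion + depth feature set.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

X_leap  = np.random.rand(200, 12)   # hypothetical Leap Motion descriptors
X_depth = np.random.rand(200, 20)   # hypothetical depth descriptors
X = np.hstack([X_leap, X_depth])
y = np.arange(200) % 10             # 10 hypothetical gesture classes

X_sel = SelectKBest(f_classif, k=16).fit_transform(X, y)   # simple feature-selection step

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random Forest", RandomForestClassifier(n_estimators=100))]:
    scores = cross_val_score(clf, X_sel, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```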


International Conference on Multimedia and Expo | 2015

Performance evaluation of the 1st and 2nd generation Kinect for multimedia applications

S. Zennaro; Matteo Munaro; Simone Milani; Pietro Zanuttigh; A. Bernardi; Stefano Ghidoni; Emanuele Menegatti

Microsoft Kinect had a key role in the development of consumer depth sensors, being the device that brought depth acquisition to the mass market. Despite the success of this sensor, with the introduction of the second generation Microsoft completely changed the technology behind the sensor from structured light to Time-of-Flight. This paper presents a comparison of the data provided by the first and second generation Kinect in order to explain the improvements obtained with the switch in technology. After a detailed analysis of the accuracy of the two sensors under different conditions, two sample applications, i.e., 3D reconstruction and people tracking, are presented and used to compare the performance of the two sensors.
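
One simple way to quantify the kind of accuracy difference discussed above is to scan a flat target and measure the out-of-plane residuals of a least-squares plane fit. The sketch below does this on synthetic data; the noise levels are invented and only stand in for the two sensor generations.

```python
# Hedged sketch: compare depth noise of two sensors via plane-fit residuals on a flat target.
import numpy as np

def plane_residual_std(points):
    """Fit a plane z = a*x + b*y + c by least squares and return the residual std."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return residuals.std()

# Synthetic "wall" scans at 2 m with different noise levels standing in for the two Kinects.
xy = np.random.rand(5000, 2)
wall_v1 = np.c_[xy, 2.0 + np.random.normal(0, 0.010, 5000)]  # structured-light-like noise
wall_v2 = np.c_[xy, 2.0 + np.random.normal(0, 0.002, 5000)]  # ToF-like noise
print("Kinect v1-like residual std:", plane_residual_std(wall_v1))
print("Kinect v2-like residual std:", plane_residual_std(wall_v2))
```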


IEEE Journal of Selected Topics in Signal Processing | 2012

Fusion of Geometry and Color Information for Scene Segmentation

Carlo Dal Mutto; Pietro Zanuttigh; Guido M. Cortelazzo

Scene segmentation is a well-known problem in computer vision traditionally tackled by exploiting only the color information from a single scene view. Recent hardware and software developments make it possible to estimate scene geometry in real time and open the way for new scene segmentation approaches based on the fusion of color and depth data. This paper follows this rationale and proposes a novel segmentation scheme where multidimensional vectors are used to jointly represent color and depth data, and normalized cuts spectral clustering is applied to them in order to segment the scene. The critical issue of how to balance the two sources of information is solved by an automatic procedure based on an unsupervised metric for the segmentation quality. An extension of the proposed approach based on the exploitation of both images in stereo vision systems is also proposed. Different acquisition setups, like time-of-flight cameras, the Microsoft Kinect device and stereo vision systems, have been used for the experimental validation. A comparison of the effectiveness of the different depth imaging systems for segmentation purposes is also presented. Experimental results show how the proposed algorithm outperforms scene segmentation algorithms based on geometry or color data alone, as well as other approaches that exploit both cues.
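
A minimal sketch of the joint color-and-geometry representation described above, using scikit-learn's SpectralClustering as a stand-in for the normalized-cuts step. The 6-D per-pixel vectors and the balancing weight follow the abstract, while the clustering backend and the fixed number of segments are assumptions.

```python
# Illustrative joint color + geometry segmentation via spectral clustering on 6-D vectors.
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_rgbd(color, points3d, lam=1.0, n_segments=4):
    """color: (H, W, 3) values in [0, 1]; points3d: (H, W, 3) metric coordinates."""
    h, w, _ = color.shape
    # Each pixel becomes [r, g, b, lam*x, lam*y, lam*z]; lam balances color vs. geometry.
    feats = np.concatenate([color.reshape(-1, 3),
                            lam * points3d.reshape(-1, 3)], axis=1)
    labels = SpectralClustering(n_clusters=n_segments,
                                affinity="nearest_neighbors",
                                n_neighbors=10).fit_predict(feats)
    return labels.reshape(h, w)

# Tiny synthetic scene so the example stays fast.
color = np.random.rand(24, 32, 3)
points3d = np.random.rand(24, 32, 3)
print(segment_rgbd(color, points3d, lam=0.5).shape)
```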


IEEE Transactions on Image Processing | 2013

Scalable Coding of Depth Maps With R-D Optimized Embedding

Reji Mathew; David Taubman; Pietro Zanuttigh

Recent work on depth map compression has revealed the importance of incorporating a description of discontinuity boundary geometry into the compression scheme. We propose a novel compression strategy for depth maps that incorporates geometry information while achieving the goals of scalability and embedded representation. Our scheme involves two separate image pyramid structures, one for breakpoints and the other for sub-band samples produced by a breakpoint-adaptive transform. Breakpoints capture geometric attributes and are amenable to scalable coding. We develop a rate-distortion optimization framework for determining the presence and precision of breakpoints in the pyramid representation. We employ a variation of the EBCOT scheme to produce embedded bit-streams for both the breakpoint and sub-band data. Compared to JPEG 2000, our proposed scheme enables the same scalability features while achieving substantially improved rate-distortion performance in the higher bit-rate range and comparable performance at lower rates.
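
The core of any rate-distortion optimized decision of this kind is a Lagrangian comparison of candidate encodings. The toy sketch below shows that decision rule in isolation; the candidate rates and distortions are invented numbers, not values from the proposed codec.

```python
# Toy Lagrangian rate-distortion selection: pick the candidate minimizing D + lambda * R.
def rd_select(candidates, lam):
    """candidates: list of (name, rate_bits, distortion). Returns the minimum-cost entry."""
    return min(candidates, key=lambda c: c[2] + lam * c[1])

candidates = [
    ("no breakpoint",         0.0, 9.0),   # cheapest option, largest distortion
    ("coarse breakpoint",     3.0, 2.5),
    ("fine breakpoint",       6.0, 1.0),
    ("full-precision break", 10.0, 0.6),
]
for lam in (0.1, 0.5, 2.0):   # small lambda favors quality, large lambda favors low rate
    print(lam, "->", rd_select(candidates, lam)[0])
```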


International Conference on Multimedia and Expo | 2011

Efficient depth map compression exploiting segmented color data

Simone Milani; Pietro Zanuttigh; Marco Zamarin; Søren Forchhammer

3D video representations usually associate with each view a depth map carrying the corresponding geometric information. Many compression schemes have been proposed for multi-view video and for depth data, but the exploitation of the correlation between the two representations to enhance compression performance is still an open research issue. This paper presents a novel compression scheme that exploits a segmentation of the color data to predict the shape of the different surfaces in the depth map. Each segment is then approximated with a parameterized plane. If the approximation is sufficiently accurate for the target bit rate, the surface coefficients are compressed and transmitted; otherwise, the region is coded using a standard H.264/AVC Intra coder. Experimental results show that the proposed scheme outperforms the standard H.264/AVC Intra codec on depth data and can be effectively included in multi-view plus depth compression schemes.
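
A hedged sketch of the per-segment decision outlined above: fit a plane to the depth samples of a color segment and keep the three plane coefficients only if the fit is accurate enough, otherwise fall back to standard intra coding. The threshold and the synthetic data are illustrative.

```python
# Per-segment plane approximation with a fallback decision (illustrative only).
import numpy as np

def plane_or_fallback(depth, mask, max_rmse=0.02):
    """depth: (H, W) map; mask: boolean (H, W) segment mask.
    Returns ('plane', coeffs) when the planar fit is accurate enough, else ('intra', None)."""
    ys, xs = np.nonzero(mask)
    A = np.c_[xs, ys, np.ones(len(xs))]
    coeffs, *_ = np.linalg.lstsq(A, depth[ys, xs], rcond=None)   # d = a*x + b*y + c
    rmse = np.sqrt(np.mean((A @ coeffs - depth[ys, xs]) ** 2))
    return ("plane", coeffs) if rmse <= max_rmse else ("intra", None)

# Synthetic example: a planar segment is summarized by 3 coefficients, a noisy one falls back.
h, w = 64, 64
xx, yy = np.meshgrid(np.arange(w), np.arange(h))
depth = 0.001 * xx + 0.002 * yy + 1.0
mask = np.zeros((h, w), bool)
mask[8:40, 8:40] = True
print(plane_or_fallback(depth, mask))                                     # -> plane
print(plane_or_fallback(depth + np.random.normal(0, 0.1, (h, w)), mask))  # -> intra
```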


Conference on Visual Media Production | 2009

A Novel Interpolation Scheme for Range Data with Side Information

Valeria Garro; Carlo Dal Mutto; Pietro Zanuttigh; Guido M. Cortelazzo

Time-of-Flight matrix sensors currently available allow for the acquisition of range maps at video rate but usually have a limited resolution. At the same time, high-resolution color cameras are widely available. This makes methods able to exploit the combined use of ToF sensors and color cameras to obtain high-resolution range maps highly desirable. This work presents a novel interpolation technique that exploits side information from a standard color camera to increase the resolution of range maps. A segmented version of the high-resolution color image is used in order to identify the main objects in the scene, while a novel surface prediction scheme is used to interpolate the available depth samples. Critical issues like the joint calibration of the two devices and the unreliability of the acquired data have also been taken into account with ad-hoc solutions. The performance of the proposed scheme has been verified with both synthetic and real-world data, and experimental results show that the proposed method obtains a more accurate interpolation with sharper edges compared with standard approaches.
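
The sketch below illustrates the general idea of segmentation-guided upsampling: each high-resolution pixel is interpolated only from low-resolution depth samples belonging to the same color segment, which preserves sharp edges. The nearest-sample rule used here is a simplification, not the paper's surface prediction scheme.

```python
# Segmentation-guided range-map upsampling (simplified nearest-sample variant).
import numpy as np

def upsample_depth(depth_lr, seg_hr, scale):
    """depth_lr: (h, w) low-res depth; seg_hr: (h*scale, w*scale) segment labels."""
    H, W = seg_hr.shape
    ys_lr, xs_lr = np.meshgrid(np.arange(depth_lr.shape[0]),
                               np.arange(depth_lr.shape[1]), indexing="ij")
    samp_y = ys_lr.ravel() * scale            # sample positions on the high-res grid
    samp_x = xs_lr.ravel() * scale
    samp_d = depth_lr.ravel()
    samp_seg = seg_hr[samp_y, samp_x]

    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            same = samp_seg == seg_hr[y, x]   # candidates lying in the same segment
            cand_y, cand_x, cand_d = samp_y[same], samp_x[same], samp_d[same]
            if len(cand_d) == 0:              # no sample in this segment: fall back to all
                cand_y, cand_x, cand_d = samp_y, samp_x, samp_d
            nearest = np.argmin((cand_y - y) ** 2 + (cand_x - x) ** 2)
            out[y, x] = cand_d[nearest]
    return out

depth_lr = np.random.rand(8, 8)
seg_hr = np.random.randint(0, 3, size=(32, 32))
print(upsample_depth(depth_lr, seg_hr, scale=4).shape)
```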


Global Communications Conference | 2010

Autonomous discovery, localization and recognition of smart objects through WSN and image features

Emanuele Menegatti; Matteo Danieletto; Marco Mina; Alberto Pretto; Andrea Bardella; Stefano Zanconato; Pietro Zanuttigh; Andrea Zanella

This paper presents a framework that enables the interaction of robotic systems and wireless sensor network technologies for discovering, localizing and recognizing a number of smart objects (SOs) placed in an unknown environment. Starting with no a priori knowledge of the environment, the robot progressively builds a virtual reconstruction of the surroundings in three phases: first, it discovers the SOs located in the area by using radio communication; second, it performs a rough localization of the SOs by using a range-only SLAM algorithm based on RSSI range measurements; third, it refines the localization of the SOs by comparing the descriptors extracted from the images acquired by the onboard camera with those transmitted by the motes attached to the SOs. Experimental results show how the combined use of the RSSI data and of the image features makes it possible to discover and localize the SOs in the environment with good accuracy.
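
As background for the RSSI-based ranging step mentioned above, the sketch below inverts the standard log-distance path-loss model to turn a received signal strength reading into a coarse distance estimate; the model parameters are typical indoor values, not those used in the paper.

```python
# Coarse RSSI-to-distance conversion via the log-distance path-loss model (illustrative).
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.2):
    """Invert RSSI = P_tx - 10*n*log10(d) to obtain the distance d in meters."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

for rssi in (-59.0, -70.0, -81.0):
    print(f"RSSI {rssi:.0f} dBm -> ~{rssi_to_distance(rssi):.1f} m")
```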

Collaboration


Dive into Pietro Zanuttigh's collaborations.

Top Co-Authors

David Taubman

University of New South Wales