
Publication


Featured research published by Fabrizio Pece.


User Interface Software and Technology | 2014

In-air gestures around unmodified mobile devices

Jie Song; Gábor Sörös; Fabrizio Pece; Sean Ryan Fanello; Shahram Izadi; Cem Keskin; Otmar Hilliges

We present a novel machine-learning-based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation and varying lighting conditions. We demonstrate that our algorithm runs in real time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as the primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks, such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train) and the impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations, and conclude with directions for future work.


Conference on Visual Media Production | 2011

Towards Moment Imagery: Automatic Cinemagraphs

James Tompkin; Fabrizio Pece; Kartic Subr; Jan Kautz

The imagination of the online photographic community has recently been sparked by so-called cinemagraphs: short, seamlessly looping animated GIF images created from video in which only parts of the image move. These cinemagraphs capture the dynamics of one particular region in an image for dramatic effect, and provide the creator with control over what part of a moment to capture. We create a cinemagraph authoring tool combining video motion stabilisation, segmentation, interactive motion selection, motion loop detection and selection, and cinemagraph rendering. Our work pushes toward the easy and versatile creation of moments that cannot be represented with still imagery.
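The motion loop detection step in this pipeline amounts to finding a pair of frames that repeats seamlessly. A minimal illustrative sketch of that idea (a brute-force search over frame pairs, not the authors' implementation):

```python
import numpy as np

def best_loop(frames, min_len=10):
    """Return (start, end) indices of the frame pair whose images match
    most closely, so playback can jump from `end` back to `start` with
    minimal visible seam. Brute-force O(n^2) search over all pairs."""
    best, best_cost = (0, min_len), np.inf
    for i in range(len(frames) - min_len):
        for j in range(i + min_len, len(frames)):
            # Mean squared pixel difference as the "seam visibility" cost.
            cost = np.mean((frames[i].astype(float) - frames[j].astype(float)) ** 2)
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best

# Synthetic example: frame content repeats every 15 frames,
# so (0, 15) is a perfectly seamless loop.
frames = [np.full((4, 4), t % 15, dtype=np.uint8) for t in range(30)]
print(best_loop(frames))  # (0, 15)
```

In practice a system like this would restrict the cost to the user-selected moving region and to stabilised frames; the exhaustive search here is only to make the loop-selection criterion concrete.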


Eurographics | 2011

Adapting standard video codecs for depth streaming

Fabrizio Pece; Jan Kautz; Tim Weyrich

Cameras that can acquire a continuous stream of depth images are now commonly available, for instance the Microsoft Kinect. It may seem that one should be able to stream these depth videos using standard video codecs, such as VP8 or H.264. However, the quality degrades considerably as the compression algorithms are geared towards standard three-channel (8-bit) colour video, whereas depth videos are single-channel but have a higher bit depth. We present a novel encoding scheme that efficiently converts the single-channel depth images to standard 8-bit three-channel images, which can then be streamed using standard codecs. Our encoding scheme ensures that the compression affects the depth values as little as possible. We show results obtained using two common video encoders (VP8 and H.264) as well as the results obtained when using JPEG compression. The results indicate that our encoding scheme performs much better than simpler methods.
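The packing problem the paper addresses can be illustrated by the naive baseline: splitting each 16-bit depth value into a high and a low byte across two of the three 8-bit channels. This is a sketch of one of the "simpler methods" the abstract says performs worse under lossy compression, not the authors' encoding scheme:

```python
import numpy as np

def encode_depth(depth16):
    """Pack a single-channel 16-bit depth image into an 8-bit,
    three-channel image: high byte, low byte, unused third channel."""
    high = (depth16 >> 8).astype(np.uint8)   # most significant byte
    low = (depth16 & 0xFF).astype(np.uint8)  # least significant byte
    return np.stack([high, low, np.zeros_like(high)], axis=-1)

def decode_depth(rgb8):
    """Reassemble the original 16-bit depth values from the packed image."""
    return (rgb8[..., 0].astype(np.uint16) << 8) | rgb8[..., 1].astype(np.uint16)

depth = np.random.randint(0, 2**16, size=(8, 8), dtype=np.uint16)
assert np.array_equal(decode_depth(encode_depth(depth)), depth)  # lossless round trip
```

The round trip is exact only without compression: once a lossy codec perturbs the low-byte channel, decoded depth jumps by wrap-around errors, which is why a compression-robust mapping such as the one proposed in the paper is needed.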


Human Factors in Computing Systems | 2016

DefSense: Computational Design of Customized Deformable Input Devices

Moritz Bächer; Benjamin Hepp; Fabrizio Pece; Paul G. Kry; Bernd Bickel; Bernhard Thomaszewski; Otmar Hilliges

We present a novel optimization-based algorithm for the design and fabrication of customized, deformable input devices, capable of continuously sensing their deformation. We propose to embed piezoresistive sensing elements into flexible 3D printed objects. These sensing elements are then utilized to recover rich and natural user interactions at runtime. Designing such objects manually is challenging for all but the simplest geometries and deformations. Our method simultaneously optimizes the internal routing of the sensing elements and computes a mapping from low-level sensor readings to user-specified outputs in order to minimize reconstruction error. We demonstrate the power and flexibility of the approach by designing and fabricating a set of flexible input devices. Our results indicate that the optimization-based design greatly outperforms manual routings in terms of reconstruction accuracy and thus interaction fidelity.


Human Factors in Computing Systems | 2015

Joint Estimation of 3D Hand Position and Gestures from Monocular Video for Mobile Interaction

Jie Song; Fabrizio Pece; Gábor Sörös; Marion Koelle; Otmar Hilliges

We present a machine learning technique to recognize gestures and estimate metric depth of hands for 3D interaction, relying only on monocular RGB video input. We aim to enable spatial interaction with small, body-worn devices where rich 3D input is desired but the usage of conventional depth sensors is prohibitive due to their power consumption and size. We propose a hybrid classification-regression approach to learn and predict a mapping of RGB colors to absolute, metric depth in real time. We also classify distinct hand gestures, allowing for a variety of 3D interactions. We demonstrate our technique with three mobile interaction scenarios and evaluate the method quantitatively and qualitatively.


Human Factors in Computing Systems | 2013

Panoinserts: mobile spatial teleconferencing

Fabrizio Pece; William Steptoe; Fabian Wanner; Simon J. Julier; Tim Weyrich; Jan Kautz; Anthony Steed

We present PanoInserts: a novel teleconferencing system that uses smartphone cameras to create a surround representation of meeting places. We take a static panoramic image of a location into which we insert live videos from smartphones. We use a combination of marker- and image-based tracking to position the video inserts within the panorama, and transmit this representation to a remote viewer. We conduct a user study comparing our system with fully-panoramic video and conventional webcam video conferencing for two spatial reasoning tasks. Results indicate that our system performs comparably with fully-panoramic video, and better than webcam video conferencing in tasks that require an accurate surrounding representation of the remote space. We discuss the representational properties and usability of varying video presentations, exploring how they are perceived and how they influence users when performing spatial reasoning tasks.


Conference on Visual Media Production | 2014

Device effect on panoramic video+context tasks

Fabrizio Pece; James Tompkin; Hanspeter Pfister; Jan Kautz; Christian Theobalt

Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance is as yet untested for this imagery. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even if participants felt less capable, but tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and so we state the design implications of our results for panoramic imagery systems.


pp. 37-40 | 2012

Simplified User Interface for Architectural Reconstruction

Fabian Wanner; Fabrizio Pece; Jan Kautz

We present a user-driven reconstruction system for the creation of 3D models of buildings from photographs. The structural properties of buildings, such as parallel and repeated elements, are used to allow the user to efficiently create an accurate 3D structure of different building types. An intuitive interface guides the user through the reconstruction process, which uses a set of input images and a 3D point cloud. The system aims to minimise user input by recognising imprecise interaction and ensuring photo consistency.


Human Factors in Computing Systems | 2018

DeepWriting: Making Digital Ink Editable via Deep Generative Modeling

Emre Aksan; Fabrizio Pece; Otmar Hilliges

Digital ink promises to combine the flexibility and aesthetics of handwriting with the ability to process, search and edit digital text. Character recognition converts handwritten text into a digital representation, albeit at the cost of losing personalized appearance due to the technical difficulties of separating the interwoven components of content and style. In this paper, we propose a novel generative neural network architecture that is capable of disentangling style from content and thus making digital ink editable. Our model can synthesize arbitrary text while giving users control over the visual appearance (style), allowing, for example, style transfer without changing the content, editing of digital ink at the word level, and other application scenarios such as spell-checking and correction of handwritten text. We furthermore contribute a new dataset of handwritten text with fine-grained annotations at the character level and report results from an initial user evaluation.


JVRB - Journal of Virtual Reality and Broadcasting | 2013

Bitmap Movement Detection: HDR for Dynamic Scenes

Fabrizio Pece; Jan Kautz

Collaboration


Dive into Fabrizio Pece's collaborations.

Top Co-Authors


Jan Kautz

University College London


Tim Weyrich

University College London


Anthony Steed

University College London


Fabian Wanner

University College London


Shahram Izadi

University College London


William Steptoe

University College London
