Fabio Zünd
ETH Zurich
Publications
Featured research published by Fabio Zünd.
ACM Transactions on Graphics | 2014
Amit Bermano; Derek Bradley; Thabo Beeler; Fabio Zünd; Derek Nowrouzezahrai; Ilya Baran; Olga Sorkine-Hornung; Hanspeter Pfister; Robert W. Sumner; Bernd Bickel; Markus H. Gross
The facial performance of an individual is inherently rich in subtle deformation and timing details. Although these subtleties make the performance realistic and compelling, they often elude both motion capture and hand animation. We present a technique for adding fine-scale details and expressiveness to low-resolution art-directed facial performances, such as those created manually using a rig, via marker-based capture, by fitting a morphable model to a video, or through Kinect reconstruction using recent faceshift technology. We employ a high-resolution facial performance capture system to acquire a representative performance of an individual in which he or she explores the full range of facial expressiveness. From the captured data, our system extracts an expressiveness model that encodes subtle spatial and temporal deformation details specific to that particular individual. Once this model has been built, these details can be transferred to low-resolution art-directed performances. We demonstrate results on various forms of input; after our enhancement, the resulting animations exhibit the same nuances and fine spatial details as the captured performance, with optional temporal enhancement to match the dynamics of the actor. Finally, we show that our technique outperforms the current state-of-the-art in example-based facial animation.
International Conference on Computer Graphics and Interactive Techniques | 2015
Fabio Zünd; Mattia Ryffel; Stéphane Magnenat; Alessia Marra; Maurizio Nitti; Mubbasir Kapadia; Gioacchino Noris; Kenny Mitchell; Markus H. Gross; Robert W. Sumner
Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as employing AR on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through AR interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an AR setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how AR can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity to author interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.
Interactive 3D Graphics and Games | 2015
Mubbasir Kapadia; Jessica Falk; Fabio Zünd; Marcel Marti; Robert W. Sumner; Markus H. Gross
This paper explores new authoring paradigms and computer-assisted authoring tools for free-form interactive narratives. We present a new design formalism, Interactive Behavior Trees (IBTs), which decouples the monitoring of user input, the narrative, and how the user may influence the story outcome. We introduce automation tools for IBTs, to help the author detect and automatically resolve inconsistencies in the authored narrative, or conflicting user interactions that may hinder story progression. We compare IBTs to traditional story graph representations and show that our formalism better scales with the number of story arcs, and the degree and granularity of user input. The authoring time is further reduced with the help of automation, and errors are completely avoided. Our approach enables content creators to easily author complex, branching narratives with multiple story arcs in a modular, extensible fashion while empowering players with the agency to freely interact with the characters in the story and the world they inhabit.
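Purely as an illustrative sketch of the decoupling that Interactive Behavior Trees describe, and not the authors' implementation, a monitor node can separate user-input handling from the narrative subtree; all class and event names below are hypothetical:

```python
class Node:
    """Base class for behavior-tree nodes; tick() returns a status string."""
    def tick(self, state):
        raise NotImplementedError

class Action(Node):
    """Leaf node running one story step; fn returns 'success', 'failure', or 'running'."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return self.fn(state)

class Sequence(Node):
    """Runs children in order; stops at the first child that is not 'success'."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            status = child.tick(state)
            if status != "success":
                return status
        return "success"

class Monitor(Node):
    """Checks a user-input condition before each tick of the narrative subtree,
    keeping input monitoring decoupled from the story logic itself."""
    def __init__(self, condition, subtree, on_interrupt):
        self.condition = condition
        self.subtree = subtree
        self.on_interrupt = on_interrupt
    def tick(self, state):
        if self.condition(state):
            return self.on_interrupt.tick(state)
        return self.subtree.tick(state)

def log_step(text):
    """Helper producing an Action that records one story beat."""
    def fn(state):
        state.setdefault("log", []).append(text)
        return "success"
    return Action(fn)

story = Monitor(
    condition=lambda s: s.get("clicked_villain", False),
    subtree=Sequence(log_step("hero enters the square"),
                     log_step("hero greets the crowd")),
    on_interrupt=log_step("villain reacts to the user"),
)
```

Ticking `story` with an empty state plays the authored arc; setting `clicked_villain` in the state before a later tick diverts the story through the interrupt branch, without the narrative subtree ever inspecting input itself.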
IEEE Transactions on Visualization and Computer Graphics | 2015
Stéphane Magnenat; Dat Tien Ngo; Fabio Zünd; Mattia Ryffel; Gioacchino Noris; Gerhard Rothlin; Alessia Marra; Maurizio Nitti; Pascal Fua; Markus H. Gross; Robert W. Sumner
Coloring books capture the imagination of children and provide them with one of their earliest opportunities for creative expression. However, given the proliferation and popularity of digital devices, real-world activities like coloring can seem unexciting, and children become less engaged in them. Augmented reality holds unique potential to impact this situation by providing a bridge between real-world activities and digital enhancements. In this paper, we present an augmented reality coloring book App in which children color characters in a printed coloring book and inspect their work using a mobile device. The drawing is detected and tracked, and the video stream is augmented with an animated 3-D version of the character that is textured according to the child's coloring. This is possible thanks to several novel technical contributions. We present a texturing process that applies the captured texture from a 2-D colored drawing to both the visible and occluded regions of a 3-D character in real time. We develop a deformable surface tracking method designed for colored drawings that uses a new outlier rejection algorithm for real-time tracking and surface deformation recovery. We present a content creation pipeline to efficiently create the 2-D and 3-D content. And, finally, we validate our work with two user studies that examine the quality of our texturing algorithm and the overall App experience.
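The paper's texturing pipeline tracks a deforming page and fills occluded regions; as a hedged sketch of only its most basic step, per-vertex colors can be sampled from the captured drawing via UV coordinates (nearest-neighbor lookup here; the function name and data layout are invented for the example):

```python
def sample_drawing(drawing, uvs):
    """Nearest-neighbor lookup of per-vertex colors from a colored 2-D drawing.

    drawing: row-major grid of pixel values (list of lists).
    uvs: list of (u, v) texture coordinates in [0, 1], v growing downward.
    Returns one sampled value per UV coordinate.
    """
    h = len(drawing)
    w = len(drawing[0])
    colors = []
    for u, v in uvs:
        # Clamp to the image bounds so slightly out-of-range UVs still sample.
        x = min(w - 1, max(0, round(u * (w - 1))))
        y = min(h - 1, max(0, round(v * (h - 1))))
        colors.append(drawing[y][x])
    return colors
```

A real-time system would sample from the rectified camera frame each frame and additionally synthesize texture for parts of the 3-D character that the flat drawing never shows, which is where the paper's actual contribution lies.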
Motion in Games | 2014
Fabio Zünd; Marcel Lancelle; Mattia Ryffel; Robert W. Sumner; Kenny Mitchell; Markus H. Gross
We investigate the influence of motion effects in the domain of mobile Augmented Reality (AR) games on user experience and task performance. The work focuses on evaluating responses to a selection of synthesized camera-oriented reality mixing techniques for AR, such as motion blur, defocus blur, latency, and lighting responsiveness. In our cross-section of experiments, we observe that these measures have a significant impact on perceived realism, where aesthetic quality is valued. However, lower latency records the strongest correlation with improved subjective enjoyment, satisfaction, and realism, and objective scoring performance. We conclude that the reality mixing techniques employed are not significant in the overall user experience of a mobile AR game, except where harmonious or convincing blended AR image quality is consciously desired by the participants.
International Conference on Interactive Digital Storytelling | 2016
Steven Poulakos; Mubbasir Kapadia; Guido M. Maiga; Fabio Zünd; Markus H. Gross; Robert W. Sumner
In order to use computational intelligence to assist in narrative generation, domain knowledge of the story world must be defined, a task which is currently confined to experts. In an effort to democratize story world creation, we present an accessible graphical platform for content creators and end users to create a story world, populate it with smart characters and objects, and define narrative events that can be used to author digital stories. The system supports reuse to reduce the cost of content production and enables specification of semantics to enable computer assisted authoring. Additionally, we introduce an iterative, bi-directional workflow, which bridges the gap between story world building and story authoring. Users seamlessly transition between authoring stories and refining the story world definition to accommodate their current narrative. A user study demonstrates the efficacy of our system to support narrative generation.
Conference on Visual Media Production | 2017
Fabio Zünd; Steven Poulakos; Mubbasir Kapadia; Robert W. Sumner
This paper presents a story version control and graphical visualization framework to enhance collaborative story authoring. We propose a media-agnostic story representation based on story beats, events, and participants that describes the flow of events in a storyline. We develop tree edit distance operations for this representation and use them to build the core features for story version control, including visual diff, conflict detection, and conflict resolution using three-way merge. Our system allows authors to work independently on the same story while providing the ability to automatically synchronize their efforts and resolve conflicts that may arise. We further enhance the collaborative authoring process using visualizations derived from the version control database that visually encode relationships between authors, characters, and story elements during the evolution of the narrative. We demonstrate the efficacy of our system by integrating it within an existing visual storyboarding tool for authoring animated stories, and additionally use it to collaboratively author stories using video and images. We evaluate the usability of our system through two user studies. Our results reveal that untrained users are able to use and benefit from our system. Additionally, users are able to correctly interpret the graphical visualizations and perceive them to benefit collaboration during the story authoring process.
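The actual system applies tree edit distance operations to story trees; as a toy illustration of the three-way-merge rules only, the same conflict logic can be shown on a flat list of story events (all event names are invented for the example):

```python
def three_way_merge(base, ours, theirs):
    """Merge two edited versions of an event list against a common base.

    Per position: identical edits agree; a position changed by only one side
    takes that side's value; positions changed differently by both sides are
    reported as conflicts and keep the base value for manual resolution.
    """
    merged, conflicts = [], []
    for i, b in enumerate(base):
        o = ours[i] if i < len(ours) else None
        t = theirs[i] if i < len(theirs) else None
        if o == t:
            if o is not None:
                merged.append(o)        # both sides agree (or both deleted)
        elif o == b:
            if t is not None:
                merged.append(t)        # only "theirs" changed this event
        elif t == b:
            if o is not None:
                merged.append(o)        # only "ours" changed this event
        else:
            conflicts.append((i, o, t))  # both changed it differently
            merged.append(b)
    # Events appended beyond the base are taken from whichever side added them.
    merged += ours[len(base):] + theirs[len(base):]
    return merged, conflicts
```

For example, if one author renames the opening event and the other reworks the second one, the merge combines both changes with no conflict; only edits to the same event on both sides land in the conflict list.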
Conference on Visual Media Production | 2015
Fabio Zünd; Pascal Bérard; Alexandre Chapiro; Stefan Schmid; Mattia Ryffel; Markus H. Gross; Amit Bermano; Robert W. Sumner
We propose a hardware and software system that transforms 8-bit side-scrolling console video games into immersive multiplayer experiences. We enhance a classic video game console with custom hardware that time-multiplexes eight gamepad inputs to automatically hand off control from one gamepad to the next. Because control transfers quickly, people at a large event can frequently step in and out of a game and naturally call to their peers to join any time a gamepad is vacant. Video from the game console is captured and processed by a vision algorithm that stitches it into a continuous, expanding panoramic texture, which is displayed in real time on a 360-degree projection system at a large event space. With this system, side-scrolling games unfold across the walls of the room to encircle a large party, giving the feeling that the entire party is taking place inside of the game's world. When such a display system is not available, we also provide a virtual reality recreation of the experience. We show results of our system for a number of classic console games tested at a large live event. Results indicate that our work provides a successful recipe to create immersive, multiplayer, interactive experiences that leverage the nostalgic appeal of 8-bit games.
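The hand-off behavior described above (the paper realizes it in custom hardware) can be sketched in software as a round-robin multiplexer; port count, time slice, and method names are assumptions for the example:

```python
class GamepadMultiplexer:
    """Sketch of time-multiplexed hand-off: one of N gamepads is 'live' at a
    time, and control rotates to the next occupied port after a fixed slice."""

    def __init__(self, num_ports=8, slice_seconds=10.0):
        self.num_ports = num_ports
        self.slice = slice_seconds
        self.active = 0
        self.elapsed = 0.0
        self.connected = [False] * num_ports

    def plug(self, port, present=True):
        """Mark a gamepad port as occupied (or vacated)."""
        self.connected[port] = present

    def update(self, dt):
        """Advance time by dt seconds; hand off to the next connected pad
        when the current slice expires. Returns the active port."""
        self.elapsed += dt
        if self.elapsed >= self.slice:
            self.elapsed = 0.0
            for step in range(1, self.num_ports + 1):
                candidate = (self.active + step) % self.num_ports
                if self.connected[candidate]:
                    self.active = candidate
                    break
        return self.active

    def route(self, inputs):
        """Forward only the active pad's input dict entry to the console."""
        return inputs.get(self.active)
```

Because hand-off is just an index rotation over occupied ports, a vacated gamepad is skipped automatically, which matches the drop-in/drop-out behavior the event setting relies on.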
Foundations of Digital Games | 2015
Mubbasir Kapadia; Fabio Zünd; Jessica Falk; Marcel Marti; Robert W. Sumner
International Conference on Computer Graphics and Interactive Techniques | 2018
Sandro Ropelato; Fabio Zünd; Stéphane Magnenat; Marino Menozzi; Robert W. Sumner