Publications


Featured research published by Floraine Berthouzoz.


Communications of the ACM | 2011

Design principles for visual communication

Maneesh Agrawala; Wilmot Li; Floraine Berthouzoz

How to identify, instantiate, and evaluate domain-specific design principles for creating more effective visualizations.


User Interface Software and Technology | 2013

Content-based tools for editing audio stories

Steve Rubin; Floraine Berthouzoz; Gautham J. Mysore; Wilmot Li; Maneesh Agrawala

Audio stories are an engaging form of communication that combines speech and music into compelling narratives. Existing audio editing tools force story producers to manipulate speech and music tracks via tedious, low-level waveform editing. In contrast, we present a set of tools that analyze the audio content of the speech and music and thereby allow producers to work at a much higher level. Our tools address several challenges in creating audio stories, including (1) navigating and editing speech, (2) selecting appropriate music for the score, and (3) editing the music to complement the speech. Key features include a transcript-based speech editing tool that automatically propagates edits in the transcript text to the corresponding speech track; a music browser that supports searching based on emotion, tempo, key, or timbral similarity to other songs; and music retargeting tools that make it easy to combine sections of music with the speech. We have used our tools to create audio stories from a variety of raw speech sources, including scripted narratives, interviews, and political speeches. Informal feedback from first-time users suggests that our tools are easy to learn and greatly facilitate the process of editing raw footage into a final story.
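
To make the transcript-based editing concrete, here is a minimal sketch of how a text edit can propagate to the waveform, assuming a word-level alignment between transcript and audio is already available. The alignment format and function names are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def edit_speech_by_transcript(audio, sr, alignment, kept_word_indices):
    """Rebuild the speech track from the words kept after a transcript edit.

    audio             -- 1-D numpy array of samples
    sr                -- sample rate in Hz
    alignment         -- list of (word, start_sec, end_sec) tuples
    kept_word_indices -- indices of words remaining after the text edit
    """
    segments = []
    for i in kept_word_indices:
        _, start, end = alignment[i]
        segments.append(audio[int(start * sr):int(end * sr)])
    # Concatenating the kept word spans realizes the text edit on the waveform.
    return np.concatenate(segments) if segments else np.zeros(0, dtype=audio.dtype)

# Example: delete the second word from a three-word recording.
sr = 16000
audio = np.random.randn(sr * 2).astype(np.float32)   # stand-in recording
alignment = [("the", 0.0, 0.4), ("quick", 0.4, 0.9), ("fox", 0.9, 1.4)]
edited = edit_speech_by_transcript(audio, sr, alignment, [0, 2])
```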


ACM Transactions on Graphics | 2011

A framework for content-adaptive photo manipulation macros: Application to face, landscape, and global manipulations

Floraine Berthouzoz; Wilmot Li; Mira Dontcheva; Maneesh Agrawala

We present a framework for generating content-adaptive macros that can transfer complex photo manipulations to new target images. We demonstrate applications of our framework to face, landscape, and global manipulations. To create a content-adaptive macro, we make use of multiple training demonstrations. Specifically, we use automated image labeling and machine learning techniques to learn the dependencies between image features and the parameters of each selection, brush stroke, and image processing operation in the macro. Although our approach is limited to learning manipulations where there is a direct dependency between image features and operation parameters, we show that our framework is able to learn a large class of the most commonly used manipulations using as few as 20 training demonstrations. Our framework also provides interactive controls to help macro authors and users generate training demonstrations and correct errors due to incorrect labeling or poor parameter estimation. We ask viewers to compare images generated using our content-adaptive macros with and without corrections to manually generated ground-truth images and find that they consistently rate both our automatic and corrected results as close in appearance to the ground truth. We also evaluate the utility of our proposed macro generation workflow via a small informal lab study with professional photographers. The study suggests that our workflow is effective and practical in the context of real-world photo editing.
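
The core learning step, regressing operation parameters against image features so the macro adapts to new images, can be sketched as follows. The feature names and the choice of a random-forest regressor are illustrative assumptions, not the paper's exact machinery.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training demonstrations: per-image features (e.g. mean
# luminance, face size) paired with the brush/slider parameters the
# author chose for that image. The paper uses as few as 20 demonstrations.
features = rng.uniform(size=(20, 3))
params = 0.5 * features[:, 0] + 0.2 * features[:, 1] + 0.05 * rng.normal(size=20)

# Learn the dependency between image features and an operation parameter.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(features, params)

# Applying the macro to a new image: extract its features, then predict
# the operation parameter instead of replaying the recorded value verbatim.
new_image_features = rng.uniform(size=(1, 3))
adapted_param = model.predict(new_image_features)
```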


International Conference on Computer Graphics and Interactive Techniques | 2013

Parsing sewing patterns into 3D garments

Floraine Berthouzoz; Akash Garg; Danny M. Kaufman; Eitan Grinspun; Maneesh Agrawala

We present techniques for automatically parsing existing sewing patterns and converting them into 3D garment models. Our parser takes a sewing pattern in PDF format as input and starts by extracting the set of panels and styling elements (e.g., darts, pleats, and hemlines) contained in the pattern. It then applies a combination of machine learning and integer programming to infer how the panels must be stitched together to form the garment. Our system includes an interactive garment simulator that takes the parsed result and generates the corresponding 3D model. Our fully automatic approach correctly parses 68% of the sewing patterns in our collection. Most of the remaining patterns contain only a few errors that can be quickly corrected within the garment simulator. Finally, we present two applications that take advantage of our collection of parsed sewing patterns. Our garment hybrids application lets users smoothly interpolate multiple garments in the 2D space of patterns. Our sketch-based search application allows users to navigate the pattern collection by drawing the shape of panels.
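
A toy stand-in for the stitching-inference step: score how plausibly each edge of one panel seams to each edge of another, then solve a one-to-one assignment. The paper uses learned edge affinities inside an integer program with garment-specific constraints; here the score is just length similarity, and the Hungarian algorithm plays the role of a relaxed integer program.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical edge lengths (cm) for a front and a back panel.
front_edge_lengths = np.array([40.0, 55.0, 40.0, 20.0])
back_edge_lengths = np.array([55.0, 41.0, 19.5, 39.0])

# Cost of seaming edge i to edge j: absolute length mismatch.
cost = np.abs(front_edge_lengths[:, None] - back_edge_lengths[None, :])

# One-to-one assignment minimizing total mismatch.
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"stitch front edge {i} to back edge {j} (mismatch {cost[i, j]:.1f} cm)")
```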


ACM Transactions on Graphics | 2012

Resolution enhancement by vibrating displays

Floraine Berthouzoz; Raanan Fattal

We present a method that makes use of the retinal integration time in the human visual system for increasing the resolution of displays. Given an input image with a resolution higher than the display resolution, we compute several images that match the display's native resolution. We then render these low-resolution images in a sequence that repeats itself on a high refresh-rate display. The period of the sequence falls below the retinal integration time and therefore the eye integrates the images temporally and perceives them as one image. In order to achieve resolution enhancement, we apply small-amplitude vibrations to the display panel and synchronize them with the screen refresh cycles. We derive the perceived image model and use it to compute the low-resolution images that are optimized to enhance the apparent resolution of the perceived image. This approach achieves resolution enhancement without having to move the displayed content across the screen and hence offers a more practical solution than existing approaches. Moreover, we use our model to establish limitations on the amount of resolution enhancement achievable by such display systems. In this analysis we draw a formal connection between our display and super-resolution techniques and find that both methods share the same limitation, yet this limitation stems from different sources. Finally, we describe in detail a simple physical realization of our display system and demonstrate its ability to match most of the spectrum displayable on a screen with twice the resolution.
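
The perceived-image model can be written schematically as below; the notation (K frames, upsampling operator U, shift operator S, vibration offsets d_k) is our own reconstruction from the description above, not the paper's exact formulation.

```latex
% K low-resolution frames L_k are shown within one retinal integration
% period; U denotes display upsampling/point spread and d_k the panel
% displacement (vibration offset) during frame k. The eye integrates:
I_{\mathrm{perc}}(x) \;\approx\; \frac{1}{K}\sum_{k=1}^{K} \bigl(U L_k\bigr)(x - d_k)
% The low-resolution frames are then chosen to reproduce the
% high-resolution input I in a least-squares sense, with S_{d_k}
% denoting the shift by d_k:
\min_{L_1,\dots,L_K}\;\Bigl\| \, I \;-\; \frac{1}{K}\sum_{k=1}^{K} S_{d_k}\, U L_k \, \Bigr\|_2^2
```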


International Conference on Computer Graphics and Interactive Techniques | 2015

An interactive tool for designing quadrotor camera shots

Niels Joubert; Mike Roberts; Anh Truong; Floraine Berthouzoz; Pat Hanrahan

Cameras attached to small quadrotor aircraft are rapidly becoming a ubiquitous tool for cinematographers, enabling dynamic camera movements through 3D environments. Currently, professionals use these cameras by flying quadrotors manually, a process which requires much skill and dexterity. In this paper, we investigate the needs of quadrotor cinematographers, and build a tool to support video capture using quadrotor-based camera systems. We begin by conducting semi-structured interviews with professional photographers and videographers, from which we extract a set of design principles. We present a tool based on these principles for designing and autonomously executing quadrotor-based camera shots. Our tool enables users to: (1) specify shots visually using keyframes; (2) preview the resulting shots in a virtual environment; (3) precisely control the timing of shots using easing curves; and (4) capture the resulting shots in the real world with a single button click using commercially available quadrotors. We evaluate our tool in a user study with novice and expert cinematographers. We show that our tool makes it possible for novices and experts to design compelling and challenging shots, and capture them fully autonomously.
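
A minimal sketch of the timing-control idea: the shot's spatial path is fixed by keyframes, and an easing curve remaps wall-clock time to progress along that path so the camera accelerates and decelerates smoothly. The specific cubic easing and linear path interpolation are illustrative choices, not the tool's exact curves.

```python
import numpy as np

def ease_in_out(t):
    """Cubic ease-in-out: 0 -> 0, 1 -> 1, zero slope at both ends."""
    return np.where(t < 0.5, 4 * t**3, 1 - (-2 * t + 2) ** 3 / 2)

def sample_shot(keyframes, n_samples):
    """Sample camera positions along a keyframed path with eased timing."""
    keyframes = np.asarray(keyframes, dtype=float)   # (k, 3) positions
    t = np.linspace(0.0, 1.0, n_samples)
    s = ease_in_out(t)                               # eased progress in [0, 1]
    # Piecewise-linear interpolation along the keyframe polyline.
    seg = np.minimum((s * (len(keyframes) - 1)).astype(int), len(keyframes) - 2)
    local = s * (len(keyframes) - 1) - seg
    return keyframes[seg] + local[:, None] * (keyframes[seg + 1] - keyframes[seg])

# Example: a three-keyframe dolly move sampled at 50 control points.
path = sample_shot([(0, 0, 2), (5, 0, 3), (5, 5, 3)], 50)
```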


International Conference on Computer Graphics and Interactive Techniques | 2016

Physics-driven pattern adjustment for direct 3D garment editing

Aric Bartle; Alla Sheffer; Vladimir G. Kim; Danny M. Kaufman; Nicholas Vining; Floraine Berthouzoz

Designers frequently reuse existing designs as a starting point for creating new garments. In order to apply garment modifications, which the designer envisions in 3D, existing tools require meticulous manual editing of 2D patterns. These 2D edits need to account both for the envisioned geometric changes in the 3D shape, as well as for various physical factors that affect the look of the draped garment. We propose a new framework that allows designers to directly apply the changes they envision in 3D space, and creates the 2D patterns that replicate this envisioned target geometry when lifted into 3D via a physical draping simulation. Our framework removes the need for laborious and knowledge-intensive manual 2D edits and allows users to effortlessly mix existing garment designs as well as adjust for garment length and fit. Following each user-specified editing operation, we first compute a target 3D garment shape, one that maximally preserves the input garment's style (its proportions, fit, and shape) subject to the modifications specified by the user. We then automatically compute 2D patterns that recreate the target garment shape when draped around the input mannequin within a user-selected simulation environment. To generate these patterns, we propose a fixed-point optimization scheme that compensates for the deformation due to the physical forces affecting the drape and is independent of the underlying simulation tool used. Our experiments show that this method quickly and reliably converges to patterns that, under simulation, form the desired target look, and works well with different black-box physical simulators. We demonstrate a range of edited and resimulated garments, and further validate our approach via expert and amateur critique, and comparisons to alternative solutions.
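
A minimal sketch of the fixed-point idea: treat the cloth simulator as a black box D that maps pattern geometry P to draped geometry, and iteratively correct P by the discrepancy between the drape and the target shape T. The toy "simulator" below (uniform sag plus shrink) and the direct additive update are illustrative assumptions; the paper's scheme maps the 3D residual back into pattern space.

```python
import numpy as np

def drape(pattern):
    """Stand-in black-box simulator: a gravity-like sag shrinks the pattern."""
    return 0.9 * pattern - 0.05

def fixed_point_patterns(target, iters=50, tol=1e-6):
    pattern = target.copy()           # initialize the pattern from the target shape
    for _ in range(iters):
        residual = target - drape(pattern)
        if np.linalg.norm(residual) < tol:
            break
        pattern = pattern + residual  # compensate for the simulated deformation
    return pattern

target = np.array([1.0, 2.0, 1.5])    # hypothetical target measurements
pattern = fixed_point_patterns(target)
print(pattern, drape(pattern))         # drape(pattern) converges to target
```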


International Conference on Computer Graphics and Interactive Techniques | 2015

Visual transcripts: lecture notes from blackboard-style lecture videos

Hijung Valentina Shin; Floraine Berthouzoz; Wilmot Li

Blackboard-style lecture videos are popular, but learning using existing video player interfaces can be challenging. Viewers cannot consume the lecture material at their own pace, and the content is also difficult to search or skim. For these reasons, some people prefer lecture notes to videos. To address these limitations, we present Visual Transcripts, a readable representation of lecture videos that combines visual information with transcript text. To generate a Visual Transcript, we first segment the visual content of a lecture into discrete visual entities that correspond to equations, figures, or lines of text. Then, we analyze the temporal correspondence between the transcript and visuals to determine how sentences relate to visual entities. Finally, we arrange the text and visuals in a linear layout based on these relationships. We compare our result with a standard video player and a state-of-the-art interface designed specifically for blackboard-style lecture videos. User evaluation suggests that users prefer our interface for learning and that it is effective in helping them browse or search through lecture videos.
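
A minimal sketch of the correspondence step: each visual entity (an equation, figure, or line of text drawn on the board) carries the time interval over which it was drawn, each transcript sentence carries its spoken interval, and a sentence is attached to the entity whose interval it overlaps most. The data and the simple overlap heuristic are illustrative; the paper's analysis is more involved.

```python
def overlap(a, b):
    """Length of the intersection of two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def attach_sentences(entities, sentences):
    """entities: list of (label, (start, end)); sentences: list of (text, (start, end))."""
    layout = {label: [] for label, _ in entities}
    for text, s_span in sentences:
        # Attach each sentence to the visual entity it overlaps most in time.
        best = max(entities, key=lambda e: overlap(e[1], s_span))
        layout[best[0]].append(text)
    return layout   # entity label -> sentences to place alongside it

entities = [("eq1", (0.0, 12.0)), ("fig1", (12.0, 30.0))]
sentences = [("We start from the identity.", (2.0, 6.0)),
             ("The figure shows the two cases.", (14.0, 20.0))]
print(attach_sentences(entities, sentences))
```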


User Interface Software and Technology | 2015

Capture-Time Feedback for Recording Scripted Narration

Steve Rubin; Floraine Berthouzoz; Gautham J. Mysore; Maneesh Agrawala

Well-performed audio narrations are a hallmark of captivating podcasts, explainer videos, radio stories, and movie trailers. To record these narrations, professional voiceover actors follow guidelines that describe how to use low-level vocal components (volume, pitch, timbre, and tempo) to deliver performances that emphasize important words while maintaining variety, flow, and diction. Yet, these techniques are not well-known outside the professional voiceover community, especially among hobbyist producers looking to create their own narrations. We present Narration Coach, an interface that assists novice users in recording scripted narrations. As a user records her narration, our system synchronizes the takes to her script, provides text feedback about how well she is meeting the expert voiceover guidelines, and resynthesizes her recordings to help her hear how she can speak better.
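
A minimal sketch of one piece of capture-time feedback: compute a per-word loudness level from the aligned recording and flag script words that were marked for emphasis but delivered no louder than the speaker's median. The alignment format, threshold, and feedback wording are assumptions; the actual system also analyzes pitch, timbre, and tempo.

```python
import numpy as np

def word_rms(audio, sr, span):
    """RMS level of the audio between a word's start and end times."""
    start, end = span
    seg = audio[int(start * sr):int(end * sr)]
    return float(np.sqrt(np.mean(seg ** 2))) if seg.size else 0.0

def emphasis_feedback(audio, sr, alignment, emphasized):
    """alignment: list of (word, (start, end)); emphasized: words to stress."""
    levels = {w: word_rms(audio, sr, span) for w, span in alignment}
    median = np.median(list(levels.values()))
    return [f"'{w}' was not emphasized -- try saying it louder"
            for w in emphasized if levels.get(w, 0.0) <= median]

sr = 16000
audio = np.random.randn(sr * 2).astype(np.float32)   # stand-in take
alignment = [("never", (0.0, 0.5)), ("give", (0.5, 1.0)), ("up", (1.0, 1.5))]
print(emphasis_feedback(audio, sr, alignment, {"never"}))
```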


User Interface Software and Technology | 2012

UnderScore: musical underlays for audio stories

Steve Rubin; Floraine Berthouzoz; Gautham J. Mysore; Wilmot Li; Maneesh Agrawala

Audio producers often use musical underlays to emphasize key moments in spoken content and give listeners time to reflect on what was said. Yet, creating such underlays is time-consuming as producers must carefully (1) mark an emphasis point in the speech, (2) select music with the appropriate style, (3) align the music with the emphasis point, and (4) adjust dynamics to produce a harmonious composition. We present UnderScore, a set of semi-automated tools designed to facilitate the creation of such underlays. The producer simply marks an emphasis point in the speech and selects a music track. UnderScore automatically refines, aligns, and adjusts the speech and music to generate a high-quality underlay. UnderScore allows producers to focus on the high-level design of the underlay; they can quickly try out a variety of music and test different points of emphasis in the story. Amateur producers, who may lack the time or skills necessary to author underlays, can quickly add music to their stories. An informal evaluation of UnderScore suggests that it can produce high-quality underlays for a variety of examples while significantly reducing the time and effort required of radio producers.
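
A minimal sketch of the dynamics-adjustment step: duck the music under the speech, then swell it at the emphasis point. The gain values and ramp length are illustrative assumptions; UnderScore also refines the emphasis point and aligns the swell to musical structure.

```python
import numpy as np

def underlay_gain(n_samples, sr, emphasis_sec, duck=0.2, swell=0.9, ramp_sec=0.5):
    """Per-sample music gain: quiet under speech, rising at the emphasis point."""
    gain = np.full(n_samples, duck)
    start = int(emphasis_sec * sr)
    ramp = int(ramp_sec * sr)
    # Linear ramp from the ducked level up to the swell level.
    gain[start:start + ramp] = np.linspace(duck, swell, min(ramp, n_samples - start))
    gain[start + ramp:] = swell
    return gain

sr = 44100
speech = np.random.randn(sr * 10).astype(np.float32)   # stand-in tracks
music = np.random.randn(sr * 10).astype(np.float32)
mix = speech + music * underlay_gain(len(music), sr, emphasis_sec=6.0)
```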

Collaboration


Top co-authors of Floraine Berthouzoz:

Steve Rubin, University of California
Raanan Fattal, Hebrew University of Jerusalem