Publication


Featured research published by Moshe Mahler.


User Interface Software and Technology | 2011

SideBySide: ad-hoc multi-user interaction with handheld projectors

Karl D.D. Willis; Ivan Poupyrev; Scott E. Hudson; Moshe Mahler

We introduce SideBySide, a system designed for ad-hoc multi-user interaction with handheld projectors. SideBySide uses device-mounted cameras and hybrid visible/infrared light projectors to track multiple independent projected images in relation to one another. This is accomplished by projecting invisible fiducial markers in the near-infrared spectrum. Our system is completely self-contained and can be deployed as a handheld device without instrumentation of the environment. We present the design and implementation of our system including a hybrid handheld projector to project visible and infrared light, and techniques for tracking projected fiducial markers that move and overlap. We introduce a range of example applications that demonstrate the applicability of our system to real-world scenarios such as mobile content exchange, gaming, and education.
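
As a rough illustration of the tracking step, the sketch below detects fiducial markers in a camera frame and measures the offset between two users' projections. It is an analogue only: OpenCV's visible-light ArUco markers and the assumed frame source, marker ids, and resolutions stand in for the paper's custom near-infrared fiducials and hybrid projector hardware.

```python
# Loose analogue only: OpenCV ArUco markers stand in for the paper's custom
# near-infrared fiducials; the hybrid projector hardware is out of scope here.
# Uses the OpenCV >= 4.7 ArUco API.
import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
DETECTOR = cv2.aruco.ArucoDetector(ARUCO_DICT, cv2.aruco.DetectorParameters())

def track_markers(frame):
    """Detect fiducials in the camera image and return marker centres by id
    (each handheld projector embeds markers with its own known ids)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = DETECTOR.detectMarkers(gray)
    centres = {}
    if ids is not None:
        for quad, marker_id in zip(corners, ids.flatten()):
            centres[int(marker_id)] = quad[0].mean(axis=0)
    return centres

def relative_offset(centres, own_id, other_id):
    """Vector between our projection and another user's projection, in image
    coordinates; overlap or proximity of the two can trigger interactions."""
    if own_id in centres and other_id in centres:
        return centres[other_id] - centres[own_id]
    return None
```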


Symposium on Computer Animation | 2012

Dynamic units of visual speech

Sarah Taylor; Moshe Mahler; Barry-John Theobald; Iain A. Matthews

We present a new method for generating a dynamic, concatenative unit of visual speech that can produce realistic visual speech animation. We redefine visemes as temporal units that describe distinctive speech movements of the visual speech articulators. Traditionally, visemes have been surmised as the set of static mouth shapes representing clusters of contrastive phonemes (e.g. /p, b, m/, and /f, v/). In this work, the motion of the visual speech articulators is used to generate discrete, dynamic visual speech gestures. These gestures are clustered, providing a finite set of movements that describe visual speech: the visemes. Dynamic visemes are applied to speech animation by simply concatenating viseme units. We compare dynamic visemes to static visemes using subjective evaluation. We find that dynamic visemes are able to produce more accurate and visually pleasing speech animation given phonetically annotated audio, reducing the amount of time that an animator needs to spend manually refining the animation.
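
The gesture-clustering step can be illustrated with a short sketch. This is a minimal stand-in, assuming fixed-length resampling of articulator trajectories and k-means clustering; the paper's actual segmentation, features, and cluster count are not reproduced here.

```python
# Minimal sketch of the clustering idea behind dynamic visemes: short segments
# of articulator motion are resampled to a fixed length and grouped, so that a
# finite set of gesture clusters describes visual speech. Segmentation, feature
# choice, and cluster count here are placeholders, not the paper's pipeline.
import numpy as np
from sklearn.cluster import KMeans

def resample(segment, length=20):
    """Resample a (frames x parameters) articulator trajectory to a fixed length."""
    t_old = np.linspace(0.0, 1.0, len(segment))
    t_new = np.linspace(0.0, 1.0, length)
    return np.stack([np.interp(t_new, t_old, segment[:, d])
                     for d in range(segment.shape[1])], axis=1)

def cluster_gestures(segments, n_visemes=150):
    """Cluster variable-length speech gestures into dynamic viseme classes."""
    features = np.array([resample(s).ravel() for s in segments])
    return KMeans(n_clusters=n_visemes, n_init=10, random_state=0).fit_predict(features)

# Animation is then produced by concatenating representative gesture
# trajectories for the viseme sequence derived from the annotated audio.
```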


International Conference on Computer Graphics and Interactive Techniques | 2013

Style and abstraction in portrait sketching

Itamar Berger; Ariel Shamir; Moshe Mahler; Elizabeth J. Carter; Jessica K. Hodgins

We use a data-driven approach to study both style and abstraction in sketching of a human face. We gather and analyze data from a number of artists as they sketch a human face from a reference photograph. To achieve different levels of abstraction in the sketches, decreasing time limits were imposed -- from four and a half minutes to fifteen seconds. We analyze the data at two levels: strokes and geometric shape. At each level, we create a model that captures both the style of the different artists and the process of abstraction. These models are then used for a portrait sketch synthesis application. Starting from a novel face photograph, we can synthesize a sketch in the various artistic styles and at different levels of abstraction.


ACM Transactions on Applied Perception | 2010

The saliency of anomalies in animated human characters

Jessica K. Hodgins; Sophie Jörg; Carol O'Sullivan; Sang Il Park; Moshe Mahler

Virtual characters are much in demand for animated movies, games, and other applications. Rapid advances in performance capture and advanced rendering techniques have allowed the movie industry in particular to create characters that appear very human-like. However, with these new capabilities has come the realization that such characters are not yet quite “right.” One possible hypothesis is that these virtual humans fall into an “Uncanny Valley”, where the viewer's emotional response is repulsion or rejection, rather than the empathy or emotional engagement that their creators had hoped for. To explore these issues, we created three animated vignettes of an arguing couple with detailed motion for the face, eyes, hair, and body. In a set of perceptual experiments, we explore the relative importance of different anomalies using two different methods: a questionnaire to determine the emotional response to the full-length vignettes, with and without facial motion and audio; and a 2AFC (two-alternative forced choice) task to compare the performance of a virtual “actor” in short clips (extracts from the vignettes) depicting a range of different facial and body anomalies. We found that the facial anomalies are particularly salient, even when very significant body animation anomalies are present.
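
A hedged sketch of how responses from the 2AFC task might be tallied is shown below; the condition labels and data layout are hypothetical, and the paper's own statistical analysis may differ.

```python
# Hypothetical analysis of a 2AFC task like the one described above: for each
# anomaly condition, count how often the unmodified clip was preferred and test
# against chance. The data layout and condition labels are assumptions.
from collections import defaultdict
from scipy.stats import binomtest

def analyze_2afc(trials):
    """trials: iterable of (condition, chose_unmodified) pairs, one per response."""
    counts = defaultdict(lambda: [0, 0])            # condition -> [preferred, total]
    for condition, chose_unmodified in trials:
        counts[condition][0] += int(chose_unmodified)
        counts[condition][1] += 1
    results = {}
    for condition, (preferred, total) in counts.items():
        test = binomtest(preferred, total, p=0.5)   # preference different from 50%?
        results[condition] = (preferred / total, test.pvalue)
    return results

# Example: analyze_2afc([("face_anomaly", True), ("body_anomaly", False), ...])
```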


Tangible and Embedded Interaction | 2013

HideOut: mobile projector interaction with tangible objects and surfaces

Karl D.D. Willis; Takaaki Shiratori; Moshe Mahler

HideOut is a mobile projector-based system that enables new applications and interaction techniques with tangible objects and surfaces. HideOut uses a device-mounted camera to detect hidden markers applied with infrared-absorbing ink. The obtrusive appearance of fiducial markers is avoided, and the hidden marker surface doubles as a functional projection surface. We present example applications that demonstrate a wide range of interaction scenarios, including media navigation tools, interactive storytelling applications, and mobile games. We explore the design space enabled by the HideOut system and describe the hidden marker prototyping process. HideOut brings tangible objects to life for interaction with the physical world around us.
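
As a small illustration of keeping projected content registered on a marked surface, the sketch below assumes the hidden marker's corners have already been detected in the camera image and maps content onto them with a homography; camera-to-projector calibration is assumed and omitted.

```python
# Minimal sketch, assuming marker detection has already produced the hidden
# marker's four corner positions in the camera image: estimate a homography
# from content space to that quad and warp the content so it stays registered
# on the object. Camera-to-projector calibration is assumed and omitted.
import cv2
import numpy as np

def warp_content_to_marker(content, marker_corners_cam, projector_size=(1280, 720)):
    """Warp `content` (an image) onto the detected marker surface."""
    h, w = content.shape[:2]
    corners_content = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H, _ = cv2.findHomography(corners_content, np.float32(marker_corners_cam))
    return cv2.warpPerspective(content, H, projector_size)
```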


International Conference on Computer Graphics and Interactive Techniques | 2015

A perceptual control space for garment simulation

Leonid Sigal; Moshe Mahler; Spencer Diaz; Kyna McIntosh; Elizabeth J. Carter; Timothy Richards; Jessica K. Hodgins

We present a perceptual control space for simulation of cloth that works with any physical simulator, treating it as a black box. The perceptual control space provides intuitive, art-directable control over the simulation behavior based on a learned mapping from common descriptors for cloth (e.g., flowiness, softness) to the parameters of the simulation. To learn the mapping, we perform a series of perceptual experiments in which the simulation parameters are varied and participants assess the values of the common terms of the cloth on a scale. A multi-dimensional sub-space regression is performed on the results to build a perceptual generative model over the simulator parameters. We evaluate the perceptual control space by demonstrating that the generative model does in fact create simulated clothing that is rated by participants as having the expected properties. We also show that this perceptual control space generalizes to garments and motions not in the original experiments.
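
The sketch below illustrates the core idea of such a learned mapping under loose assumptions: synthetic ratings for two descriptors (flowiness, softness) are regressed onto hypothetical simulator parameters with a simple nearest-neighbour model, standing in for the paper's perceptual experiments and sub-space regression.

```python
# Hedged sketch of the core idea: learn a mapping from perceptual descriptors
# (e.g. flowiness, softness) to black-box simulator parameters, then query it
# for parameters that should produce a desired look. Synthetic data and a
# nearest-neighbour regressor stand in for the paper's perceptual experiments
# and sub-space regression; the parameter names are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
sim_params = rng.uniform(size=(200, 3))            # hypothetical: bend, stretch, damping
ratings = np.column_stack([
    1.0 - sim_params[:, 0],                        # "flowiness": flowier when bend stiffness is low
    1.0 - sim_params[:, 1],                        # "softness": softer when stretch stiffness is low
]) + rng.normal(scale=0.05, size=(200, 2))

model = KNeighborsRegressor(n_neighbors=5).fit(ratings, sim_params)

# Art-directable control: request a very flowy, fairly soft garment and hand the
# predicted parameters to whatever cloth simulator is treated as the black box.
predicted_params = model.predict(np.array([[0.9, 0.7]]))
```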


ACM Transactions on Graphics | 2012

Three-dimensional proxies for hand-drawn characters

Eakta Jain; Yaser Sheikh; Moshe Mahler; Jessica K. Hodgins

Drawing shapes by hand and manipulating computer-generated objects are the two dominant forms of animation. Though each medium has its own advantages, the techniques developed for one medium are not easily leveraged in the other medium because hand animation is two-dimensional, and inferring the third dimension is mathematically ambiguous. A second challenge is that the character is a consistent three-dimensional (3D) object in computer animation while hand animators introduce geometric inconsistencies in the two-dimensional (2D) shapes to better convey a character's emotional state and personality. In this work, we identify 3D proxies to connect hand-drawn animation and 3D computer animation. We present an integrated approach to generate three levels of 3D proxies: single-points, polygonal shapes, and a full joint hierarchy. We demonstrate how this approach enables one medium to take advantage of techniques developed for the other; for example, 3D physical simulation is used to create clothes for a hand-animated character, and a traditionally trained animator is able to influence the performance of a 3D character while drawing with paper and pencil.
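
The simplest proxy, a single 3D point, can be sketched as follows; the virtual camera intrinsics and the artist-chosen depth are assumptions, since depth is precisely the quantity that cannot be inferred from the drawing alone.

```python
# Minimal sketch of a single-point proxy: a tracked 2D point on the drawing is
# back-projected through a virtual pinhole camera at an artist-chosen depth
# (depth is exactly the ambiguous quantity the abstract mentions, so it is
# supplied rather than inferred). The lifted point can then drive 3D effects
# such as a cloth attachment. Intrinsics and depth values are placeholders.
import numpy as np

def lift_to_3d(point_2d, depth, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Back-project pixel (u, v) to a camera-space 3D point at the given depth."""
    u, v = point_2d
    return np.array([(u - cx) / fx * depth, (v - cy) / fy * depth, depth])

# A hand-drawn character's tracked shoulder over three frames becomes a moving
# 3D anchor for, say, a simulated cape.
anchors = [lift_to_3d(p, depth=3.0) for p in [(610, 340), (615, 338), (622, 335)]]
```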


Symposium on 3D User Interfaces | 2013

Expressing animated performances through puppeteering

Takaaki Shiratori; Moshe Mahler; Warren Trezevant; Jessica K. Hodgins

An essential form of communication between the director and the animators early in the animation pipeline is a rough cut of the motion (a blocked-in animation). This version of the character's performance allows the director and animators to discuss how the character will play his/her role in each scene. However, blocked-in animation is also quite time-consuming to construct, with short scenes requiring many hours of preparation between presentations. In this paper, we present a puppeteering interface for creating blocked-in motion for characters and various simulation effects more quickly than is possible in a keyframing interface. The animator manipulates one of a set of tracked objects in a motion capture system to control a few degrees of freedom of the character on each take. We explore the design space for the 3D puppeteering interface with a set of seven professional animators using a “think-aloud” protocol. We present a number of animations that they created and compare the time required to create similar animations in our 3D user interface and a commercial keyframing interface.
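
A minimal sketch of the take-based layering idea appears below; the character degrees of freedom, the prop-to-DOF mappings, and the data layout are illustrative rather than the paper's implementation.

```python
# Hedged sketch of take-based layering: each take maps the tracked prop's
# trajectory onto a small, chosen subset of character degrees of freedom, and
# later takes layer further DOFs on top. DOF names, the mappings, and the data
# layout are illustrative only.
import numpy as np

class BlockedInMotion:
    def __init__(self, n_frames, dof_names):
        self.dof_names = list(dof_names)
        self.curves = np.zeros((n_frames, len(dof_names)))   # frames x DOFs

    def record_take(self, prop_poses, target_dofs, mapping):
        """prop_poses: (frames, 6) rigid-body pose samples from the mocap prop.
        mapping: turns one pose sample into values for `target_dofs` only."""
        cols = [self.dof_names.index(d) for d in target_dofs]
        for frame, pose in enumerate(prop_poses):
            self.curves[frame, cols] = mapping(pose)

motion = BlockedInMotion(240, ["root_tx", "root_ty", "root_tz", "head_rx", "head_ry"])
# Take 1: the prop's position drives the character's root translation.
# motion.record_take(take1_poses, ["root_tx", "root_ty", "root_tz"], lambda p: p[:3])
# Take 2: the prop's orientation drives the head, layered on top of take 1.
# motion.record_take(take2_poses, ["head_rx", "head_ry"], lambda p: p[3:5])
```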


ACM Transactions on Graphics | 2017

A deep learning approach for generalized speech animation

Sarah Taylor; Taehwan Kim; Yisong Yue; Moshe Mahler; James Krahe; Anastasio Garcia Rodriguez; Jessica K. Hodgins; Iain A. Matthews

We introduce a simple and effective deep learning approach to automatically generate natural looking speech animation that synchronizes to input speech. Our approach uses a sliding window predictor that learns arbitrary nonlinear mappings from phoneme label input sequences to mouth movements in a way that accurately captures natural motion and visual coarticulation effects. Our deep learning approach enjoys several attractive properties: it runs in real-time, requires minimal parameter tuning, generalizes well to novel input speech sequences, is easily edited to create stylized and emotional speech, and is compatible with existing animation retargeting approaches. One important focus of our work is to develop an effective approach for speech animation that can be easily integrated into existing production pipelines. We provide a detailed description of our end-to-end approach, including machine learning design decisions. Generalized speech animation results are demonstrated over a wide range of animation clips on a variety of characters and voices, including singing and foreign language input. Our approach can also generate on-demand speech animation in real-time from user speech input.
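
A minimal sketch of a sliding-window predictor of this kind is given below, with a small scikit-learn MLP standing in for the paper's deep network; the window size, phoneme inventory, and output parameter count are placeholders.

```python
# Minimal sketch of a sliding-window predictor: one-hot phoneme labels over a
# short window are regressed onto mouth-shape parameters for the window's
# centre frame. A scikit-learn MLP stands in for the paper's deep network;
# window size, phoneme inventory, and parameter count are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_PHONEMES, WINDOW = 40, 11                       # placeholders

def phoneme_windows(phoneme_ids, half=WINDOW // 2):
    """One-hot encode a sliding window of phoneme labels around every frame."""
    padded = np.pad(np.asarray(phoneme_ids), half, mode="edge")
    feats = []
    for t in range(len(phoneme_ids)):
        onehot = np.zeros((WINDOW, N_PHONEMES))
        onehot[np.arange(WINDOW), padded[t:t + WINDOW]] = 1.0
        feats.append(onehot.ravel())
    return np.array(feats)

# phoneme_ids: per-frame phoneme labels; mouth_params: per-frame mouth-shape values.
# model = MLPRegressor(hidden_layer_sizes=(3000, 3000)).fit(phoneme_windows(phoneme_ids), mouth_params)
# predicted = model.predict(phoneme_windows(new_phoneme_ids))  # then retarget onto any character rig
```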


IEEE Computer Graphics and Applications | 2013

Fabricating 3D Figurines with Personalized Faces

J. Rafael Tena; Moshe Mahler; Thabo Beeler; Max Grosse; Hengchin Yeh; Iain A. Matthews

We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando, Florida. Although the system is semi-automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing at the core of our system provides the flexibility to fabricate figurines whose complexity is limited only by the creativity of the designer.

Collaboration


Dive into Moshe Mahler's collaborations.

Top Co-Authors

Ariel Shamir

Interdisciplinary Center Herzliya

Karl D.D. Willis

Carnegie Mellon University
