
Publication


Featured research published by Amit Bermano.


International Conference on Computer Graphics and Interactive Techniques | 2013

Augmenting physical avatars using projector-based illumination

Amit Bermano; Philipp Brüschweiler; Anselm Grundhöfer; Daisuke Iwai; Bernd Bickel; Markus H. Gross

Animated animatronic figures are a unique way to give physical presence to a character. However, their movement and expressions are often limited due to mechanical constraints. In this paper, we propose a complete process for augmenting physical avatars using projector-based illumination, significantly increasing their expressiveness. Given an input animation, the system decomposes the motion into low-frequency motion that can be physically reproduced by the animatronic head and high-frequency details that are added using projected shading. At the core is a spatio-temporal optimization process that compresses the motion in gradient space, ensuring faithful motion replay while respecting the physical limitations of the system. We also propose a complete multi-camera and projection system, including a novel defocused projection and subsurface scattering compensation scheme. The result of our system is a highly expressive physical avatar that features facial details and motion otherwise unattainable due to physical constraints.
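
The low/high frequency split at the heart of this pipeline can be illustrated with a toy decomposition. The sketch below is a hedged stand-in, not the paper's gradient-space optimization: it uses a plain Gaussian filter, and the `sigma` value and test curve are invented for illustration.

```python
# Toy decomposition (an assumption, not the paper's method): split an animation
# curve into a low-frequency part the animatronic can replay and a
# high-frequency residual to be added back by projected shading.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def split_motion(signal: np.ndarray, sigma: float = 5.0):
    """Decompose a 1D animation signal (e.g., one blendshape weight over time)."""
    low = gaussian_filter1d(signal, sigma=sigma)  # smooth motion for the hardware
    high = signal - low                           # fine detail for the projector
    return low, high

t = np.linspace(0.0, 2.0 * np.pi, 200)
curve = np.sin(t) + 0.1 * np.sin(15.0 * t)        # coarse motion plus fine detail
low, high = split_motion(curve)
assert np.allclose(low + high, curve)             # the split is lossless
```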


Computer Graphics Forum | 2012

ShadowPix: Multiple Images from Self Shadowing

Amit Bermano; Ilya Baran; Marc Alexa; Wojciech Matusik

ShadowPix are white surfaces that display several prescribed images formed by the self-shadowing of the surface when lit from certain directions. The effect is surprising and not commonly seen in the real world. We present algorithms for constructing ShadowPix that allow up to four images to be embedded in a single surface. ShadowPix can produce a variety of unusual effects depending on the embedded images: moving the light can animate or relight the object in the image, or three colored lights may be used to produce a single colored image. ShadowPix are easy to manufacture using a 3D printer, and we present photographs, videos, and renderings demonstrating these effects.
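
The self-shadowing computation itself is simple to state. Below is a toy sketch of the forward direction only (given a heightfield and a light direction, which pixels are shadowed?); the paper's contribution, constructing heightfields whose shadows form prescribed images, is the much harder inverse problem. The scan-line shadow test is a standard technique, and the example wall is invented.

```python
# Toy forward shadow test: light arrives from the left at elevation angle
# `theta` above the surface plane; a pixel is shadowed if a taller point to its
# left blocks the ray.
import numpy as np

def shadow_mask(height: np.ndarray, theta: float) -> np.ndarray:
    """Per row of a heightfield, mark pixels in shadow under a directional light."""
    rows, cols = height.shape
    slope = np.tan(theta)
    shadowed = np.zeros_like(height, dtype=bool)
    for i in range(rows):
        ray = -np.inf                        # height of the current grazing ray
        for j in range(cols):
            ray -= slope                     # ray descends as it travels right
            if height[i, j] < ray:
                shadowed[i, j] = True        # surface point lies below the ray
            else:
                ray = height[i, j]           # this point becomes the new blocker
    return shadowed

field = np.zeros((1, 8))
field[0, 2] = 1.0                            # a single wall casts a short shadow
print(shadow_mask(field, theta=np.radians(30)))  # only the pixel behind the wall
```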


ACM Transactions on Graphics | 2014

Facial performance enhancement using dynamic shape space analysis

Amit Bermano; Derek Bradley; Thabo Beeler; Fabio Zünd; Derek Nowrouzezahrai; Ilya Baran; Olga Sorkine-Hornung; Hanspeter Pfister; Robert W. Sumner; Bernd Bickel; Markus H. Gross

The facial performance of an individual is inherently rich in subtle deformation and timing details. Although these subtleties make the performance realistic and compelling, they often elude both motion capture and hand animation. We present a technique for adding fine-scale details and expressiveness to low-resolution art-directed facial performances, such as those created manually using a rig, via marker-based capture, by fitting a morphable model to a video, or through Kinect reconstruction using recent faceshift technology. We employ a high-resolution facial performance capture system to acquire a representative performance of an individual in which he or she explores the full range of facial expressiveness. From the captured data, our system extracts an expressiveness model that encodes subtle spatial and temporal deformation details specific to that particular individual. Once this model has been built, these details can be transferred to low-resolution art-directed performances. We demonstrate results on various forms of input; after our enhancement, the resulting animations exhibit the same nuances and fine spatial details as the captured performance, with optional temporal enhancement to match the dynamics of the actor. Finally, we show that our technique outperforms the current state-of-the-art in example-based facial animation.
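
One way to picture the transfer of captured detail onto a coarse performance is as a learned mapping from coarse pose to fine residual. The ridge-regression sketch below is a loose, hypothetical stand-in for the paper's dynamic shape space analysis; every array and dimension in it is invented for illustration.

```python
# Hypothetical detail-transfer sketch (not the paper's model): learn, from
# high-resolution capture, how fine-scale displacements correlate with a coarse
# pose, then predict detail for a new low-resolution performance frame.
import numpy as np

rng = np.random.default_rng(0)
n_frames, coarse_dim, detail_dim = 500, 30, 3000
C = rng.normal(size=(n_frames, coarse_dim))        # coarse pose per captured frame
D = C @ rng.normal(size=(coarse_dim, detail_dim))  # "captured" fine residuals

# Ridge regression: W minimizes ||C W - D||^2 + lam ||W||^2.
lam = 1e-3
W = np.linalg.solve(C.T @ C + lam * np.eye(coarse_dim), C.T @ D)

new_coarse = rng.normal(size=(1, coarse_dim))      # art-directed input frame
enhanced_detail = new_coarse @ W                   # predicted fine detail to add
```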


International Conference on Computer Graphics and Interactive Techniques | 2015

Detailed spatio-temporal reconstruction of eyelids

Amit Bermano; Thabo Beeler; Yeara Kozlov; Derek Bradley; Bernd Bickel; Markus H. Gross

In recent years we have seen numerous improvements on 3D scanning and tracking of human faces, greatly advancing the creation of digital doubles for film and video games. However, despite the high-resolution quality of the reconstruction approaches available, current methods are unable to capture one of the most important regions of the face - the eye region. In this work we present the first method for detailed spatio-temporal reconstruction of eyelids. Tracking and reconstructing eyelids is extremely challenging, as this region exhibits very complex and unique skin deformation where skin is folded under while opening the eye. Furthermore, eyelids are often only partially visible and obstructed due to self-occlusion and eyelashes. Our approach is to combine a geometric deformation model with image data, leveraging multi-view stereo, optical flow, contour tracking and wrinkle detection from local skin appearance. Our deformation model serves as a prior that enables reconstruction of eyelids even under strong self-occlusions caused by rolling and folding skin as the eye opens and closes. The output is a person-specific, time-varying eyelid reconstruction with anatomically plausible deformations. Our high-resolution detailed eyelids couple naturally with current facial performance capture approaches. As a result, our method can largely increase the fidelity of facial capture and the creation of digital doubles.
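
The role of the deformation model as a prior can be illustrated in miniature: even when most points are occluded, solving for a small number of model coefficients with a regularizer recovers a full, plausible shape. The least-squares toy below is an assumption-laden sketch, not the paper's energy; all data in it is synthetic.

```python
# Toy "deformation model as prior": recover a full shape from sparse visible
# constraints by fitting low-dimensional model coefficients with regularization.
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_modes = 200, 8
mean_shape = rng.normal(size=n_pts)                  # rest shape (1D toy)
basis = rng.normal(size=(n_pts, n_modes))            # learned deformation modes

visible = rng.choice(n_pts, size=20, replace=False)  # most points are occluded
true_w = rng.normal(size=n_modes)
observations = (mean_shape + basis @ true_w)[visible]

# Solve argmin_w ||B_v w - (obs - mean_v)||^2 + lam ||w||^2.
B_v = basis[visible]
lam = 1e-2
w = np.linalg.solve(B_v.T @ B_v + lam * np.eye(n_modes),
                    B_v.T @ (observations - mean_shape[visible]))
reconstruction = mean_shape + basis @ w              # full shape, occlusions filled
```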


ACM Transactions on Graphics | 2011

Online reconstruction of 3D objects from arbitrary cross-sections

Amit Bermano; Amir Vaxman; Craig Gotsman

We describe a simple algorithm to reconstruct the surface of smooth three-dimensional multilabeled objects from sampled planar cross-sections of arbitrary orientation. The algorithm has the unique ability to handle cross-sections in which regions are classified as being inside the object, outside the object, or unknown. This is achieved by constructing a scalar function on R³, whose zero set is the desired surface. The function is constructed independently inside every cell of the arrangement of the cross-section planes using transfinite interpolation techniques based on barycentric coordinates. These guarantee that the function is smooth, and its zero set interpolates the cross-sections. The algorithm is highly parallelizable and may be implemented as an incremental update as each new cross-section is introduced. This leads to an efficient online version, performed on a GPU, which is suitable for interactive medical applications.
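
A toy version of the central construction: assign signed values on the cross-sections (negative inside, positive outside, omitting "unknown" regions), interpolate them into a scalar field F, and extract the zero level set as the surface. In the sketch below, inverse-distance weighting stands in for the paper's barycentric transfinite interpolation, and scikit-image's `marching_cubes` replaces the incremental GPU extraction; the sphere data is invented.

```python
import numpy as np
from skimage.measure import marching_cubes

# Signed samples on two cross-section planes of a sphere of radius 0.5:
# -1 inside the object, +1 outside (illustrative data).
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(400, 3))
pts[:200, 2] = 0.0                               # samples on the plane z = 0
pts[200:, 0] = 0.0                               # samples on the plane x = 0
vals = np.where(np.linalg.norm(pts, axis=1) < 0.5, -1.0, 1.0)

# Evaluate F on a grid by inverse-distance weighting of the signed samples
# (a simple stand-in for barycentric transfinite interpolation).
n = 24
g = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
P = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
w = 1.0 / (((P[:, None, :] - pts[None, :, :]) ** 2).sum(-1) + 1e-9)
F = (w @ vals) / w.sum(axis=1)

# The reconstructed surface is the zero set of F.
verts, faces, _, _ = marching_cubes(F.reshape(n, n, n), level=0.0)
```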


Computer Graphics Forum | 2017

Makeup Lamps: Live Augmentation of Human Faces via Projection

Amit Bermano; Markus Billeter; Daisuke Iwai; Anselm Grundhöfer

We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower-dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low-latency augmentation for different performers and performances with varying facial expression and motion speed. In contrast to existing methods, the presented system is the first that fully supports dynamic facial projection mapping without requiring any physical tracking markers, and it incorporates facial expressions.
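
The prediction step can be illustrated with a single blendshape weight tracked by a constant-velocity Kalman filter and extrapolated forward by the projection latency. This is a minimal sketch of the general idea only: the paper's filtering is adaptive and operates in a reduced expression space, and every constant below (rates, noise covariances) is an invented assumption.

```python
# Toy latency compensation: track [weight, weight-velocity] with a Kalman
# filter, then extrapolate the state by the projection delay before rendering.
import numpy as np

dt, latency = 1.0 / 240.0, 3.0 / 240.0   # camera step and projection delay (s)
A = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # we only measure the weight itself
Q = 1e-4 * np.eye(2)                     # process noise (assumed)
R = np.array([[1e-3]])                   # measurement noise (assumed)

x, P = np.zeros(2), np.eye(2)

def step(z: float) -> float:
    """One predict/update cycle; returns the weight extrapolated to projection time."""
    global x, P
    x = A @ x                            # predict to the current frame
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x[0] + latency * x[1]         # extrapolate ahead of the latency

for z in np.sin(np.linspace(0.0, 1.0, 240)):  # a smoothly varying weight signal
    predicted = step(z)
```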


Computer Graphics Forum | 2017

State of the Art in Methods and Representations for Fabrication-Aware Design

Amit Bermano; Thomas A. Funkhouser; Szymon Rusinkiewicz

Computational manufacturing technologies such as 3D printing hold the potential for creating objects with previously undreamed‐of combinations of functionality and physical properties. Human designers, however, typically cannot exploit the full geometric (and often material) complexity of which these devices are capable. This state‐of‐the‐art report (STAR) examines recent systems developed by the computer graphics community in which designers specify higher‐level goals ranging from structural integrity and deformation to appearance and aesthetics, with the final detailed shape and manufacturing instructions emerging as the result of computation. It summarizes frameworks for interaction, simulation, and optimization, and documents the range of general objectives and domain‐specific goals that have been considered. An important unifying thread in this analysis is that different underlying geometric and physical representations are necessary for different tasks: we document over a dozen classes of representations that have been used for fabrication‐aware design in the literature. We analyze how these classes possess obvious advantages for some needs, but have also been used in creative ways to facilitate unexpected problem solutions.


International Conference on Computer Graphics and Interactive Techniques | 2016

Structure-oriented networks of shape collections

Noa Fish; Oliver van Kaick; Amit Bermano; Daniel Cohen-Or

We introduce a co-analysis technique designed for correspondence inference within large shape collections. Such collections are naturally rich in variation, adding ambiguity to the notoriously difficult problem of correspondence computation. We leverage the robustness of correspondences between similar shapes to address the difficulties associated with this problem. In our approach, pairs of similar shapes are extracted from the collection, analyzed and matched in an efficient and reliable manner, culminating in the construction of a network of correspondences that connects the entire collection. The correspondence between any pair of shapes then amounts to a simple propagation along the minimax path between the two shapes in the network. At the heart of our approach is the introduction of a robust, structure-oriented shape matching method. Leveraging the idea of projective analysis, we partition 2D projections of a shape to obtain a set of 1D ordered regions, which are both simple and efficient to match. We lift the matched projections back to the 3D domain to obtain a pairwise shape correspondence. The emphasis given to structural compatibility is a central tool in estimating the reliability and completeness of a computed correspondence, uncovering any non-negligible semantic discrepancies that may exist between shapes. These detected differences are a deciding factor in the establishment of a network aiming to capture local similarities. We demonstrate that the combination of the presented observations into a co-analysis method allows us to establish reliable correspondences among shapes within large collections.
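
The propagation rule named above, following the minimax path, is concrete enough to sketch: among all paths between two shapes in the correspondence network, pick the one whose worst edge is best. A small Dijkstra-style variant computes that bottleneck cost; the three-shape graph below is invented for illustration, with edge weights standing in for matching costs.

```python
# Minimax (bottleneck shortest) path: minimize the maximum edge cost on the path.
import heapq

def minimax_path_cost(graph, src, dst):
    """graph: {node: [(neighbor, cost), ...]}. Returns the smallest achievable
    maximum edge cost over all src -> dst paths."""
    best = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        bottleneck, u = heapq.heappop(heap)
        if u == dst:
            return bottleneck
        if bottleneck > best.get(u, float("inf")):
            continue                          # stale heap entry
        for v, cost in graph[u]:
            cand = max(bottleneck, cost)      # path cost = worst edge so far
            if cand < best.get(v, float("inf")):
                best[v] = cand
                heapq.heappush(heap, (cand, v))
    return float("inf")

shapes = {"A": [("B", 0.2), ("C", 0.9)],
          "B": [("A", 0.2), ("C", 0.3)],
          "C": [("A", 0.9), ("B", 0.3)]}
print(minimax_path_cost(shapes, "A", "C"))    # 0.3 via B, not 0.9 directly
```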


Journal of Cardiovascular Magnetic Resonance | 2015

Single breathhold, three-dimensional measurement of left atrial volume and function using sparse CINE CMR imaging with iterative reconstruction

Pierre Monney; Orestis Vardoulis; Davide Piccini; Amit Bermano; Amir Vaxman; Craig Gotsman; Janine Schwitter; Michael Zenge; Michaela Schmidt; Mariappan S. Nadar; Matthias Stuber; Nikolaos Stergiopulos; Juerg Schwitter

Methods: A highly accelerated prototype cine sequence with sparse sampling and Iterative Reconstruction (sCINE-IR) was used in phantoms and patients to acquire 5 cine slices (2 long-axis, LAX, and 3 short-axis, SAX) through the LA during a single breathhold, yielding a spatial/temporal resolution of 1.5 mm/30 ms (1.5T Aera, Siemens AG, Germany). The LA volumes were reconstructed from these 5 slices using a non-model-based method (Bermano A, ACM Trans Graph 2011). As a reference in patients, a self-navigated high-resolution whole-heart 3D dataset (3D-HR) was acquired during mid-diastole, from which the LA volume was segmented.

Phantom study: Five LA phantoms of known volume (water displacement method) and of different shapes, made of Solanum tuberosum L., were imaged with both 3D-HR and sCINE-IR in various slice orientations, and the calculated volumes were compared.

Patient study: Three patients were scanned with both 3D-HR and sCINE-IR. The volumes obtained with 3D-HR and with sCINE-IR during the corresponding mid-diastolic frame were compared using the Bland-Altman method and linear regression.
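
The agreement analysis named at the end is standard: a Bland-Altman comparison reports the mean difference (bias) between two measurement methods and its 95% limits of agreement. A minimal sketch, with invented volumes rather than the study's data:

```python
# Bland-Altman agreement statistics between two volume measurement methods.
import numpy as np

vol_3dhr = np.array([52.0, 61.0, 47.0, 70.0, 58.0])   # reference volumes (ml)
vol_scine = np.array([50.5, 63.0, 46.0, 72.5, 57.0])  # sCINE-IR volumes (ml)

diff = vol_scine - vol_3dhr
bias = diff.mean()                                     # systematic difference
loa = 1.96 * diff.std(ddof=1)                          # 95% limits of agreement
print(f"bias = {bias:+.2f} ml, 95% LoA = [{bias - loa:.2f}, {bias + loa:.2f}] ml")
```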


Computer Graphics Forum | 2018

Learning A Stroke-Based Representation for Fonts

Elena Balashova; Amit Bermano; Vladimir G. Kim; Stephen DiVerdi; Aaron Hertzmann; Thomas A. Funkhouser

Designing fonts and typefaces is a difficult process for both beginner and expert typographers. Existing workflows require the designer to create every glyph, while adhering to many loosely defined design suggestions to achieve an aesthetically appealing and coherent character set. This process can be significantly simplified by exploiting the structural similarity that character glyphs exhibit across different fonts and the shared stylistic elements within the same font. To capture these correlations, we propose learning a stroke‐based font representation from a collection of existing typefaces. To enable this, we develop a stroke‐based geometric model for glyphs and a fitting procedure to reparametrize arbitrary fonts to our representation. We demonstrate the effectiveness of our model through a manifold learning technique that estimates a low‐dimensional font space. Our representation captures a wide range of everyday fonts with topological variations and naturally handles discrete and continuous variations, such as the presence or absence of stylistic elements as well as slants and weights. We show that our learned representation can be used for iteratively improving fit quality, as well as for exploratory style applications such as completing a font from a subset of observed glyphs, interpolating between fonts, or adding and removing stylistic elements in existing fonts.
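
The manifold-learning step can be pictured as follows: once each font is fit to a fixed-length vector of stroke parameters, a linear embedding gives a low-dimensional font space in which fonts can be projected, interpolated, or completed. The PCA sketch below is a simplified, hypothetical stand-in (the paper's representation and learning are richer); all data and sizes are invented.

```python
# Toy font space via PCA over per-font stroke-parameter vectors.
import numpy as np

rng = np.random.default_rng(3)
n_fonts, n_params, n_dims = 120, 600, 10
fonts = rng.normal(size=(n_fonts, n_params))     # stroke parameters per font

mean = fonts.mean(axis=0)
U, S, Vt = np.linalg.svd(fonts - mean, full_matrices=False)
basis = Vt[:n_dims]                              # the learned low-dim font space

def embed(font_vec):                             # font -> low-dim coordinates
    return (font_vec - mean) @ basis.T

def decode(coords):                              # coordinates -> stroke params
    return mean + coords @ basis

# Interpolating two fonts in the low-dimensional space:
a, b = embed(fonts[0]), embed(fonts[1])
blend = decode(0.5 * (a + b))                    # a new, in-between font
```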

Collaboration


Dive into Amit Bermano's collaborations.

Top Co-Authors

Bernd Bickel

Institute of Science and Technology Austria

Craig Gotsman

Technion – Israel Institute of Technology
