Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Laura C. Trutoiu is active.

Publication


Featured research published by Laura C. Trutoiu.


International Conference on Computer Graphics and Interactive Techniques | 2015

Facial performance sensing head-mounted display

Hao Li; Laura C. Trutoiu; Kyle Olszewski; Lingyu Wei; Tristan Trutna; Pei-Lun Hsieh; Aaron Nicholls; Chongyang Ma

There are currently no solutions for enabling direct face-to-face interaction between virtual reality (VR) users wearing head-mounted displays (HMDs). The main challenge is that the headset obstructs a significant portion of a user's face, preventing effective facial capture with traditional techniques. To advance virtual reality as a next-generation communication platform, we develop a novel HMD that enables 3D facial performance-driven animation in real time. Our wearable system uses ultra-thin flexible electronic materials, mounted on the foam liner of the headset, to measure surface strain signals corresponding to upper-face expressions. These strain signals are combined with a head-mounted RGB-D camera to enhance tracking in the mouth region and to account for inaccurate HMD placement. To map the input signals to a 3D face model, we perform a single-instance offline training session for each person. For reusable and accurate online operation, we propose a short calibration step to readjust the Gaussian mixture distribution of the mapping before each use. The resulting animations are visually on par with cutting-edge depth-sensor-driven facial performance capture systems and are therefore suitable for social interactions in virtual worlds.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Quantitative measurement of motor symptoms in Parkinson's disease: A study with full-body motion capture data

Samarjit Das; Laura C. Trutoiu; Akihiko Murai; Dunbar Alcindor; Michael Oh; Fernando De la Torre; Jessica K. Hodgins

Recent advancements in the portability and affordability of optical motion capture systems have opened the doors to various clinical applications. In this paper, we look into the potential use of motion capture data for the quantitative analysis of motor symptoms in Parkinson's disease (PD). The standard of care, human observer-based assessment of motor symptoms, can be very subjective and is often inadequate for tracking mild symptoms. Motion capture systems, on the other hand, can potentially provide more objective and quantitative assessments. In this pilot study, we perform full-body motion capture of Parkinson's patients off medication, with their deep brain stimulators both on and off. Our experimental results indicate that quantitative measures of spatio-temporal statistics learned from the motion capture data reveal distinctive differences between mild and severe symptoms. We used a Support Vector Machine (SVM) classifier to discriminate mild from severe symptoms with an average accuracy of approximately 90%. Finally, we conclude that motion capture technology could potentially be an accurate, reliable, and effective tool for statistical data mining of motor symptoms related to PD. This would enable us to devise more effective ways to track the progression of neurodegenerative movement disorders.
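The pipeline described above, extracting spatio-temporal statistics from motion capture trajectories and classifying symptom severity, can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: a simple speed-threshold rule replaces the SVM, and the trajectories, threshold, and frame rate are invented for the example.

```python
from statistics import mean, stdev

def motion_features(trajectory, dt=1.0 / 120.0):
    """Spatio-temporal statistics of one joint trajectory sampled at
    1/dt Hz: mean and standard deviation of frame-to-frame speed."""
    speeds = [abs(b - a) / dt for a, b in zip(trajectory, trajectory[1:])]
    return mean(speeds), stdev(speeds)

def classify(trajectory, speed_threshold=0.5):
    """Label a trial 'severe' when mean speed falls below a threshold,
    reflecting bradykinesia (slowed movement). A simple stand-in for
    the SVM used in the paper."""
    mean_speed, _ = motion_features(trajectory)
    return "severe" if mean_speed < speed_threshold else "mild"

# Synthetic one-second trials at 120 Hz: a brisk motion vs. a slowed one.
mild_trial = [0.01 * i for i in range(120)]     # mean speed ~1.2 units/s
severe_trial = [0.002 * i for i in range(120)]  # mean speed ~0.24 units/s
```

In practice the feature vector would cover many joints and statistics, and the decision boundary would be learned from labeled trials rather than fixed by hand.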


ACM Transactions on Applied Perception | 2011

Modeling and animating eye blinks

Laura C. Trutoiu; Elizabeth J. Carter; Iain A. Matthews; Jessica K. Hodgins

Facial animation often falls short in conveying the nuances present in the facial dynamics of humans. In this article, we investigate the subtleties of the spatial and temporal aspects of eye blinks. Conventional methods for eye blink animation generally employ temporally and spatially symmetric sequences; however, naturally occurring blinks in humans show a pronounced asymmetry in both dimensions. We present an analysis of naturally occurring blinks, performed by tracking high-speed video with active appearance models. Based on this analysis, we generate a set of key-frame parameters that closely match naturally occurring blinks. We compare the perceived naturalness of blinks animated from real data to those created using textbook animation curves. The eye blinks are animated on two characters, a photorealistic model and a cartoon model, to determine the influence of character style. We find that the animated blinks generated from the human data model, with fully closing eyelids, are consistently perceived as more natural than those created using the various types of blink dynamics proposed in animation textbooks.
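The asymmetry described above, a fast closing phase followed by a slower opening phase, can be sketched as a piecewise eyelid-closure curve. This is an illustrative simplification, not the paper's key-frame model; the durations and the linear ramps are assumptions for the example.

```python
def blink_curve(t, close_dur=0.1, open_dur=0.25):
    """Eyelid closure (0 = fully open, 1 = fully closed) at time t
    seconds after blink onset. The closing phase is shorter than the
    opening phase, matching the asymmetry seen in recorded blinks.
    Piecewise-linear ramps keep the sketch simple; real blink
    profiles are smoother."""
    if t < 0.0:
        return 0.0
    if t < close_dur:
        return t / close_dur                     # fast close
    if t < close_dur + open_dur:
        return 1.0 - (t - close_dur) / open_dur  # slower reopen
    return 0.0
```

A symmetric textbook blink would use equal closing and opening durations; shrinking `close_dur` relative to `open_dur` reproduces the temporal asymmetry the study measured.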


Computers & Graphics | 2009

Circular, linear, and curvilinear vection in a large-screen virtual environment with floor projection

Laura C. Trutoiu; Betty J. Mohler; Jörg Schulte-Pelkum; Heinrich H. Bülthoff

Vection is defined as the compelling sensation of illusory self-motion elicited by a moving sensory, usually visual, stimulus. This paper presents collected introspective data on the experience of linear, circular, and curvilinear vection. We evaluate the differences between twelve different trajectories and the influence of the floor projection on the illusion of self-motion. All of the simulated self-motions examined are of a constant velocity, except for a brief simulated initial acceleration. First, we find that linear translations to the left and right are perceived as the least convincing, while linear downward motion is perceived as the most convincing of the linear trajectories. Second, we find that the floor projection significantly improves the introspective measures of linear vection experienced in a photorealistic three-dimensional town. Finally, we find that while linear forward vection is not perceived to be very convincing, curvilinear forward vection is reported to be as convincing as circular vection. Considering our experimental results, our suggestion for simulators and VE applications where vection is desirable is to increase the number of curvilinear trajectories (as opposed to linear ones) and, if possible, add floor projection to improve the illusory sense of self-motion.


Eurographics | 2009

Does Brief Exposure to a Self-avatar Affect Common Human Behaviors in Immersive Virtual Environments?

Stephan Streuber; Stephan de la Rosa; Laura C. Trutoiu; Heinrich H. Bülthoff; Betty J. Mohler

A plausible assumption is that self-avatars increase the realism of immersive virtual environments (VEs) because they provide the user with a visual representation of his or her own body. Consequently, having a self-avatar might lead to more realistic human behavior in VEs. To test this hypothesis, we compared human behavior in a VE, with and without knowledge of a self-avatar, to real human behavior in real space. This comparison was made for three tasks: a locomotion task (moving through the content of the VE), an object interaction task (interacting with the content of the VE), and a social interaction task (interacting with other social entities within the VE). Surprisingly, we did not find any effect of self-avatar exposure on these tasks. However, participants' VE and real-world behavior differed significantly. These results challenge the claim that knowledge about the self-avatar substantially influences natural human behavior in immersive VEs.


International Symposium on Wearable Computers | 2016

EyeContact: scleral coil eye tracking for virtual reality

Eric Whitmire; Laura C. Trutoiu; Robert Cavin; David Perek; Brian Scally; James O. Phillips; Shwetak N. Patel

Eye tracking is a technology of growing importance for mobile and wearable systems, particularly for newly emerging virtual and augmented reality (VR and AR) applications. Current eye tracking solutions for wearable AR and VR headsets rely on optical tracking and achieve a typical accuracy of 0.5° to 1°. We investigate a high temporal and spatial resolution eye tracking system based on magnetic tracking using scleral search coils. This technique has historically relied on large generator coils several meters in diameter or required a restraint for the user's head. We propose a wearable scleral search coil tracking system that allows the user to walk around and eliminates the need for a head restraint or room-sized coils. Our technique involves a unique placement of generator coils as well as a new calibration approach that accounts for the less uniform magnetic field created by the smaller coils. Using this technique, we can estimate the orientation of the eye with a mean calibrated accuracy of 0.094°.


ACM Transactions on Applied Perception | 2014

Spatial and Temporal Linearities in Posed and Spontaneous Smiles

Laura C. Trutoiu; Elizabeth J. Carter; Nancy S. Pollard; Jeffrey F. Cohn; Jessica K. Hodgins

Creating facial animations that convey an animator’s intent is a difficult task because animation techniques are necessarily an approximation of the subtle motion of the face. Some animation techniques may result in linearization of the motion of vertices in space (blendshapes, for example), and other, simpler techniques may result in linearization of the motion in time. In this article, we consider the problem of animating smiles and explore how these simplifications in space and time affect the perceived genuineness of smiles. We create realistic animations of spontaneous and posed smiles from high-resolution motion capture data for two computer-generated characters. The motion capture data is processed to linearize the spatial or temporal properties of the original animation. Through perceptual experiments, we evaluate the genuineness of the resulting smiles. Both space and time impact the perceived genuineness. We also investigate the effect of head motion in the perception of smiles and show similar results for the impact of linearization on animations with and without head motion. Our results indicate that spontaneous smiles are more heavily affected by linearizing the spatial and temporal properties than posed smiles. Moreover, the spontaneous smiles were more affected by temporal linearization than spatial linearization. Our results are in accordance with previous research on linearities in facial animation and allow us to conclude that a model of smiles must include a nonlinear model of velocities.
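Temporal linearization, one of the two simplifications studied above, can be sketched as replacing the captured timing of a motion channel with constant velocity between its endpoints. The trajectory values here are invented for illustration; the paper operates on high-resolution motion capture data.

```python
def linearize_time(times, values):
    """Temporal linearization: keep the start and end poses of a motion
    channel but replace the captured timing with constant velocity."""
    t0, t1 = times[0], times[-1]
    v0, v1 = values[0], values[-1]
    return [v0 + (v1 - v0) * (t - t0) / (t1 - t0) for t in times]

# A captured lip-corner coordinate with ease-in/ease-out timing.
times = [0.0, 0.25, 0.5, 0.75, 1.0]
captured = [0.0, 0.1, 0.5, 0.9, 1.0]
linearized = linearize_time(times, captured)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

The linearized channel passes through the same endpoints but discards the acceleration profile, which is exactly the information the perceptual experiments found viewers are sensitive to, especially for spontaneous smiles.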


IEEE-RAS International Conference on Humanoid Robots | 2009

Effect of foot shape on locomotion of active biped robots

Katsu Yamane; Laura C. Trutoiu

This paper investigates the effect of foot shape on biped locomotion. In particular, we consider planar biped robots whose feet are composed of curved surfaces at the toe and heel with a flat section between them. We developed an algorithm that can optimize the gait pattern for a given foot shape, walk speed, and step length. The optimization is formulated based on the rigid-body and collision dynamics of the robot model and tries to minimize the ankle torque. We also divide a step into two phases at collision events and optimize each phase separately with appropriate boundary conditions. Numerical experiments using walk parameters from human motion capture data suggest that a curved toe and heel would be a way to realize locomotion at speeds comparable to those of humans.


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

The temporal connection between smiles and blinks

Laura C. Trutoiu; Jessica K. Hodgins; Jeffrey F. Cohn

In this paper, we present evidence for a temporal relationship between eye blinks and smile dynamics (smile onset and offset). Smiles and blinks occur with high frequency during social interaction, yet little is known about their temporal integration. To explore the temporal relationship between them, we used an Active Appearance Models algorithm to detect eye blinks in video sequences that contained manually FACS-coded spontaneous smiles (AU 12). We then computed the temporal distance between blinks and smile onsets and offsets. Our data shows that eye blinks are correlated with the end of the smile and occur close to the offset, but before the lip corners stop moving downwards. Furthermore, a marginally significant effect suggests that eye blinks are suppressed (less frequent) before smile onset. For computer-generated characters, this model of the timing of blinks relative to smiles may be useful in creating compelling facial animations.
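Computing the temporal distance between blinks and smile offsets, as described above, can be sketched as a nearest-event search. The event times below are invented for illustration; the paper derives them from AAM-detected blinks and FACS-coded smiles.

```python
def nearest_offset_distance(blink_times, offset_times):
    """For each blink, the signed distance in seconds to the nearest
    smile offset; negative values mean the blink precedes the offset."""
    return [min((b - o for o in offset_times), key=abs)
            for b in blink_times]

# Hypothetical event times (seconds) for one video sequence.
blinks = [1.2, 4.9, 5.3]
smile_offsets = [5.0]
distances = nearest_offset_distance(blinks, smile_offsets)
# approximately [-3.8, -0.1, 0.3]
```

Aggregating such distances across many sequences is what reveals the clustering of blinks near smile offsets reported in the paper.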


ACM Transactions on Applied Perception | 2010

Perceptually motivated guidelines for voice synchronization in film

Elizabeth J. Carter; Lavanya Sharan; Laura C. Trutoiu; Iain A. Matthews; Jessica K. Hodgins

We consume video content in a multitude of ways, including in movie theaters, on television, on DVDs and Blu-rays, online, on smart phones, and on portable media players. For quality control purposes, it is important to have a uniform viewing experience across these various platforms. In this work, we focus on voice synchronization, an aspect of video quality that is strongly affected by current post-production and transmission practices. We examined the synchronization of an actor's voice and lip movements in two distinct scenarios. First, we simulated the temporal mismatch between the audio and video tracks that can occur during dubbing or broadcast. Next, we recreated the pitch changes that result from conversions between formats with different frame rates. We show, for the first time, that these audio-visual mismatches affect viewer enjoyment. When temporal synchronization is noticeably absent, there is a decrease in the perceived performance quality and the perceived emotional intensity of a performance. For pitch changes, we find that higher-pitched voices are not preferred, especially for male actors. Based on our findings, we conclude that mismatched audio and video signals negatively affect viewer experience.
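The frame-rate pitch change studied above has a simple closed form: playing material shot at one frame rate back at another scales all audio frequencies by the ratio of the rates. The sketch below computes that shift in semitones; the 24-to-25 fps example is the standard PAL speed-up, not a condition taken from the paper.

```python
from math import log2

def pitch_shift_semitones(src_fps, dst_fps):
    """Pitch change, in semitones, when material shot at src_fps is
    played back at dst_fps without pitch correction."""
    return 12 * log2(dst_fps / src_fps)

# 24 fps film broadcast at 25 fps (PAL speed-up): audio plays ~4.2%
# fast, raising the pitch by roughly 0.71 semitones.
shift = pitch_shift_semitones(24, 25)
```

Broadcasters can cancel this shift with a pitch-correcting time-scale step; leaving it uncorrected is what produces the raised-pitch voices the study evaluated.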

Collaboration


An overview of Laura C. Trutoiu's collaborations.

Top Co-Authors


Nancy S. Pollard

Carnegie Mellon University
