Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Elizabeth J. Carter is active.

Publication


Featured research published by Elizabeth J. Carter.


Psychological Science | 2009

Action Understanding in the Superior Temporal Sulcus Region

Brent C. Vander Wyk; Caitlin M. Hudac; Elizabeth J. Carter; David M. Sobel; Kevin A. Pelphrey

The posterior superior temporal sulcus (STS) region plays an important role in the perception of social acts, although its full role has not been completely clarified. This functional magnetic resonance imaging experiment examined activity in the STS region as participants viewed actions that were congruent or incongruent with intentions established by a previous emotional context. Participants viewed an actress express either a positive or a negative emotion toward one of two objects and then pick up one of them. If the object that was picked up had received positive regard, or if the object that was not picked up had received negative regard, the action was congruent; otherwise, the action was incongruent. Activity in the right posterior STS region was sensitive to the congruency between the action and the actress's emotional expression (i.e., STS activity was greater on incongruent than on congruent trials). These findings suggest that the posterior STS represents not only biological motion, but also how another person's motion is related to his or her intentions.
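
The trial-labeling rule described in this abstract is compact enough to state directly in code. The Python sketch below is illustrative only; the function name and the encoding of trials are hypothetical, not materials from the study.

```python
# Minimal sketch (hypothetical helper, not from the paper) of the trial-labeling
# rule: an action is congruent if the picked-up object received positive regard,
# or if the object left behind received negative regard.

def label_trial(regarded_object: str, regard: str, picked_up: str) -> str:
    """Label a trial as 'congruent' or 'incongruent'.

    regarded_object: which of the two objects ('A' or 'B') received the emotion
    regard:          'positive' or 'negative'
    picked_up:       which object ('A' or 'B') the actress then picked up
    """
    if regard == "positive":
        return "congruent" if picked_up == regarded_object else "incongruent"
    # Negative regard: picking up the *other* object is the congruent action.
    return "congruent" if picked_up != regarded_object else "incongruent"

# Negative regard toward object A, then picking up B -> congruent.
assert label_trial("A", "negative", "B") == "congruent"
assert label_trial("A", "positive", "B") == "incongruent"
```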


Development and Psychopathology | 2008

Charting the typical and atypical development of the social brain

Kevin A. Pelphrey; Elizabeth J. Carter

We describe recent progress in our program of research that aims to use functional magnetic resonance imaging (fMRI) to identify and delineate the brain systems involved in social perception and to chart the development of those systems and their roles as mechanisms supporting the development of social cognition in children, adolescents, and adults with and without autism. This research program was initiated with the intention of further specifying the role of the posterior superior temporal sulcus (STS) region in the network of neuroanatomical structures comprising the social brain. Initially, this work focused on evaluating STS function when typically developing adults were engaged in the visual analysis of other people's actions and intentions. We concluded that the STS region plays an important role in social perception via its involvement in representing and predicting the actions and social intentions of other people from an analysis of biological-motion cues. These studies of typically developing people provided a set of core findings and a methodological approach that informed a set of fMRI studies of social perception dysfunction in autism. The work has established that dysfunction in the STS region, as well as reduced connectivity between this region and other social brain structures including the fusiform gyrus and amygdala, plays a role in the pathophysiology of social perception deficits in autism. Most recently, this research program has incorporated a developmental perspective in beginning to chart the development of the STS region in children with and without autism.


international conference on computer graphics and interactive techniques | 2013

Style and abstraction in portrait sketching

Itamar Berger; Ariel Shamir; Moshe Mahler; Elizabeth J. Carter; Jessica K. Hodgins

We use a data-driven approach to study both style and abstraction in the sketching of a human face. We gather and analyze data from a number of artists as they sketch a human face from a reference photograph. To achieve different levels of abstraction in the sketches, decreasing time limits were imposed, from four and a half minutes down to fifteen seconds. We analyze the data at two levels, strokes and geometric shape, and at each level create a model that captures both the style of the different artists and the process of abstraction. These models are then used for a portrait sketch synthesis application. Starting from a novel face photograph, we can synthesize a sketch in the various artistic styles and at different levels of abstraction.
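
As a rough illustration of abstraction through shrinking time limits, the hypothetical sketch below keeps only the strokes an artist could finish within a given budget. The Stroke type and the draw-order heuristic are assumptions for this sketch, not the stroke and shape models learned in the paper.

```python
# Hypothetical illustration of abstraction via time limits: given strokes in
# the order an artist drew them, keep only those that fit within a budget.
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list      # (x, y) polyline traced by the artist
    duration: float   # seconds the artist spent drawing this stroke

def abstract_sketch(strokes: list, time_budget: float) -> list:
    """Return the prefix of strokes an artist could finish within time_budget."""
    kept, elapsed = [], 0.0
    for s in strokes:
        if elapsed + s.duration > time_budget:
            break
        kept.append(s)
        elapsed += s.duration
    return kept

full = [Stroke([(0, 0), (1, 1)], 10.0), Stroke([(1, 0), (0, 1)], 8.0)]
print(len(abstract_sketch(full, 270.0)))  # 4.5-minute budget: both strokes
print(len(abstract_sketch(full, 15.0)))   # 15-second budget: only the first
```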


NeuroImage | 2011

Exploring the neural correlates of goal-directed action and intention understanding

Elizabeth J. Carter; Jessica K. Hodgins; David H. Rakison

Because we are a cooperative species, understanding the goals and intentions of others is critical for human survival. In this fMRI study, participants viewed reaching behaviors in which one of four animated characters moved a hand towards one of two objects and either (a) picked up the object, (b) missed the object, or (c) changed his path halfway to lift the other object. The characters included a human, a humanoid robot, stacked boxes with an arm, and a mechanical claw. The first three moved in an identical, human-like biological pattern. Right posterior superior temporal sulcus (pSTS) activity increased when the human or humanoid robot shifted goals or missed the target relative to obtaining the original goal. This suggests that the pSTS was engaged differentially for figures that appeared more human-like rather than for all human-like motion. Medial frontal areas that are part of a protagonist-monitoring network with the right pSTS (e.g., Mason and Just, 2006) were most engaged for the human character, followed by the robot character. The current data suggest that goal-directed action and intention understanding require this network, which is used similarly for the two processes. Moreover, the network is modulated by character identity rather than only the presence of biological motion. We discuss the implications for behavioral theories of goal-directed action and intention understanding.


tests and proofs | 2011

Modeling and animating eye blinks

Laura C. Trutoiu; Elizabeth J. Carter; Iain A. Matthews; Jessica K. Hodgins

Facial animation often falls short in conveying the nuances present in the facial dynamics of humans. In this article, we investigate the subtleties of the spatial and temporal aspects of eye blinks. Conventional methods for eye blink animation generally employ temporally and spatially symmetric sequences; however, naturally occurring blinks in humans show a pronounced asymmetry on both dimensions. We present an analysis of naturally occurring blinks that was performed by tracking data from high-speed video using active appearance models. Based on this analysis, we generate a set of key-frame parameters that closely match naturally occurring blinks. We compare the perceived naturalness of blinks that are animated based on real data to those created using textbook animation curves. The eye blinks are animated on two characters, a photorealistic model and a cartoon model, to determine the influence of character style. We find that the animated blinks generated from the human data model with fully closing eyelids are consistently perceived as more natural than those created using the various types of blink dynamics proposed in animation textbooks.
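
A minimal sketch of the asymmetry the article documents: eyelids close quickly and reopen more slowly, with the lids fully closing. The durations and easing below are illustrative assumptions, not the key-frame parameters fitted from high-speed video.

```python
# Hypothetical asymmetric blink curve: fast closing phase, slower opening
# phase. Durations and easing are illustrative stand-ins; the article fits
# key-frame parameters to blinks tracked with active appearance models.
import math

def eyelid_closure(t: float, close_dur: float = 0.08, open_dur: float = 0.20) -> float:
    """Fraction of eyelid closure (0 = open, 1 = fully closed) at time t (s)."""
    if t < 0.0:
        return 0.0
    if t < close_dur:                                # fast closing phase
        u = t / close_dur
        return 0.5 - 0.5 * math.cos(math.pi * u)     # ease from open to closed
    t -= close_dur
    if t < open_dur:                                 # slower opening phase
        u = t / open_dur
        return 0.5 + 0.5 * math.cos(math.pi * u)     # ease back to open
    return 0.0

# Sample key frames at 120 fps across the whole blink (~0.28 s total).
frames = [eyelid_closure(i / 120.0) for i in range(int(0.28 * 120) + 1)]
```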


human factors in computing systems | 2016

PaperID: A Technique for Drawing Functional Battery-Free Wireless Interfaces on Paper

Hanchuan Li; Eric Brockmeyer; Elizabeth J. Carter; Josh Fromm; Scott E. Hudson; Shwetak N. Patel; Alanson P. Sample

We describe techniques that allow inexpensive, ultra-thin, battery-free Radio Frequency Identification (RFID) tags to be turned into simple paper input devices. We use sensing and signal processing techniques that determine how a tag is being manipulated by the user via an RFID reader and show how tags may be enhanced with a simple set of conductive traces that can be printed on paper, stencil-traced, or even hand-drawn. These traces modify the behavior of contiguous tags to serve as input devices. Our techniques provide the capability to use off-the-shelf RFID tags to sense touch, cover, overlap of tags by conductive or dielectric (insulating) materials, and tag movement trajectories. Paper prototypes can be made functional in seconds. Due to the rapid deployability and low cost of the tags used, we can create a new class of interactive paper devices that are drawn on demand for simple tasks. These capabilities allow new interactive possibilities for pop-up books and other papercraft objects.
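
One plausible signal cue for such a system, shown here as a hypothetical sketch: a hand touching or covering a tag detunes its antenna, so the tag's read rate at the reader drops. The window size and threshold are illustrative assumptions rather than the paper's actual signal-processing pipeline.

```python
# Hypothetical touch/cover detector for one RFID tag: flag time windows where
# the tag's read count falls well below its uncovered baseline. Threshold and
# windowing are illustrative assumptions, not the paper's pipeline.

def detect_touch(read_counts: list, baseline: float, threshold: float = 0.5) -> list:
    """Flag windows where a tag's read count drops below a fraction of baseline.

    read_counts: reads observed per fixed time window for one tag
    baseline:    expected reads per window for the uncovered tag
    """
    return [count < threshold * baseline for count in read_counts]

# A tag normally answers ~20 times per window; the middle windows look covered.
print(detect_touch([21, 19, 4, 3, 20], baseline=20.0))
# -> [False, False, True, True, False]
```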


Social Neuroscience | 2008

Friend or foe? Brain systems involved in the perception of dynamic signals of menacing and friendly social approaches

Elizabeth J. Carter; Kevin A. Pelphrey

During every social approach, humans must assess each other's intentions. Facial expressions provide cues to assist in these assessments via associations with emotion, the likelihood of affiliation, and personality. In this functional magnetic resonance imaging (fMRI) study, participants viewed animated male characters approaching them in a hallway and making either a happy or an angry facial expression. An expected increase in amygdala and superior temporal sulcus activation to the expression of anger was found. Notably, two other social brain regions also had an increased hemodynamic response to anger relative to happiness, including the lateral fusiform gyrus and a region centered in the middle temporal gyrus. Other brain regions showed little differentiation or an increased level of activity to the happy stimuli. These findings provide insight into the brain mechanisms involved in reading the intentions of other human beings in an overtly social context. In particular, they demonstrate brain regions sensitive to social signals of dominance and affiliation.


PLOS ONE | 2012

Is He Being Bad? Social and Language Brain Networks during Social Judgment in Children with Autism

Elizabeth J. Carter; Diane L. Williams; Nancy J. Minshew; Jill Fain Lehman

Individuals with autism often violate social rules and have lower accuracy in identifying and explaining inappropriate social behavior. Twelve children with autism (AD) and thirteen children with typical development (TD) participated in this fMRI study of the neurofunctional basis of social judgment. Participants indicated in which of two pictures a boy was being bad (Social condition) or which of two pictures was outdoors (Physical condition). In the within-group Social–Physical comparison, TD children used components of mentalizing and language networks [bilateral inferior frontal gyrus (IFG), bilateral medial prefrontal cortex (mPFC), and bilateral posterior superior temporal sulcus (pSTS)], whereas AD children used a network that was primarily right IFG and bilateral pSTS, suggesting reduced use of social and language networks during this social judgment task. A direct group comparison on the Social–Physical contrast showed that the TD group had greater mPFC, bilateral IFG, and left superior temporal pole activity than the AD group. No regions were more active in the AD group than in the group with TD in this comparison. Both groups successfully performed the task, which required minimal language. The groups also performed similarly on eyetracking measures, indicating that the activation results probably reflect the use of a more basic strategy by the autism group rather than performance disparities. Even though language was unnecessary, the children with TD recruited language areas during the social task, suggesting automatic encoding of their knowledge into language; however, this was not the case for the children with autism. These findings support behavioral research indicating that, whereas children with autism may recognize socially inappropriate behavior, they have difficulty using spoken language to explain why it is inappropriate. The fMRI results indicate that AD children may not automatically use language to encode their social understanding, making expression and generalization of this knowledge more difficult.


workshop on applications of computer vision | 2014

Predicting movie ratings from audience behaviors

Rajitha Navarathna; Patrick Lucey; Peter Carr; Elizabeth J. Carter; Sridha Sridharan; Iain A. Matthews

We propose a method of representing audience behavior through facial and body motions from a single video stream, and use these features to predict the rating for feature-length movies. This is a very challenging problem as: i) the movie viewing environment is dark and contains views of people at different scales and viewpoints; ii) the duration of feature-length movies is long (80-120 mins) so tracking people uninterrupted for this length of time is still an unsolved problem; and iii) expressions and motions of audience members are subtle, short, and sparse, making labeling of activities unreliable. To circumvent these issues, we use an infrared illuminated test-bed to obtain a visually uniform input. We then utilize motion-history features which capture the subtle movements of a person within a pre-defined volume, and then form a group representation of the audience by a histogram of pair-wise correlations over a small window of time. Using this group representation, we learn our movie rating classifier from crowd-sourced ratings collected by rottentomatoes.com and show our prediction capability on audiences from 30 movies across 250 subjects (> 50 hrs).
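
The group representation lends itself to a short sketch: correlate each pair of audience members' motion signals within a window, then pool the correlations into a histogram. The NumPy code below is a minimal illustration under assumed array shapes and bin settings.

```python
# Hypothetical sketch of the group representation: per-person motion signals
# over a small time window are compared pairwise, and the correlations are
# pooled into a normalized histogram. Bin edges are illustrative assumptions.
import numpy as np

def group_representation(motion: np.ndarray, bins: int = 10) -> np.ndarray:
    """Histogram of pairwise correlations for one time window.

    motion: array of shape (n_people, n_frames), one motion-history
            signal per audience member within the window
    """
    n = motion.shape[0]
    corr = np.corrcoef(motion)                   # (n, n) correlation matrix
    pairs = corr[np.triu_indices(n, k=1)]        # unique person-pair entries
    hist, _ = np.histogram(pairs, bins=bins, range=(-1.0, 1.0))
    return hist / hist.sum()                     # normalized histogram

rng = np.random.default_rng(0)
window = rng.standard_normal((8, 120))           # 8 people, 120 frames
print(group_representation(window))
```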


international conference on computer graphics and interactive techniques | 2015

A perceptual control space for garment simulation

Leonid Sigal; Moshe Mahler; Spencer Diaz; Kyna McIntosh; Elizabeth J. Carter; Timothy Richards; Jessica K. Hodgins

We present a perceptual control space for simulation of cloth that works with any physical simulator, treating it as a black box. The perceptual control space provides intuitive, art-directable control over the simulation behavior based on a learned mapping from common descriptors for cloth (e.g., flowiness, softness) to the parameters of the simulation. To learn the mapping, we perform a series of perceptual experiments in which the simulation parameters are varied and participants rate the resulting cloth on each of the common terms using a scale. A multi-dimensional sub-space regression is performed on the results to build a perceptual generative model over the simulator parameters. We evaluate the perceptual control space by demonstrating that the generative model does in fact create simulated clothing that is rated by participants as having the expected properties. We also show that this perceptual control space generalizes to garments and motions not in the original experiments.
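
A minimal sketch of the learned-mapping idea, with a plain least-squares model standing in for the multi-dimensional sub-space regression used in the paper: descriptor ratings are regressed against simulator parameters, and the fit is then queried for parameters matching desired perceptual values. All numbers below are made up for illustration.

```python
# Hypothetical sketch: regress simulator parameters on perceptual descriptor
# ratings, then query the model with desired descriptor values. A linear
# least-squares fit stands in for the paper's sub-space regression.
import numpy as np

# Rows: rated examples. Columns of X: descriptor ratings (flowiness, softness).
# Columns of Y: simulator parameters (e.g., bend stiffness, density) that
# produced each rated example. Values are invented for illustration.
X = np.array([[0.9, 0.8], [0.2, 0.3], [0.7, 0.4], [0.3, 0.9]])
Y = np.array([[0.05, 0.10], [0.80, 0.60], [0.30, 0.50], [0.60, 0.20]])

# Fit Y ~ [X, 1] by least squares (bias column appended).
A = np.hstack([X, np.ones((X.shape[0], 1))])
W, *_ = np.linalg.lstsq(A, Y, rcond=None)

def params_for(flowiness: float, softness: float) -> np.ndarray:
    """Predict simulator parameters for the requested perceptual values."""
    return np.array([flowiness, softness, 1.0]) @ W

print(params_for(0.8, 0.6))   # parameters for a fairly flowy, soft garment
```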

Collaboration


Dive into Elizabeth J. Carter's collaborations.

Top Co-Authors

Jennifer Hyde, Carnegie Mellon University

Kevin A. Pelphrey, George Washington University

Aaron Steinfeld, Carnegie Mellon University

Laura C. Trutoiu, Carnegie Mellon University

Sara Kiesler, Carnegie Mellon University