
Publication


Featured research published by Douglas W. Cunningham.


Journal of Vision | 2008

The contribution of different facial regions to the recognition of conversational expressions

M. Nusseck; Douglas W. Cunningham; Christian Wallraven; Heinrich H. Bülthoff

The human face is an important and complex communication channel. Humans can, however, easily read in a face not only identity information but also facial expressions with high accuracy. Here, we present the results of four psychophysical experiments in which we systematically manipulated certain facial areas in video sequences of nine conversational expressions to investigate recognition performance and its dependency on the motions of different facial parts. The results help to demonstrate what information is perceptually necessary and sufficient to recognize the different facial expressions. Subsequent analyses of the facial movements and correlation with recognition performance show that, for some expressions, one individual facial region can represent the whole expression. In other cases, the interaction of more than one facial area is needed to clarify the expression. The full set of results is used to develop a systematic description of the roles of different facial parts in the visual perception of conversational facial expressions.


Psychological Science | 2001

Sensorimotor Adaptation to Violations of Temporal Contiguity

Douglas W. Cunningham; Vincent A. Billock; Brian H. Tsou

Most events are processed by a number of neural pathways. These pathways often differ considerably in processing speed. Thus, coherent perception requires some form of synchronization mechanism. Moreover, this mechanism must be flexible, because neural processing speed changes over the life of an organism. Here we provide behavioral evidence that humans can adapt to a new intersensory temporal relationship (which was artificially produced by delaying visual feedback). The conflict between these results and previous work that failed to find such improvements can be explained by considering the present results as a form of sensorimotor adaptation.


Journal of Vision | 2009

Dynamic information for the recognition of conversational expressions

Douglas W. Cunningham; Christian Wallraven

Communication is critical for normal, everyday life. During a conversation, information is conveyed in a number of ways, including through body, head, and facial changes. While much research has examined these latter forms of communication, the majority of it has focused on static representations of a few, supposedly universal expressions. Normal conversations, however, contain a very wide variety of expressions and are rarely, if ever, static. Here, we report several experiments that show that expressions that use head, eye, and internal facial motion are recognized more easily and accurately than static versions of those expressions. Moreover, we demonstrate conclusively that this dynamic advantage is due to information that is only available over time, and that the temporal integration window for this information is at least 100 ms long.


Journal of Vision | 2001

Driving in the Future: Temporal Visuomotor Adaptation and Generalization

Douglas W. Cunningham; A. Chatziastros; Markus von der Heyde; Heinrich H. Bülthoff

Rapid and accurate visuomotor coordination requires tight spatial and temporal sensorimotor synchronization. The introduction of a sensorimotor or intersensory misalignment (either spatial or temporal) impairs performance on most tasks. For more than a century, it has been known that a few minutes of exposure to a spatial misalignment can induce a recalibration of sensorimotor spatial relationships, a phenomenon that may be referred to as spatial visuomotor adaptation. Here, we use a high-fidelity driving simulator to demonstrate that the sensorimotor system can adapt to temporal misalignments on very complex tasks, a phenomenon that we refer to as temporal visuomotor adaptation. We demonstrate that adapting on a single street produces an adaptive state that generalizes to other streets. This shows that temporal visuomotor adaptation is not specific to a single visuomotor transformation, but generalizes across a class of transformations. Temporal visuomotor adaptation is strikingly parallel to spatial visuomotor adaptation, and has strong implications for the understanding of visuomotor coordination and intersensory integration.


ACM Transactions on Applied Perception | 2008

Evaluating the perceptual realism of animated facial expressions

Christian Wallraven; Martin Breidt; Douglas W. Cunningham; Heinrich H. Bülthoff

The human face is capable of producing an astonishing variety of expressions—expressions for which sometimes the smallest difference changes the perceived meaning considerably. Producing realistic-looking facial animations that are able to transmit this degree of complexity continues to be a challenging research topic in computer graphics. One important question that remains to be answered is: When are facial animations good enough? Here we present an integrated framework in which psychophysical experiments are used in a first step to systematically evaluate the perceptual quality of several different computer-generated animations with respect to real-world video sequences. The first experiment provides an evaluation of several animation techniques, exposing specific animation parameters that are important to achieve perceptual fidelity. In a second experiment, we then use these benchmarked animation techniques in the context of perceptual research in order to systematically investigate the spatiotemporal characteristics of expressions. A third and final experiment uses the quality measures that were developed in the first two experiments to examine the perceptual impact of changing facial features to improve the animation techniques. Using such an integrated approach, we are able to provide important insights into facial expressions for both the perceptual and computer graphics community.


PLOS ONE | 2012

The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

K. Kaulard; Douglas W. Cunningham; Heinrich H. Bülthoff; Christian Wallraven

The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.


Progress in Brain Research | 2006

Processing of identity and emotion in faces: a psychophysical, physiological and computational perspective

Adrian Schwaninger; Christian Wallraven; Douglas W. Cunningham; Sarah D. Chiller-Glaus

A deeper understanding of how the brain processes visual information can be obtained by comparing results from complementary fields such as psychophysics, physiology, and computer science. In this chapter, empirical findings are reviewed with regard to the proposed mechanisms and representations for processing identity and emotion in faces. Results from psychophysics clearly show that faces are processed by analyzing component information (eyes, nose, mouth, etc.) and their spatial relationship (configural information). Results from neuroscience indicate separate neural systems for recognition of identity and facial expression. Computer science offers a deeper understanding of the required algorithms and representations, and provides computational modeling of psychological and physiological accounts. An interdisciplinary approach taking these different perspectives into account provides a promising basis for better understanding and modeling of how the human brain processes visual information for recognition of identity and emotion in faces.


ACM Transactions on Applied Perception | 2005

Manipulating Video Sequences to Determine the Components of Conversational Facial Expressions

Douglas W. Cunningham; Mario Kleiner; Christian Wallraven; Heinrich H. Bülthoff

Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively “freeze” portions of a face in video recordings in order to determine the areas that are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motions is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations.


Computers & Graphics | 2009

Computational Aesthetics 2008: Categorizing art: Comparing humans and computers

Christian Wallraven; Roland W. Fleming; Douglas W. Cunningham; Jaume Rigau; Miquel Feixas; Mateu Sbert

The categorization of art (paintings, literature) into distinct styles such as Expressionism or Surrealism has had a profound influence on how art is presented, marketed, analyzed, and historicized. Here, we present results from human and computational experiments aimed at determining to what degree such categories can be explained by simple, low-level appearance information in the image. Following experimental methods from perceptual psychology on category formation, naive, non-expert participants were first asked to sort printouts of artworks from different art periods into categories. Converting these data into similarity data and running a multi-dimensional scaling (MDS) analysis, we found distinct categories that sometimes corresponded surprisingly well to canonical art periods. The result was cross-validated on two complementary sets of artworks with two different groups of participants, demonstrating the stability of art interpretation. The second focus of this paper was on determining how well computational algorithms could capture human performance, or could in general separate different art categories. Using several state-of-the-art algorithms from computer vision, we found that whereas low-level appearance information can give some clues about category membership, human grouping strategies also drew on much higher-level concepts.
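The sorting-to-MDS step mentioned in this abstract can be sketched in a few lines. The following is a minimal illustration, assuming that pairwise similarity is taken as the fraction of participants who placed two items in the same pile and that a classical (Torgerson) MDS is then applied; the paper's actual pipeline and parameters are not given here, and the helper names and toy data are invented for illustration.

```python
import numpy as np

def sorting_to_dissimilarity(sorts, n_items):
    """Convert participant sortings (each a list of pile labels, one per item)
    into a dissimilarity matrix: 1 minus the fraction of participants who
    placed the pair of items in the same pile."""
    co = np.zeros((n_items, n_items))
    for labels in sorts:
        labels = np.asarray(labels)
        # n_items x n_items boolean matrix: True where two items share a pile
        co += (labels[:, None] == labels[None, :]).astype(float)
    return 1.0 - co / len(sorts)

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed dissimilarity matrix D into k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]             # top-k components
    scale = np.sqrt(np.clip(vals[idx], 0, None)) # guard tiny negatives
    return vecs[:, idx] * scale

# Toy data: 3 participants each sort 4 artworks into piles (pile labels).
sorts = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
D = sorting_to_dissimilarity(sorts, 4)
X = classical_mds(D, k=2)  # 2-D coordinates, one row per artwork
```

Items that every participant groups together (artworks 2 and 3 above) end up at identical coordinates, so distinct clusters in the embedding correspond to categories the participants agree on.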


Book | 2011

Experimental Design: From User Studies to Psychophysics

Douglas W. Cunningham; Christian Wallraven

As computers proliferate and as the field of computer graphics matures, it has become increasingly important for computer scientists to understand how users perceive and interpret computer graphics. Experimental Design: From User Studies to Psychophysics is an accessible introduction to psychological experiments and experimental design, covering the major components in the design, execution, and analysis of perceptual studies. The book begins with an introduction to the concepts central to designing and understanding experiments, including developing a research question, setting conditions and controls, and balancing specificity with generality. The book then explores in detail a number of types of experimental tasks: free description, rating scales, forced-choice, specialized multiple choice, and real-world tasks as well as physiological studies. It discusses the advantages and disadvantages of each type and provides examples of that type of experiment from the authors' own work. The book also covers stimulus-related issues, including popular stimulus resources. It concludes with a thorough examination of statistical techniques for analyzing results, including methods specific to individual tasks.

Collaboration

Top Co-Authors

Susana Castillo (Brandenburg University of Technology)