
Publication


Featured research published by Christian Wallraven.


Lecture Notes in Computer Science | 2000

Biologically Motivated Computer Vision: Second International Workshop

H.H. Bülthoff; S.-W. Lee; T.A. Poggio; Christian Wallraven

Recent experimental work has shown that the primate visual system can analyze complex natural scenes in only 100-150 ms. Such data, when combined with anatomical and physiological knowledge, seriously constrain current models of visual processing. In particular, they suggest that much of this processing can be achieved in a single feed-forward pass through the visual system, and that each processing layer probably has no more than around 10 ms before the next stage has to respond. In this time, few neurons will have generated more than one spike, ruling out most conventional rate-coding models. We have been exploring the idea that strongly activated neurons tend to fire early, so that information can be encoded in the order in which a population of cells fires. These ideas have been tested using SpikeNet, a computer program that simulates the activity of very large networks of asynchronously firing neurons. The results have been extremely promising, and we have been able to develop artificial visual systems capable of processing complex natural scenes in real time using standard computer hardware (see http://www.spikenet-technology.com).
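
The rank-order coding idea is compact enough to sketch in code. The following is a minimal illustration of the principle, not SpikeNet itself: each neuron fires at most once, the most strongly driven neurons fire first, and a detector reads out the stimulus from the firing order, attenuating each successive spike. The decay factor and the tuning scheme are illustrative assumptions.

```python
import numpy as np

def spike_order(activations):
    """Indices sorted by activation: the most strongly driven neuron fires first."""
    return np.argsort(-activations)

def rank_order_response(order, weights, decay=0.9):
    """Read out a spike order: each successive spike is attenuated by `decay`,
    so the earliest (strongest) inputs dominate the response."""
    return sum(decay ** rank * weights[neuron] for rank, neuron in enumerate(order))

rng = np.random.default_rng(0)
acts = rng.random(8)            # one feed-forward pass, at most one spike per neuron
order = spike_order(acts)

# A detector tuned to this stimulus weights each neuron by its expected rank:
tuned = np.zeros(8)
tuned[order] = 0.9 ** np.arange(8)

print(rank_order_response(order, tuned))                        # maximal response
print(rank_order_response(spike_order(rng.random(8)), tuned))   # weaker for a different stimulus
```

By the rearrangement inequality, the tuned detector responds maximally exactly when the incoming spike order matches its preferred order, which is what lets a single wave of first spikes identify a stimulus.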


Journal of Vision | 2008

The contribution of different facial regions to the recognition of conversational expressions

M. Nusseck; Douglas W. Cunningham; Christian Wallraven; H.H. Bülthoff

The human face is an important and complex communication channel. Humans can easily read from a face not only identity information but also facial expressions, with high accuracy. Here, we present the results of four psychophysical experiments in which we systematically manipulated certain facial areas in video sequences of nine conversational expressions to investigate recognition performance and its dependency on the motion of different facial parts. The results help to demonstrate what information is perceptually necessary and sufficient to recognize the different facial expressions. Subsequent analyses of the facial movements and their correlation with recognition performance show that, for some expressions, a single facial region can represent the whole expression. In other cases, the interaction of more than one facial area is needed to clarify the expression. The full set of results is used to develop a systematic description of the roles of different facial parts in the visual perception of conversational facial expressions.
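
The region-wise correlation analysis mentioned above can be sketched as follows. The facial regions, motion-energy values, and recognition rates below are hypothetical placeholders, not the study's data; the point is only the shape of the analysis: one correlation per region between its motion and recognition accuracy across expressions.

```python
import numpy as np

rng = np.random.default_rng(1)
regions = ["eyes", "eyebrows", "mouth", "rigid head motion"]
n_expressions = 9                                    # nine conversational expressions

motion = rng.random((len(regions), n_expressions))   # motion energy per region/expression
accuracy = rng.random(n_expressions)                 # recognition rate per expression

for name, m in zip(regions, motion):
    r = np.corrcoef(m, accuracy)[0, 1]               # Pearson correlation per region
    print(f"{name:>18}: r = {r:+.2f}")
```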


Current Biology | 2009

Humans and Macaques Employ Similar Face-Processing Strategies

Christoph D. Dahl; Christian Wallraven; H.H. Bülthoff; N.K. Logothetis

Primates have developed the ability to recognize and individuate conspecifics by their faces. Despite numerous electrophysiological studies in monkeys, little is known about the face-processing strategies that monkeys employ. In contrast, face perception in humans has been the subject of many studies providing evidence for specific face processing that evolves with perceptual expertise. Importantly, humans process faces holistically, defined here as the processing of faces as wholes rather than as collections of independent features (part-based processing). The question remains to what extent humans and monkeys share these face-processing mechanisms. By using the same experimental design and stimuli in both monkey and human behavioral experiments, we show that face processing is influenced by the species affiliation of the observed face stimulus (human versus macaque face). Furthermore, stimulus manipulations that selectively reduced holistic and part-based information systematically altered eye-scanning patterns in similar ways for human and macaque observers. These results demonstrate the similar nature of face perception in humans and monkeys and pin down the effects of expert versus novice face-processing strategies. These findings therefore directly contribute to one of the central discussions in the behavioral and neurosciences: how faces are perceived in primates.


Journal of Vision | 2009

Dynamic information for the recognition of conversational expressions

Douglas W. Cunningham; Christian Wallraven

Communication is critical for normal, everyday life. During a conversation, information is conveyed in a number of ways, including through body, head, and facial changes. While much research has examined these forms of communication, the majority of it has focused on static representations of a few, supposedly universal expressions. Normal conversations, however, contain a very wide variety of expressions and are rarely, if ever, static. Here, we report several experiments showing that expressions that use head, eye, and internal facial motion are recognized more easily and accurately than static versions of those expressions. Moreover, we demonstrate conclusively that this dynamic advantage is due to information that is only available over time, and that the temporal integration window for this information is at least 100 ms long.


Neuropsychologia | 2007

Multimodal Similarity and Categorization of Novel, Three-Dimensional Objects

Theresa Cooke; Frank Jäkel; Christian Wallraven; H.H. Bülthoff

Similarity has been proposed as a fundamental principle underlying mental object representations and capable of supporting cognitive-level tasks such as categorization. However, much of the research has considered connections between similarity and categorization for tasks performed using a single perceptual modality. Considering similarity and categorization within a multimodal context opens up a number of important questions: Are the similarities between objects the same when they are perceived using different modalities or using more than one modality at a time? Is similarity still able to explain categorization performance when objects are experienced multimodally? In this study, we addressed these questions by having subjects explore novel, 3D objects which varied parametrically in shape and texture using vision alone, touch alone, or touch and vision together. Subjects then performed a pair-wise similarity rating task and a free sorting categorization task. Multidimensional scaling (MDS) analysis of similarity data revealed that a single underlying perceptual map whose dimensions corresponded to shape and texture could explain visual, haptic, and bimodal similarity ratings. However, the relative dimension weights varied according to modality: shape dominated texture when objects were seen, whereas shape and texture were roughly equally important in the haptic and bimodal conditions. Some evidence was found for a multimodal connection between similarity and categorization: the probability of category membership increased with similarity while the probability of a category boundary being placed between two stimuli decreased with similarity. In addition, dimension weights varied according to modality in the same way for both tasks. The study also demonstrates the usefulness of 3D printing technology and MDS techniques in the study of visuohaptic object processing.
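
A minimal sketch of the MDS step described above, assuming synthetic similarity ratings in place of the subjects' data: pairwise dissimilarities are fed to metric MDS, which recovers a low-dimensional perceptual map whose axes can then be compared with the known shape and texture parameters. The grid size, dimension weights, and noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)

# Hypothetical 5x5 stimulus grid spanning two latent dimensions (shape, texture).
shape, texture = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
coords = np.column_stack([shape.ravel(), texture.ravel()])

# Simulated dissimilarity ratings: distance in the latent space, with shape
# weighted more heavily (mimicking the visual condition), plus rating noise.
w = np.array([2.0, 1.0])                             # hypothetical dimension weights
d = np.sqrt((((coords[:, None] - coords[None, :]) * w) ** 2).sum(-1))
d += rng.normal(0.0, 0.05, d.shape)
d = (d + d.T) / 2                                    # ratings are symmetric on average
np.fill_diagonal(d, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
perceptual_map = mds.fit_transform(d)                # recovered 2-D coordinates
print(perceptual_map.shape, round(mds.stress_, 3))
```

Fitting separate weights per modality to a shared map of this kind is how the study could compare how strongly shape versus texture contributed under visual, haptic, and bimodal exploration.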


PLOS ONE | 2014

Decreased Peripheral and Central Responses to Acupuncture Stimulation following Modification of Body Ownership

Younbyoung Chae; In Seon Lee; Won Mo Jung; Dong Seon Chang; Vitaly Napadow; Hyejung Lee; Hi Joon Park; Christian Wallraven

Acupuncture stimulation increases local blood flow around the site of stimulation and induces signal changes in brain regions related to the body matrix. The rubber hand illusion (RHI) is an experimental paradigm that manipulates important aspects of bodily self-awareness. The present study aimed to investigate how modifications of body ownership using the RHI affect local blood flow and cerebral responses during acupuncture needle stimulation. During the RHI, acupuncture needle stimulation was applied to the real left hand while measuring blood microcirculation with a laser Doppler imager (Experiment 1, N = 28) and concurrent brain signal changes using functional magnetic resonance imaging (fMRI; Experiment 2, N = 17). When the body ownership of participants was altered by the RHI, acupuncture stimulation resulted in a significantly lower increase in local blood flow (Experiment 1), and significantly less brain activation was detected in the right insula (Experiment 2). This study found changes in both local blood flow and brain responses during acupuncture needle stimulation following modification of body ownership. These findings suggest that physiological responses during acupuncture stimulation can be influenced by the modification of body ownership.


International Conference on Computer Vision | 2011

Going into depth: Evaluating 2D and 3D cues for object classification on a new, large-scale object dataset

Björn Browatzki; Jan Fischer; Birgit Graf; Hh Bülthoff; Christian Wallraven

Categorization of objects based solely on shape and appearance is still a largely unresolved issue. With the advent of new sensor technologies, such as consumer-level range sensors, new possibilities for shape processing have become available to a wide range of application domains. In the first part of this paper, we introduce a novel, large dataset containing 18 categories of objects found in typical household and office environments; we envision this dataset to be useful in many applications ranging from robotics to computer vision. The second part of the paper presents computational experiments on object categorization with classifiers exploiting both two-dimensional and three-dimensional information. We evaluate categorization performance for both modalities in separate and combined representations and demonstrate the advantages of using range data for object categorization and shape processing.
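
The evaluation strategy, comparing 2D, 3D, and combined representations, can be sketched as below. The features, labels, and classifier choice are placeholder assumptions; the paper used descriptors computed from images and range data rather than synthetic vectors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_samples, n_classes = 540, 18                # 18 household/office categories
labels = rng.integers(0, n_classes, n_samples)

# Synthetic stand-ins: class-dependent means make the features separable to a degree.
feats_2d = rng.normal(labels[:, None] * 0.10, 1.0, (n_samples, 64))   # appearance cues
feats_3d = rng.normal(labels[:, None] * 0.15, 1.0, (n_samples, 32))   # depth/shape cues

# Train and score each representation separately, then their concatenation.
for name, X in [("2D", feats_2d), ("3D", feats_3d),
                ("2D+3D", np.hstack([feats_2d, feats_3d]))]:
    acc = cross_val_score(SVC(), X, labels, cv=5).mean()
    print(f"{name:>5}: {acc:.2f}")
```

Concatenation is the simplest fusion scheme; comparing its cross-validated accuracy against the single-modality scores is the basic test of whether depth adds information beyond appearance.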


ACM Transactions on Applied Perception | 2008

Evaluating the perceptual realism of animated facial expressions

Christian Wallraven; Martin Breidt; Douglas W. Cunningham; H.H. Bülthoff

The human face is capable of producing an astonishing variety of expressions—expressions for which sometimes the smallest difference changes the perceived meaning considerably. Producing realistic-looking facial animations that are able to transmit this degree of complexity continues to be a challenging research topic in computer graphics. One important question that remains to be answered is: When are facial animations good enough? Here we present an integrated framework in which psychophysical experiments are used in a first step to systematically evaluate the perceptual quality of several different computer-generated animations with respect to real-world video sequences. The first experiment provides an evaluation of several animation techniques, exposing specific animation parameters that are important to achieve perceptual fidelity. In a second experiment, we then use these benchmarked animation techniques in the context of perceptual research in order to systematically investigate the spatiotemporal characteristics of expressions. A third and final experiment uses the quality measures that were developed in the first two experiments to examine the perceptual impact of changing facial features to improve the animation techniques. Using such an integrated approach, we are able to provide important insights into facial expressions for both the perceptual and computer graphics community.


Frontiers in Human Neuroscience | 2014

Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?

Janina Esins; J. Schultz; Christian Wallraven; I. Bülthoff

Congenital prosopagnosia (CP), an innate impairment in recognizing faces, and the other-race effect (ORE), a disadvantage in recognizing faces of other races, both affect face recognition abilities. Are the same face-processing mechanisms affected in both cases? To investigate this question, we tested three groups of 21 participants each: German congenital prosopagnosics, South Korean participants, and German controls, on three different tasks involving faces and objects. First, we tested all participants on the Cambridge Face Memory Test, in which they had to recognize Caucasian target faces in a 3-alternative forced-choice task. German controls performed better than Koreans, who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than either other group. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces, and recognize them among distractors of the same category. Here, prosopagnosics performed worse than the other two groups only when tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways on all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than Korean participants do. Importantly, our results suggest that different processing impairments underlie the ORE and CP.


PLOS ONE | 2012

The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

K. Kaulard; Douglas W. Cunningham; H.H. Bülthoff; Christian Wallraven

The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, and from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different fields (including the perceptual and cognitive sciences, but also affective computing and computer vision) to investigate the processing of a wider range of natural facial expressions.
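
The database's factorial structure can be made concrete with a short enumeration. The file-naming scheme below is purely hypothetical, not the MPI database's actual layout; only the factor counts come from the description above.

```python
from itertools import product

N_EXPRESSIONS, N_ACTORS = 55, 19
REPETITIONS = (1, 2, 3)
INTENSITIES = ("low", "high")
CAMERAS = ("left", "center", "right")

# Enumerate one clip per cell of the full factorial design (hypothetical names).
clips = [
    f"actor{a:02d}/expr{e:02d}_rep{r}_{i}_{c}.avi"
    for a, e, r, i, c in product(
        range(1, N_ACTORS + 1), range(1, N_EXPRESSIONS + 1),
        REPETITIONS, INTENSITIES, CAMERAS)
]
print(len(clips))   # 19 actors x 55 expressions x 3 reps x 2 intensities x 3 cameras = 18810
```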
