
Publication


Featured research published by K. Kaulard.


PLOS ONE | 2012

The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

K. Kaulard; Douglas W. Cunningham; H.H. Bülthoff; Christian Wallraven

The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, affective computing, and computer vision) to investigate the processing of a wider range of natural facial expressions.
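
The reported factorial design (55 expressions × 19 actors × 3 repetitions × 2 intensities × 3 camera angles) lends itself to a simple enumeration scheme. As a minimal Python sketch of that structure, assuming a hypothetical path template and label names (the database defines its own actual file layout):

    from itertools import product

    # Factorial structure reported for the MPI facial expression database.
    N_EXPRESSIONS = 55                      # distinct facial expressions
    N_ACTORS = 19                           # German participants
    REPETITIONS = (1, 2, 3)                 # three repetitions
    INTENSITIES = ("low", "high")           # two intensities
    CAMERAS = ("left", "center", "right")   # three camera angles

    def clip_path(expression, actor, rep, intensity, camera):
        # Hypothetical path template; the real database uses its own layout.
        return (f"mpi_faces/actor{actor:02d}/expr{expression:02d}"
                f"_rep{rep}_{intensity}_{camera}.avi")

    # Enumerate every video clip in the full factorial design.
    clips = [clip_path(e, a, r, i, c)
             for e, a, r, i, c in product(range(N_EXPRESSIONS), range(N_ACTORS),
                                          REPETITIONS, INTENSITIES, CAMERAS)]
    print(len(clips))  # 55 * 19 * 3 * 2 * 3 = 18,810 clips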


Spatial Vision | 2007

Examining art: dissociating pattern and perceptual influences on oculomotor behaviour

Benjamin W. Tatler; Nicholas J. Wade; K. Kaulard

When observing art, the viewer's understanding results from the interplay between the marks made on the surface by the artist and the viewer's perception and knowledge of them. Here we use a novel set of stimuli to dissociate the influences of the marks on the surface and the viewer's perceptual experience upon the manner in which the viewer inspects art. Our stimuli provide the opportunity to study situations in which (1) the same visual stimulus can give rise to two different perceptual experiences in the viewer, and (2) the visual stimuli differ but give rise to the same perceptual experience in the viewer. We find that oculomotor behaviour changes when the perceptual experience changes. Oculomotor behaviour also differs when the viewer's perceptual experience is the same but the visual stimulus is different. The methodology used and insights gained from this study offer a first step toward an experimental exploration of the relative influences of the artist's creation and the viewer's perception when viewing art, and also toward a better understanding of the principles of composition in portraiture.


PLOS ONE | 2017

Brain synchronization during perception of facial emotional expressions with natural and unnatural dynamics

Dionysios Perdikis; Jakob Volhard; Viktor Müller; K. Kaulard; Timothy R. Brick; Christian Wallraven; Ulman Lindenberger

Research on the perception of facial emotional expressions (FEEs) often uses static images that do not capture the dynamic character of social coordination in natural settings. Recent behavioral and neuroimaging studies suggest that dynamic FEEs (videos or morphs) enhance emotion perception. To identify mechanisms associated with the perception of FEEs with natural dynamics, the present EEG (electroencephalography) study compared (i) ecologically valid stimuli of angry and happy FEEs with natural dynamics to (ii) FEEs with unnatural dynamics, and to (iii) static FEEs. FEEs with unnatural dynamics showed faces moving in a biologically possible but unpredictable and atypical manner, generally resulting in ambivalent emotional content. Participants were asked to explicitly recognize FEEs. Using whole power (WP) and phase synchrony (Phase Locking Index, PLI), we found that brain responses discriminated between natural and unnatural FEEs (both static and dynamic). Differences were primarily observed in the timing and brain topographies of delta and theta PLI and WP, and in alpha and beta WP. Our results support the view that biologically plausible, albeit atypical, FEEs are processed by different brain mechanisms than natural FEEs. We conclude that natural movement dynamics are essential for the perception of FEEs and the associated brain processes.
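
The abstract does not spell out how the phase synchrony measure was computed. A standard construction of an inter-trial Phase Locking Index is to band-pass the single-trial EEG, extract instantaneous phase via the Hilbert transform, and take the length of the mean unit phase vector across trials. The Python sketch below (numpy/scipy) illustrates that construction; the filter design, band edges, and function name are assumptions for illustration, not the study's actual analysis pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def phase_locking_index(trials, fs, band):
        """Inter-trial phase locking for one EEG channel.

        trials: array (n_trials, n_samples) of voltage traces.
        fs:     sampling rate in Hz.
        band:   (low, high) frequency band edges in Hz.
        Returns PLI over time, shape (n_samples,), values in [0, 1].
        """
        # Band-pass each trial; zero-phase filtering preserves timing.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, trials, axis=1)
        # Instantaneous phase from the analytic (Hilbert) signal.
        phase = np.angle(hilbert(filtered, axis=1))
        # Length of the mean unit phase vector across trials:
        # 1 = perfect phase alignment, 0 = uniformly random phases.
        return np.abs(np.exp(1j * phase).mean(axis=0))

    # Example: 40 simulated noise trials, 1 s at 500 Hz, theta band (4-8 Hz).
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((40, 500))
    pli = phase_locking_index(trials, fs=500, band=(4, 8))
    print(pli.shape, float(pli.mean()))  # noise should yield low PLI values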


34th European Conference on Visual Perception | 2011

What are the properties underlying similarity judgments of facial expressions?

K. Kaulard; S. de la Rosa; J. Schultz; A.L. Fernandez Cruz; H.H. Bülthoff; Christian Wallraven



Applied Perception in Graphics and Visualization | 2007

Psychophysics for perception of (in)determinate art

Christian Wallraven; K. Kaulard; Cora Kürner; Robert Pepperell; H.H. Bülthoff


Language Resources and Evaluation | 2010

The POETICON corpus: Capturing language use and sensorimotor experience in everyday interaction

Katerina Pastra; Christian Wallraven; Michael Schultze; A. Vatakis; K. Kaulard


Eurographics | 2007

In the eye of the beholder - perception of indeterminate art

Christian Wallraven; K. Kaulard; Cora Kürner; Robert Pepperell; H.H. Bülthoff


Leonardo | 2008

In the Eye of the Beholder: The Perception of Indeterminate Art

Christian Wallraven; K. Kaulard; Cora Kürner; Robert Pepperell


Journal of Vision | 2010

Laying the foundations for an in-depth investigation of the whole space of facial expressions

K. Kaulard; Christian Wallraven; Douglas W. Cunningham; H.H. Bülthoff


Archive | 2017

Neural processing of facial motion cues about identity and expression

J. Schultz; K. Kaulard; P. Pilz; Katharina Dobs; I. Bülthoff; A. Fernandez-Cruz; B. Brockhaus; Justin L. Gardner; H.H. Bülthoff

Collaboration


Dive into K. Kaulard's collaboration.

Top Co-Authors

Robert Pepperell
Cardiff Metropolitan University