Publication


Featured research published by Karin Petrini.


Cognition | 2009

When knowing can replace seeing in audiovisual integration of actions

Karin Petrini; Melanie Russell; Frank E. Pollick

The ability to predict the effects of actions is necessary to behave properly in our physical and social world. Here, we describe how the ability to predict the consequences of complex gestures can change the way we integrate sight and sound when relevant visual information is missing. Six drummers and six novices were asked to judge audiovisual synchrony for drumming point-light displays in which the visual information was manipulated to eliminate or include the drumstick-drumhead impact point. In the condition with only the arm information, novices were unable to detect asynchrony, whereas drummers were. Additionally, in the conditions that included the impact point, drummers perceived the best alignment when the sight preceded the sound, while in the arm-only condition they perceived the best alignment when the sound occurred together with or preceded the sight, as would be expected if they were predicting the beat occurrence. Taken together, these findings suggest that humans can acquire, through practice, internal models of action that can be used to replace missing information when integrating multisensory signals from the environment.


PLOS ONE | 2013

A Psychophysical Investigation of Differences between Synchrony and Temporal Order Judgments

Scott A. Love; Karin Petrini; Adam Cheng; Frank E. Pollick

Background: Synchrony judgments involve deciding whether cues to an event are in synch or out of synch, while temporal order judgments involve deciding which of the cues came first. When the cues come from different sensory modalities, these judgments can be used to investigate multisensory integration in the temporal domain. However, evidence indicates that these two tasks should not be used interchangeably, as it is unlikely that they measure the same perceptual mechanism. The current experiment further explores this issue across a variety of audiovisual stimulus types.

Methodology/Principal Findings: Participants were presented with 5 audiovisual stimulus types, each at 11 parametrically manipulated levels of cue asynchrony. During separate blocks, participants had to make synchrony judgments or temporal order judgments. For some stimulus types many participants were unable to successfully make temporal order judgments, but they were able to make synchrony judgments. The mean points of subjective simultaneity for synchrony judgments were all video-leading, while those for temporal order judgments were all audio-leading. In the within-participants analyses, no correlation was found across the two tasks for either the point of subjective simultaneity or the temporal integration window.

Conclusions: Stimulus type influenced how the two tasks differed; nevertheless, consistent differences were found between the two tasks regardless of stimulus type. Therefore, in line with previous work, we conclude that synchrony and temporal order judgments are supported by different perceptual mechanisms and should not be interpreted as representative of the same perceptual process.
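
The two outcome measures named above can be made concrete with a small sketch. One common approach (an assumption here, not a description of the paper's exact pipeline) is to fit a Gaussian to the proportion of "synchronous" responses across stimulus-onset asynchronies (SOAs): the fitted mean gives the point of subjective simultaneity (PSS) and the fitted width gives a temporal integration window (TIW). All data values below are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amplitude, pss, sigma):
    # Proportion of "synchronous" responses as a function of SOA (ms);
    # negative SOA = audio leading, positive SOA = video leading.
    return amplitude * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

# 11 parametrically spaced SOAs (as in the study) with made-up responses
soas = np.linspace(-400, 400, 11)
p_synch = np.array([0.05, 0.10, 0.30, 0.60, 0.90, 1.00,
                    0.95, 0.80, 0.50, 0.20, 0.10])

(amplitude, pss, sigma), _ = curve_fit(gaussian, soas, p_synch,
                                       p0=[1.0, 0.0, 150.0])
tiw = 2 * sigma  # one convention: window spanning +/- 1 SD of the fit
print(f"PSS = {pss:.1f} ms (positive = video-leading); TIW ~ {tiw:.1f} ms")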


Journal of Vision | 2010

Expertise with multisensory events eliminates the effect of biological motion rotation on audiovisual synchrony perception

Karin Petrini; Samuel Paul Holt; Frank E. Pollick

Biological motion, in the form of point-light displays, is usually less recognizable and coherent when shown from a less natural orientation, and evidence of this disruption has recently been extended to the audiovisual aspects of biological motion perception. In the present study, eight drummers and eight musical novices were required to judge either the audiovisual simultaneity or the temporal order of the movements of a drumming point-light display and the resulting sound. The drumming biological motion was presented either in its upright orientation or rotated by 90, 180, or 270 degrees. Our results support and extend previous findings by demonstrating that although the rotation of the point-light display affects the audiovisual aspects of biological motion, this effect disappears when experience with the represented multisensory action is increased through practice.


PLOS ONE | 2011

The Music of Your Emotions: Neural Substrates Involved in Detection of Emotional Correspondence between Auditory and Visual Music Actions

Karin Petrini; Frances Crabbe; Carol Sheridan; Frank E. Pollick

In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musicians' movements with music), visual (musicians' movements only), and auditory (music only) emotional displays. Subsequently, a region-of-interest analysis was performed to examine whether any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musicians' movements with mismatching emotional sound) than for emotionally matching performances (combining the musicians' movements with matching emotional sound), as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to show increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to show similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus play an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus plays a different role.
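
As a minimal illustration of the region-of-interest logic described above, the sketch below compares per-participant ROI activation (e.g., mean beta estimates) between mismatching and matching displays with a paired t-test. The numbers and the choice of test are illustrative assumptions, not values or methods taken from the paper.

import numpy as np
from scipy import stats

# Hypothetical per-participant mean activation in one ROI (e.g., insula)
mismatch = np.array([0.82, 0.75, 0.91, 0.66, 0.88, 0.79])
match = np.array([0.61, 0.58, 0.73, 0.52, 0.70, 0.65])

# Paired comparison: is activation reliably higher for mismatching displays?
t_value, p_value = stats.ttest_rel(mismatch, match)
print(f"t({len(mismatch) - 1}) = {t_value:.2f}, p = {p_value:.4f}")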


Developmental Science | 2014

When vision is not an option: children's integration of auditory and haptic information is suboptimal

Karin Petrini; Alicia Remark; Louise Smith; Marko Nardini

When visual information is available, human adults, but not children, have been shown to reduce sensory uncertainty by taking a weighted average of sensory cues. In the absence of reliable visual information (e.g. in an extremely dark environment, or with visual disorders), the use of other information is vital. Here we ask how humans combine haptic and auditory information across childhood. In the first experiment, adults and children aged 5 to 11 years judged the relative sizes of two objects in auditory, haptic, and non-conflicting bimodal conditions. In Experiment 2, different groups of adults and children were tested in non-conflicting and conflicting bimodal conditions. In Experiment 1, adults reduced sensory uncertainty by integrating the cues optimally, while children did not. In Experiment 2, adults and children used similar weighting strategies to resolve audio–haptic conflict. These results suggest that, in the absence of visual information, optimal integration of cues for discrimination of object size develops late in childhood.
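
The "optimal" benchmark referred to above is the standard maximum-likelihood cue-combination model, in which each cue is weighted by its reliability (inverse variance). Below is a minimal sketch with hypothetical single-cue size estimates and uncertainties, not values from the paper.

import numpy as np

def mle_combine(est_a, sigma_a, est_h, sigma_h):
    # Reliability-weighted average of auditory and haptic size estimates
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_h**2)
    combined_est = w_a * est_a + (1 - w_a) * est_h
    # Predicted bimodal uncertainty: lower than either single cue alone
    combined_sigma = np.sqrt((sigma_a**2 * sigma_h**2)
                             / (sigma_a**2 + sigma_h**2))
    return combined_est, combined_sigma

est, sigma = mle_combine(est_a=10.2, sigma_a=1.5, est_h=9.6, sigma_h=1.0)
print(f"combined size estimate = {est:.2f}, predicted sigma = {sigma:.2f}")

An observer whose bimodal discrimination threshold matches the predicted combined sigma is said to integrate optimally; the children in Experiment 1 failed to show this predicted reduction in uncertainty.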


Attention, Perception, & Psychophysics | 2008

A scaling analysis of the snake lightness illusion

Alexander D. Logvinenko; Karin Petrini; Laurence T. Maloney

Logvinenko and Maloney (2006) measured perceived dissimilarities between achromatic surfaces placed in two scenes illuminated by neutral lights that could differ in intensity. Using a novel scaling method, they found that dissimilarities between light-surface pairs could be represented as a weighted linear combination of two dimensions, “surface lightness” (a perceptual correlate of the difference in the logarithm of surface albedo) and “surface brightness” (corresponding to the difference in the logarithms of light intensity across the scenes). Here we attempt to measure the contributions of these dimensions to a compelling lightness illusion (the “snake illusion”). It is commonly assumed that this illusion results from erroneous segmentation of the snake pattern into regions of unequal illumination. We find that the illusory shift in the snake pattern occurs along the surface lightness dimension, with no contribution from surface brightness. Thus, even if an erroneous segmentation of the snake pattern into strips of unequal illumination does happen, it reveals itself, paradoxically, as illusory changes in surface lightness rather than in surface brightness. We conjecture that the illusion's strength depends on the balance between two groups of illumination cues, signaling the true (uniform) illumination and the pictorial (uneven) illumination.
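
In code form, the two-dimensional representation described above can be sketched as follows; the function name, the weights, and the use of absolute log differences are our illustrative assumptions, not the authors' exact model.

import math

def dissimilarity(albedo_i, albedo_j, light_i, light_j,
                  w_lightness, w_brightness):
    # "Surface lightness" dimension: difference in log surface albedo
    lightness_term = abs(math.log(albedo_i) - math.log(albedo_j))
    # "Surface brightness" dimension: difference in log light intensity
    brightness_term = abs(math.log(light_i) - math.log(light_j))
    # Perceived dissimilarity as a weighted linear combination
    return w_lightness * lightness_term + w_brightness * brightness_term

The reported result is that the snake illusion's shift loads entirely on the first (lightness) term, with no contribution from the second.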


PLOS ONE | 2015

Visual and Non-Visual Navigation in Blind Patients with a Retinal Prosthesis

Sara Garcia; Karin Petrini; Gary S. Rubin; Lyndon da Cruz; Marko Nardini

Human adults with normal vision can combine visual landmark and non-visual self-motion cues to improve their navigational precision. Here we asked whether blind individuals treated with a retinal prosthesis could also benefit from using the resulting new visual signal together with non-visual information when navigating. Four patients (blind for 15-52 years) implanted with the Argus II retinal prosthesis (Second Sight Medical Products Inc., Sylmar, CA), five age-matched controls, and six younger controls participated. Participants completed a path-reproduction and a triangle-completion navigation task, using either an indirect visual landmark together with non-visual self-motion cues or non-visual self-motion cues only. Control participants wore goggles that approximated the field of view and resolution of the Argus II prosthesis. In both tasks, control participants showed better precision when navigating with reduced vision than without vision. Patients, however, did not show similar improvements when navigating with the prosthesis in the path-reproduction task, although two patients did show improvements in the triangle-completion task. Additionally, all patients showed greater precision than controls in both tasks when navigating without vision. These results indicate that the Argus II retinal prosthesis may not provide sufficiently reliable visual information to improve patients' precision on tasks for which they have learnt to rely on non-visual senses.


Proceedings of the 2011 International Conference on Cognitive Behavioural Systems (COST'11) | 2011

Effects of experience, training and expertise on multisensory perception: investigating the link between brain and behavior

Scott A. Love; Frank E. Pollick; Karin Petrini

The ability to successfully integrate information from different senses is of paramount importance for perceiving the world and has been shown to change with experience. We first review how experience, in particular musical experience, brings about changes in our ability to fuse together sensory information about the world. We next discuss evidence from drumming studies demonstrating how the perception of audiovisual synchrony depends on experience. These studies show that drummers are more robust than novices to perturbations of the audiovisual signals and appear to use different neural mechanisms in fusing sight and sound. Finally, we examine how experience influences audiovisual speech perception. We present an experiment investigating how perceiving an unfamiliar language influences judgments of temporal synchrony of the audiovisual speech signal. These results highlight the influence of both the listener's experience with hearing an unfamiliar language and the speaker's experience with producing non-native words.


Frontiers in Psychology | 2015

Audiovisual integration of emotional signals from others' social interactions

Lukasz Piwek; Frank E. Pollick; Karin Petrini

Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask whether the audiovisual facilitation of emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voices of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task as in Experiment 1 while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants gave more weight to the visual cue in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.


Scientific Reports | 2016

How vision and self-motion combine or compete during path reproduction changes with age

Karin Petrini; Andrea Caradonna; Celia Foster; Neil Burgess; Marko Nardini

Human adults can optimally integrate visual and non-visual self-motion cues when navigating, while children up to 8 years old cannot. Whether older children can is unknown, limiting our understanding of how our internal multisensory representation of space develops. Eighteen adults and fifteen 10- to 11-year-old children were guided along a two-legged path in darkness (self-motion only), in a virtual room (visual + self-motion), or were shown a pre-recorded walk in the virtual room while standing still (visual only). Participants then reproduced the path in darkness. We obtained a measure of the dispersion of the endpoints (variable error) and of their distances from the correct endpoint (constant error). Only the children reduced their variable error when recalling the path in the visual + self-motion condition, indicating combination of these cues. Adults showed a constant error for the combined condition intermediate between those for the single cues, indicative of cue competition, which may explain the lack of near-optimal integration in this group. This suggests that later in childhood humans can gain from optimally integrating spatial cues even in situations where adults keep these cues separate.
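
The two error measures just defined can be sketched directly. The endpoint coordinates below are hypothetical, and the paper's exact computation may differ in detail; this is one common operationalization.

import numpy as np

def path_errors(endpoints, target):
    # endpoints: (n_trials, 2) reproduced end positions (metres);
    # target: (2,) correct end position.
    endpoints = np.asarray(endpoints, dtype=float)
    centroid = endpoints.mean(axis=0)
    # Variable error: dispersion of endpoints around their own centroid
    variable_error = np.sqrt(np.mean(np.sum((endpoints - centroid) ** 2,
                                            axis=1)))
    # Constant error: mean distance of endpoints from the correct endpoint
    constant_error = np.mean(np.linalg.norm(endpoints - np.asarray(target),
                                            axis=1))
    return variable_error, constant_error

ve, ce = path_errors([[1.9, 3.1], [2.2, 2.8], [2.0, 3.3]], target=(2.0, 3.0))
print(f"variable error = {ve:.2f} m, constant error = {ce:.2f} m")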

Collaboration


Dive into Karin Petrini's collaborations.

Top Co-Authors

Lukasz Piwek

University of the West of England

Davide Rocchesso

Ca' Foscari University of Venice

Carl Haakon Waadeland

Norwegian University of Science and Technology
