
Publication


Featured research published by Cesare Parise.


Current Biology | 2012

When Correlation Implies Causation in Multisensory Integration

Cesare Parise; Charles Spence; Marc O. Ernst

Inferring which signals have a common underlying cause, and hence should be integrated, represents a primary challenge for a perceptual system dealing with multiple sensory inputs [1-3]. This challenge is often referred to as the correspondence problem or causal inference. Previous research has demonstrated that spatiotemporal cues, along with prior knowledge, are exploited by the human brain to solve this problem [4-9]. Here we explore the role of correlation between the fine temporal structure of auditory and visual signals in causal inference. Specifically, we investigated whether correlated signals are inferred to originate from the same distal event and hence are integrated optimally [10]. In a localization task with visual, auditory, and combined audiovisual targets, the improvement in precision for combined relative to unimodal targets was statistically optimal only when audiovisual signals were correlated. This result demonstrates that humans use the similarity in the temporal structure of multisensory signals to solve the correspondence problem, hence inferring causation from correlation.
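The "statistically optimal" integration referred to in this abstract is commonly formalized as maximum-likelihood (reliability-weighted) cue combination. The sketch below is illustrative only, not code from the paper; all numbers are invented.

```python
from math import sqrt

# Illustrative sketch of maximum-likelihood cue combination (not the
# paper's analysis code). Each cue is a Gaussian estimate; the optimal
# fused estimate weights each cue by its reliability (inverse variance).

def integrate(mu_a, sigma_a, mu_v, sigma_v):
    """Reliability-weighted fusion of auditory and visual estimates."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)   # weight on audition
    mu_av = w_a * mu_a + (1.0 - w_a) * mu_v
    # Fused variance is smaller than either unimodal variance:
    sigma_av = sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return mu_av, sigma_av

# Two equally reliable cues: the fused estimate lands midway and its
# standard deviation drops by a factor of sqrt(2).
mu_av, sigma_av = integrate(mu_a=1.0, sigma_a=2.0, mu_v=3.0, sigma_v=2.0)
print(mu_av, sigma_av)  # 2.0, ~1.414
```

In the localization task described above, combined audiovisual precision matched this optimal prediction only when the audiovisual signals were correlated.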


Proceedings of the National Academy of Sciences of the United States of America | 2014

Natural auditory scene statistics shapes human spatial hearing

Cesare Parise; Katharina Knorre; Marc O. Ernst

Significance: Auditory pitch has an intrinsic spatial connotation: Sounds are high or low, melodies rise and fall, and pitch can ascend and descend. In a wide range of cognitive, perceptual, attentional, and linguistic functions, humans consistently display a positive, sometimes absolute, correspondence between sound frequency and perceived spatial elevation, whereby high frequency is mapped to high elevation. In this paper we show that pitch borrows its spatial connotation from the statistics of natural auditory scenes. This suggests that such diverse phenomena as the convoluted shape of the outer ear, the universal use of spatial terms for describing pitch, or the reason why high notes are represented higher in musical notation ultimately reflect adaptation to the statistics of natural auditory scenes.

Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveal a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored both in ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing.


Experimental Brain Research | 2012

Audiovisual crossmodal correspondences and sound symbolism: a study using the implicit association test

Cesare Parise; Charles Spence

A growing body of empirical research on the topic of multisensory perception now shows that even non-synaesthetic individuals experience crossmodal correspondences, that is, apparently arbitrary compatibility effects between stimuli in different sensory modalities. In the present study, we replicated a number of classic results from the literature on crossmodal correspondences and highlight the existence of two new crossmodal correspondences using a modified version of the implicit association test (IAT). Given that only a single stimulus was presented on each trial, these results rule out selective attention and multisensory integration as possible mechanisms underlying the reported compatibility effects on speeded performance. The crossmodal correspondences examined in the present study all gave rise to very similar effect sizes, and the compatibility effect had a very rapid onset, thus speaking to the automatic detection of crossmodal correspondences. These results are further discussed in terms of the advantages of the IAT over traditional techniques for assessing the strength and symmetry of various crossmodal correspondences.


Nature Communications | 2016

Correlation detection as a general mechanism for multisensory integration.

Cesare Parise; Marc O. Ernst

The brain efficiently processes multisensory information by selectively combining related signals across the continuous stream of multisensory inputs. To do so, it needs to detect correlation, lag and synchrony across the senses; optimally integrate related information; and dynamically adapt to spatiotemporal conflicts across the senses. Here we show that all these aspects of multisensory perception can be jointly explained by postulating an elementary processing unit akin to the Hassenstein–Reichardt detector—a model originally developed for visual motion perception. This unit, termed the multisensory correlation detector (MCD), integrates related multisensory signals through a set of temporal filters followed by linear combination. Our model can tightly replicate human perception as measured in a series of empirical studies, both novel and previously published. MCDs provide a unified general theory of multisensory processing, which simultaneously explains a wide spectrum of phenomena with a simple, yet physiologically plausible model.
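A minimal sketch of such a Reichardt-style correlation-detector unit follows. It is a loose illustration of the idea (two temporal filters per channel, multiplication for correlation, subtraction for lag); the filter constants and exact subunit wiring are assumptions for this example, not the fitted MCD parameters from the paper.

```python
import numpy as np

# Illustrative multisensory correlation detector, loosely in the spirit of
# the Hassenstein-Reichardt model: each input is low-pass filtered at a
# fast and a slow time constant, and the cross-products of the filtered
# signals yield a correlation signal and a temporal-order (lag) signal.

def lowpass(x, tau, dt=0.001):
    """First-order low-pass filter (exponential impulse response)."""
    y = np.zeros_like(x, dtype=float)
    a = dt / (tau + dt)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + a * (x[t] - y[t - 1])
    return y

def mcd(audio, video, tau_fast=0.05, tau_slow=0.2, dt=0.001):
    """Return (correlation, lag) outputs for two sensory time series."""
    a_f, a_s = lowpass(audio, tau_fast, dt), lowpass(audio, tau_slow, dt)
    v_f, v_s = lowpass(video, tau_fast, dt), lowpass(video, tau_slow, dt)
    u1 = a_s * v_f           # subunit 1: slow audio x fast video
    u2 = a_f * v_s           # subunit 2: fast audio x slow video
    corr = np.mean(u1 * u2)  # multiplication -> correlation signal
    lag = np.mean(u2 - u1)   # subtraction -> sign indicates temporal order
    return corr, lag
```

Feeding the detector two synchronous impulses yields a much larger correlation output than two impulses far apart in time, capturing the intuition that temporally correlated signals are flagged for integration.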


Experimental Brain Research | 2011

Evidence of sound symbolism in simple vocalizations

Cesare Parise; Francesco Pavani

The question of the arbitrariness of language is among the oldest in the cognitive sciences, and it relates to the nature of the associations between vocal sounds and their meaning. Growing evidence seems to support sound symbolism, arguing for a naturally constrained mapping of meaning onto sounds. Most of such evidence, however, comes from studies based on the interpretation of pseudowords, and to date, there is little empirical evidence that sound symbolism can affect phonatory behavior. In the present study, we asked participants to utter the letter /a/ in response to visual stimuli varying in shape, luminance, and size, and we observed consistent sound-symbolic effects on vocalizations. Utterances’ loudness was modulated by stimulus shape and luminance. Moreover, stimulus shape consistently modulated the frequency of the third formant (F3). This finding reveals an automatic mapping of specific visual attributes onto phonological features of vocalizations. Furthermore, it suggests that sound–meaning associations are reciprocal, affecting active (production) as well as passive (comprehension) linguistic behavior.


Multisensory Research | 2016

Crossmodal Correspondences: Standing Issues and Experimental Guidelines

Cesare Parise

Crossmodal correspondences refer to the systematic associations often found across seemingly unrelated sensory features from different sensory modalities. Such phenomena constitute a universal trait of multisensory perception even in non-human species, and seem to result, at least in part, from the adaptation of sensory systems to natural scene statistics. Despite recent developments in the study of crossmodal correspondences, there are still a number of standing questions about their definition, their origins, their plasticity, and their underlying computational mechanisms. In this paper, I will review such questions in the light of current research on sensory cue integration, where crossmodal correspondences can be conceptualized in terms of natural mappings across different sensory cues that are present in the environment and learnt by the sensory systems. Finally, I will provide some practical guidelines for the design of experiments that might shed new light on crossmodal correspondences.


I-perception | 2012

The cognitive neuroscience of crossmodal correspondences

Charles Spence; Cesare Parise

In a recent article, N. Bien, S. ten Oever, R. Goebel, and A. T. Sack (2012) used event-related potentials to investigate the consequences of crossmodal correspondences (the “natural” mapping of features, or dimensions, of experience across sensory modalities) on the time course of neural information processing. Then, by selectively lesioning the right intraparietal cortex using transcranial magnetic stimulation, these researchers went on to demonstrate (for the first time) that it is possible to temporarily eliminate the effect of crossmodal congruency on multisensory integration (specifically on the spatial ventriloquism effect). These results are especially exciting given that the cognitive neuroscience methodology utilized by Bien et al. (2012) holds promise for dissociating between putatively different kinds of crossmodal correspondence in future research.


Scientific Reports | 2015

Hearing in slow-motion: Humans underestimate the speed of moving sounds

Irene Senna; Cesare Parise; Marc O. Ernst

Perception can often be described as a statistically optimal inference process whereby noisy and incomplete sensory evidence is combined with prior knowledge about natural scene statistics. Previous evidence has shown that humans tend to underestimate the speed of unreliable moving visual stimuli. This finding has been interpreted in terms of a Bayesian prior favoring low speed, given that in natural visual scenes objects are mostly stationary or slowly-moving. Here we investigated whether an analogous tendency to underestimate speed also occurs in audition: even if the statistics of the visual environment seem to favor low speed, the statistics of the stimuli reaching the individual senses may differ across modalities, hence potentially leading to different priors. Here we observed a systematic bias for underestimating the speed of unreliable moving sounds. This finding suggests the existence of a slow-motion prior in audition, analogous to the one previously found in vision. The nervous system might encode the overall statistics of the world, rather than the specific properties of the signals reaching the individual senses.
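The Bayesian account sketched in this abstract can be made concrete with Gaussians: a likelihood centred on the true speed is combined with a zero-mean "slow" prior, so the posterior mean shrinks toward zero. The function below is a hedged illustration; all parameter values are invented, not estimates from the study.

```python
# Hedged illustration of a Bayesian slow-speed prior (values invented,
# not fitted to the study's data). With a Gaussian likelihood and a
# zero-mean Gaussian prior, the posterior mean shrinks the true speed
# toward zero, more strongly when the sensory evidence is noisier.

def posterior_speed(true_speed, sigma_likelihood, sigma_prior=10.0):
    """Posterior-mean speed estimate under a zero-mean slow prior."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w * true_speed  # prior mean is 0, so the estimate only shrinks

# Reliable evidence: mild underestimation. Unreliable evidence: strong
# underestimation, mirroring the bias reported for unreliable sounds.
print(posterior_speed(20.0, sigma_likelihood=2.0))   # ~19.2
print(posterior_speed(20.0, sigma_likelihood=10.0))  # 10.0
```

The same machinery accounts for the earlier visual result: the noisier the moving stimulus, the more the prior dominates and the slower it appears.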


Multisensory Research | 2016

Understanding the Correspondences: Introduction to the Special Issue on Crossmodal Correspondences.

Cesare Parise; Charles Spence; Ophelia Deroy

Most humans will readily describe auditory pitch spatially, as either high or low (Pratt, 1930; Stumpf, 1883). Likewise, they agree that lemons are fast, while prunes are slow (Woods et al., 2013), and they are certain that even though they have never seen one, an object called ‘takete’ will be spikier than one called ‘maluma’ (Köhler, 1929, 1947; see also Bremner et al., 2013). These various crossings of the senses, which some want to call ‘natural associations’ or ‘metaphorical mappings’ (Evans and Treisman, 2010; Wagner et al., 1981), are increasingly being bundled together under the heading of ‘crossmodal correspondences’, and seen as a hallmark of human cognition and perception (Deroy and Spence, 2013a; Marks, 1978, 1996; Parise and Spence, 2013; Spence, 2011). Over the years, crossmodal correspondences have been consistently found across features and dimensions from all sensory modalities. Some recent studies even appear to suggest that other animals (such as chimpanzees) might experience analogous phenomena (Ludwig et al., 2011). While crossmodal correspondences bear some superficial similarities to synaesthesia (Cytowic, 1993; Simner and Hubbard, 2013), Deroy and Spence (2013a, in press) have stressed the important differences between these two empirical phenomena. Understanding the sensory correspondences is a pressing challenge for the neurosciences. No wonder, then, that the scientific investigation of crossmodal correspondences is currently one of the most rapidly expanding fields of study in multisensory research.


Experimental Brain Research | 2015

Altered visual feedback modulates cortical excitability in a mirror-box-like paradigm

Irene Senna; Cristina Russo; Cesare Parise; Irene Ferrario; Nadia Bolognini

Watching self-generated unilateral hand movements reflected in a mirror (oriented along the midsagittal plane) enhances the excitability of the primary motor cortex (M1) ipsilateral to the moving hand of the observer. Mechanisms detecting sensory–motor conflicts generated by the mirror reflection of such movements might mediate this effect; if so, cortical excitability should be modulated by the magnitude of the sensory–motor conflict. To this end, we explored the modulatory effects of altered visual feedback on M1 excitability in a mirror-box-like paradigm, by increasing or decreasing the speed of the observed movement. Healthy subjects performed movements with their left index finger while watching a video of a hand superimposed on their right static hand, which was hidden from view. The hand observed in the video executed the same movement as the observer’s left hand, but at slower, same, or faster paces. Motor evoked potentials (MEPs) induced by transcranial magnetic stimulation were measured from the first dorsal interosseous and the abductor digiti minimi of the participant’s hidden resting hand. The excitability of the M1 ipsilateral to the moving hand was systematically modulated by the speed of the observed hand movement: the slower the observed movement, the greater the MEP amplitude from both muscles. This evidence shows that the magnitude of visual–motor conflicts can be used to adjust the activity of the observer’s motor system. Hence, an appropriate alteration of the visual feedback, here a reduction in movement speed, may be useful to increase its modulatory effect on motor cortical excitability.

Collaboration


Cesare Parise's top co-authors.

Nadia Bolognini

University of Milano-Bicocca