Publication


Featured research published by Massimiliano Di Luca.


PLOS ONE | 2011

The Rubber Hand Illusion: Feeling of Ownership and Proprioceptive Drift Do Not Go Hand in Hand

Marieke Rohde; Massimiliano Di Luca; Marc O. Ernst

In the Rubber Hand Illusion, the feeling of ownership of a rubber hand displaced from a participant's real occluded hand is evoked by synchronously stroking both hands with paintbrushes. A change of perceived finger location towards the rubber hand (proprioceptive drift) has been reported to correlate with this illusion. To measure the time course of proprioceptive drift during the Rubber Hand Illusion, we regularly interrupted stroking (performed by robot arms) to measure perceived finger location. Measurements were made by projecting a probe dot into the field of view (using a semi-transparent mirror) and asking participants whether the dot was to the left or to the right of their invisible hand (Experiment 1) or to adjust the position of the dot to that of their invisible hand (Experiment 2). We varied both the measurement frequency (every 10 s, 40 s, 120 s) and the mode of stroking (synchronous, asynchronous, just vision). Surprisingly, with frequent measurements, proprioceptive drift occurs not only in the synchronous stroking condition but also in the two control conditions (asynchronous stroking, just vision). Proprioceptive drift in the synchronous stroking condition is never higher than in the just vision condition. Only continuous exposure to asynchronous stroking prevents proprioceptive drift and thus replicates the differences in drift reported in the literature. By contrast, complementary subjective ratings (questionnaire) show that the feeling of ownership requires synchronous stroking and is not present in the asynchronous stroking condition. Thus, subjective ratings and drift are dissociated. We conclude that different mechanisms of multisensory integration are responsible for proprioceptive drift and the feeling of ownership. Proprioceptive drift relies on visuoproprioceptive integration alone, a process that is inhibited by asynchronous stroking, the most common control condition in Rubber Hand Illusion experiments. This dissociation implies that conclusions about feelings of ownership cannot be drawn from measuring proprioceptive drift alone.
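
In Experiment 1, perceived finger location is read out from binary judgments rather than reported directly. A minimal sketch of how such a readout works (illustrative Python, not the authors' analysis code; all numbers are hypothetical): fit a cumulative Gaussian to the proportion of "right of my hand" responses as a function of probe position, and take its 50% point (the PSE) as the perceived finger location; proprioceptive drift is then the change in PSE over time.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(x, pse, sigma):
        """P('probe is right of my hand') for a probe at position x."""
        return norm.cdf(x, loc=pse, scale=sigma)

    # Hypothetical pooled data: probe positions (cm relative to the true
    # finger location) and the proportion of 'right' responses.
    probe_pos = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
    p_right = np.array([0.02, 0.10, 0.25, 0.45, 0.70, 0.90, 0.99])

    (pse, sigma), _ = curve_fit(psychometric, probe_pos, p_right, p0=[0.0, 1.0])
    print(f"perceived finger location (PSE): {pse:.2f} cm")
    # Drift toward the rubber hand = PSE measured after stroking minus the
    # baseline PSE measured before stroking.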


Journal of Vision | 2009

Recalibration of multisensory simultaneity: Cross-modal transfer coincides with a change in perceptual latency

Massimiliano Di Luca; Tonja-Katrin Machulla; Marc O. Ernst

After exposure to asynchronous sound and light stimuli, perceived audio-visual synchrony changes to compensate for the asynchrony. Here we investigate to what extent this audio-visual recalibration effect transfers to visual-tactile and audio-tactile simultaneity perception in order to infer the mechanisms responsible for temporal recalibration. Results indicate that audio-visual recalibration of simultaneity can transfer to audio-tactile and visual-tactile stimuli depending on the way in which the multisensory stimuli are presented. With presentation of co-located multisensory stimuli, we found a change in the perceptual latency of the visual stimuli. Presenting auditory stimuli through headphones, on the other hand, induced a change in the perceptual latency of the auditory stimuli. We argue that the difference in transfer depends on the relative trust in the auditory and visual estimates. Interestingly, these findings were confirmed by showing that audio-visual recalibration influences simple reaction time to visual and auditory stimuli. Presenting co-located stimuli during asynchronous exposure induced a change in reaction time to visual stimuli, while with headphones the change in reaction time occurred for the auditory stimuli. These results indicate that the perceptual latency is altered with repeated exposure to asynchronous audio-visual stimuli in order to compensate (at least in part) for the presented asynchrony.


IEEE Transactions on Haptics | 2010

Combination and Integration in the Perception of Visual-Haptic Compliance Information

Martin Kuschel; Massimiliano Di Luca; Martin Buss; Roberta L. Klatzky

The compliance of a material can be conveyed through mechanical interactions in a virtual environment and perceived through both visual and haptic cues. We investigated this basic aspect of perception. In two experiments, subjects performed compliance discriminations, and the mean perceptual estimate (PSE) and the perceptual standard deviation (proportional to JND) were derived from psychophysical functions. Experiment 1 supported a model in which each modality acted independently to produce a compliance estimate, and the two estimates were then integrated to produce an overall value. Experiment 2 tested three mathematical models of the integration process. The data ruled out exclusive reliance on the more reliable modality and stochastic selection of one modality. Instead, the results supported an integration process that constitutes a weighted summation of two random variables, defined by the single-modality estimates. The model subsumes optimal fusion but also provided valid predictions when the weights were not optimal. Weights were optimal (i.e., minimized variance) when visual and haptic inputs were congruent, but not when they were incongruent.
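
In standard cue-combination notation (our paraphrase of the weighted-summation model described in the abstract, with \hat{c}_V and \hat{c}_H the single-modality compliance estimates and \sigma_V^2, \sigma_H^2 their variances), the tested model is

    \hat{c} = w_V \hat{c}_V + w_H \hat{c}_H, \qquad w_V + w_H = 1,

with the optimal-fusion special case

    w_V = \frac{\sigma_H^2}{\sigma_V^2 + \sigma_H^2}, \qquad
    \sigma_{VH}^2 = \frac{\sigma_V^2 \sigma_H^2}{\sigma_V^2 + \sigma_H^2} \le \min(\sigma_V^2, \sigma_H^2),

which minimizes the variance of the fused estimate. Non-optimal weights still satisfy the first line but yield a larger fused variance \sigma_{VH}^2.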


Journal of Experimental Psychology: Human Perception and Performance | 2011

Audiovisual Asynchrony Detection in Human Speech

Joost X. Maier; Massimiliano Di Luca; Uta Noppeney

Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with similar spectrotemporal structure to investigate to what extent humans have adapted to the specific characteristics of natural audiovisual speech. We manipulated spectrotemporal structure of the auditory signal, stimulus length, and task context. Results indicate that the temporal integration window is narrower and more asymmetric for speech than for nonspeech signals. When perceiving audiovisual speech, subjects tolerate visual leading asynchronies, but are nevertheless very sensitive to auditory leading asynchronies that are less likely to occur in natural speech. Thus, speech perception may be fine-tuned to the natural statistics of audiovisual speech, where facial movements always occur before acoustic speech articulation.
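
One common way to picture such an asymmetric window (an assumed functional form for illustration, not the paper's fitted model; all constants are made up) is a pair of half-Gaussians joined at the point of subjective simultaneity, wider on the visual-leading side:

    import numpy as np

    def synchrony_window(soa_ms, center=40.0, sigma_aud_lead=60.0,
                         sigma_vis_lead=120.0):
        """P('synchronous') vs. stimulus onset asynchrony in ms (positive =
        vision leads): two half-Gaussians joined at `center`, falling off
        faster on the auditory-leading side."""
        soa_ms = np.asarray(soa_ms, dtype=float)
        sigma = np.where(soa_ms < center, sigma_aud_lead, sigma_vis_lead)
        return np.exp(-0.5 * ((soa_ms - center) / sigma) ** 2)

    # Equal-magnitude offsets are tolerated far less when audio leads:
    print(synchrony_window([-150.0, 150.0]))  # ~0.007 vs. ~0.66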


PLOS ONE | 2014

The Duration of Uncertain Times: Audiovisual Information about Intervals Is Integrated in a Statistically Optimal Fashion

Jess Hartcher-O'Brien; Massimiliano Di Luca; Marc O. Ernst

Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audiovisual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies (i.e., considering the time of onset/offset of signals) might alter the final estimate. As such, we provide the first concrete evidence of an optimal integration strategy in human duration estimates.
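
The optimality claim is testable because the unimodal precisions predict the bimodal ones. A small numeric sketch (Python, with assumed unimodal values rather than data from the paper):

    import numpy as np

    sigma_a, sigma_v = 0.08, 0.12   # assumed unimodal duration SDs (s)

    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)   # predicted auditory weight
    sigma_av = np.sqrt(sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2))

    print(f"predicted auditory weight: {w_a:.2f}")    # 0.69
    print(f"predicted bimodal SD: {sigma_av:.3f} s")  # 0.067 < 0.080
    # Integration is deemed optimal if the empirically measured bimodal SD and
    # cue weights (read out, e.g., from small audiovisual conflicts) match
    # these predictions.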


Experimental Brain Research | 2012

Multisensory simultaneity recalibration: storage of the aftereffect in the absence of counterevidence

Tonja-Katrin Machulla; Massimiliano Di Luca; Eva Froehlich; Marc O. Ernst

Recent studies show that repeated exposure to an asynchrony between auditory and visual stimuli shifts the point of subjective simultaneity. Usually, the measurement stimuli used to assess this aftereffect are interleaved with short re-exposures to the asynchrony. In a first experiment, we show that the aftereffect declines during measurement in spite of the use of re-exposures. In a second experiment, we investigate whether the observed decline is either due to a dissipation of the aftereffect with the passage of time, or the result of using measurement stimuli with a distribution of asynchronies different from the exposure stimulus. To this end, we introduced a delay before measuring the aftereffects and we compared the magnitude of the aftereffect with and without delay. We find that the aftereffect does not dissipate during the delay but instead is stored until new sensory information in the form of measurement stimuli is presented as counterevidence (i.e., stimuli with an asynchrony that differs from the one used during exposure).


Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2009

Computationally efficient techniques for data-driven haptic rendering

Raphael Höver; Massimiliano Di Luca; Gábor Székely; Matthias Harders

Data-driven haptic rendering requires processing of raw recorded signals, which leads to high computational effort for large datasets. To achieve real-time performance, one possibility is to reduce the parameter space of the employed interpolation technique, which generally decreases the accuracy in the rendering. In this paper, we propose a method for guiding this parameter reduction to maintain high accuracy with respect to the just noticeable difference for forces. To this end, we performed a user study to estimate this perception threshold. The threshold is used to assess the final error in the rendered forces as well as for the parameter reduction process. Comparison with measured data from real object interactions confirms the accuracy of our method and highlights the reduced computational effort.
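
A minimal sketch of the guiding idea (illustrative Python, not the paper's implementation; the Weber fraction and absolute floor are assumed values): parameters are removed only while the worst-case error between rendered and recorded forces stays below the force JND, so the reduction remains imperceptible.

    import numpy as np

    def force_jnd(force, weber=0.10, floor=0.05):
        """Assumed discrimination threshold (N) for forces of this magnitude."""
        return np.maximum(weber * np.abs(force), floor)

    def prune(coeffs, design_matrix, recorded_forces):
        """Greedily zero the smallest interpolation coefficients while the
        rendered force stays within one JND of the recording everywhere."""
        coeffs = coeffs.copy()
        for i in np.argsort(np.abs(coeffs)):
            trial = coeffs.copy()
            trial[i] = 0.0
            error = np.abs(design_matrix @ trial - recorded_forces)
            if np.all(error < force_jnd(recorded_forces)):
                coeffs = trial   # error is imperceptible: keep the reduction
        return coeffs

Here `design_matrix` (a hypothetical name) maps interpolation coefficients to forces at the recorded interaction samples; fewer nonzero coefficients means less computation per rendered frame.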


ACM Transactions on Applied Perception | 2010

User-based evaluation of data-driven haptic rendering

Raphael Höver; Massimiliano Di Luca; Matthias Harders

In this article, the data-driven haptic rendering approach presented in our earlier work is assessed. The approach relies on recordings from real objects from which a data-driven model is derived that captures the haptic properties of the object. We conducted two studies. In the first study, the Just Noticeable Difference (JND) for small forces, as encountered in our set-up, was determined. JNDs were obtained both for active and passive user interaction. A conservative threshold curve was derived that was then used to guide the model generation in the second study. The second study examined the achievable rendering fidelity for two objects with different stiffnesses. Subjects directly compared data-driven virtual feedback with the real objects. Results indicated that it is crucial to include dynamic material effects to achieve haptic feedback that cannot be distinguished from real objects. Results also showed that the fidelity is considerably decreased for stiffer objects due to limits of the display hardware.


Scientific Reports | 2016

Optimal perceived timing: integrating sensory information with dynamically updated expectations

Massimiliano Di Luca; Darren Rhodes

The environment has a temporal structure, and knowing when a stimulus will appear translates into increased perceptual performance. Here we investigated how the human brain exploits temporal regularity in stimulus sequences for perception. We find that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted. Stimuli presented earlier than expected are perceptually delayed, whereas stimuli presented on time and later than expected are perceptually accelerated. This result suggests that the brain regularizes slightly deviant stimuli with an asymmetry that leads to the perceptual acceleration of expected stimuli. We present a Bayesian model for the combination of dynamically-updated expectations, in the form of a priori probability of encountering future stimuli, with incoming sensory information. The asymmetries in the results are accounted for by the asymmetries in the distributions involved in the computational process.
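
A minimal sketch of the idea with Gaussian stand-ins (the paper's model uses asymmetric distributions, which is what produces the asymmetric acceleration; the constants here are made up): perceived onset time is a precision-weighted compromise between the sensed time and the time expected from the regular sequence.

    def perceived_time(t_sensed, t_expected, var_sense=400.0, var_prior=900.0):
        """Posterior mean for a Gaussian likelihood x Gaussian prior (ms^2)."""
        w = var_prior / (var_sense + var_prior)   # weight on sensory evidence
        return w * t_sensed + (1.0 - w) * t_expected

    # A stimulus arriving 50 ms earlier than expected is perceptually delayed,
    # i.e., pulled toward the expected time:
    print(perceived_time(t_sensed=950.0, t_expected=1000.0))  # ~965 ms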


Vision Research | 2015

fMRI evidence for areas that process surface gloss in the human visual cortex.

Hua-Chun Sun; Hiroshi Ban; Massimiliano Di Luca; Andrew E. Welchman

Highlights:
• Glossiness information is mainly processed along the ventral visual pathway.
• The posterior fusiform sulcus (pFs) is especially selective for surface gloss.
• V3B/KO responds to gloss, but differentially from the pFs.

Collaboration


Massimiliano Di Luca's top co-authors:

• Darren Rhodes (University of Birmingham)
• Ninja K. Horr (University of Birmingham)
• Hua-Chun Sun (University of Birmingham)
• Hiroshi Ban (National Institute of Information and Communications Technology)