Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alessia Tonelli is active.

Publication


Featured research published by Alessia Tonelli.


Neuroscience & Biobehavioral Reviews | 2016

Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children.

Monica Gori; Giulia Cappagli; Alessia Tonelli; Gabriel Baud-Bovy; Sara Finocchietti

Considering that cortical plasticity is maximal in the child, why are the majority of technological devices available for visually impaired users meant for adults and not for children? Moreover, despite high technological advancements in recent years, why is there still no full user acceptance of existing sensory substitution devices? The goal of this review is to create a link between neuroscientists and engineers by opening a discussion about the direction that the development of technological devices for visually impaired people is taking. Firstly, we review works on spatial and social skills in children with visual impairments, showing that lack of vision is associated with other sensory and motor delays. Secondly, we present some of the technological solutions developed to date for visually impaired people. Doing this, we highlight the core features of these systems and discuss their limits. We also discuss the possible reasons behind the low adaptability in children.


PLOS ONE | 2016

Depth Echolocation Learnt by Novice Sighted People.

Alessia Tonelli; Luca Giulio Brayda; Monica Gori

Some blind people have developed a unique technique, called echolocation, to orient themselves in unknown environments. More specifically, by self-generating a clicking noise with the tongue, echolocators gain knowledge about the external environment by perceiving detailed object features. It is not clear to date whether sighted individuals can also develop this extremely useful technique. To investigate this, here we test the ability of novice sighted participants to perform a depth echolocation task. Moreover, in order to evaluate whether the type of room (anechoic or reverberant) and the type of clicking sound (with the tongue or with the hands) influences the learning of this technique, we divided the sample into four groups. Half of the participants produced the clicking sound with their tongue, the other half with their hands. Half of the participants performed the task in an anechoic chamber, the other half in a reverberant room. Subjects stood in front of five bars, each of a different size, placed at five different distances from the subject. The dimensions of the bars ensured a constant subtended angle across the five distances considered. The task was to identify the correct distance of the bar. We found that, even by the second session, the participants were able to judge the correct depth of the bar at a rate greater than chance. Improvements in both precision and accuracy were observed in all experimental sessions. More interestingly, we found significantly better performance in the reverberant room than in the anechoic chamber. The type of clicking did not modulate our results. This suggests that the echolocation technique can also be learned by sighted individuals and that room reverberation can influence this learning process. More generally, this study shows that total loss of sight is not a prerequisite for echolocation skills; this has important potential implications for rehabilitation settings for persons with residual vision.
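The constant-subtended-angle design described above follows from simple trigonometry: a bar at distance d subtending a fixed visual angle θ must have height h = 2 d tan(θ/2), so bar size grows linearly with distance. A minimal sketch of this relation (the distances and the 10-degree angle are illustrative assumptions, not the paper's actual stimulus values):

```python
import math

def bar_height(distance_m: float, angle_deg: float) -> float:
    """Height of a bar subtending a fixed visual angle at a given distance."""
    return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

# Five hypothetical distances; the subtended angle (here 10 degrees) stays constant.
distances = [0.5, 1.0, 1.5, 2.0, 2.5]
heights = [bar_height(d, 10) for d in distances]
# Because h = 2 * d * tan(theta/2), the ratio height/distance is the same at every depth,
# so the bars cannot be told apart by angular size alone.
```

This is why the design isolates depth cues: with angular size held constant, only echo-based information (e.g. delay and reverberation) distinguishes the five distances.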


Frontiers in Systems Neuroscience | 2015

Task-dependent calibration of auditory spatial perception through environmental visual observation

Alessia Tonelli; Luca Giulio Brayda; Monica Gori

Visual information is paramount to space perception, and vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test these issues we investigated possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (a normal room and an anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task, but not in the MAA task, after observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environmental observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes may be the cue that underpins visual calibration: echoes may mediate the transfer of information from the visual to the auditory system.


PLOS ONE | 2017

Intercepting a sound without vision

Tiziana Vercillo; Alessia Tonelli; Monica Gori

Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies, but specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by comparing localization accuracy between a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds, and their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that, in sighted people, the availability of visual cues during early infancy may affect sensory-motor interactions.


Cognition | 2018

Early visual deprivation prompts the use of body-centered frames of reference for auditory localization

Tiziana Vercillo; Alessia Tonelli; Monica Gori

The effects of early visual deprivation on auditory spatial processing are controversial. Results from recent psychophysical studies show that people who were born blind have a spatial impairment in localizing sound sources within specific auditory settings, while previous psychophysical studies revealed enhanced auditory spatial abilities in early blind compared to sighted individuals. Why an auditory spatial deficit is sometimes observed within blind populations, and why it is task-dependent, remains to be clarified. We investigated auditory spatial perception in early blind adults and demonstrated that the deficit derives from blind individuals' reduced ability to remap sound locations using an external frame of reference. We found that performance in the blind population was severely impaired when participants were required to localize brief auditory stimuli with respect to external acoustic landmarks (external reference frame) or to reproduce the spatial distance between two sounds. However, they performed similarly to sighted controls when they had to localize sounds with respect to their own hand (body-centered reference frame) or to judge the distances of sounds from their finger. These results suggest that early visual deprivation and the lack of visual contextual cues during the critical period induce a preference for body-centered over external spatial auditory representations.


Frontiers in Psychology | 2016

The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons

Alessia Tonelli; Monica Gori; Luca Giulio Brayda

We have recently shown that vision is important for improving spatial auditory cognition. In this study, we investigate whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people, one experimental and one control group, in an auditory space bisection task. In the first group, the bisection task was performed three times: the participants explored the 3D tactile model of the room with their hands and were led along the perimeter of the room between the first and second executions of the space bisection, and were then allowed to remove the blindfold for a few minutes and look at the room between the second and third executions. The control group, instead, repeated the space bisection task twice consecutively without any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues provided no further benefit once tactile spatial cues had been internalized. No improvement was found between the first and second executions of the space bisection in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory spatial task just as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.


Scientific Reports | 2018

How body motion influences echolocation while walking

Alessia Tonelli; Claudio Campus; Luca Giulio Brayda

This study investigated the influence of body motion on an echolocation task. We asked a group of blindfolded novice sighted participants to walk along a corridor made of plastic sound-reflecting panels. By self-generating mouth clicks, the participants attempted to identify a spatial property of the corridor, i.e. whether it turned left, turned right, or ended in a dead end. They were asked to explore the corridor and stop whenever they were confident about the corridor shape. Their body motion was captured by a camera system and coded. Most participants were able to accomplish the task, with a percentage of correct guesses above chance level. We found a mutual interaction between kinematic variables that can lead to optimal echolocation skills: head motion, accounting for spatial exploration; the motion stop-point of the person; and the number of correct guesses about the spatial structure. The results confirm that sighted people are able to use self-generated echoes to navigate in a complex environment. The inter-individual variability and the quality of echolocation performance seem to depend on how, and how much, the space is explored.
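With three possible corridor shapes (left turn, right turn, dead end), chance level is 1/3, so "above chance" can be checked with a one-sided exact binomial test. A minimal sketch of that check (the trial counts below are hypothetical, not the paper's data, and this is an illustration of the statistic, not the authors' actual analysis):

```python
from math import comb

def binom_p_greater(successes: int, trials: int, p0: float = 1 / 3) -> float:
    """One-sided exact binomial p-value: P(X >= successes) under chance rate p0."""
    return sum(
        comb(trials, k) * p0**k * (1 - p0) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical participant: 14 correct corridor judgments out of 20 trials.
p_value = binom_p_greater(14, 20)
# A small p-value indicates performance reliably above the 1/3 chance level.
```

The exact test is preferable to a normal approximation here because per-participant trial counts in this kind of walking task are typically small.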


Scientific Reports | 2017

Anticipatory action planning in blind and sighted individuals

Andrea Cavallo; Caterina Ansuini; Monica Gori; Carla Tinti; Alessia Tonelli; Cristina Becchio

Several studies on visually guided reach-to-grasp movements have documented that how objects are grasped differs depending on the actions one intends to perform subsequently. However, no previous study has examined whether this differential grasping may also occur without visual input. In this study, we used motion capture technology to investigate the influence of visual feedback and prior visual experience on the modulation of kinematics by intention in sighted (in both full-vision and no-vision conditions), early-blind and late-blind participants. Results provide evidence of modulation of kinematics by intention to a similar degree under both full-vision and no-vision conditions. Moreover, they demonstrate that prior visual experience has little impact on the tailoring of grasping movements to intention. This suggests that sequential action planning does not depend on visual input, and may instead be ascribed to the function of a multisensory-motor cortical network that operates and develops not only in light, but also in darkness.


Journal of the Acoustical Society of America | 2017

Investigate echolocation with non-disabled individuals

Alessia Tonelli; Luca Giulio Brayda; Monica Gori

Vision is the most important sense in the domain of spatial perception. Congenitally blind individuals, who cannot rely on vision, show impairments in performing complex spatial auditory tasks. The echolocation technique allows blind people to compensate for this auditory spatial deficit. Here, we present an overview of our work. First, we show that sighted people can also acquire spatial information through echolocation, i.e., localize an aperture or discriminate the depth of an object located in front of them. Second, we identified some kinematic variables that can predict echolocation performance. Third, we show that echolocation not only helps in understanding external space, but can also influence internal models of the body-space relation, such as the peripersonal space (PPS). We discuss all these aspects, showing that human beings are sensitive to echoes. Spatial information can be acquired through echolocation when vision is not available, also in people who would normally acquire the same information through ...


Journal of the Acoustical Society of America | 2017

Restoring an allocentric reference frame in blind individuals through echolocation

Tiziana Vercillo; Alessia Tonelli; Melvyn A. Goodale; Monica Gori

Recent psychophysical studies have described task-specific auditory spatial deficits in congenitally blind individuals. We investigated auditory spatial perception in congenitally blind children and adults during different auditory spatial tasks that required the localization of brief auditory stimuli with respect to either external acoustic landmarks (allocentric reference frame) or their own body (egocentric reference frame). Early blind participants successfully represented sound locations with respect to their body. However, they showed relatively poor precision compared to sighted participants when localizing sounds with respect to external auditory landmarks, suggesting that vision is crucial for an allocentric representation of auditory space. In a separate study, we tested three congenitally blind individuals who used echolocation as a navigational strategy, to assess the benefit of echolocation on auditory spatial perception. Blind echolocators did not show the same impairment in...

Collaboration


Dive into Alessia Tonelli's collaboration network.

Top Co-Authors

Monica Gori (Istituto Italiano di Tecnologia)
Luca Giulio Brayda (Istituto Italiano di Tecnologia)
Tiziana Vercillo (Istituto Italiano di Tecnologia)
Claudio Campus (Istituto Italiano di Tecnologia)
Caterina Ansuini (Istituto Italiano di Tecnologia)
Cristina Becchio (Istituto Italiano di Tecnologia)
Gabriel Baud-Bovy (Istituto Italiano di Tecnologia)
Giulia Cappagli (Istituto Italiano di Tecnologia)
Luigi F. Cuturi (Istituto Italiano di Tecnologia)