Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Olli Santala is active.

Publication


Featured research published by Olli Santala.


Hearing Research | 2014

Visualization of functional count-comparison-based binaural auditory model output

Marko Takanen; Olli Santala; Ville Pulkki

The count-comparison principle in binaural auditory modeling is based on the assumption that there are nuclei in the mammalian auditory pathway that encode directional cues in the rate of their output. When this principle is applied, the outputs of the modeled nuclei do not directly result in a topographically organized map of the auditory space that could be monitored as such. Therefore, this article presents the nucleus models together with a method for visualizing the information in their outputs. The functionality of the auditory model presented here is tested in various binaural listening scenarios, including localization tasks, the discrimination of a target in the presence of a distracting sound, and sound scenes consisting of multiple simultaneous sound sources. The performance of the model is illustrated with binaural activity maps. The activations seen in the maps are compared to human performance in similar scenarios, and it is shown that the performance of the model is in accordance with the psychoacoustical data.
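
The count-comparison model itself is too involved for a short example, but the core idea of a binaural activity map (activation as a function of time and lateral position) can be illustrated with a much simpler interaural cross-correlation front end. A minimal Python sketch; the frame length, lag range, and toy ITD stimulus are illustrative assumptions, not the model's parameters:

```python
import numpy as np

fs = 48000
max_lag = int(round(0.8e-3 * fs))          # roughly the human ITD range
lags = np.arange(-max_lag, max_lag + 1)

# Toy stimulus: 1 s of noise lateralized by a 0.4 ms ITD (left ear leads).
rng = np.random.default_rng(0)
left = rng.standard_normal(fs)
right = np.roll(left, int(round(0.4e-3 * fs)))

def frame_xcorr(l, r, lags):
    """Normalized interaural cross-correlation of one frame."""
    out = np.empty(len(lags))
    for i, d in enumerate(lags):
        a = l[d:] if d >= 0 else l[:d]
        b = r[:len(r) - d] if d >= 0 else r[-d:]
        out[i] = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return out

win, hop = int(0.02 * fs), int(0.01 * fs)  # 20 ms frames, 50% overlap
activity = np.array([frame_xcorr(left[s:s + win], right[s:s + win], lags)
                     for s in range(0, len(left) - win, hop)])
# Each row of `activity` is one frame over lateral positions (lags); the
# peak lag tracks the ITD (negative lag = left ear leads in this convention).
print(lags[activity.mean(axis=0).argmax()] / fs * 1e3, "ms")
```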


Journal of the Acoustical Society of America | 2011

Directional perception of distributed sound sources.

Olli Santala; Ville Pulkki

The perception of spatially distributed sound sources was investigated by conducting two listening experiments in anechoic conditions with 13 loudspeakers evenly distributed in the frontal horizontal plane emitting incoherent noise signals. In the first experiment, widely distributed sound sources with gaps in their distribution emitted pink noise. The results indicated that the exact loudspeaker distribution could not be perceived accurately and that the width of the distribution was perceived to be narrower than it was in reality. Up to three spatially distributed loudspeakers that were simultaneously emitting sound could be individually perceived. In addition, the number of loudspeakers that were indicated as emitting sound was smaller than the actual number. In the second experiment, a reference with 13 loudspeakers and test cases with fewer loudspeakers were presented and their perceived spatial difference was rated. The effect of the noise bandwidth was of particular interest. Noise with different bandwidths centered around 500 and 4000 Hz was used. The results indicated that when the number of loudspeakers was increased from four to seven, the perceived auditory event was very similar to that perceived with 13 loudspeakers at all bandwidths. The perceived differences were larger in wideband noise than in narrow-band noise.
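
A minimal sketch of the kind of stimulus described above: mutually incoherent pink-noise signals, one per loudspeaker, generated by 1/f spectral shaping of independent white noise. The sample rate and duration are illustrative assumptions:

```python
import numpy as np

def pink_noise(n_samples, rng):
    """Pink (1/f power) noise via spectral shaping of white noise."""
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                    # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)             # 1/f power -> 1/sqrt(f) amplitude
    noise = np.fft.irfft(spectrum, n_samples)
    return noise / np.max(np.abs(noise))

fs, dur, n_speakers = 48000, 1.0, 13       # 13 loudspeakers, as in the study
rng = np.random.default_rng(1)
signals = np.stack([pink_noise(int(fs * dur), rng)
                    for _ in range(n_speakers)])
# Independent noise realizations are mutually incoherent, so each
# loudspeaker contributes a distinct component of the distributed source.
```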


Hearing Research | 2015

Integrated processing of spatial cues in human auditory cortex

Nelli H. Salminen; Marko Takanen; Olli Santala; Jarkko Lamminsalo; Alessandro Altoè; Ville Pulkki

Human sound source localization relies on acoustical cues, most importantly the interaural differences in time and level (ITD and ILD). To reach a unified representation of auditory space, the auditory nervous system needs to combine the information provided by these two cues. In search of such a unified representation, we conducted a magnetoencephalography (MEG) experiment that took advantage of the location-specific adaptation of the auditory cortical N1 response. In general, the attenuation that a preceding adaptor sound causes in the response elicited by a probe depends on their spatial arrangement: if the two sounds coincide, adaptation is stronger than when the locations differ. Here, we presented adaptor-probe pairs that contained different localization cues, for instance, adaptors with ITD and probes with ILD. We found that the adaptation of the N1 amplitude was location-specific across localization cues. This result can be explained by the existence of auditory cortical neurons that are sensitive to sound source location independently of which cue, ITD or ILD, provides the location information. Such neurons would form a cue-independent, unified representation of auditory space in the human auditory cortex.
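
A minimal sketch of cue-isolated stimuli like the adaptor-probe pairs described above: the same sound lateralized either by ITD alone or by ILD alone. The cue magnitudes and the 500 Hz carrier are illustrative assumptions, not the study's parameters:

```python
import numpy as np

fs = 48000
t = np.arange(int(0.1 * fs)) / fs          # 100 ms tone
tone = np.sin(2 * np.pi * 500 * t)         # 500 Hz carrier

def with_itd(sig, itd_s):
    # Delay the right channel: positive ITD lateralizes to the left.
    n = int(round(itd_s * fs))
    right = np.concatenate([np.zeros(n), sig])[:len(sig)]
    return np.stack([sig, right])

def with_ild(sig, ild_db):
    # Attenuate the right channel: positive ILD lateralizes to the left.
    return np.stack([sig, sig * 10 ** (-ild_db / 20)])

adaptor = with_itd(tone, 0.5e-3)           # location cued by ITD
probe = with_ild(tone, 10.0)               # same side, cued by ILD instead
```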


Hearing Research | 2015

Human cortical sensitivity to interaural time difference in high-frequency sounds

Nelli H. Salminen; Alessandro Altoè; Marko Takanen; Olli Santala; Ville Pulkki

Human sound source localization relies on various acoustical cues, one of the most important being the interaural time difference (ITD). ITD is best detected in the fine structure of low-frequency sounds, but it may also contribute to spatial hearing at higher frequencies if extracted from the sound envelope. The human brain mechanisms related to this envelope ITD cue remain unexplored. Here, we tested the sensitivity of the human auditory cortex to envelope ITD in magnetoencephalography (MEG) recordings. We found two types of sensitivity to envelope ITD. First, the amplitude of the auditory cortical N1m response was smaller for zero envelope ITD than for long envelope ITDs corresponding to the sound being in opposite phase in the two ears. Second, the N1m response amplitude showed ITD-specific adaptation for both fine-structure and envelope ITD. The auditory cortical sensitivity was weaker for envelope ITD in high-frequency sounds than for fine-structure ITD in low-frequency sounds, but it occurred within the range of ITDs encountered in natural conditions. Finally, the participants were briefly tested for their behavioral ability to detect envelope ITD. Interestingly, we found a correlation between behavioral performance and neural sensitivity to envelope ITD. In conclusion, our findings show that the human auditory cortex is sensitive to ITD in the envelope of high-frequency sounds and that this sensitivity may have behavioral relevance.
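
A minimal sketch of an envelope-ITD stimulus of the kind discussed above: a high-frequency carrier whose slow amplitude envelope, rather than its fine structure, is delayed between the ears. The carrier and modulation frequencies and the ITD value are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert

fs = 48000
t = np.arange(int(0.5 * fs)) / fs

def envelope(delay_s):
    # 32 Hz raised-sine amplitude envelope, optionally delayed.
    return 0.5 * (1.0 + np.sin(2 * np.pi * 32 * (t - delay_s)))

carrier = np.sin(2 * np.pi * 4000 * t)     # 4 kHz fine structure
itd = 0.5e-3                               # 0.5 ms envelope ITD

left = envelope(0.0) * carrier             # identical fine structure in
right = envelope(itd) * carrier            # both ears; only the envelope
                                           # is delayed between them

extracted = np.abs(hilbert(left))          # the envelope cue a model
                                           # would extract from one ear
```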


Archive | 2013

Binaural Assessment of Parametrically Coded Spatial Audio Signals

Marko Takanen; Olli Santala; Ville Pulkki

In parametric time-frequency-domain spatial audio techniques, the sound field is encoded as a combination of a few audio channels with metadata. The metadata parametrizes those spatial properties of the sound field that are known to be perceivable by humans. The best-known techniques are reviewed in this chapter. The spatial artifacts specific to such techniques are described, such as dynamically or statically biased directions, spatially too-narrow auditory images, and the effects of off-sweet-spot listening. Such cases are analyzed with a binaural auditory model, and it is shown that the model clearly visualizes these artifacts.
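
One widely known member of this technique family is Directional Audio Coding (DirAC), which parametrizes each time-frequency tile by a direction and a diffuseness. A minimal broadband sketch under simplifying assumptions: it skips the time-frequency transform that a real codec applies per band, and it assumes an idealized B-format convention in which a plane wave from azimuth theta gives x = w*cos(theta) and y = w*sin(theta):

```python
import numpy as np

def dirac_params(w, x, y):
    """Broadband direction and diffuseness estimates from B-format."""
    # Intensity-like products: with the convention above, their mean
    # points toward the source azimuth.
    ix, iy = np.mean(w * x), np.mean(w * y)
    azimuth = np.degrees(np.arctan2(iy, ix))
    # With this normalization a single plane wave gives ~0 diffuseness
    # and an ideal diffuse field gives ~1 (conventions vary by format).
    energy = np.mean(0.5 * (w**2 + x**2 + y**2))
    diffuseness = 1.0 - np.hypot(ix, iy) / (energy + 1e-12)
    return azimuth, diffuseness

# Plane wave from 30 degrees: expect azimuth ~30, diffuseness ~0.
rng = np.random.default_rng(5)
s = rng.standard_normal(48000)
theta = np.radians(30.0)
print(dirac_params(s, s * np.cos(theta), s * np.sin(theta)))
```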


Journal of the Acoustical Society of America | 2016

Auditory localization by subjects with unilateral tinnitus

Petteri Hyvärinen; Catarina Mendonça; Olli Santala; Ville Pulkki; Antti Aarnisalo

Tinnitus is associated with changes in neural activity. How such alterations impact the localization ability of subjects with tinnitus remains largely unexplored. In this study, subjects with self-reported unilateral tinnitus were compared to subjects with matching hearing loss at high frequencies and to normal-hearing subjects in horizontal and vertical plane localization tasks. Subjects were asked to localize a pink noise source either alone or over background noise. Results showed some degree of difference between subjects with tinnitus and subjects with normal hearing in horizontal plane localization, which was exacerbated by background noise. However, this difference could be explained by different hearing sensitivities between groups. In vertical plane localization there was no difference between groups in the binaural listening condition, but in monaural listening the tinnitus group localized significantly worse with the tinnitus ear. This effect remained when accounting for differences in hearing sensitivity. It is concluded that tinnitus may degrade auditory localization ability, but this effect is for the most part due to the associated levels of hearing loss. More detailed studies are needed to fully disentangle the effects of hearing loss and tinnitus.


Journal of the Acoustical Society of America | 2013

Fusion of spatially separated vowel formant cues

Marko Takanen; Tuomo Raitio; Olli Santala; Paavo Alku; Ville Pulkki

Previous studies on fusion in speech perception have demonstrated the ability of the human auditory system to group separate components of speech-like sounds together and consequently to enable the identification of speech despite the spatial separation between the components. Typically, the spatial separation has been implemented using headphone reproduction, where the different components evoke auditory images at different lateral positions. In the present study, a multichannel loudspeaker system was used to investigate whether the correct vowel is identified and whether two auditory events are perceived when a noise-excited vowel is divided into two spatially separated components. The two components consisted of the even and odd formants. Both the amount of spatial separation between the components and the directions of the components were varied. Neither the spatial separation nor the directions of the components affected the vowel identification. Interestingly, an additional auditory event not associated with any vowel was perceived simultaneously when the components were presented symmetrically in front of the listener. In such scenarios, the vowel was perceived from the direction of the odd formant components.
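
A minimal sketch of the stimulus idea: a noise-excited vowel whose odd formants (F1, F3) and even formants (F2, F4) are synthesized as two separate signals that could be routed to different loudspeakers. The formant frequencies are textbook values for /a/ and the resonator Q is an assumption, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import iirpeak, lfilter

fs = 16000
rng = np.random.default_rng(2)
excitation = rng.standard_normal(fs)       # 1 s of noise excitation

def formant_band(sig, freq_hz, q=10.0):
    # Narrow second-order resonance approximating one formant.
    b, a = iirpeak(freq_hz, q, fs=fs)
    return lfilter(b, a, sig)

formants = [700, 1100, 2600, 3300]         # rough F1..F4 of /a/
odd_component = sum(formant_band(excitation, f) for f in formants[0::2])
even_component = sum(formant_band(excitation, f) for f in formants[1::2])
# Playing odd_component and even_component from different directions
# recreates the spatial separation manipulation described above.
```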


Journal of the Acoustical Society of America | 2013

Effect of spectral overlap on the echo suppression threshold for single reflection conditions

Andreas Walther; Philip W. Robinson; Olli Santala

In performing arts venues, the spectra of direct and reflected sound at a receiving location differ due to the seat-dip effect, diffusive and absorptive surfaces, and source directivity. This paper examines the influence of differing lead and lag spectral content on the echo suppression threshold. The results indicate that, for a highpass-filtered direct sound and a broadband reflection, attenuation of low frequencies initially results in an increase in the echo suppression threshold, while for higher cutoff frequencies the threshold decreases drastically. For a broadband direct sound and filtered reflections, the echo suppression threshold is inversely related to the high-frequency content.
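
A minimal sketch of a single-reflection stimulus of the kind used in such experiments: a highpass-filtered direct sound (lead) followed by a broadband copy (lag) at a given delay and level. The cutoff, delay, and level values are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
rng = np.random.default_rng(3)
burst = rng.standard_normal(int(0.2 * fs))   # 200 ms noise burst

sos = butter(4, 1000, btype="highpass", fs=fs, output="sos")
lead = sosfilt(sos, burst)                   # highpass-filtered direct sound

delay_s, level_db = 0.030, -6.0              # 30 ms lag, 6 dB down
lag = np.concatenate([np.zeros(int(delay_s * fs)),
                      burst * 10 ** (level_db / 20)])
stimulus = np.concatenate([lead, np.zeros(len(lag) - len(lead))]) + lag
# Raising delay_s until the lag is heard as a separate echo gives the
# echo suppression threshold for this lead/lag spectral combination.
```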


International Conference on Acoustics, Speech, and Signal Processing | 2012

On measuring the intelligibility of synthetic speech in noise — Do we need a realistic noise environment?

Tuomo Raitio; Marko Takanen; Olli Santala; Antti Suni; Martti Vainio; Paavo Alku

Assessing the intelligibility of synthetic speech is important in creating synthetic voices for real-life applications, especially those involving interfering noise. This raises the question of how to measure the intelligibility of synthetic speech so that such conditions are simulated correctly. Conventionally, this has been done using a simple listening test setup in which diotic speech and noise are played to both ears over headphones. This is very different from a real noise environment, where speech and noise are spatially distributed. This paper addresses the question of whether a realistic noise environment should be used to test the intelligibility of synthetic speech. Three test conditions are evaluated: one with multichannel reproduction of noise and speech, and two with headphone setups. Tests are performed with natural and synthetic speech, including speech specifically intended for noisy conditions. The results indicate a general trend across all setups but also some interesting differences.
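
A minimal sketch of the conventional diotic condition the paper questions: speech and noise mixed at a target signal-to-noise ratio and presented identically to both ears. The placeholder signals and SNR value are assumptions; a real test would use recorded speech and noise:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so the speech-to-noise power ratio hits snr_db.
    ps = np.mean(speech ** 2)
    pn = np.mean(noise ** 2)
    gain = np.sqrt(ps / (pn * 10 ** (snr_db / 10)))
    mono = speech + gain * noise
    return np.stack([mono, mono])       # diotic: same signal to both ears

rng = np.random.default_rng(4)
speech = rng.standard_normal(48000)     # placeholder for a speech signal
noise = rng.standard_normal(48000)      # placeholder for interfering noise
stimulus = mix_at_snr(speech, noise, snr_db=0.0)
```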


Journal of the Acoustical Society of America | 2015

Neural realignment of spatially separated sound components

Nelli H. Salminen; Marko Takanen; Olli Santala; Paavo Alku; Ville Pulkki

Natural auditory scenes often consist of several sound sources that overlap in time but are separated in space. Yet location is not fully exploited in auditory grouping: spatially separated sounds can be perceptually fused into a single auditory object, and this leads to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they are deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.

Collaboration


Dive into Olli Santala's collaborations.

Top Co-Authors

Antti Aarnisalo

Helsinki University Central Hospital

Antti Suni

University of Helsinki
