
Publication


Featured research published by Pascal Gaillard.


PLOS ONE | 2015

Perception of Everyday Sounds: A Developmental Study of a Free Sorting Task

Aurore Berland; Pascal Gaillard; Michèle Guidetti; Pascal Barone

Objectives: The categorization of everyday sounds is a crucial aspect of how we perceive the surrounding world, yet it remains poorly explored in developmental studies. The aim of our study was to understand the nature and logic of the construction of auditory cognitive categories for natural sounds during development, using an original approach based on a free sorting task (FST). Categorization is fundamental for structuring the world and related cognitive skills, and it does not require the use of language. Our project explored children's ability to structure their acoustic world and investigated how this structuring matures during typical development. We hypothesized that age affects the listening strategy and the category decisions, as well as the number and content of individual categories.

Design: Eighty-two French children (6–9 years), 20 teenagers (12–13 years), and 24 young adults participated in the study. Perception and categorization of everyday sounds were assessed with an FST composed of 18 sounds belonging to three a priori categories: non-linguistic human vocalizations, environmental sounds, and musical instruments.

Results: Children listened to the sounds more times than older participants, built significantly more classes than adults, and used a different classification strategy, indicating an age effect on how participants accomplished the task. The categorizations produced by 6-year-old children suggest that this age is a pivotal stage, consistent with the progressive change from non-logical reasoning based mainly on perceptual representations to the logical reasoning used by older children. In conclusion, our results suggest that auditory object categorization develops through distinct stages, while the intrinsic basis for classifying sounds is already present in childhood.
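A free sorting task of this kind is typically analysed by turning participants' groupings into a co-occurrence or dissimilarity matrix over the sound set. The Python sketch below illustrates that step under simple assumptions (each sort recorded as one group label per sound); it is not the authors' analysis code, and the example data are invented.

```python
# Illustrative sketch (not the authors' code): turning free-sorting data into a
# dissimilarity matrix. Each participant's sort assigns every sound a group label;
# two sounds are close when many participants put them in the same group.
import numpy as np

def fst_dissimilarity(sorts, n_sounds):
    """sorts: one sequence of group labels per participant, one label per sound."""
    co = np.zeros((n_sounds, n_sounds))
    for labels in sorts:
        labels = np.asarray(labels)
        co += (labels[:, None] == labels[None, :]).astype(float)
    co /= len(sorts)          # proportion of participants grouping sound i with sound j
    return 1.0 - co           # 0 = always sorted together, 1 = never sorted together

# Hypothetical example: 3 participants sorting 5 sounds into groups of their choice
sorts = [
    [0, 0, 1, 1, 2],
    [0, 0, 0, 1, 1],
    [0, 1, 1, 2, 2],
]
print(np.round(fst_dissimilarity(sorts, n_sounds=5), 2))
```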


Applied Ergonomics | 2015

Sonification of in-vehicle interface reduces gaze movements under dual-task condition

Julien Tardieu; Nicolas Misdariis; Sabine Langlois; Pascal Gaillard; Céline Lemercier

In-car infotainment systems (ICIS) often degrade driving performance because they divert the driver's gaze from the driving scene. This paper examines sonification of hierarchical menus (such as those found in most ICIS) as one possible way to reduce gaze movements towards the visual display. In a dual-task laboratory experiment, 46 participants were asked to prioritize a primary task (a continuous target detection task) while simultaneously navigating a realistic mock-up of an ICIS, either sonified or not. Results indicated that sonification significantly increased the time spent looking at the primary task and significantly decreased the number and duration of gaze saccades towards the ICIS; in other words, the sonified ICIS could be used almost exclusively by ear. On the other hand, reaction times in the primary task increased in both the silent and sonified conditions. This study suggests that sonification of secondary tasks while driving could improve the driver's visual attention to the driving scene.


Hearing Research | 2016

Categorization of common sounds by cochlear implanted and normal hearing adults

E. Collett; Mathieu Marx; Pascal Gaillard; B. Roby; Bernard Fraysse; Olivier Deguine; Pascal Barone

Auditory categorization involves grouping acoustic events along one or more shared perceptual dimensions, which can relate to both semantic and physical attributes. This process involves both high-level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI). The goal of this study was twofold: (i) to compare the categorization strategies of CI users and normal-hearing listeners (NHL), and (ii) to investigate whether any characteristics of the raw acoustic signal could explain the results. Sixteen experienced CI users and 20 NHL were tested using a Free Sorting Task of 16 common sounds divided into three predefined categories: environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a categorization strategy similar to that of NHL and were able to discriminate between the three types of sounds. However, the results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted average pitch salience and average autocorrelation peak as important for the perception and categorization of the sounds. The results therefore show that, at a broad level of categorization, CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however, the perception of individual sounds remains challenging.
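The abstract names MCA and HCPC as the analysis pipeline. As a simpler stand-in (not the authors' method), the sketch below applies average-linkage hierarchical clustering from SciPy to a dissimilarity matrix of the kind produced by a free sorting task; the block-structured example data are invented, and with them the clustering recovers the three broad sound categories.

```python
# Illustrative stand-in for the clustering step (the paper reports MCA + HCPC;
# here, plain hierarchical clustering on a precomputed dissimilarity matrix).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_sounds(D, n_clusters=3):
    """D: symmetric dissimilarity matrix (e.g. from a free sorting task)."""
    condensed = squareform(D, checks=False)      # condensed distances for SciPy
    Z = linkage(condensed, method="average")     # average-linkage dendrogram
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Invented data: 6 sounds, vocal (v), environmental (e), musical (m)
names = ["v1", "v2", "e1", "e2", "m1", "m2"]
rng = np.random.default_rng(0)
D = rng.uniform(0.6, 1.0, (6, 6))
D[:2, :2] = D[2:4, 2:4] = D[4:, 4:] = 0.1        # within-category sounds are close
D = (D + D.T) / 2                                # enforce symmetry
np.fill_diagonal(D, 0.0)
print(dict(zip(names, cluster_sounds(D))))       # expected: three clusters of two
```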


Hearing Research | 2018

Categorisation of natural sounds at different stages of auditory recovery in cochlear implant adult deaf patients

Kuzma Strelnikov; Edward Collett; Pascal Gaillard; Eric Truy; Olivier Deguine; Mathieu Marx; Pascal Barone

Previous studies have demonstrated that cochlear implant (CI) patients are more efficient at sound categorisation than at sound identification. However, it remains unclear how this categorisation capacity develops during the rehabilitation period after implantation. To investigate the role of post-implantation auditory experience in broad sound categorisation, we recruited CI patients with different durations of CI experience: newly implanted patients (less than six months), intermediate patients (6–14 months) and experienced patients (more than 14 months of implantation). The patients completed a Free Sorting Task (FST) in which they categorised 16 natural sounds according to their own criteria. We found an early deficit in categorisation, especially for vocal sounds; categorisation started to improve after approximately six months post-implantation, with a change of strategy that relied on different acoustic cues as a function of time with the CI. The separation of vocal sounds from the other sounds increased significantly between the newly implanted and intermediate groups, i.e. as experience with the cochlear implant was acquired. The categorisation accuracy of vocal sounds was significantly correlated with the post-implantation period only in the group of newly implanted patients. This is the first study to show that the categorisation of vocal sounds with respect to non-vocal sounds improves during the post-implantation rehabilitation period; the first six months after implantation appear to be crucial in this process. Our results also demonstrate that patients at different stages of rehabilitation use different acoustic cues, which increase in complexity with CI experience.

Highlights:
- CI users with different durations of implant experience were compared in a Free Sorting Task.
- After 6 months, CI users follow categorisation strategies similar to those of experienced CI patients.
- After 6 months, CI users are able to discriminate the predefined groups of sounds.
- CI users' categorisation is predominantly based on a musical listening mode.
- The use of spectral and temporal cues evolves post-implantation.


Journal of Speech Language and Hearing Research | 2017

Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners With Simulated Age-Related Hearing Loss

Lionel Fontan; Isabelle Ferrané; Jérôme Farinas; Julien Pinquier; Julien Tardieu; Cynthia Magnen; Pascal Gaillard; Xavier Aumont; Christian Füllgrabe

Purpose: The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated by an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists and hearing-aid dispensers in the fine-tuning of hearing aids.

Method: Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and one comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit the human performances.

Results: Strong, significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance.

Conclusion: Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performance of an ASR-based system. It remains to be determined whether the ASR system is similarly successful in predicting speech processing in noise and in older listeners with ARHL.
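A minimal sketch of the reported human-vs-ASR comparison: a Pearson correlation between the two sets of scores. The numbers below are invented for illustration; only the analysis step mirrors what the abstract describes.

```python
# Illustrative sketch: correlating human intelligibility scores with ASR scores.
# The abstract reports coefficients up to .99; the data below are invented.
from scipy.stats import pearsonr

# Hypothetical mean scores (% correct) at increasing simulated ARHL severity
human_scores = [98, 92, 85, 71, 54, 33]
asr_scores = [97, 90, 80, 62, 41, 20]

r, p = pearsonr(human_scores, asr_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```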


Journal of Speech Language and Hearing Research | 2015

Relationship Between Speech Intelligibility and Speech Comprehension in Babble Noise.

Lionel Fontan; Julien Tardieu; Pascal Gaillard; Virginie Woisard; Robert Ruiz


Science and technology for a quiet Europe | 2015

A method to collect representative samples of urban soundscapes

Julien Tardieu; Cynthia Magnen; Marie-Mandarine Colle-Quesada; Nathalie Spanghero-Gaillard; Pascal Gaillard


Percepción y Realidad. Estudios Francofónos, ISBN 978-84-690-3864-2, pp. 187–195 | 2007

La surdité phonologique illustrée par une étude de catégorisation des voyelles françaises perçues par les hispanophones

Pascal Gaillard; Michel Billières; Cynthia Magnen


Revue parole | 2005

Surdité phonologique et catégorisation: perception des voyelles françaises par les hispanophones

Michel Billières; Pascal Gaillard; Cynthia Magnen


Conference of the International Speech Communication Association | 2018

Perceptual and Automatic Evaluations of the Intelligibility of Speech Degraded by Noise Induced Hearing Loss Simulation.

Imed Laaridh; Julien Tardieu; Cynthia Magnen; Pascal Gaillard; Jérôme Farinas; Julien Pinquier

Collaboration


Dive into Pascal Gaillard's collaborations.

Top Co-Authors

Jérôme Farinas

Centre national de la recherche scientifique
