Publications


Featured research published by Tom Francart.


Journal of Neuroscience Methods | 2008

APEX 3: a multi-purpose test platform for auditory psychophysical experiments

Tom Francart; Astrid Van Wieringen; Jan Wouters

APEX 3 is a software test platform for auditory behavioral experiments. It provides a generic means of setting up experiments without any programming. The supported output devices include sound cards and cochlear implants from Cochlear Corporation and Advanced Bionics Corporation. Many psychophysical procedures are provided and there is an interface to add custom procedures. Plug-in interfaces are provided for data filters and external controllers. APEX 3 is supported under Linux and Windows and is available free of charge.
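
As an illustration of the kind of psychophysical procedure such a platform provides built in, the sketch below implements a generic 1-up/2-down adaptive staircase in Python. It is not APEX 3 code and none of the names reflect the APEX 3 interfaces; it only shows the logic such a procedure follows.

```python
# Generic sketch of a 1-up/2-down adaptive staircase, the kind of
# psychophysical procedure a platform like APEX 3 provides built in.
# All names here are illustrative; they do not reflect the APEX 3 API.

def run_staircase(present_trial, start_level=60.0, step=2.0, n_reversals=8):
    """Track the level converging on ~70.7% correct (1-up/2-down rule).

    present_trial(level) must return True for a correct response.
    """
    level = start_level
    correct_streak = 0
    direction = None          # +1 going up, -1 going down
    reversals = []

    while len(reversals) < n_reversals:
        if present_trial(level):
            correct_streak += 1
            if correct_streak == 2:          # two correct -> make task harder
                correct_streak = 0
                if direction == +1:
                    reversals.append(level)  # turning point: up -> down
                direction = -1
                level -= step
        else:                                # one wrong -> make task easier
            correct_streak = 0
            if direction == -1:
                reversals.append(level)      # turning point: down -> up
            direction = +1
            level += step

    return sum(reversals) / len(reversals)   # threshold estimate
```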


Audiology and Neuro-otology | 2008

Sensitivity to Interaural Level Difference and Loudness Growth with Bilateral Bimodal Stimulation

Tom Francart; J.P.L. Brokx; Jan Wouters

The interaural level difference (ILD) is an important cue for the localization of sound sources. The sensitivity to ILD was measured in 10 users of a cochlear implant (CI) in one ear and a hearing aid (HA) in the other, severely impaired ear. For simultaneous presentation of a pulse train on the CI side and a sinusoid on the HA side, the just noticeable difference (JND) in ILD and loudness growth functions were measured. The mean JND for pitch-matched electric and acoustic stimulation was 1.7 dB. A linear fit of the loudness growth functions on a decibel-versus-microampere scale shows that the slope depends on the subject's dynamic ranges.
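
A minimal sketch of the linear fit mentioned above, assuming hypothetical loudness ratings: loudness is regressed on level in dB for the acoustic side and in microamperes for the electric side, and the two slopes are compared. The numbers are invented for illustration and are not data from the study.

```python
# Linear fit of loudness growth on a decibel (acoustic) vs. microampere
# (electric) scale. All data points below are made up for illustration.
import numpy as np

acoustic_level_db = np.array([50, 60, 70, 80, 90])     # dB SPL (hypothetical)
acoustic_loudness = np.array([5, 14, 24, 35, 46])      # loudness ratings

electric_level_ua = np.array([100, 150, 200, 250, 300])  # microamperes (hypothetical)
electric_loudness = np.array([8, 18, 29, 38, 47])

slope_acoustic, _ = np.polyfit(acoustic_level_db, acoustic_loudness, 1)
slope_electric, _ = np.polyfit(electric_level_ua, electric_loudness, 1)

print(f"acoustic slope: {slope_acoustic:.2f} loudness units / dB")
print(f"electric slope: {slope_electric:.3f} loudness units / microampere")
```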


Audiology and Neuro-otology | 2009

Bilateral cochlear implants in children: binaural unmasking.

Lieselot Van Deun; Astrid Van Wieringen; Tom Francart; Fanny Scherf; Ingeborg Dhooge; Naima Deggouj; Christian Desloovere; Paul Van de Heyning; F. Erwin Offeciers; Leo De Raeve; Jan Wouters

Bilateral cochlear implants (CIs) may offer deaf children a range of advantages compared to unilateral CIs. However, speech perception in noise is mainly facilitated by better-ear effects and much less by interaural comparisons or true 'binaural' hearing. Little is known about the development of the binaural auditory system with CIs provided at a young age. It is possible that, as with adults, binaural sensitivity exists but is not accessed due to technical limitations in electrical stimulation methods. In this paper, we present results on binaural hearing in children with bilateral CIs. Binaural masking level differences (BMLDs) were measured for a 180-degree phase shift in a 125-Hz sinusoid, presented in a 50-Hz-wide noise band and modulating a 1000-pps carrier pulse train. Stimuli were presented to a single electrode in the middle of the electrode array at both ears. Eight children between 6 and 15 years of age participated in this study. Six children had a significantly better detection threshold when the signal was out of phase between the two ears (dichotic) than when it was in phase (diotic), with a mean difference (BMLD) of 6.4 dB. The present results show that children with bilateral CIs are sensitive to binaural cues in electrical stimuli, similar to adults, even when implants are provided at a later age and with a longer delay between implantations.
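
The diotic and dichotic conditions can be illustrated with a short sketch of how the acoustic-domain modulators might be constructed before they modulate the 1000-pps pulse train. Sample rate, duration, and signal level are assumptions; only the 125-Hz signal, the 50-Hz-wide noise band, and the 180-degree phase shift come from the abstract.

```python
# Sketch of diotic (S0) vs. dichotic (Spi) modulators for a BMLD measurement.
# Parameter choices beyond those named in the abstract are assumptions.
import numpy as np

fs = 16000                      # sample rate (assumption)
t = np.arange(0, 1.0, 1 / fs)   # 1-s stimulus (assumption)

# 50-Hz-wide noise band centred on 125 Hz, identical at both ears
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec[(freqs < 100) | (freqs > 150)] = 0          # keep 100-150 Hz only
noise_band = np.fft.irfft(spec, t.size)

signal = 0.1 * np.sin(2 * np.pi * 125 * t)       # 125-Hz target tone

left_s0, right_s0 = noise_band + signal, noise_band + signal    # in phase (diotic)
left_spi, right_spi = noise_band + signal, noise_band - signal  # 180-degree shift (dichotic)

# Each modulator would then be half-wave rectified and used to amplitude-
# modulate the 1000-pps pulse train delivered to one electrode per ear.
```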


Journal of the Acoustical Society of America | 2007

Perception of across-frequency interaural level differences

Tom Francart; Jan Wouters

The interaural level difference (ILD) is an important cue for the localization of sound sources. Just noticeable differences (JND) in ILD were measured in 12 normal-hearing subjects for uncorrelated noise bands with a bandwidth of 1/3 octave and a different center frequency in both ears. In one ear the center frequency was either 250, 500, 1000, or 4000 Hz. In the other ear, a frequency shift of 0, 1/6, 1/3, or 1 octave was introduced. JNDs in ILD for unshifted, uncorrelated noise bands of 1/3 octave width were 2.6, 2.6, 2.5, and 1.4 dB for 250, 500, 1000, and 4000 Hz, respectively. Averaged over all shifts, JNDs decreased significantly with increasing frequency. For the shifted conditions, JNDs increased significantly with increasing shift. Performance on average worsened by 0.5, 0.9, and 1.5 dB for shifts of 1/6, 1/3, and 1 octave. Although performance decreased, the just noticeable ILDs for the shifted conditions were still in a range usable for lateralization. This has implications for signal processing algorithms for bilateral bimodal hearing instruments and the fitting of bilateral cochlear implants.
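
The octave shifts translate into centre frequencies as in the short worked example below; a shift of k octaves multiplies the frequency by 2^k. The values are simply the abstract's conditions recomputed, not results from the paper.

```python
# Centre frequencies of the shifted ear for each base frequency and shift.
base_frequencies = [250, 500, 1000, 4000]   # Hz, unshifted ear
shifts = [0, 1/6, 1/3, 1]                   # octaves, other ear

for f in base_frequencies:
    shifted = [round(f * 2**k) for k in shifts]
    print(f"{f} Hz -> shifted centre frequencies: {shifted} Hz")
```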


International Journal of Audiology | 2011

Comparison of fluctuating maskers for speech recognition tests

Tom Francart; Astrid Van Wieringen; Jan Wouters

Objective: To investigate the extent to which temporal gaps, temporal fine structure, and comprehensibility of the masker affect masking strength in speech recognition experiments. Design: Seven different masker types were evaluated with Dutch speech materials. Amongst these maskers were the ICRA-5 fluctuating noise, the international speech test signal (ISTS), and competing talkers in Dutch and Swedish. Study Sample: Normal-hearing and hearing-impaired subjects. Results: The normal-hearing subjects benefited from both temporal gaps and temporal fine structure in the fluctuating maskers. When the competing talker was comprehensible, performance decreased. The ISTS masker appeared to cause a large informational masking component. The stationary maskers yielded the steepest slopes of the psychometric function, followed by the modulated noises, followed by the competing talkers. Although the hearing-impaired group was heterogeneous, their data showed similar tendencies, but sometimes to a lesser extent, depending on the individual's hearing impairment. Conclusions: If measurement time is of primary concern, non-modulated maskers are advised. If it is useful to assess release of masking through temporal gaps, a fluctuating noise is advised. If perception of temporal fine structure is being investigated, a foreign-language competing talker is advised.
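
As an illustration of the psychometric-function slopes compared above, the sketch below fits a logistic function of SNR to hypothetical word scores; the data points and the use of scipy's curve_fit are assumptions, not the study's analysis.

```python
# Fit a logistic psychometric function (percent correct vs. SNR) to
# invented data points; the slope parameter is what differs across maskers.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt, slope):
    """Percent correct as a logistic function of SNR (dB)."""
    return 100.0 / (1.0 + np.exp(-slope * (snr - srt)))

snr = np.array([-12.0, -9.0, -6.0, -3.0, 0.0])    # dB SNR (hypothetical)
score = np.array([8.0, 25.0, 55.0, 82.0, 95.0])   # % words correct (hypothetical)

(srt, slope), _ = curve_fit(logistic, snr, score, p0=(-6.0, 1.0))
print(f"SRT ~ {srt:.1f} dB SNR, slope parameter ~ {slope:.2f} per dB")
```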


Ear and Hearing | 2013

Psychophysics, fitting, and signal processing for combined hearing aid and cochlear implant stimulation.

Tom Francart; Hugh J. McDermott

The addition of acoustic stimulation to electric stimulation via a cochlear implant has been shown to be advantageous for speech perception in noise, sound quality, music perception, and sound source localization. However, the signal processing and fitting procedures of current cochlear implants and hearing aids were developed independently, precluding several potential advantages of bimodal stimulation, such as improved sound source localization and binaural unmasking of speech in noise. While there is a large and increasing population of implantees who use a hearing aid, there are currently no generally accepted fitting methods for this configuration. It is not practical to fit current commercial devices to achieve optimal binaural loudness balance or optimal binaural cue transmission for arbitrary signals and levels. There are several promising experimental signal processing systems specifically designed for bimodal stimulation. In this article, basic psychophysical studies with electric acoustic stimulation are reviewed, along with the current state of the art in fitting, and experimental signal processing techniques for electric acoustic stimulation.


Journal of the Acoustical Society of America | 2011

Enhancement of interaural level differences improves sound localization in bimodal hearing

Tom Francart; Anna Maria Johannes Lenssen; Jan Wouters

Users of a cochlear implant together with a contralateral hearing aid (so-called bimodal listeners) have difficulties with localizing sound sources. This is mainly due to the distortion of interaural time and level difference cues (ITD and ILD) and limited ITD sensitivity. An algorithm is presented that enhances ILD cues. Horizontal-plane sound-source localization performance of six bimodal listeners was evaluated (1) in a real sound field with their clinical devices, (2) in a virtual sound field under direct computer control, and (3) in a virtual sound field with ILD enhancement. The results in the real sound field did not differ significantly from those in the virtual sound field, and ILD enhancement reduced the mean absolute localization error by 4° to 10°, relative to a mean absolute error of 28° in the condition without ILD enhancement.
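
A minimal sketch of the ILD-enhancement idea, assuming a simple frame-based broadband implementation: the ILD is estimated from the short-term levels at the two ears and exaggerated by an extra gain. This is a generic illustration, not the algorithm evaluated in the paper.

```python
# Frame-based broadband ILD enhancement: estimate the ILD per frame and
# multiply it by an enhancement factor, splitting the extra gain across ears.
import numpy as np

def enhance_ild(left, right, fs, enhancement=2.0, frame_ms=10):
    """Return left/right signals with the ILD scaled by `enhancement`."""
    frame = int(fs * frame_ms / 1000)
    left, right = left.copy(), right.copy()
    for start in range(0, len(left) - frame, frame):
        sl = slice(start, start + frame)
        rms_l = np.sqrt(np.mean(left[sl] ** 2)) + 1e-12
        rms_r = np.sqrt(np.mean(right[sl] ** 2)) + 1e-12
        ild_db = 20 * np.log10(rms_l / rms_r)        # current ILD of this frame
        extra_db = (enhancement - 1.0) * ild_db      # additional ILD to impose
        gain = 10 ** (extra_db / 40)                 # split equally across ears
        left[sl] *= gain
        right[sl] /= gain
    return left, right
```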


Audiology and Neuro-otology | 2011

Sensitivity of Bimodal Listeners to Interaural Time Differences with Modulated Single- and Multiple-Channel Stimuli

Tom Francart; Anneke Lenssen; Jan Wouters

In a previous study, it was shown that users of a cochlear implant and a contralateral hearing aid are sensitive to interaural time differences (ITDs). In the current study, we investigated (1) the influence on ITD sensitivity of bilaterally varying the place of excitation in the cochlea and of modulation frequency, and (2) the sensitivity to ITD with a 3-channel stimulus generated using continuous-interleaved-sampling (CIS)-like processing. The stimuli were (1) a high-frequency carrier (acoustic sinusoid and single-electrode electric pulse train), modulated with a half-wave-rectified low-frequency sinusoid (a so-called transposed stimulus), and (2) a 3-channel stimulus, generated by sending an acoustic click train through processing similar to the CIS strategy. Four bimodal listeners were sensitive to ITD for both stimulus types. For the first stimulus type, there was no significant influence on ITD sensitivity of the acoustic carrier frequency. Performance decreased with increasing modulation frequency with a limit of sensitivity at around 150–200 Hz. Sensitivity was similar for the single- and 3-channel stimulus. The results indicate the possibility of ITD perception with adapted clinical processors, which can lead to improved sound source localization and binaural unmasking.
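
The transposed stimulus can be sketched as below: a high-frequency carrier amplitude-modulated with a half-wave-rectified low-frequency sinusoid, with an ITD imposed by delaying one ear. The specific frequencies, duration, and 500-µs delay are assumptions for illustration.

```python
# Acoustic "transposed" stimulus: high-frequency carrier modulated with a
# half-wave-rectified low-frequency sinusoid; one ear delayed to impose an ITD.
import numpy as np

fs = 44100
t = np.arange(0, 0.5, 1 / fs)           # 500 ms (assumption)

carrier_hz = 4000                        # high-frequency acoustic carrier (assumption)
modulation_hz = 100                      # low-frequency modulator (assumption)

modulator = np.maximum(np.sin(2 * np.pi * modulation_hz * t), 0.0)  # half-wave rectified
transposed = modulator * np.sin(2 * np.pi * carrier_hz * t)

# Impose an ITD by delaying the waveform at one ear (500 microseconds assumed).
itd_samples = int(500e-6 * fs)
left = transposed
right = np.concatenate([np.zeros(itd_samples), transposed[:-itd_samples]])
```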


IEEE Signal Processing Magazine | 2015

Sound Coding in Cochlear Implants: From electric pulses to hearing

Jan Wouters; Hugh J. McDermott; Tom Francart

Cochlear implantation is a life-changing intervention for people with a severe hearing impairment [1]. For most cochlear implant (CI) users, speech intelligibility is satisfactory in quiet environments. Although modern CIs provide up to 22 stimulation channels, information transfer is still limited for the perception of fine spectrotemporal details in many types of sound. These details contribute to the perception of music and speech in common listening situations, such as where background noise is present. Over the past several decades, many different sound processing strategies have been developed to provide more details about acoustic signals to CI users. In this article, progress in sound coding for CIs is reviewed. Starting from a basic strategy, the current commercially most-used signal processing schemes are discussed, as well as recent developments in coding strategies that aim to improve auditory perception. This article focuses particularly on the stimulation strategies, which convert sound signals into patterns of nerve stimulation. The neurophysiological rationale behind some of these strategies is discussed and aspects of CI performance that require further improvement are identified.
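
A minimal sketch of a CIS-like analysis chain, assuming generic filter settings and a simple logarithmic compression (neither taken from a commercial processor): band-pass filtering, envelope extraction, and compression, with the resulting envelopes intended to weight interleaved pulse trains.

```python
# CIS-like processing sketch: filter bank -> envelope -> compression.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def cis_envelopes(audio, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
    """Return per-channel compressed envelopes (n_channels x n_samples)."""
    # Logarithmically spaced band edges across the analysis range (assumption)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, audio)
        env = np.abs(hilbert(band))                  # envelope extraction
        env = np.log1p(100 * env) / np.log1p(100)    # simple amplitude compression
        envelopes.append(env)
    return np.array(envelopes)

# In a real implant, each channel's envelope would be sampled at the channel's
# pulse rate and delivered as interleaved (non-simultaneous) biphasic pulses.
```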


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2017

Auditory-Inspired Speech Envelope Extraction Methods for Improved EEG-Based Auditory Attention Detection in a Cocktail Party Scenario

Wouter Biesmans; Neetha Das; Tom Francart; Alexander Bertrand

This paper considers the auditory attention detection (AAD) paradigm, where the goal is to determine which of two simultaneous speakers a person is attending to. The paradigm relies on recordings of the listener’s brain activity, e.g., from electroencephalography (EEG). To perform AAD, decoded EEG signals are typically correlated with the temporal envelopes of the speech signals of the separate speakers. In this paper, we study how the inclusion of various degrees of auditory modelling in this speech envelope extraction process affects the AAD performance, where the best performance is found for an auditory-inspired linear filter bank followed by power law compression. These two modelling stages are computationally cheap, which is important for implementation in wearable devices, such as future neuro-steered auditory prostheses. We also introduce a more natural way to combine recordings (over trials and subjects) to train the decoder, which reduces the dependence of the algorithm on regularization parameters. Finally, we investigate the simultaneous design of the EEG decoder and the audio subband envelope recombination weights vector using either a norm-constrained least squares or a canonical correlation analysis, but conclude that this increases computational complexity without improving AAD performance.
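
The correlation-based attention decision can be sketched as follows, assuming a decoder that has already been trained (for example with regularized least squares): time-lagged EEG is mapped to a reconstructed envelope, which is correlated with each speaker's envelope, and the larger correlation marks the attended speaker. The function names and the 32-lag choice are assumptions.

```python
# Correlation-based AAD decision with a pre-trained linear EEG decoder.
import numpy as np

def lag_eeg(eeg, n_lags):
    """Stack time-lagged copies of the EEG (samples x channels*n_lags)."""
    n, c = eeg.shape
    lagged = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * c:(lag + 1) * c] = eeg[: n - lag]
    return lagged

def decide_attended(eeg, decoder, env_a, env_b, n_lags=32):
    """Return 0 if speaker A is decoded as attended, else 1."""
    reconstruction = lag_eeg(eeg, n_lags) @ decoder      # reconstructed envelope
    r_a = np.corrcoef(reconstruction, env_a)[0, 1]
    r_b = np.corrcoef(reconstruction, env_b)[0, 1]
    return 0 if r_a > r_b else 1
```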
