Steven J. Aiken
Dalhousie University
Publications
Featured research published by Steven J. Aiken.
Ear and Hearing | 2008
Steven J. Aiken; Terence W. Picton
Objective: To evaluate the response of the human auditory cortex to the temporal amplitude-envelope of speech. Responses to the speech envelope could be useful for validating the neural encoding of intelligible speech, particularly during hearing aid fittings—because hearing aid gain and compression characteristics for ongoing speech should more closely resemble real world performance than for isolated brief syllables. Design: The speech envelope comprises energy changes corresponding to phonemic and syllabic transitions. Envelope frequencies between 2 and 20 Hz are important for speech intelligibility. Human event-related potentials were recorded to six different sentences and the sources of these potentials in the auditory cortex were determined. To improve the signal to noise ratio over ongoing electroencephalographic recordings, we averaged the responses over multiple presentations, and derived source waveforms from multichannel scalp recordings. Source analysis led to bilateral, symmetrical, vertical, and horizontal dipoles in the posterior auditory cortices. The source waveforms were then cross-correlated with the low frequency log-envelopes of the sentences. The significance and latency of the maximum correlation for each sentence demonstrated the presence and latency of the brain’s response. The source waveforms were also cross-correlated with a simple model based on a series of overlapping transient responses to stimulus change (the derivative of the log-envelope). Results: Correlations between the log-envelope and vertical dipole source waveforms were significant for all sentences and for all but one of the participants (mean r = 0.35), at an average delay of 175 (left) to 180 (right) msec. Correlations between the transient response model (P1 at 68 msec, N1 at 124 msec, and P2 at 208 msec) and the vertical dipole source waveforms were detected for all sentences and all participants (mean r = 0.30), at an average delay of 6 (right) to 10 (left) msec. Conclusions: These results show that the human auditory cortex either directly follows the speech envelope or consistently reacts to changes in this envelope. The delay between the envelope and the response is approximately 180 msec.
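The core analysis here is a lagged cross-correlation between a cortical source waveform and the low-frequency log-envelope of the sentence. Below is a minimal sketch of that kind of computation, assuming the source waveform and the sentence audio share a common sampling rate; the 2–20 Hz band follows the abstract, while the function names, filter order, and maximum lag are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, correlate

def log_envelope(audio, fs, lo=2.0, hi=20.0):
    """Low-frequency log-envelope of a speech waveform."""
    env = np.abs(hilbert(audio))           # temporal amplitude envelope
    env = np.log(env + 1e-12)              # log-envelope (avoid log(0))
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, env)             # keep the 2-20 Hz region

def max_lagged_correlation(source_wave, envelope, fs, max_lag_s=0.4):
    """Peak normalized cross-correlation and its lag (response delay)."""
    x = (source_wave - source_wave.mean()) / source_wave.std()
    y = (envelope - envelope.mean()) / envelope.std()
    n = min(len(x), len(y))
    r = correlate(x[:n], y[:n], mode="full") / n
    lags = np.arange(-n + 1, n) / fs
    keep = (lags >= 0) & (lags <= max_lag_s)  # response should lag stimulus
    i = np.argmax(r[keep])
    return r[keep][i], lags[keep][i]
```

Under this analysis, a significant peak at a lag near 0.175–0.180 s would correspond to the ~180 msec envelope-following delay reported above.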
Frontiers in Neuroscience | 2016
Lijuan Shi; Yin Chang; Xiaowei Li; Steven J. Aiken; Lijie Liu; Jian Wang
Recent evidence has shown that noise-induced damage to the synapse between inner hair cells (IHCs) and type I afferent auditory nerve fibers (ANFs) may occur in the absence of permanent threshold shift (PTS), and that synapses connecting IHCs with low spontaneous rate (SR) ANFs are disproportionately affected. Due to the functional importance of low-SR ANF units for temporal processing and signal coding in noisy backgrounds, deficits in cochlear coding associated with noise-induced damage may result in significant difficulties with temporal processing and hearing in noise (i.e., “hidden hearing loss”). However, significant noise-induced coding deficits have not been reported at the single unit level following the loss of low-SR units. We have found evidence to suggest that some aspects of neural coding are not significantly changed with the initial loss of low-SR ANFs, and that further coding deficits arise in association with the subsequent reestablishment of the synapses. This suggests that synaptopathy in hidden hearing loss may be the result of insufficient repair of disrupted synapses, and not simply due to the loss of low-SR units. These coding deficits include decreases in driven spike rate for intensity coding as well as several aspects of temporal coding: spike latency, peak-to-sustained spike ratio and the recovery of spike rate as a function of click-interval.
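For readers unfamiliar with the single-unit metrics listed at the end, the sketch below shows how two of them, driven spike rate and the peak-to-sustained spike ratio, might be computed from a post-stimulus time histogram. The data layout, stimulus duration, and window choices are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

def psth_metrics(spike_times, stim_dur=0.05, bin_s=0.001):
    """Driven spike rate and peak-to-sustained ratio from a PSTH.

    spike_times: list of per-trial arrays of spike times (s, re stimulus onset).
    """
    edges = np.arange(0.0, stim_dur + bin_s, bin_s)
    counts = np.zeros(len(edges) - 1)
    for trial in spike_times:
        counts += np.histogram(trial, bins=edges)[0]
    rate = counts / (len(spike_times) * bin_s)      # spikes/s in each bin
    driven_rate = rate.mean()                       # mean driven spike rate
    peak = rate.max()                               # onset peak
    sustained = rate[len(rate) // 2:].mean()        # later, adapted portion
    return driven_rate, peak / max(sustained, 1e-9) # rate, peak-to-sustained
```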
Ear and Hearing | 2015
Vijayalakshmi Easwar; David W. Purcell; Steven J. Aiken; Vijay Parsa; Susan Scollie
Objectives: The present study evaluated a novel test paradigm based on speech-evoked envelope following responses (EFRs) as an objective aided outcome measure for individuals fitted with hearing aids. Although intended for use in infants with hearing loss, this study evaluated the paradigm in adults with hearing loss, as a precursor to further evaluation in infants. The test stimulus was a naturally spoken male token /susaʃi/, modified to enable recording of eight individual EFRs, two from each vowel for different formants and one from each fricative. In experiment I, sensitivity of the paradigm to changes in audibility due to varying stimulus level and use of hearing aids was tested. In experiment II, sensitivity of the paradigm to changes in aided audible bandwidth was evaluated. As well, experiment II aimed to test convergent validity of the EFR paradigm by comparing the effect of bandwidth on EFRs and behavioral outcome measures of hearing aid fitting. Design: Twenty-one adult hearing aid users with mild to moderately severe sensorineural hearing loss participated in the study. To evaluate the effects of level and amplification in experiment I, the stimulus was presented at 50 and 65 dB SPL through an ER-2 insert earphone in unaided conditions and through individually verified hearing aids in aided conditions. Behavioral thresholds of EFR carriers were obtained using an ER-2 insert earphone to estimate the sensation level of EFR carriers. To evaluate the effect of aided audible bandwidth in experiment II, EFRs were elicited by /susaʃi/ low-pass filtered at 1, 2, and 4 kHz and presented through the programmed hearing aid. EFRs recorded in the 65 dB SPL aided condition in experiment I represented the full bandwidth condition. EEG was recorded from the vertex to the nape of the neck over 300 sweeps. Speech discrimination using the University of Western Ontario Distinctive Feature Differences test and sound quality rating using the Multiple-Stimulus Hidden Reference and Anchor paradigm were measured in the same bandwidth conditions. Results: In experiment I, an increase in stimulus level above threshold and the use of amplification resulted in a significant increase in the number of EFRs detected per condition. At positive sensation levels, an increase in level demonstrated a significant increase in response amplitude in unaided and aided conditions. At 50 and 65 dB SPL, the use of amplification led to a significant increase in response amplitude for the majority of carriers. In experiment II, the number of EFR detections and the combined response amplitude of all eight EFRs improved with an increase in bandwidth up to 4 kHz. In contrast, behavioral measures continued to improve at wider bandwidths. Further change in EFR parameters was possibly limited by the hearing aid bandwidth. Significant positive correlations were found between EFR parameters and behavioral test scores in experiment II. Conclusions: The EFR paradigm demonstrates sensitivity to changes in audibility due to a change in stimulus level, bandwidth, and use of amplification in clinically feasible test times. The paradigm may thus have potential applications as an objective aided outcome measure. Further investigations exploring stimulus–response relationships in aided conditions and validation studies in children are warranted.
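The bandwidth manipulation in experiment II is straightforward to reproduce in outline: low-pass filter the stimulus token at each cutoff before presentation. A minimal sketch follows; the file name and filter order are illustrative assumptions, since the abstract does not specify the filtering details.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

fs, token = wavfile.read("susashi.wav")          # hypothetical stimulus file (mono)
token = token.astype(np.float64)
for cutoff_hz in (1000, 2000, 4000):             # the three bandwidth conditions
    sos = butter(8, cutoff_hz / (fs / 2), btype="low", output="sos")
    filtered = sosfiltfilt(sos, token)           # zero-phase low-pass filtering
    out = np.clip(filtered, -32768, 32767).astype(np.int16)
    wavfile.write(f"susashi_lp{cutoff_hz}.wav", fs, out)
```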
Ear and Hearing | 2013
Jong Min Choi; David W. Purcell; Julie-Anne M. Coyne; Steven J. Aiken
Objectives: It would be clinically valuable if an electrophysiological validation of hearing aid effectiveness in conveying speech information could be performed when a device is first provided to the individual after electroacoustic verification. This study evaluated envelope following responses (EFRs) elicited by English vowels in a steady state context and in natural sentences. It was the purpose of this study to determine whether EFRs could be detected rapidly enough to be clinically useful. Design: EFRs were elicited using 5 vowels spanning the English vowel space, /i/, /ε/, /æ/, /ɑ/, and /u/. These were presented either as concatenated steady state vowels (total duration 10.04 seconds) or in three 5-word sentences (total duration 11.77 seconds), where each vowel appeared once per sentence. Single-channel electroencephalogram was recorded from vertex (Cz) to the nape of the neck for 190 and 160 repetitions of the steady state vowels and sentences, respectively. The stimuli were presented at 70 dB SPL. The fundamental frequency (f0) track from the stimuli was used with a Fourier analyzer to estimate the EFRs to each vowel. Noise amplitudes were also calculated at neighboring frequencies. Fifteen normal-hearing subjects who were 20 to 34 years of age participated in the experiment. Results: In the analysis of steady state vowels, the mean response amplitude of /i/ was statistically the largest at 173 nV. The other 4 steady state vowels did not differ in mean response amplitude, which varied between 73 and 106 nV. In the analysis of vowels from the 3 sentences, the largest response amplitudes tended to be for /u/. Mean amplitudes for /u/ were 164, 111, and 140 nV for the words “booed,” “food,” and “Sue,” respectively. The vowel /u/ produced statistically larger responses than /i/, /ε/, and /ɑ/ when grouped across words, whereas other vowels did not differ. Mean response amplitudes for the other vowel categories in the sentences varied between 82 and 105 nV. All subjects showed significant EFRs in response to the words “Bee’s” and “booed,” but only 9 subjects showed significant EFRs for “pet,” “bed,” and “Bob.” Conclusions: The authors were readily able to detect significant EFRs elicited by vowels in a steady state context and from 3 natural sentences. These results are promising as an early step in developing a clinical tool for validating that vowel stimuli are at least partially encoded at the level of the auditory brainstem. Future research will require evaluation of the technique with aided listeners, where the natural sentences are expected to be treated as typical speech by hearing aid signal-processing algorithms.
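The "Fourier analyzer" driven by the stimulus f0 track amounts to a lock-in estimate: the EEG is multiplied by a reference that follows the instantaneous f0 phase, and averaging extracts the component at f0. A sketch under stated assumptions (eeg is the averaged response, f0_track gives f0 in Hz for each sample; the noise-bin offsets in Hz are illustrative, not the study's values):

```python
import numpy as np

def fourier_analyzer(eeg, f0_track, fs, offset_hz=0.0):
    """Complex amplitude of eeg at (f0 + offset); nonzero offsets give noise bins."""
    phase = 2 * np.pi * np.cumsum(f0_track + offset_hz) / fs  # instantaneous phase
    ref = np.exp(-1j * phase)                 # frequency-tracking reference
    return 2 * np.mean(eeg * ref)             # complex response estimate (V)

# Response at f0, noise estimated at neighboring frequencies (offsets assumed):
# resp = abs(fourier_analyzer(eeg, f0_track, fs))
# noise = np.mean([abs(fourier_analyzer(eeg, f0_track, fs, d))
#                  for d in (-7, -5, 5, 7)])
```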
Ear and Hearing | 2013
Steven J. Aiken; Jessica N. Andrus; Manohar Bance; Dennis P. Phillips
Objective: The objective of this study was to examine the role of the acoustic stapedius reflex in the protection of speech recognition from the upward spread of masking arising from low-frequency background noise. Design: Speech recognition scores were measured for nine control participants (19–34 years) and six patients with transected stapedius tendons poststapedotomy (39–57 years) as a function of the amplitude of a low-frequency masker, presented at nominal signal to noise ratios of +5 dB, –5 dB, and –15 dB. All participants had pure-tone hearing thresholds in the normal range. Continuous high-pass noise was present in all conditions to avoid ceiling effects; this reduced performance in quiet to approximately 85% for all participants. Scores were measured for soft and loud nonsense syllables (average third octave band levels of 35 and 65 dB SPL), so that a comparison of the low-frequency noise masking functions at the two levels would provide information about the effects of the reflex on speech intelligibility in noise. A third group of nine control participants (19–22 years) listened in the presence of a low-frequency masker gated to come on 1 sec before stimulus onset, to reduce the likelihood of reflex adaptation. The Speech-Intelligibility Index was used to quantify the amount of speech information available in each condition. Results: Patients with transected tendons performed more poorly than control participants as a function of Speech-Intelligibility Index in all conditions, even at levels that were too soft for reflex activation. This could be because of postsurgical differences in sensitivity, the more advanced age of the poststapedotomy group, or differences in medial olivocochlear inhibition. For loud speech, patient performance fell nearly linearly with increases in the low-frequency masker, but control participants’ performance declined little as the signal to noise ratio declined from +5 to –5 dB, and then fell rapidly as the ratio declined to –15 dB. This plateau in the masking function did not occur for the patients. Masking functions obtained with the gated low-frequency masker were either highly similar to, or poorer than, those obtained with a continuous masker, suggesting that the use of a continuous low-frequency masker did not result in significant reflex adaptation. Conclusions: The stapedius reflex appears to offer some protection from the upward spread of masking of speech by background low-frequency noise at moderate levels, but not at high levels.
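The Speech-Intelligibility Index calculation referenced here (ANSI S3.5) weights the audible portion of the speech dynamic range in each frequency band by that band's importance for intelligibility. Below is a deliberately simplified sketch of that idea: the 15-dB offset and 30-dB speech range follow the standard's band audibility function, but the band levels and importance weights are placeholders, not values from the study.

```python
import numpy as np

def simple_sii(speech_db, noise_db, importance):
    """Band-importance-weighted audibility (a simplified SII)."""
    snr = np.asarray(speech_db, float) - np.asarray(noise_db, float)
    audibility = np.clip((snr + 15.0) / 30.0, 0.0, 1.0)   # 30-dB speech range
    return float(np.sum(np.asarray(importance) * audibility))

# Example: a low-frequency masker degrades mainly the low bands.
speech = [60, 62, 58, 55, 50]            # per-band speech levels (dB SPL)
noise = [70, 65, 40, 35, 30]             # low-frequency noise, upward spread
weights = [0.15, 0.25, 0.25, 0.20, 0.15] # hypothetical importance weights
print(simple_sii(speech, noise, weights))
```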
Ear and Hearing | 2015
Vijayalakshmi Easwar; David W. Purcell; Steven J. Aiken; Vijay Parsa; Susan Scollie
Objective: The use of auditory evoked potentials as an objective outcome measure in infants fitted with hearing aids has gained interest in recent years. This article proposes a test paradigm using speech-evoked envelope following responses (EFRs) for use as an objective aided outcome measure. The method uses a running speech-like, naturally spoken stimulus token /susaʃi/ (fundamental frequency [f0] = 98 Hz; duration 2.05 sec) to elicit EFRs from eight carriers representing low, mid, and high frequencies. Each vowel elicited two EFRs simultaneously, one from the region of the first formant (F1) and one from the region of the higher formants (F2+). The simultaneous recording of two EFRs was enabled by lowering f0 in the region of F1 alone. Fricatives were amplitude modulated to enable recording of EFRs from high-frequency spectral regions. The present study aimed to evaluate the effect of level and bandwidth on speech-evoked EFRs in adults with normal hearing. As well, the study aimed to test convergent validity of the EFR paradigm by comparing it with changes in behavioral tasks due to bandwidth. Design: Single-channel electroencephalogram was recorded from the vertex to the nape of the neck over 300 sweeps in two polarities from 20 young adults with normal hearing. To evaluate the effects of level in experiment I, EFRs were recorded at test levels of 50 and 65 dB SPL. To evaluate the effects of bandwidth in experiment II, EFRs were elicited by /susaʃi/ low-pass filtered at 1, 2, and 4 kHz, presented at 65 dB SPL. The 65 dB SPL condition from experiment I represented the full bandwidth condition. EFRs were averaged across the two polarities and estimated using a Fourier analyzer. An F test was used to determine whether an EFR was detected. Speech discrimination using the University of Western Ontario Distinctive Feature Differences test and sound quality rating using the Multiple Stimulus Hidden Reference and Anchors paradigm were measured in identical bandwidth conditions. Results: In experiment I, the increase in level resulted in a significant increase in response amplitudes for all eight carriers (mean increase of 14 to 50 nV) and in the number of detections (mean increase of 1.4 detections). In experiment II, an increase in bandwidth resulted in a significant increase in the number of EFRs detected up to the 4 kHz low-pass condition, and carrier-specific changes in response amplitude up to the full bandwidth condition. Scores in both behavioral tasks increased with bandwidth up to the full bandwidth condition. The number of detections and composite amplitude (sum of all eight EFR amplitudes) correlated significantly with changes in behavioral test scores. Conclusions: Results suggest that the EFR paradigm is sensitive to changes in level and audible bandwidth. It may be a useful tool as an objective aided outcome measure, considering its running speech-like stimulus, representation of spectral regions important for speech understanding, level and bandwidth sensitivity, and clinically feasible test times. This paradigm requires further validation in individuals with hearing loss, with and without hearing aids.
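The F test mentioned in the design is the standard spectral detection criterion for steady-state responses: power in the response bin is compared against the mean power of q surrounding noise bins, and the ratio is referred to an F distribution with (2, 2q) degrees of freedom. A sketch, assuming response and noise amplitudes obtained from the Fourier analyzer (the example values are hypothetical):

```python
import numpy as np
from scipy.stats import f as f_dist

def efr_detected(resp_amp, noise_amps, alpha=0.05):
    """F test: response power vs. mean power of q noise bins."""
    noise_amps = np.asarray(noise_amps, float)
    F = resp_amp**2 / np.mean(noise_amps**2)      # power ratio
    p = f_dist.sf(F, 2, 2 * len(noise_amps))      # df = (2, 2q)
    return p < alpha, F, p

# A 120 nV response against ~40 nV noise bins (hypothetical values):
print(efr_detected(120e-9, [42e-9, 38e-9, 45e-9, 36e-9, 40e-9, 41e-9]))
```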
Journal of the Acoustical Society of America | 2011
Steven J. Aiken; Kevin LeClair; Michael Kiefte
It has been suggested that the auditory-evoked frequency-following response (FFR) is related to pitch [Greenberg et al., Hear. Res. 25, 91–114 (1987)], which is usually related to the fundamental frequency of complex sounds. We examined the FFR to dual-tone multi-frequency (DTMF) signals used in telephone communication. As these tones have been used to produce melodies, it was expected that they would have a measurable pitch, and this was assessed in a behavioral task. Twelve subjects matched the perceived pitch of DTMF signals to a pure tone via the method of adjustment. In addition, the FFR to the same signals was recorded from the same subjects to establish whether significant FFR energy was present at the frequency of the perceived pitch. Correlation analysis of data from the behavioral task showed wide variation among subjects, with many identifying the perceived pitch as matching the lower of the two partials. Significant FFR energy was detected at both partials, which does not clarify the relationship between perceived pitch and the FFR.
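For context, each DTMF signal is the sum of exactly two partials drawn from fixed row and column frequency sets (the standard telephony assignment), so there is no energy at a shared fundamental. A small synthesis sketch showing why any pitch percept, and any FFR energy at a matched pure-tone frequency, must relate to the two partials themselves (the choice of key "5" is illustrative):

```python
import numpy as np

DTMF = {"5": (770.0, 1336.0)}          # row and column frequencies (Hz)

fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs
lo, hi = DTMF["5"]
tone = 0.5 * (np.sin(2 * np.pi * lo * t) + np.sin(2 * np.pi * hi * t))

# Spectral check: energy is confined to the two partials.
spec = np.abs(np.fft.rfft(tone)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f0 in (lo, hi):
    print(f0, spec[np.argmin(np.abs(freqs - f0))])
```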
Neuroscience Letters | 2018
Vijayalakshmi Easwar; Ashlee Banyard; Steven J. Aiken; David W. Purcell
Evoked potentials to envelope periodicity in sounds, such as vowels, are dependent on the stimulus spectrum. We hypothesized that phase differences between responses elicited by multiple frequencies spread tonotopically across the cochlear partition may contribute to variation in scalp-recorded amplitude. The present study evaluated this hypothesis by measuring envelope following responses (EFRs) to two concurrent tone pairs, p1 and p2, that approximated the first and second formant frequencies of a vowel, while controlling their relative envelope phase. We found that scalp-recorded EFRs changed significantly in phase and amplitude when the envelope phase of p2, the higher-frequency tone pair, was delayed. The maximum EFR amplitude occurred at a p2 envelope phase delay of 90°, likely because the stimulus delay compensated for the average phase lead of 73.57° exhibited by p2-contributed EFRs relative to p1-contributed EFRs, owing to earlier cochlear processing of higher frequencies. The findings suggest a linear superimposition of independently generated EFRs from tonotopically separated pathways, and that introducing frequency-specific delays may help to optimize EFRs to broadband stimuli like vowels.
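The superposition account lends itself to a two-phasor model: the p1- and p2-contributed EFRs add as vectors, and delaying the p2 envelope rotates its phasor. A minimal sketch follows; only the 73.57° phase lead comes from the abstract, while the two amplitudes are assumed for illustration.

```python
import numpy as np

a1, a2 = 1.0, 0.8                  # assumed p1- and p2-contributed amplitudes
phase_lead = np.deg2rad(73.57)     # p2 leads p1 (earlier cochlear processing)

for delay_deg in range(0, 360, 45):
    p1 = a1 * np.exp(1j * 0.0)
    p2 = a2 * np.exp(1j * (phase_lead - np.deg2rad(delay_deg)))
    total = abs(p1 + p2)           # modeled scalp-recorded EFR amplitude
    print(f"p2 delay {delay_deg:3d} deg -> summed amplitude {total:.3f}")
# Of the 45-degree steps, the sum peaks at a 90-degree delay, where the
# imposed delay most nearly cancels the ~74-degree p2 phase lead.
```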
Neuroscience | 2018
Hengchao Chen; Lijuan Shi; Lijie Liu; Shankai Yin; Steven J. Aiken; Jian Wang
Noise-induced hidden hearing loss (NIHHL) has attracted great attention in hearing research and clinical audiology since the discovery of significant noise-induced synaptic damage in the absence of permanent threshold shifts (PTS) in animal models. Although the extant evidence for this damage is based on animal models, NIHHL likely occurs in humans as well. This review focuses on three issues concerning NIHHL that are somewhat controversial: (1) whether disrupted synapses can be re-established; (2) whether synaptic damage and repair are responsible for the initial temporary threshold shifts (TTS) and subsequent recovery; and (3) the relationship between the synaptic damage and repair processes and neural coding deficits. We conclude that, after a single, brief noise exposure, (1) damaged and totally destroyed synapses can be partially repaired, but the repaired synapses are functionally abnormal; (2) while deficits are observed in some aspects of neural responses related to temporal and intensity coding in the auditory nerve, we did not find strong evidence for the hypothesized coding-in-noise deficits; and (3) the sensitivity and usefulness of envelope following responses to amplitude-modulation signals in detecting cochlear synaptopathy are questionable.
European Journal of Neuroscience | 2018
V. Easwar; A. Banyard; Steven J. Aiken; David W. Purcell
Neural encoding of the envelope of sounds like vowels is essential for access to temporal information useful for speech recognition. Subcortical responses to the envelope periodicity of vowels can be assessed using scalp-recorded envelope following responses (EFRs); however, the amplitude of EFRs varies with vowel spectra, and the causal relationship is not well understood. One cause for this spectral dependency could be interactions between responses with different phases, initiated by multiple stimulus frequencies. Phase differences can arise because processing of high frequencies is initiated earlier than processing of low frequencies in the cochlea. This study investigated the presence of such phase interactions by measuring EFRs to two naturally spoken vowels (/ε/ and /u/) while delaying the envelope phase of the second formant band (F2+) relative to the first formant (F1) band in 45° increments. At 0° F2+ phase delay, EFRs elicited by the vowel /ε/ were lower in amplitude than the EFRs elicited by /u/. Using vector computations, we found that the lower amplitude of /ε/-EFRs was caused by linear superposition of F1- and F2+-contributions with a larger F1-F2+ phase difference (166°) compared to /u/ (19°). While the variation in amplitude across F2+ phase delays could be modeled with two dominant EFR sources for both vowels, the degree of variation depended on the F1 and F2+ EFR characteristics. Together, we demonstrate that (a) broadband sounds like vowels elicit independent responses from different stimulus frequencies that may be out of phase and affect scalp-based measurements, and (b) delaying higher-frequency formants can maximize EFR amplitudes for some vowels.
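The vector computation described here can be framed as a small least-squares problem: each measured EFR is modeled as a fixed F1 phasor plus an F2+ phasor rotated by the imposed delay. The sketch below uses synthetic phasors in place of real measurements; the ~166° F1-F2+ phase difference for /ε/ motivates the example values, but they are not the study's data.

```python
import numpy as np

delays = np.deg2rad(np.arange(0, 360, 45))        # imposed F2+ envelope delays

# Synthetic "measurements" following the model EFR(d) = E_F1 + E_F2 * exp(-1j*d)
true_f1 = 80e-9 * np.exp(1j * np.deg2rad(10))     # assumed F1-band phasor (V)
true_f2 = 50e-9 * np.exp(1j * np.deg2rad(176))    # ~166 deg out of phase with F1
measured = true_f1 + true_f2 * np.exp(-1j * delays)

# Solve for the two source phasors by linear least squares.
A = np.column_stack([np.ones_like(delays, dtype=complex), np.exp(-1j * delays)])
(e_f1, e_f2), *_ = np.linalg.lstsq(A, measured, rcond=None)
print(abs(e_f1), np.rad2deg(np.angle(e_f1)))      # recovered F1 contribution
print(abs(e_f2), np.rad2deg(np.angle(e_f2)))      # recovered F2+ contribution
```

With real data, the residual of this fit would indicate how well two dominant sources explain the amplitude variation across delays, as the abstract describes.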