Helen J. Simon
Smith-Kettlewell Institute
Publications
Featured research published by Helen J. Simon.
Journal of Rehabilitation Research and Development | 2006
E. William Yund; Christina M. Roup; Helen J. Simon; Glen A. Bowman
Acclimatization was studied in hearing-impaired patients with no previous hearing aid (HA) experience who were fit bilaterally with either wide dynamic range multichannel compression (WDRMCC) or linear amplification (LA) HAs. Throughout 40 weeks of normal HA use, we monitored changes in nonsense syllable perception in speech-spectrum noise. Syllable recognition for WDRMCC users improved by 4.6% over the first 8 weeks, but the 2.2% improvement for LA users was complete in 2 to 4 weeks. Consonant confusion analyses indicated that WDRMCC experience facilitated consonant identification, while LA users primarily changed their response biases. Furthermore, WDRMCC users showed greater improvement for aided than unaided stimuli, while LA users did not. These results demonstrate acclimatization in new users of WDRMCC HAs but not in new users of LA HAs. A switch in amplification type after 32 weeks produced minimal performance change. Thus, acclimatization depended on the type of amplification and the previous amplification experience.
Journal of Rehabilitation Research and Development | 2005
Helen J. Simon
This article is concerned with the evolution and the pros and cons of bilateral amplification. Whether a bilateral hearing aid fitting is superior to a monaural one is a long-standing question; for this reason, the trend toward bilateral amplification has been slow. However, it is now assumed that bilateral amplification has significant advantages over monaural amplification in most cases, a view supported by our localization results. In this article, we address the advantages of bilateral hearing aids and present new localization data showing that most listeners with bilateral amplification (tested unaided), like normal-hearing listeners, manifested very high degrees of symmetry in their judgments of perceived angle, whereas listeners who routinely use monaural amplification and those with asymmetric hearing loss showed relatively large asymmetries. These data show that asymmetry in localization judgments is a much more sensitive indicator of abnormal localization ability than the magnitude of localization errors.
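The symmetry finding above can be made concrete with a simple asymmetry index over mirrored source azimuths: a symmetric listener's perceived angle at -a is the negative of the perceived angle at +a. A minimal sketch; the metric and the sample judgments below are illustrative assumptions, not the study's actual analysis:

```python
def localization_asymmetry(perceived):
    """Mean asymmetry (degrees) of perceived angles across mirrored azimuths.

    `perceived` maps source azimuth in degrees (positive = right,
    negative = left) to the listener's perceived angle. A perfectly
    symmetric listener satisfies perceived[-a] == -perceived[a], giving
    an index of 0; larger values mean larger left/right asymmetry.
    """
    right_angles = sorted(a for a in perceived if a > 0)
    errors = [abs(perceived[a] + perceived[-a]) / 2 for a in right_angles]
    return sum(errors) / len(errors)

# Hypothetical judgments (degrees): a symmetric vs. an asymmetric listener.
symmetric = {30: 28, -30: -28, 60: 55, -60: -55}
asymmetric = {30: 28, -30: -12, 60: 55, -60: -30}
print(localization_asymmetry(symmetric))   # 0.0
print(localization_asymmetry(asymmetric))  # 10.25
```

Note that the symmetric listener's index is zero even though its judgments undershoot the true azimuths, which is exactly the sense in which asymmetry can be more sensitive than raw localization error.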
Perception | 2002
Helen J. Simon; Pierre L. Divenyi; Al Lotze
The effects of varying interaural time delay (ITD) and interaural intensity difference (IID) were measured in normal-hearing sighted and congenitally blind subjects as a function of eleven frequencies, at sound pressure levels of 70 and 90 dB and at a sensation level of 25 dB (sensation level refers to the pressure level of a sound above the individual subject’s threshold). Using an ‘acoustic’ pointing paradigm, the subject varied the IID of a 500 Hz narrow-band (100 Hz) noise (the ‘pointer’) to coincide with the apparent lateral position of a ‘target’ ITD stimulus. ITDs of 0, ±200, and ±400 μs were obtained through total waveform delays of the narrow-band noise, including envelope and fine structure. For both groups, the results of this experiment confirm the traditional view of binaural hearing for like stimuli: non-zero ITDs produce little perceived lateral displacement away from 0 IID at frequencies above 1250 Hz. To the extent that a greater magnitude of lateralization for a given ITD, presentation level, and center frequency can be equated with superior localization abilities, blind listeners appear at least comparable to, and even somewhat better than, sighted subjects, especially when attending to signals in the periphery. The present findings suggest that blind listeners are fully able to utilize the cues for spatial hearing, and that vision is not a mandatory prerequisite for the calibration of human spatial hearing.
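The stimulus manipulation described above, a total waveform delay of narrow-band noise (envelope and fine structure together), amounts to shifting the whole signal at one ear by the ITD. A minimal sketch; the sampling rate, noise-generation method, and channel convention are assumptions for illustration:

```python
import numpy as np

def narrowband_noise(fs, center_hz, bw_hz, dur_s, rng):
    """Gaussian noise band-limited around center_hz via spectral masking."""
    n = int(fs * dur_s)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = np.abs(freqs - center_hz) <= bw_hz / 2
    return np.fft.irfft(spectrum * band, n)

def apply_itd(signal, fs, itd_s):
    """Delay the entire waveform (envelope and fine structure) at one ear.

    Positive itd_s delays the right channel; the ITD is quantized to the
    nearest whole sample at this sampling rate.
    """
    shift = int(round(abs(itd_s) * fs))
    delayed = np.concatenate([np.zeros(shift), signal[:len(signal) - shift]])
    return (signal, delayed) if itd_s >= 0 else (delayed, signal)

fs = 44100  # assumed sampling rate, not stated in the abstract
rng = np.random.default_rng(0)
noise = narrowband_noise(fs, center_hz=500, bw_hz=100, dur_s=0.5, rng=rng)
left, right = apply_itd(noise, fs, itd_s=400e-6)  # one of the +/-400 us conditions
```

At 44.1 kHz a 400 μs delay is only about 18 samples, which is why whole-sample quantization is a reasonable approximation here.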
Cognitive Neuropsychology | 2008
Yoram Bonneh; Matthew K. Belmonte; Francesca Pei; Portia E. Iversen; Tal Kenet; Natacha Akshoomoff; Yael Adini; Helen J. Simon; Christopher I. Moore; John F. Houde; Michael M. Merzenich
Anecdotal reports from individuals with autism suggest a loss of awareness of stimuli from one modality in the presence of stimuli from another. Here we document such a case in a detailed study of A.M., a 13-year-old boy with autism in whom significant autistic behaviours are combined with an uneven IQ profile of superior verbal and low performance abilities. Although A.M.'s speech is often unintelligible, and his behaviour is dominated by motor stereotypies and impulsivity, he can communicate by typing or pointing independently on a letter board. A series of experiments using simple and highly salient visual, auditory, and tactile stimuli demonstrated a hierarchy of cross-modal extinction, in which auditory information extinguished other modalities at various levels of processing. A.M. also showed deficits in shifting and sustaining attention. These results provide evidence for monochannel perception in autism and suggest a general pattern of winner-takes-all processing in which a stronger stimulus-driven representation dominates behaviour, extinguishing weaker representations.
Journal of the Acoustical Society of America | 1997
Helen J. Simon; Inna Aleksandrovsky
The perceived lateral position of narrow-band noise (NBN) was studied in a graphic pointer task as a function of the method of compensation for interaural threshold asymmetries in hearing-impaired and normal-hearing subjects. The method of compensation consisted of equal sensation level (EqSL) or equal sound-pressure level (EqSPL) at the two ears within the same subject. The NBN signals were presented at 11 center frequencies with interaural intensity differences (IIDs) that varied from -20 to +20 dB. When equalizing by SL, the perceived lateral position is essentially linearly dependent on the degree and direction of asymmetry in asymmetric normal-hearing and hearing-impaired listeners. Equalizing by SPL shows no such dependency but produces images that are lateralized close to the midline. These results suggest that subjects may have adapted to their threshold asymmetries, and they are discussed in terms of the fitting of binaural hearing aids.
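The two compensation schemes differ only in whether each ear's threshold enters the presentation level: EqSPL matches the physical level at the two ears, while EqSL matches the level above each ear's own threshold. A minimal sketch; the thresholds and target levels are hypothetical, not values from the study:

```python
def presentation_levels(target_db, thresholds_db, method):
    """Per-ear presentation levels (dB SPL) for the two compensation schemes.

    EqSPL: both ears receive the same sound-pressure level, so target_db
           is interpreted as dB SPL.
    EqSL:  both ears receive the same sensation level (level above each
           ear's threshold), so target_db is interpreted as dB SL and
           the resulting SPL differs across ears.
    """
    left_thr, right_thr = thresholds_db
    if method == "EqSPL":
        return (target_db, target_db)
    if method == "EqSL":
        return (target_db + left_thr, target_db + right_thr)
    raise ValueError(f"unknown method: {method}")

# Hypothetical listener with a 15 dB interaural threshold asymmetry:
print(presentation_levels(70, (10, 25), "EqSPL"))  # (70, 70)
print(presentation_levels(40, (10, 25), "EqSL"))   # (50, 65)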
The Hearing journal | 2011
Harry Levitt; Chris Oden; Helen J. Simon; Carla Noack; Al Lotze
Auditory training takes commitment, not just from audiologists, who have to work intensively with patients over long periods of time, but also from patients themselves, who have to spend hours improving their listening skills. The problem? Many auditory training programs demand more commitment than patients can sustain, and patients often do not complete them. Now, though, new computer-based programs are overcoming many of the barriers that have prevented the use of auditory training, reducing dropout rates and improving its effectiveness. The key has been to make the process more engaging by using computer programs that provide face-to-face communication in noise while helping users improve their speech comprehension skills.
Neuropsychology Review | 2009
Matthew K. Belmonte; Yoram Bonneh; Yael Adini; Portia E. Iversen; Natacha Akshoomoff; Tal Kenet; Christopher I. Moore; Helen J. Simon; John F. Houde; Michael M. Merzenich
Editor: We were surprised to find our case study “Cross-modal extinction in a boy with severely autistic behaviour and high verbal intelligence” (Bonneh et al. 2008) impugned by Professor Waterhouse (2008) as an example of “unsynthesized ad hoc theories” cluttering autism research. Our article never purports to constitute a theory in itself, and from its very beginning relates our observations of single-channel perception to an established theoretical context described in terms of stimulus overselectivity (Lovaas et al. 1979), monotropism (Murray et al. 2005), and impaired attention shifting (Allen and Courchesne 2001). Waterhouse misattributes to us a causal claim that “autism results from monochannel of [sic] winner-takes-all perceptual processing” when in fact all that we claim is that our case results support the existence of such winner-takes-all processing in autism. Our point is not to clutter and to contend with existing theory, but rather to extend such theory to ‘low-functioning’ cases in a way that might further the very synthesis to which Waterhouse and we all aspire; hence our unifying theme of perturbed neural interactions (Rubenstein and Merzenich 2003). Though integrative, cooperative autism research still has a long way to go, Waterhouse’s assertion that “the field has not made progress in creating a synthesized, standard predictive causal theory of autism” seems to assume that statements that do not explicitly cite or support each other must necessarily be in conflict and competition. This assumption of irreconcilability is a defeatist fallacy. As Waterhouse herself observes, ideas of weak central coherence (Happe and Frith 2006) and the various takes on abnormal connectivity (Castelli et al. 2002; Just et al. 2004; ...
Journal of the Acoustical Society of America | 2008
Helen J. Simon; E. William Yund; Harry Levitt
The question of how well hearing-impaired individuals can localize sound (with or without amplification) is still not fully resolved. This study was designed to compare sound localization with two types of hearing-aid (HA) processing, wide dynamic range multichannel compression (WDRMCC) and linear amplification (LA) with compression limiting, during the first 32 weeks of HA use. HAs from two different manufacturers were included to compare different digital signal processing implementations: (1) fast Fourier transform (FFT) processing, necessitating a 10 ms delay, and (2) non-FFT signal processing with a shorter time delay (1 ms). We found an initial degradation of sound localization, relative to original unaided performance, for both WDRMCC and LA on both FFT and non-FFT platforms. We found no difference between WDRMCC and LA processing. However, sound localization with the non-FFT platform improved consistently throughout 32 weeks of HA use and was better than the original unaided measurements at 16 and 32 weeks. In ...
Journal of the Acoustical Society of America | 2013
Ender Tekin; James M. Coughlan; Helen J. Simon
In speech perception, the visual information obtained by observing the speaker’s face can account for up to 6 and 10 dB improvements in the presence of wide-band Gaussian and speech-babble noise, respectively. Current hearing aids and other speech enhancement devices do not utilize the visual input from the speaker’s face, limiting their functionality. To alleviate this shortcoming, audio-visual speech enhancement algorithms have been developed that incorporate video information into the audio processing. We developed an audio-visual voice activity detector (VAD) that combines audio features such as long-term spectral divergence with video features such as spatio-temporal gradients of the mouth area. The contributions of the various features are learned by maximizing the mutual information between the audio and video features in an unsupervised fashion. Segmental SNR (SSNR) values were estimated to compare the benefits of audio-visual and conventional audio-only VADs. VAD outputs were utilized by an adaptive Wiener...
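Segmental SNR, the evaluation metric named above, averages frame-wise SNR rather than computing one global ratio, so short noisy stretches are not masked by loud speech elsewhere. A sketch of a common formulation; the frame length and clipping limits are typical choices, not necessarily those used in the paper:

```python
import numpy as np

def segmental_snr(clean, processed, frame_len=256, limits=(-10.0, 35.0)):
    """Segmental SNR in dB: frame-wise ratio of clean-signal energy to
    residual-error energy, clipped to `limits` (a standard convention so
    silent or near-perfect frames do not dominate), then averaged.
    """
    n_frames = len(clean) // frame_len
    snrs = []
    for i in range(n_frames):
        s = clean[i * frame_len:(i + 1) * frame_len]
        err = processed[i * frame_len:(i + 1) * frame_len] - s
        ratio = np.sum(s ** 2) / (np.sum(err ** 2) + 1e-12)
        snrs.append(np.clip(10.0 * np.log10(ratio + 1e-12), *limits))
    return float(np.mean(snrs))
```

The audio-only and audio-visual VAD conditions could then be compared by computing `segmental_snr(clean, enhanced)` on the output of each enhancement chain.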
Journal of the Acoustical Society of America | 2012
Helen J. Simon; Deborah Gilden; John Brabyn; Al Lotze; Harry Levitt
People with vision loss rely heavily on subtle environmental sound cues for safe and efficient travel (“wayfinding”). Using instruments developed in our laboratory, we recorded the acoustic cues available to blind individuals, with and without hearing loss, during real-life pedestrian travel. Acoustic signals picked up by electret condenser microphones in the ear canals were fed to a wearable digital audio recorder. Head and body movements were monitored by accelerometers and gyroscopes mounted on the heads and torsos of subjects during typical travel situations such as walking along a corridor with an open doorway. A skilled wayfinder can detect the presence of an open doorway from the acoustic characteristics of the ambient sound field (the “acoustic signature”). The salient characteristics of the acoustic signature when passing an open door were found to be below 1500 Hz. These data confirm previous work regarding ambient room noise near a wall and an opening (Ashmead, 1999). Unlike previous work, the c...
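Because the salient doorway cue lies below 1500 Hz, one simple screening statistic for such recordings is the fraction of a frame's spectral energy in that band. A minimal sketch; the frame length, sampling rate, and any decision threshold are assumptions for illustration, not the study's analysis:

```python
import numpy as np

def low_band_energy_fraction(frame, fs, cutoff_hz=1500.0):
    """Fraction of a frame's spectral energy below cutoff_hz.

    A frame dominated by energy below 1500 Hz is a candidate for the
    kind of low-frequency 'acoustic signature' described above.
    """
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return float(power[freqs < cutoff_hz].sum() / (power.sum() + 1e-12))

fs = 8000  # assumed sampling rate for the ear-canal recording
t = np.arange(1024) / fs
print(low_band_energy_fraction(np.sin(2 * np.pi * 500 * t), fs))   # ~1.0
print(low_band_energy_fraction(np.sin(2 * np.pi * 3000 * t), fs))  # ~0.0
```

In practice this statistic would be tracked frame by frame along the recorded walk, looking for changes as the subject passes the open doorway.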