Publication


Featured research published by Lynne E. Bernstein.


Attention, Perception, & Psychophysics | 2000

Speech perception without hearing.

Lynne E. Bernstein; Paula E. Tucker; Marilyn E. Demorest

In this study of visual phonetic speech perception without accompanying auditory speech stimuli, adults with normal hearing (NH; n=96) and with severely to profoundly impaired hearing (IH; n=72) identified consonant-vowel (CV) nonsense syllables and words in isolation and in sentences. The measures of phonetic perception were the proportion of phonemes correct and the proportion of transmitted feature information for CVs, the proportion of phonemes correct for words, and the proportion of phonemes correct and the amount of phoneme substitution entropy for sentences. The results demonstrated greater sensitivity to phonetic information in the IH group. Transmitted feature information was related to isolated word scores for the IH group, but not for the NH group. Phoneme errors in sentences were more systematic in the IH than in the NH group. Individual differences in phonetic perception for CVs were more highly associated with word and sentence performance for the IH than for the NH group. The results suggest that the necessity to perceive speech without hearing can be associated with enhanced visual phonetic perception in some individuals.
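
Transmitted feature information and substitution entropy are standard confusion-matrix statistics. As an illustration only (the confusion matrix and function names below are hypothetical, not the study's materials), a minimal Python sketch of the proportion of transmitted information for a stimulus-response confusion matrix:

import numpy as np

def transmitted_information(confusions):
    """Mutual information T(x; y) in bits for a stimulus-response
    confusion matrix (rows = stimuli, columns = responses)."""
    joint = confusions / confusions.sum()           # joint probabilities p(x, y)
    px = joint.sum(axis=1, keepdims=True)           # stimulus marginals p(x)
    py = joint.sum(axis=0, keepdims=True)           # response marginals p(y)
    nz = joint > 0                                  # skip zero cells to avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def stimulus_entropy(confusions):
    """Entropy H(x) of the stimulus set, in bits."""
    px = confusions.sum(axis=1) / confusions.sum()
    px = px[px > 0]
    return float(-(px * np.log2(px)).sum())

# Hypothetical 3-consonant confusion matrix (response counts), for illustration.
conf = np.array([[18, 4, 2],
                 [5, 15, 4],
                 [3, 6, 15]])
print(f"proportion transmitted: {transmitted_information(conf) / stimulus_entropy(conf):.2f}")

Substitution entropy can be obtained from the same kind of matrix, as the entropy of the distribution of responses substituted for each target phoneme.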


IEEE Virtual Reality Conference | 1993

OMAR: a haptic display for speech perception by deaf and deaf-blind individuals

Silvio P. Eberhardt; Lynne E. Bernstein; David C. Coulter; Laura A. Hunckler

A system for haptic (i.e. kinesthetic and cutaneous) stimulation of the hand is described. While the immediate application involves display of speech information, a number of other man-machine interface applications may be feasible, including force-feedback devices for computer interaction and human pattern extraction from multiple datastreams. In an attempt to model more closely the information streams available via the Tadoma method, OMAR was developed to stimulate kinesthetic as well as tactile receptors, by moving and vibrating fingers in one or two dimensions using hard-disk head-positioning actuators. OMAR is being used in experiments involving basic haptic perception and supplementation of speechreading with haptic codings of speech correlates obtained via X-ray microbeam measurements.


Speech Communication | 2004

Auditory speech detection in noise enhanced by lipreading

Lynne E. Bernstein; Edward T. Auer; Sumiko Takayanagi

Audiovisual speech stimuli have been shown to produce a variety of perceptual phenomena. Enhanced detectability of acoustic speech in noise, when the talker can also be seen, is one of those phenomena. This study investigated whether this enhancement effect is specific to visual speech stimuli or can rely on more generic non-speech visual stimulus properties. Speech detection thresholds for an auditory /ba/ stimulus were obtained in a white noise masker. The auditory /ba/ was presented adaptively to obtain its 79.4% detection threshold under five conditions. In Experiment 1, the syllable was presented (1) auditory-only (AO) and (2) as audiovisual speech (AVS), using the original video recording. Three types of synthetic visual stimuli were also paired synchronously with the audio token: (3) a dynamic Lissajous (AVL) figure whose vertical extent was correlated with the acoustic speech envelope; (4) a dynamic rectangle (AVR) whose horizontal extent was correlated with the speech envelope; and (5) a static rectangle (AVSR) whose onset and offset were synchronous with the acoustic speech onset and offset. Ten adults with normal hearing and vision participated. The results, in terms of dB signal-to-noise ratio (SNR), were AVS
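
The 79.4% point tracked by the adaptive procedure is the convergence level of a three-down/one-up staircase (0.794 = 0.5^(1/3); Levitt, 1971). The sketch below simulates such a track against a made-up logistic observer; the step size, reversal rule, and observer are illustrative assumptions, not the authors' exact procedure.

import math
import random

def run_3down_1up(detect, start_snr=0.0, step_db=2.0, n_reversals=12):
    """Three-down/one-up adaptive track: three consecutive detections make the
    trial harder, one miss makes it easier; converges near 79.4% detection."""
    snr, correct_run, direction = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if detect(snr):                      # simulated "yes, detected" trial
            correct_run += 1
            if correct_run == 3:
                correct_run = 0
                if direction == +1:          # direction change -> record reversal
                    reversals.append(snr)
                direction = -1
                snr -= step_db               # harder: lower SNR
        else:
            correct_run = 0
            if direction == -1:
                reversals.append(snr)
            direction = +1
            snr += step_db                   # easier: raise SNR
    return sum(reversals[-8:]) / 8           # threshold = mean of last 8 reversals

# Hypothetical observer whose 79.4% detection point lies near -12 dB SNR.
observer = lambda snr: random.random() < 1 / (1 + math.exp(-(snr + 14) / 1.5))
print(f"estimated threshold: {run_3down_1up(observer):.1f} dB SNR")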


Neuroreport | 2002

Visual speech perception without primary auditory cortex activation

Lynne E. Bernstein; Jean K. Moore; Curtis W. Ponton; Manuel Don; Manbir Singh

Speech perception is conventionally thought to be an auditory function, but humans often use their eyes to perceive speech. We investigated whether visual speech perception depends on processing by the primary auditory cortex in hearing adults. In a functional magnetic resonance imaging experiment, a pulse-tone was presented contrasted with gradient noise. During the same session, a silent video of a talker saying isolated words was presented contrasted with a still face. Visual speech activated the superior temporal gyrus anterior, posterior, and lateral to the primary auditory cortex, but not the region of the primary auditory cortex. These results suggest that visual speech perception is not critically dependent on the region of primary auditory cortex.


Annals of Dyslexia | 1984

Four-Year Follow-Up Study of Language Impaired Children.

Rachel E. Stark; Lynne E. Bernstein; Rosemary Condino; Michael Bender; Paula Tallal; Hugh W. Catts

Children identified as normal or as specifically language impaired (SLI) were given speech, language, and intelligence testing on a longitudinal basis. Fourteen normal and 29 SLI children between the ages of 4 1/2 and 8 years were tested at Time 1. They were retested three to four years later when they were 8 to 12 years old. The results indicated that both the normal and the SLI children continued to develop skills in receptive and expressive language and speech articulation across the 3- to 4-year period intervening between evaluations. Overall, however, the SLI children appeared to develop language skills at a slower than normal rate and 80% of them remained language impaired at Time 2. In addition, the majority of the SLI children manifested reading impairment at Time 2, while none of the normal children did so. The implications for the educational management of SLI children are discussed.


EURASIP Journal on Advances in Signal Processing | 2002

On the Relationship between Face Movements, Tongue Movements, and Speech Acoustics

Jintao Jiang; Abeer Alwan; Patricia A. Keating; Lynne E. Bernstein

This study examines relationships between external face movements, tongue movements, and speech acoustics for consonant-vowel (CV) syllables and sentences spoken by two male and two female talkers with different visual intelligibility ratings. The questions addressed are how relationships among measures vary by syllable, whether talkers who are more intelligible produce greater optical evidence of tongue movements, and how the results for CVs compare with those for sentences. Results show that the prediction of one data stream from another is better for C/a/ syllables than for C/i/ and C/u/ syllables. Across the different places of articulation, lingual places result in better predictions of one data stream from another than do bilabial and glottal places. Results vary from talker to talker; interestingly, highly rated intelligibility does not result in high predictions. In general, predictions for CV syllables are better than those for sentences.
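
"Predicting one data stream from another" here refers to fitting a linear mapping between feature matrices (for example, face-marker trajectories to tongue-marker trajectories) and scoring the fit by correlation. The sketch below uses plain ridge regression on synthetic arrays purely to illustrate the framing; it is not the study's estimator or data.

import numpy as np

def cross_stream_prediction(X, Y, lam=1e-3):
    """Fit a linear map W so that X @ W approximates Y, then return the
    per-dimension correlation between predicted and measured Y."""
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    Y_hat = X @ W
    corr = [np.corrcoef(Y_hat[:, j], Y[:, j])[0, 1] for j in range(Y.shape[1])]
    return W, np.array(corr)

# Synthetic example: 200 time samples, 12 "face" channels, 6 "tongue" channels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))
Y = X @ rng.standard_normal((12, 6)) + 0.3 * rng.standard_normal((200, 6))
_, r = cross_stream_prediction(X, Y)
print("mean prediction correlation:", round(float(r.mean()), 2))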


Neuroreport | 2007

Vibrotactile activation of the auditory cortices in deaf versus hearing adults.

Lynne E. Bernstein; Witaya Sungkarat; Manbir Singh

Neuroplastic changes in auditory cortex as a result of lifelong perceptual experience were investigated. Adults with early-onset deafness and long-term hearing aid experience were hypothesized to have undergone auditory cortex plasticity due to somatosensory stimulation. Vibrations were presented on the hand of deaf and normal-hearing participants during functional MRI. Vibration stimuli were derived from speech or were a fixed frequency. Higher, more widespread activity was observed within auditory cortical regions of the deaf participants for both stimulus types. Life-long somatosensory stimulation due to hearing aid use could explain the greater activity observed with deaf participants.


NeuroImage | 2010

Comparison of landmark-based and automatic methods for cortical surface registration

Dimitrios Pantazis; Anand A. Joshi; Jintao Jiang; David W. Shattuck; Lynne E. Bernstein; Hanna Damasio; Richard M. Leahy

Group analysis of structure or function in cerebral cortex typically involves, as a first step, the alignment of cortices. A surface-based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark-based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis.
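
A discrepancy map between two registration methods can be summarized as the per-vertex distance between where each method places the same cortical surface point on the common target. The sketch below shows that computation on arbitrary coordinate arrays; the vertex correspondence and units are assumptions for illustration, not taken from the paper's pipeline.

import numpy as np

def discrepancy_map(coords_a, coords_b):
    """Per-vertex Euclidean distance (e.g., in mm) between the positions
    assigned to the same vertices by two registration methods."""
    return np.linalg.norm(coords_a - coords_b, axis=1)

# Hypothetical example: 10,000 vertices with 3-D coordinates from two methods.
rng = np.random.default_rng(1)
a = rng.standard_normal((10_000, 3)) * 50.0
b = a + rng.standard_normal((10_000, 3)) * 2.0     # roughly 2 mm of disagreement
d = discrepancy_map(a, b)
print(f"median discrepancy: {np.median(d):.2f} mm; 95th percentile: {np.percentile(d, 95):.2f} mm")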


Journal of the Acoustical Society of America | 1997

Speechreading and the structure of the lexicon: Computationally modeling the effects of reduced phonetic distinctiveness on lexical uniqueness

Edward T. Auer; Lynne E. Bernstein

A lexical modeling methodology was employed to examine how the distribution of phonemic patterns in the lexicon constrains lexical equivalence under conditions of reduced phonetic distinctiveness experienced by speech-readers. The technique involved (1) selection of a phonemically transcribed machine-readable lexical database, (2) definition of transcription rules based on measures of phonetic similarity, (3) application of the transcription rules to a lexical database and formation of lexical equivalence classes, and (4) computation of three metrics to examine the transcribed lexicon. The metric percent words unique demonstrated that the distribution of words in the language substantially preserves lexical uniqueness across a wide range in the number of potentially available phonemic distinctions. Expected class size demonstrated that if at least 12 phonemic equivalence classes were available, any given word would be highly similar to only a few other words. Percent information extracted (PIE) [D. Carter, Comput. Speech Lang. 2, 1-11 (1987)] provided evidence that high-frequency words tend not to reside in the same lexical equivalence classes as other high-frequency words. The steepness of the functions obtained for each metric shows that small increments in the number of visually perceptible phonemic distinctions can result in substantial changes in lexical uniqueness.
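
The modeling steps described above (re-transcribe each word with visually confusable phonemes collapsed into equivalence classes, group words whose transcriptions become identical, then score the result) can be expressed compactly. The sketch below uses a toy lexicon and an invented phoneme-to-class table, so its numbers are illustrative only; the study derived its classes from measured phonetic similarity.

from collections import defaultdict

# Hypothetical phoneme-to-class table: visually confusable phonemes share a class.
CLASS = {"p": "B", "b": "B", "m": "B",     # bilabials look alike to a speechreader
         "t": "T", "d": "T", "n": "T",
         "a": "a", "i": "i", "u": "u"}

def equivalence_classes(lexicon):
    """Group words whose transcriptions become identical after each phoneme
    is replaced by its visual equivalence class."""
    groups = defaultdict(list)
    for word, phonemes in lexicon.items():
        groups[tuple(CLASS[p] for p in phonemes)].append(word)
    return list(groups.values())

def percent_words_unique(lexicon):
    """Proportion of words left alone in their lexical equivalence class."""
    classes = equivalence_classes(lexicon)
    return 100.0 * sum(1 for c in classes if len(c) == 1) / len(lexicon)

lexicon = {w: list(w) for w in ["pat", "bat", "mat", "tan", "dip"]}
print(f"{percent_words_unique(lexicon):.0f}% of words remain unique")   # 40%

Expected class size and PIE are computed over the same grouping, weighting classes by their size or by the frequencies of the words they contain.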


Journal of the Acoustical Society of America | 1989

Single-channel vibrotactile supplements to visual perception of intonation and stress

Lynne E. Bernstein; Silvio P. Eberhardt; Marilyn E. Demorest

Two experiments were conducted to explore the effectiveness of a single vibrotactile stimulator to convey intonation (question versus statement) and contrastive stress (on one of the first three words of four 4- or 5-word sentences). In experiment I, artificially deafened normal-hearing subjects judged stress and intonation in counterbalanced visual-alone and visual-tactile conditions. Six voice fundamental frequency-to-tactile transformations were tested. Two sentence types were voiced throughout, and two contained unvoiced consonants. Benefits to speechreading were significant, but small. No differences among transformations were observed. In experiment II, only the tactile stimuli were presented. Significant differences emerged among the transformations, with larger differences for intonation than for stress judgments. Surprisingly, tactile-alone intonation identification was more accurate than visual-tactile for several transformations.
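
As an illustration of what a fundamental-frequency-to-tactile transformation might look like (the six transformations actually tested are not specified in this abstract, and the one below is purely hypothetical), a single-channel mapping of voice F0 onto a vibrator frequency within the skin's sensitive range:

def f0_to_vibration(f0_hz, f0_range=(80.0, 300.0), tactile_range=(50.0, 250.0)):
    """Map voice fundamental frequency linearly onto a vibrotactile carrier
    frequency; unvoiced frames (f0 == 0) switch the vibrator off."""
    if f0_hz <= 0:
        return 0.0                                    # unvoiced: no vibration
    lo, hi = f0_range
    t_lo, t_hi = tactile_range
    x = min(max((f0_hz - lo) / (hi - lo), 0.0), 1.0)  # normalize and clip
    return t_lo + x * (t_hi - t_lo)

print(f0_to_vibration(120.0))   # a mid-range F0 maps to roughly 86 Hz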

Collaboration


Dive into Lynne E. Bernstein's collaboration.

Top Co-Authors

Silvio P. Eberhardt

California Institute of Technology

Abeer Alwan

University of California

Jianxia Xue

University of California
