Vincent J. Samar
National Technical Institute for the Deaf
Publications
Featured research published by Vincent J. Samar.
Brain and Language | 1999
Vincent J. Samar; Ajit S. Bopardikar; Raghuveer M. Rao; Kenneth P. Swartz
This paper presents a nontechnical, conceptually oriented introduction to wavelet analysis and its application to neuroelectric waveforms such as the EEG and event related potentials (ERP). Wavelet analysis refers to a growing class of signal processing techniques and transforms that use wavelets and wavelet packets to decompose and manipulate time-varying, nonstationary signals. Neuroelectric waveforms fall into this category of signals because they typically have frequency content that varies as a function of time and recording site. Wavelet techniques can optimize the analysis of such signals by providing excellent joint time-frequency resolution. The ability of wavelet analysis to accurately resolve neuroelectric waveforms into specific time and frequency components leads to several analysis applications. Some of these applications are time-varying filtering for denoising single trial ERPs, EEG spike and spindle detection, ERP component separation and measurement, hearing-threshold estimation via auditory brainstem evoked response measurements, isolation of specific EEG and ERP rhythms, scale-specific topographic analysis, and dense-sensor array data compression. The present tutorial describes the basic concepts of wavelet analysis that underlie these and other applications. In addition, the application of a recently developed method of custom designing Meyer wavelets to match the waveshapes of particular neuroelectric waveforms is illustrated. Matched wavelets are physiologically sensible pattern analyzers for EEG and ERP waveforms and their superior performance is illustrated with real data examples.
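The time-varying filtering application described above can be sketched in a few lines. The example below is a minimal illustration, not the matched-Meyer-wavelet method the paper develops: it uses a plain Haar wavelet, decomposes a signal over a few levels, soft-thresholds the detail coefficients, and reconstructs.

```python
import numpy as np

def haar_dwt(x):
    # one Haar analysis step: orthonormal averages (approximation) and
    # differences (detail) of adjacent sample pairs
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # exact inverse of haar_dwt
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=3, thresh=0.5):
    """Soft-threshold detail coefficients at each scale, then reconstruct.

    x must have length divisible by 2**levels. thresh is an illustrative
    constant; real pipelines estimate it from the noise level.
    """
    details, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

Because the Haar transform is orthonormal, a zero threshold reconstructs the input exactly, and any positive threshold can only shrink the signal's energy.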
Brain and Cognition | 1985
Ila Parasnis; Vincent J. Samar
This reaction-time study compared the performance of 20 congenitally and profoundly deaf, and 20 hearing college students on a parafoveal stimulus detection task in which centrally presented prior cues varied in their informativeness about stimulus location. In one condition, subjects detected a parafoveally presented circle with no other information being present in the visual field. In another condition, spatially complex and task-irrelevant foveal information was present which the subjects were instructed to ignore. The results showed that although both deaf and hearing people utilized cues to direct attention to specific locations and had difficulty in ignoring foveal information, deaf people were more proficient in redirecting attention from one spatial location to another in the presence of irrelevant foveal information. These results suggest that differences exist in the development of attentional mechanisms in deaf and hearing people. Both groups showed an overall right visual-field advantage in stimulus detection which was attenuated when the irrelevant foveal information was present. These results suggest a left-hemisphere superiority for detection of parafoveally presented stimuli independent of cue informativeness for both groups.
Brain and Cognition | 1995
Vincent J. Samar; K.P. Swartz; M.R. Raghuveer
Wavelet analysis is presented as a new tool for analyzing event-related potentials (ERPs). The wavelet transform expands ERPs into a time-scale representation, which allows the analyst to zoom in on the small-scale, fine-structure details of an ERP or zoom out to examine the large-scale, global waveshape. The time-scale representation is closely related to the more familiar time-frequency representation used in spectrograms of time-varying signals. However, time-scale representations have special properties that make them attractive for many ERP applications. In particular, time-scale representations permit theoretically unlimited time resolution for the detection of short-lived peaks and permit a flexible choice of wavelet basis functions for analyzing different types of ERPs. Generally, time-scale representations offer a formal basis for designing new, specialized filters for various ERP applications. Among recently explored applications of wavelet analysis to ERPs are (a) the precise identification of the time of occurrence of overlapping peaks in the auditory brainstem evoked response; (b) the extraction of single-trial ERPs from background EEG noise; (c) the decomposition of averaged ERP waveforms into orthogonal detail functions that isolate the waveform's experimental behavior in distinct, orthogonal frequency bands; and (d) the use of wavelet transform coefficients to concisely extract important information from ERPs that predicts human signal detection performance. In this tutorial we present an intuitive introduction to wavelets and the wavelet transform, concentrating on the multiresolution approach to wavelet analysis of ERP data. We then illustrate this approach with real data. Finally, we offer some speculations on future applications of wavelet analysis to ERP data.
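The multiresolution decomposition described in this tutorial can be sketched as follows. This is an illustrative Haar-wavelet example, not the paper's own analysis: the signal is split into one detail function per scale plus a coarse approximation, and by linearity these components sum back to the original waveform.

```python
import numpy as np

def haar_step(x):
    # one orthonormal Haar analysis step
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    # one Haar synthesis step (inverse of haar_step)
    out = np.empty(2 * a.size)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def mra(x, levels):
    """Split x into per-scale detail functions plus a coarse approximation.

    Returns levels + 1 arrays, each the length of x, that sum to x.
    x must have length divisible by 2**levels.
    """
    details, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    comps = []
    for i, d in enumerate(details):
        # reconstruct keeping only this scale's detail coefficients
        rec = haar_inv(np.zeros_like(d), d)
        for _ in range(i):
            rec = haar_inv(rec, np.zeros_like(rec))
        comps.append(rec)
    rec = a  # coarse approximation component
    for _ in range(levels):
        rec = haar_inv(rec, np.zeros_like(rec))
    comps.append(rec)
    return comps
```

Each returned component isolates one frequency band's contribution, mirroring the "orthogonal detail functions" described in the abstract.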
Brain and Language | 1999
Tamer Demiralp; Juliana Yordanova; Vasil Kolev; Ahmet Ademoglu; Müge Devrim; Vincent J. Samar
A time-frequency decomposition was applied to the event-related potentials (ERPs) elicited in an auditory oddball condition to assess differences in cognitive information processing. Analysis in the time domain has revealed that cognitive processes are reflected by various ERP components such as N1, P2, N2, P300, and late positive complex. However, the heterogeneous nature of these components has been strongly emphasized due to simultaneously occurring processes. The wavelet transform (WT), which decomposes the signal onto the time-frequency plane, allows the time-dependent and frequency-related information in ERPs to be captured and precisely measured. A four-octave quadratic B-spline wavelet transform was applied to single-sweep ERPs recorded in an auditory oddball paradigm. Frequency components in delta, theta, and alpha ranges reflected specific aspects of cognitive information processing. Furthermore, the temporal position of these components was related to specific cognitive processes.
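A wavelet time-frequency decomposition of this kind can be sketched as below. Note the hedges: this uses a complex Morlet wavelet rather than the quadratic B-spline wavelet the study actually employed, and the sampling rate and band frequencies are illustrative assumptions.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w=6.0):
    """Time-frequency map of x via convolution with complex Morlet wavelets.

    Returns an array of shape (len(freqs), len(x)). Each wavelet is scaled
    to unit energy; w controls the time-frequency trade-off.
    """
    out = np.empty((len(freqs), x.size), dtype=complex)
    for i, f in enumerate(freqs):
        s = w / (2.0 * np.pi * f)                 # Gaussian width in seconds
        t = np.arange(-4.0 * s, 4.0 * s, 1.0 / fs)
        wav = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2.0 * s ** 2))
        wav /= np.sqrt(np.sum(np.abs(wav) ** 2))  # unit energy
        out[i] = np.convolve(x, wav, mode="same")  # x must be longer than wav
    return out
```

Averaging the squared magnitude of each row over time gives band power, so delta, theta, and alpha activity can be tracked separately, as in the study.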
American Journal of Public Health | 2011
Steven Barnett; Jonathan D. Klein; Robert Q. Pollard; Vincent J. Samar; Deirdre Schlehofer; Matthew Starr; Erika Sutter; Hongmei Yang; Thomas A. Pearson
Deaf people who use American Sign Language (ASL) are medically underserved and often excluded from health research and surveillance. We used a community participatory approach to develop and administer an ASL-accessible health survey. We identified deaf community strengths (e.g., a low prevalence of current smokers) and 3 glaring health inequities: obesity, partner violence, and suicide. This collaborative work represents the first time a deaf community has used its own data to identify health priorities.
Brain and Language | 1986
Vincent J. Samar; Gerald P. Berent
Twenty subjects made lexical decisions in a syntactic priming paradigm. Target stimuli (nouns, verbs, ambiguous noun-verb words, and nonsense words) were immediately preceded on each trial by a function word syntactic prime, creating appropriate or inappropriate syntactic contexts for the real word targets. Evoked responses to the targets were recorded from left and right frontal, temporal, and temporoparietal sites. Principal components analysis revealed a component peaking at 140 msec which discriminated words in appropriate contexts from words in inappropriate contexts, independent of lexical syntactic class, with maximal discrimination at temporoparietal sites. Evidence for the identification of the syntactic class of target items was not observed in the evoked response until 80 msec after this syntactic priming effect occurred. These results suggest a prelexical locus for the syntactic priming effect. The implications of these results for current conceptions of the modularity of the mental lexicon are discussed.
Language | 1990
Gerald P. Berent; Vincent J. Samar
This study presents evidence for the psychological reality of the Subset Principle as a determinant of the acquisition of Governing Category Parameter settings for English anaphors and pronominals. We used prelingual deafness as a natural experiment in the acquisition of English in the presence of variably impaired access to English language data. Previous work has shown that deaf adults with low proficiency in English typically know the unmarked properties of English but have specifically less knowledge of the marked properties than more proficient deaf adults. In the present study we demonstrate that deaf adults with relatively low English language proficiency possess the correct governing category for English anaphors but an incorrect, larger governing category for English pronominals. This pattern respects the markedness predictions of the Subset Principle. These individuals appear to select grammars which generate a smaller set of sentences than the grammars of more proficient English language users. The results support the proposal that, when a parameter's values determine languages that are ordered as proper subsets, the learner selects the value which determines the smallest language consistent with the input data.
Brain and Cognition | 1983
Vincent J. Samar
Evoked potentials to laterally presented stimuli were collected from left and right temporoparietal sites during performance of two visual half-field tasks: lexical decision and line orientation discrimination. Reaction time and accuracy data were simultaneously collected. The behavioral data indicated the development of a right field advantage for the lexical decision task as a function of practice. A principal components analysis revealed three independent evoked potential components which displayed task-dependent hemispheric asymmetries. Multiple regression analyses revealed that visual half-field asymmetries in response accuracy were closely related to hemispheric asymmetries on several independent evoked response components. Subjects' scores on independent tests of verbal reasoning and spatial relations were also found to be closely related to hemispheric asymmetry on several independent evoked response components. These data support a multidimensional concept of cerebral specialization. They also suggest that visual field asymmetries reflect the confluence of several underlying processes which have independent lateralization distributions across the population. In general, the results underscore the need for further research on the nature of the relationship between cerebral and perceptual asymmetries.
Journal of Neuroscience Methods | 2014
Arun Kumar Aniyan; Ninan Sajeeth Philip; Vincent J. Samar; James A. Desjardins; Sidney J. Segalowitz
Event related potentials (ERPs) are very feeble alterations in the ongoing electroencephalogram (EEG) and their detection is a challenging problem. Based on the unique time-based parameters derived from wavelet coefficients and the asymmetry property of wavelets a novel algorithm to separate ERP components in single-trial EEG data is described. Though illustrated as a specific application to N170 ERP detection, the algorithm is a generalized approach that can be easily adapted to isolate different kinds of ERP components. The algorithm detected the N170 ERP component with a high level of accuracy. We demonstrate that the asymmetry method is more accurate than the matching wavelet algorithm and t-CWT method by 48.67 and 8.03 percent, respectively. This paper provides an off-line demonstration of the algorithm and considers issues related to the extension of the algorithm to real-time applications.
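The asymmetry-based algorithm itself is not reproduced here; as a hypothetical baseline for single-trial component detection, one could cross-correlate each trial with a component template and search a plausible latency window:

```python
import numpy as np

def detect_component(trial, template, search_window):
    """Return the onset sample where the template best matches the trial.

    A naive cross-correlation baseline (NOT the paper's asymmetry method).
    search_window = (lo, hi) restricts candidate onsets to a plausible
    latency range, e.g. around 170 ms post-stimulus for an N170.
    """
    corr = np.correlate(trial, template, mode="valid")
    lo, hi = search_window
    idx = np.arange(corr.size)
    mask = (idx >= lo) & (idx <= hi)
    return int(idx[mask][np.argmax(corr[mask])])
```

For a negative-going component such as the N170, the template would simply be a negative deflection; the matching logic is unchanged.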
Journal of Communication Disorders | 1989
Vincent J. Samar; Dale Evan Metz; Nicholas Schiavetti; Ronald W. Sitler; Robert L. Whitehead
Regression and principal components analyses were employed to study the relationship between 30 aerodynamic speech parameters and the speech intelligibility of 40 severely to profoundly hearing-impaired speakers. Regression analysis on the original 30 aerodynamic variables revealed that speech intelligibility was predicted by a cognate-pair voice onset-time difference measure and a measure of the stability of the volume-velocity rise time. Principal components analysis of the 30 independent variables derived seven factors that accounted for 84.3% of the variance in the original 30 parameters. Subsequent regression analysis using the seven factors as predictor variables revealed four factors with independent relationships to speech intelligibility. These included a factor that reflected cognate-pair voice onset-time distinctions, a factor that reflected cognate-pair peak volume-velocity distinctions, and two other factors, which reflected production stability of temporal distinctions between cognate pair members.
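The analysis pipeline described above (principal components of the predictors, then regression on the derived factors) can be sketched with synthetic data; the variable counts below are illustrative, not the study's 30 aerodynamic parameters.

```python
import numpy as np

def pca_scores(X, k):
    """Standardize predictors and project onto the top-k principal components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:k].T                     # one column of scores per factor

def pc_regression(X, y, k):
    """Least-squares fit of y on the top-k component scores (with intercept)."""
    T = pca_scores(X, k)
    A = np.column_stack([np.ones(len(y)), T])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, A @ beta                   # coefficients and fitted values
```

Because the component scores are mutually orthogonal, each factor's regression coefficient reflects an independent relationship to the outcome, which is the property the study exploits when interpreting its seven derived factors.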