Mark VanDam
Washington State University
Publication
Featured research published by Mark VanDam.
Ear and Hearing | 2014
Sophie E. Ambrose; Mark VanDam; Mary Pat Moeller
Objectives: The objectives of this study were to examine the quantity of adult words, adult–child conversational turns, and electronic media in the auditory environments of toddlers who are hard of hearing (HH) and to examine whether these factors contributed to variability in children’s communication outcomes. Design: Participants were 28 children with mild to severe hearing loss. Full-day recordings of children’s auditory environments were collected within 6 months of their second birthdays by using Language ENvironment Analysis technology. The system analyzes full-day acoustic recordings, yielding estimates of the quantity of adult words, conversational turns, and electronic media exposure in the recordings. Children’s communication outcomes were assessed via the receptive and expressive scales of the Mullen Scales of Early Learning at 2 years of age and the Comprehensive Assessment of Spoken Language at 3 years of age. Results: On average, the HH toddlers were exposed to approximately 1400 adult words per hour and participated in approximately 60 conversational turns per hour. An average of 8% of each recording was classified as electronic media. However, there was considerable within-group variability on all three measures. Frequency of conversational turns, but not adult words, was positively associated with children’s communication outcomes at 2 and 3 years of age. Amount of electronic media exposure was negatively associated with 2-year-old receptive language abilities; however, regression results indicate that the relationship was fully mediated by the quantity of conversational turns. Conclusions: HH toddlers who were engaged in more conversational turns demonstrated stronger linguistic outcomes than HH toddlers who were engaged in fewer conversational turns. The frequency of these interactions was found to be decreased in households with high rates of electronic media exposure. Optimal language-learning environments for HH toddlers include frequent linguistic interactions between parents and children. To support this goal, parents should be encouraged to reduce their children’s exposure to electronic media.
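As a rough illustration of the mediation logic described in this abstract, the sketch below runs the three standard regression steps (total effect, mediator path, direct effect) with statsmodels; the data file and column names are hypothetical stand-ins, not the study's actual variables or analysis pipeline.

```python
# Minimal sketch of the mediation logic described above (hypothetical column
# names and file; not the authors' actual analysis pipeline).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lena_summary.csv")  # hypothetical per-child summary file

# Step 1: total effect of electronic media exposure on receptive language.
total = smf.ols("receptive_lang ~ media_pct", data=df).fit()

# Step 2: effect of media exposure on the proposed mediator (turns per hour).
a_path = smf.ols("turns_per_hr ~ media_pct", data=df).fit()

# Step 3: outcome regressed on both; full mediation implies the media
# coefficient shrinks toward zero once conversational turns are included.
direct = smf.ols("receptive_lang ~ media_pct + turns_per_hr", data=df).fit()

print(total.params["media_pct"], direct.params["media_pct"])
```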
Ear and Hearing | 2015
Mark VanDam; D. Kimbrough Oller; Sophie E. Ambrose; Sharmistha Gray; Jeffrey A. Richards; Dongxin Xu; Jill Gilkerson; Noah H. Silbert; Mary Pat Moeller
Objectives: This study investigated automatic assessment of vocal development in children with hearing loss compared with children who are typically developing, have language delays, and have autism spectrum disorder. Statistical models are examined for performance in a classification model and to predict age within the four groups of children. Design: The vocal analysis system analyzed 1913 whole-day, naturalistic acoustic recordings from 273 toddlers and preschoolers comprising children who were typically developing, hard of hearing, language delayed, or autistic. Results: Samples from children who were hard of hearing patterned more similarly to those of typically developing children than to the language delayed or autistic samples. The statistical models were able to classify children from the four groups examined and estimate developmental age based on automated vocal analysis. Conclusions: This work shows a broad similarity between children with hearing loss and typically developing children, although children with hearing loss show some delay in their production of speech. Automatic acoustic analysis can now be used to quantitatively compare vocal development in children with and without speech-related disorders. The work may serve to better distinguish among various developmental disorders and ultimately contribute to improved intervention.
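A schematic sketch of the two modeling tasks mentioned here, classifying group membership and estimating age from summary vocal features, is given below with scikit-learn; the feature file, feature names, and model choices are assumptions for illustration, not the published system.

```python
# Schematic sketch (hypothetical features and file; not the published model):
# classify diagnostic group and predict age from day-long vocal summary features.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression, LinearRegression

df = pd.read_csv("vocal_features.csv")         # one row per recording
X = df[["canonical_syll_rate", "voc_per_hr"]]  # hypothetical acoustic predictors
groups = df["group"]                           # TD / HH / LD / ASD labels
age = df["age_months"]

clf_acc = cross_val_score(LogisticRegression(max_iter=1000), X, groups, cv=5)
age_r2 = cross_val_score(LinearRegression(), X, age, cv=5, scoring="r2")
print("group classification accuracy:", clf_acc.mean())
print("age prediction R^2:", age_r2.mean())
```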
Journal of the Acoustical Society of America | 2014
Mark VanDam
There has been increasing attention in the literature to wearable acoustic recording devices, particularly to examine naturalistic speech in disordered and child populations. Recordings are typically analyzed using automatic procedures that critically depend on the reliability of the collected signal. This work describes the acoustic amplitude response characteristics and the possibility of acoustic transmission loss using several shirts designed for wearable recorders. No difference was observed between the response characteristics of different shirt types or between shirts and the bare-microphone condition. Results are relevant for research, clinical, educational, and home applications in both practical and theoretical terms.
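One simple way to compare amplitude response across recording conditions, in the spirit of this study, is to compute Welch power spectra of the same signal recorded with and without a garment over the microphone; the sketch below assumes hypothetical file names and is not the study's measurement protocol.

```python
# Sketch of comparing amplitude response across garment conditions
# (hypothetical file names; not the study's measurement protocol).
import numpy as np
import soundfile as sf
from scipy.signal import welch

def band_levels(path):
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                # collapse to mono
    f, pxx = welch(x, fs=fs, nperseg=4096)
    return f, 10 * np.log10(pxx + 1e-12)  # power spectral density in dB

f, bare = band_levels("bare_mic.wav")
_, shirt = band_levels("shirt_a.wav")
print("mean level difference (dB):", np.mean(shirt - bare))
```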
PLOS ONE | 2016
Mark VanDam; Noah H. Silbert
Automatic speech processing (ASP) has recently been applied to very large datasets of naturalistically collected, daylong recordings of child speech via an audio recorder worn by young children. The system developed by the LENA Research Foundation analyzes children's speech for research and clinical purposes, with special focus on identifying and tagging family speech dynamics and the at-home acoustic environment from the auditory perspective of the child. A primary issue for researchers, clinicians, and families using the Language ENvironment Analysis (LENA) system is to what degree the segment labels are valid. This classification study evaluates the performance of the computer ASP output against 23 trained human judges who made about 53,000 classification judgments of segments tagged by the LENA ASP. Results indicate performance consistent with modern ASP systems, such as those using HMM methods, with acoustic characteristics of fundamental frequency and segment duration most important for both human and machine classifications. Results are likely to be important for interpreting and improving ASP output.
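A minimal sketch of this kind of human-machine label comparison, assuming a hypothetical table of paired judgments, might compute a confusion matrix and chance-corrected agreement as below; it is not the study's scoring code.

```python
# Sketch of evaluating machine segment labels against human judgments
# (hypothetical column names; not the study's scoring scripts).
import pandas as pd
from sklearn.metrics import cohen_kappa_score, confusion_matrix

df = pd.read_csv("judgments.csv")   # one row per judged segment
human = df["human_label"]           # e.g., adult_female, adult_male, child, other
machine = df["lena_label"]

print(confusion_matrix(human, machine, labels=sorted(human.unique())))
print("agreement (kappa):", cohen_kappa_score(human, machine))
```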
Journal of the Acoustical Society of America | 2013
Mark VanDam; Noah H. Silbert
Experienced judges assessed performance of an automatic speech recognition system developed for linguistic exchanges within families in their natural environment. Preliminary results suggest overall good performance with relatively high precision and low error.
Clinical Linguistics & Phonetics | 2011
Mark VanDam; Dana L. Ide‐Helvie; Mary Pat Moeller
This work investigates the developmental aspects of the duration of point vowels in children with normal hearing compared with those with hearing aids and cochlear implants at 4 and 5 years of age. Younger children produced longer vowels than older children, and children with hearing loss (HL) produced longer and more variable vowels than their normal hearing peers. In this study, children with hearing aids and cochlear implants did not perform differently from each other. Test age and HL did not interact, indicating parallel but delayed development in children with HL compared with their typically developing peers. Variability was found to be concentrated among the high point vowels but not the low point vowels. The broad findings of this work are consistent with previous reports and contribute a detailed description of point vowel duration not previously available in the literature.
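The group-by-age comparison described here could be sketched as an ordinary least-squares model with an interaction term, as below; the measurement file and column names are hypothetical, and the authors' actual statistical model may differ.

```python
# Sketch of the kind of duration comparison described above (hypothetical
# measurement file; not the authors' statistical model).
import pandas as pd
import statsmodels.formula.api as smf

dur = pd.read_csv("vowel_durations.csv")  # columns: child, age_yrs, group, vowel_height, dur_ms

# Test for an age-by-group interaction; a nonsignificant interaction alongside a
# main effect of group is consistent with "parallel but delayed" development.
model = smf.ols("dur_ms ~ C(age_yrs) * C(group) + C(vowel_height)", data=dur).fit()
print(model.summary())
```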
Journal of the Acoustical Society of America | 2010
Mark VanDam; Mary Pat Moeller; Bruce Tomblin
Technological advances in the last 15 years have resulted in earlier identification of children with mild and moderate hearing loss. Little is known about the impact of early provision of amplification on the development of prosodic speech characteristics such as fundamental frequency (F0). This study aims to address that gap. Children enter this study at 12–36 months of age and contribute one whole-day audio recording each month for one year. The wearable recorder and associated software (LENA Foundation) output (i) a continuous (PCM) audio file of the whole day and (ii) a time-aligned, XML-coded file at millisecond resolution identifying periods of speech (adult female or male, other children, target child) and other acoustic events (overlapping vocals, noise, silence, etc.). In this study, children's F0 is examined directly and in response to certain talkers or in selected turn-taking relationships (e.g., child-directed speech, father-child turn-taking exchanges). This work includes a detailed methodologic...
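To illustrate how such a time-aligned, XML-coded segment file might be used to examine child F0, the sketch below parses segments attributed to the target child and estimates F0 by autocorrelation; the element and attribute names are hypothetical stand-ins, not the actual LENA output schema, and the F0 estimator is deliberately crude.

```python
# Sketch of pulling target-child segments from a time-aligned XML file and
# estimating F0 by autocorrelation. Element/attribute names are hypothetical
# stand-ins, not the actual LENA .its schema.
import xml.etree.ElementTree as ET
import numpy as np
import soundfile as sf

def estimate_f0(frame, fs, fmin=75, fmax=600):
    """Crude autocorrelation F0 estimate for one short voiced frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

audio, fs = sf.read("daylong.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                      # collapse to mono
root = ET.parse("daylong_segments.xml").getroot()
f0s = []
for seg in root.iter("segment"):                    # hypothetical element name
    if seg.get("speaker") == "CHN":                 # target-child code (assumed)
        start = int(float(seg.get("start")) * fs)
        frame = audio[start:start + int(0.05 * fs)] # first 50 ms of the segment
        f0s.append(estimate_f0(frame, fs))
print("median child F0 (Hz):", np.median(f0s))
```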
Autism Research | 2018
Georgina T.F. Lynch; Stephen James; Mark VanDam
Brain imaging data describe differences in the ASD brain, including amygdala overgrowth, neural interconnectivity, and a three‐phase model of neuroanatomical changes from early post‐natal development through late adolescence. The pupil reflex test (PRT), a noninvasive measure of brain function, may help improve early diagnosis and elucidate the underlying physiology in expression of the ASD endophenotype. Commonly observed characteristics of ASD include normal visual acuity but difficulty with eye gaze and photosensitivity, suggesting deficient neuromodulation of cranial nerves. The aims of this study were to confirm the sensitivity of the PRT for identifying adolescents with ASD, determine whether a phenotype for a subtype of ASD marked by pupil response is present in adolescence, and determine whether differences could be observed on a neurologic exam testing cranial nerves II and III (CNII; CNIII). Using pupillometry, constriction latency was measured, serving as a proxy for the neuromodulation of cranial nerves underlying the pupillary reflex. The swinging flashlight method, used to perform the PRT by measuring constriction latency and return to baseline, discriminated ASD participants from typically developing adolescents on 72.2% of trials. Results further confirmed this measure's sensitivity within a subtype of ASD in later stages of development, serving as a correlate of neural activity within the locus coeruleus–norepinephrine (LC–NE) system. A brainstem model of atypical PRT in ASD is examined in relation to modulation of cranial nerves and atypical arousal levels subserving the atypical pupillary reflex. Autism Res 2018, 11: 364–375.
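Constriction latency of the kind measured here can be sketched as the time from light onset until pupil diameter drops a fixed fraction below its pre-stimulus baseline; the sampling rate, threshold, and toy trace below are illustrative assumptions, not the clinical protocol.

```python
# Sketch of computing pupil constriction latency from a pupillometry trace
# (hypothetical sampling rate and threshold; not the clinical protocol).
import numpy as np

def constriction_latency(diameter, fs, stim_onset_s, drop_frac=0.05):
    """Time from light onset until pupil diameter falls drop_frac below baseline."""
    onset = int(stim_onset_s * fs)
    baseline = diameter[max(0, onset - int(0.2 * fs)):onset].mean()  # 200 ms pre-stimulus
    post = diameter[onset:]
    below = np.nonzero(post < baseline * (1 - drop_frac))[0]
    return below[0] / fs if below.size else None   # seconds, or None if no constriction

# Toy trace: steady 5 mm pupil that begins constricting 0.25 s after a
# stimulus presented at t = 1.0 s.
fs = 120.0
t = np.arange(0, 2, 1 / fs)
trace = 5.0 - 1.5 * np.clip(t - 1.25, 0, None)
print(constriction_latency(trace, fs, stim_onset_s=1.0))
```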
Genetic and Evolutionary Computation Conference | 2018
Jacob Krantz; Maxwell John Dulin; Paul De Palma; Mark VanDam
Syllables play an important role in speech synthesis, speech recognition, and spoken document retrieval. A novel, low cost, and language agnostic approach to dividing words into their corresponding syllables is presented. A hybrid genetic algorithm constructs a categorization of phones optimized for syllabification. This categorization is used on top of a hidden Markov model sequence classifier to find syllable boundaries. The technique shows promising preliminary results when trained and tested on English words.
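A toy version of the genetic-algorithm component might look like the sketch below: candidate assignments of phones to categories are mutated and selected by a fitness function. The phone set, mutation-only search, and placeholder fitness are simplifications; in the paper the fitness would come from downstream HMM syllabification accuracy.

```python
# Toy sketch of the genetic-algorithm idea: evolve an assignment of phones to
# a small set of categories. The fitness here is a placeholder; in the paper it
# would be the accuracy of an HMM syllabifier built on top of the categories.
import random

PHONES = ["p", "t", "k", "b", "d", "g", "s", "z", "m", "n", "l", "r",
          "iy", "ih", "eh", "ae", "aa", "ao", "uw", "uh"]
N_CATEGORIES = 4
VOWELS = {"iy", "ih", "eh", "ae", "aa", "ao", "uw", "uh"}

def fitness(assignment):
    # Placeholder: reward keeping vowels and consonants in separate bins.
    vowel_bins = {assignment[p] for p in VOWELS}
    cons_bins = {assignment[p] for p in assignment if p not in VOWELS}
    return len(PHONES) - len(vowel_bins & cons_bins)

def random_assignment():
    return {p: random.randrange(N_CATEGORIES) for p in PHONES}

def mutate(assignment, rate=0.1):
    return {p: (random.randrange(N_CATEGORIES) if random.random() < rate else c)
            for p, c in assignment.items()}

population = [random_assignment() for _ in range(50)]
for _ in range(100):                                   # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                          # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]
print("best fitness:", fitness(max(population, key=fitness)))
```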
Journal of the Acoustical Society of America | 2018
Mark VanDam; Jenna Anderst; Daniel Olds; Allison Saur; Paul De Palma
Linguistic type-frequency, how many different lexical types are used, has been examined in usage-based models of child language acquisition. In general, it has been shown that exposure to greater type-frequencies increases children's productive use of language and that language in turn bootstraps later development, including language and literacy. It is not currently known whether pediatric hearing loss impacts the type-frequency of those children's early communicative productions. In this study, we used a public database available via HomeBank [http://homebank.talkbank.org] to examine type-frequency in 53 cognitively intact children, 37 with mild to moderate hearing loss (HL) and 16 peers who were typically developing (TD). For each child, we analyzed 15 minutes of high volubility from a representative daylong recording collected in a natural family setting via an audio recorder worn by the child. Results indicate a main effect of sex favoring girls, but no main effect of HL. There were, however, interacti...
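Type-frequency itself is straightforward to compute from a transcript sample: count distinct word forms. The sketch below assumes a hypothetical plain-text transcript file; HomeBank transcripts are actually in CHAT format and would normally be read with a dedicated parser rather than a simple regular expression.

```python
# Sketch of computing lexical type frequency from a transcript sample
# (hypothetical file; not the study's processing pipeline).
import re
from collections import Counter

with open("child_sample.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-z']+", f.read().lower())

types = Counter(tokens)
print("tokens:", len(tokens), "types:", len(types),
      "type-token ratio:", round(len(types) / max(len(tokens), 1), 3))
```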