
Publication


Featured research published by Daniel Hudock.


International Journal of Language & Communication Disorders | 2010

Stuttering inhibition via visual feedback at normal and fast speech rates

Daniel Hudock; Vikram N. Dayalu; Tim Saltuklaroglu; Andrew Stuart; Jianliang Zhang; Joseph Kalinowski

BACKGROUND: Immediate and drastic reductions in stuttering are found when speech is produced in conjunction with a variety of second signals (for example, auditory choral speech and its permutations, and delayed auditory feedback). Initially, researchers suggested a decreased speech rate as a plausible explanation for the reduction in stuttering when people who stutter produced speech under second signals. However, this explanation was refuted by research findings that demonstrated reductions in stuttering at both normal and fast speech rates under second signals. Recent studies have also demonstrated significant reductions in stuttering from second signals delivered via the visual modality. However, the question of whether stuttering can be substantially reduced at normal and fast speech rates under visual speech feedback conditions had yet to be answered.

AIMS: The current study investigated stuttering frequency reduction at normal and fast speech rates across different visual speech feedback conditions relative to a no-visual feedback condition.

METHODS & PROCEDURES: Ten adults who stutter recited memorized tokens of eight to 13 syllables under five visual speech feedback conditions at both normal and fast speech rates. Visual speech feedback conditions consisted of participants viewing the lower portion of their face (that is, lips, jaw, and base of the nose) on a monitor as they produced the utterances. Conditions consisted of (1) no visual feedback, (2) 0-ms (simultaneous) visual speech feedback, (3) a 50-ms delay, (4) a 200-ms delay, and (5) a 400-ms delay.

OUTCOMES & RESULTS: A significant main effect of visual speech feedback on stuttering frequency was found (p = 0.001), with no significant main effect of speech rate and no interaction between speech rate and visual speech feedback. Relative to the no-visual feedback condition, the feedback conditions produced reductions in stuttering ranging from 27% (0 ms) to 62% (400 ms). Post-hoc comparisons revealed that all of the delay conditions differed significantly from the simultaneous feedback (p = 0.017) and the no-visual feedback conditions (p = 0.0002), while no significant differences between delay conditions (that is, 50, 200, and 400 ms) were observed.

CONCLUSIONS & IMPLICATIONS: The current findings demonstrate the capability of visual speech feedback signals to reduce stuttering frequency independently of the speaker's rate of speech. Possible strategies are suggested to transfer these findings into naturalistic and clinical settings, though further research is warranted.


International Journal of Language & Communication Disorders | 2014

Stuttering inhibition via altered auditory feedback during scripted telephone conversations

Daniel Hudock; Joseph Kalinowski

BACKGROUND: Overt stuttering is inhibited by approximately 80% when people who stutter read aloud as they hear an altered form of their speech fed back to them. However, levels of stuttering inhibition vary from 60% to 100% depending on speaking situation and signal presentation. For example, binaural presentations of delayed auditory feedback (DAF) and frequency-altered feedback (FAF) have been shown to reduce stuttering by approximately 57% during scripted telephone conversations.

AIMS: To examine stuttering frequency under monaural auditory feedback with one combination of DAF with FAF (COMBO-2) and two combinations of DAF with FAF (COMBO-4) during scripted telephone conversations.

METHODS & PROCEDURES: Nine adult participants who stutter called 15 local businesses during scripted telephone conversations; each condition consisted of five randomized telephone calls. Conditions consisted of (1) baseline (i.e. non-altered feedback), (2) COMBO-2 (i.e. a 50-ms delay with a half-octave spectral shift up), and (3) COMBO-4 (i.e. a 200-ms delay and a half-octave spectral shift down in addition to the COMBO-2). Participants wore a supra-aural headset with a dynamic condenser microphone while holding a receiver to their contralateral ear when making telephone calls.

OUTCOMES & RESULTS: Stuttering was significantly reduced during both altered auditory feedback (AAF) conditions, by approximately 65%. Furthermore, a greater reduction in stuttering was revealed during the COMBO with four effects (74%) as compared with the COMBO with two effects (63%).

CONCLUSIONS & IMPLICATIONS: Results from the current study support prior research reporting decreased stuttering under AAF during scripted telephone conversations. The finding that stuttering was reduced to a significantly greater extent under the COMBO with four effects condition suggests that second signals reduce stuttering along a continuum. Additionally, the findings support prior research results of decreased stuttering frequency under AAF during hierarchically difficult speaking situations. A clinical application of these findings may be that people who stutter can use specific software or smartphone applications that produce second speech signals to inhibit stuttering effectively during telephone conversations.
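The altered-feedback conditions above combine a fixed time delay with a spectral (pitch) shift of the speaker's own voice. As a rough illustration of those two ingredients only (function names are hypothetical, and the naive resampling shift is a simplification; this is not the study's apparatus, which preserves signal duration):

```python
import numpy as np

def delayed_feedback(signal, fs, delay_ms=50):
    """Prepend silence so the played-back signal lags the live
    speech by delay_ms (the DAF component)."""
    pad = np.zeros(int(fs * delay_ms / 1000))
    return np.concatenate([pad, np.asarray(signal, dtype=float)])

def crude_spectral_shift(signal, factor):
    """Naive resampling pitch shift (the FAF component, simplified):
    factor = 2**0.5 raises pitch by half an octave, but also shortens
    the signal; real FAF processing keeps duration constant."""
    signal = np.asarray(signal, dtype=float)
    idx = np.arange(0, len(signal) - 1, factor)
    return np.interp(idx, np.arange(len(signal)), signal)
```

A COMBO-2-like chain, under these assumptions, would apply a 50-ms delay and a half-octave upward shift (`factor = 2**0.5`) to the microphone signal before playback.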


Frontiers in Psychology | 2014

Hearing impairment and audiovisual speech integration ability: a case study report

Nicholas Altieri; Daniel Hudock

Research in audiovisual speech perception has demonstrated that sensory factors such as auditory and visual acuity are associated with a listener's ability to extract and combine auditory and visual speech cues. This case study report examined audiovisual integration using a newly developed measure of capacity in a sample of hearing-impaired listeners. Capacity assessments are unique because they examine the contribution of reaction time (RT) as well as accuracy to determine the extent to which a listener efficiently combines auditory and visual speech cues relative to independent race model predictions. Multisensory speech integration ability was examined in two experiments: an open-set sentence recognition study and a closed-set speeded-word recognition study that measured capacity. Most germane to our approach, capacity illustrated speed-accuracy tradeoffs that may be predicted by audiometric configuration. Results revealed that some listeners benefit from increased accuracy but fail to benefit in terms of speed on audiovisual relative to unisensory trials. Conversely, other listeners may not benefit in the accuracy domain but instead show an audiovisual processing-time benefit.
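The capacity construct referenced here compares audiovisual reaction-time distributions against independent race model predictions. As a rough illustration of the general idea (not the authors' implementation; function names are hypothetical), the widely used Townsend and Nozawa capacity coefficient can be estimated from empirical survivor functions of RTs collected in audiovisual, auditory-only, and visual-only trials:

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    return float(np.mean(np.asarray(rts) > t))

def capacity_coefficient(rt_av, rt_a, rt_v, t):
    """Capacity coefficient at time t:
        C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)]
    C(t) = 1 matches the independent race model, C(t) > 1 suggests
    integration more efficient than the race model predicts, and
    C(t) < 1 suggests limited-capacity processing."""
    s_av, s_a, s_v = (survivor(x, t) for x in (rt_av, rt_a, rt_v))
    return np.log(s_av) / (np.log(s_a) + np.log(s_v))
```

In practice C(t) is evaluated over a grid of time points spanning the observed RT range, and the survivor estimates must stay strictly between 0 and 1 for the logarithms to be defined; this sketch omits those details.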


International Journal of Audiology | 2014

Assessing variability in audiovisual speech integration skills using capacity and accuracy measures

Nicholas Altieri; Daniel Hudock

Abstract Objective: While most normal-hearing listeners rely on the auditory modality to obtain speech information, research has demonstrated the importance that non-auditory modalities have for language recognition during face-to-face communication. The efficient utilization of the visual modality becomes increasingly important in difficult listening conditions, and especially for older and hearing-impaired listeners with sensory or cognitive decline. First, this report will quantify audiovisual integration skills using a recently developed capacity measure that incorporates speed and accuracy. Second, to investigate sensory factors contributing to integration ability, high- and low-frequency hearing thresholds will be correlated with capacity, as well as with gain measures from sentence recognition.

Design: Integration scores were obtained from a within-subjects design using an open-set sentence speech recognition experiment and a closed-set speeded-word classification experiment designed to examine integration (i.e. capacity).

Study sample: A sample of 44 adult listeners without a self-reported history of hearing loss was recruited.

Results: Results demonstrated a significant relationship between measures of audiovisual integration and hearing thresholds.

Conclusions: Our data indicated that a listener's ability to integrate auditory and visual speech information in the domains of speed and accuracy is associated with auditory sensory capabilities and possibly other sensory and cognitive factors.


International Journal of Audiology | 2016

Normative data on audiovisual speech integration using sentence recognition and capacity measures

Nicholas Altieri; Daniel Hudock

Abstract Objective: The ability to use visual speech cues and integrate them with auditory information is important, especially in noisy environments and for hearing-impaired (HI) listeners. Providing data on measures of integration skills that encompass accuracy and processing speed will benefit researchers and clinicians.

Design: The study consisted of two experiments: first, accuracy scores were obtained using City University of New York (CUNY) sentences, and capacity measures that assessed reaction-time distributions were obtained from a monosyllabic word recognition task.

Study sample: We report data on two measures of integration obtained from a sample of 86 young and middle-aged adult listeners.

Results: To summarize our results, capacity showed a positive correlation with accuracy measures of audiovisual benefit obtained from sentence recognition. More relevant, factor analysis indicated that a single-factor model captured audiovisual speech integration better than models containing more factors. Capacity exhibited strong loadings on the factor, while the accuracy-based measures from sentence recognition exhibited weaker loadings.

Conclusions: Results suggest that a listener's integration skills may be assessed optimally using a measure that incorporates both processing speed and accuracy.


Journal of Fluency Disorders | 2018

Phonological working memory in developmental stuttering: Potential insights from the neurobiology of language and cognition

Andrew L. Bowers; Lisa M. Bowers; Daniel Hudock; Heather L. Ramsdell-Hudock

The current review examines how neurobiological models of language and cognition could shed light on the role of phonological working memory (PWM) in developmental stuttering (DS). Toward that aim, we review Baddeley's influential multicomponent model of PWM and evidence for load-dependent differences between children and adults who stutter and typically fluent speakers in nonword repetition and dual-task paradigms. We suggest that, while nonword repetition and dual-task findings implicate processes related to PWM, it is unclear from behavioral studies alone what mechanisms are involved. To address how PWM could be related to speech output in DS, a third section reviews neurobiological models of language proposing that PWM is an emergent property of cyclic sensory and motor buffers in the dorsal stream critical for speech production. We propose that anomalous sensorimotor timing could potentially interrupt both fluent speech in DS and the emergent properties of PWM. To further address the role of attention and executive function in PWM and DS, we also review neurobiological models proposing that the prefrontal cortex (PFC) and basal ganglia (BG) function to facilitate working memory under distracting conditions, and neuroimaging evidence implicating the PFC and BG in stuttering. Finally, we argue that cognitive-behavioral differences in nonword repetition and dual-task paradigms are consistent with the involvement of neurocognitive networks related to executive function and sensorimotor integration in PWM. We suggest that progress in understanding the relationship between stuttering and PWM may be accomplished using high-temporal-resolution electromagnetic experimental approaches.


Speech Communication | 2015

The effect of single syllable silent reading and pantomime speech in varied syllable positions on stuttering frequency throughout utterance productions

Daniel Hudock; Nicholas Altieri; Lin Sun; Andrew L. Bowers; Christian Keil; Joseph Kalinowski

Highlights:
- Strategies used on single syllables increase fluency throughout an utterance.
- Speech rehearsal on initial, middle, and final syllables decreases stuttering.
- Strategies that increase neuromotor planning demonstrate increased effectiveness.
- Strategies used during initiations are more effective than in other positions.

Background: Stuttering is an overt speech disorder with the majority of disruptions occurring during phrase and sentence initiations. Recent theories and models of stuttering often describe deficits in neuromotor processes for planning motor speech movements, especially those involved during motor initiation. Interestingly, stuttering-like behaviors are reduced by nearly 100% during silent articulations (i.e., pantomime speech). If stuttering is primarily a deficit in neuromotor planning for speech actions, disfluent behaviors should be significantly reduced throughout an utterance when people who stutter employ silent reading or pantomime strategies on one syllable of an audibly produced utterance.

Aims and scope: The aim of the first study was to examine stuttering frequency during oral reading as participants who stutter produced the initial syllable under silent reading (SR), pantomime (P), and redacted (R) speech conditions. Similarly, the second study examined stuttering frequency during oral reading as a unique set of participants who stutter employed SR or P strategies on single-syllable productions in initial, middle, and final syllable positions of an utterance.

Methods and procedures: Two unique sets of participants who stutter audibly read sentences under baseline and experimental conditions. Experimental conditions for the first study consisted of (1) SR, (2) P, and (3) R on the initial syllable of audibly produced utterances. Experimental conditions for the second study consisted of participants performing SR and P on single syllables in initial, middle, and final syllable positions throughout an audibly produced utterance.

Results: Stuttering was significantly reduced during all experimental conditions in the first study, and all experimental conditions differed from one another, with P, SR, and R progressing from most to least effective. Results from the second study revealed differences from baseline for both SR and P and for both the initial and final syllable positions; the middle position did not differ significantly from the other positions, although the comparison approached statistical significance (p = 0.1). In the second experiment, post-hoc comparisons revealed that P in the initial position was the most effective and that P was significantly more effective than SR in both the initial and final positions, which supports the findings from the first study.

Conclusions and implications: Results from the current studies demonstrate a reduction of stuttering throughout an utterance when participants employed SR and P strategies on single syllables within an utterance. The greatest reduction in stuttering frequency occurred when silent motor plans were enacted and not just read or omitted (i.e., during P conditions). Supporting current feed-forward models of stuttering, strategies were most effective when employed in initial syllable positions. It can be hypothesized that behavioral strategies such as SR and P speech alter predictive neuromotor planning via feedback mechanisms or by enhancing the output gain of neuromotor planning regions.


International Journal of Language & Communication Disorders | 2010

Stuttered and fluent speakers' heart rate and skin conductance in response to fluent and stuttered speech

Jianliang Zhang; Joseph Kalinowski; Tim Saltuklaroglu; Daniel Hudock


Perspectives of the ASHA Special Interest Groups | 2018

A Mixed-Methods Observational Pilot Study of Student Clinicians Who Stutter

Daniel Hudock; Chad Yates; Linwood G. Vereen


International Journal for The Advancement of Counselling | 2018

The Phenomena of Collaborative Practice: the Impact of Interprofessional Education

Linwood G. Vereen; Chad Yates; Daniel Hudock; Nicole R. Hill; McKenzie Jemmett; Jody O’Donnell; Sarah Knudson

Collaboration


Top co-authors of Daniel Hudock:
Andrew Stuart

University of North Dakota


Chad Yates

Idaho State University

Linwood G. Vereen

Shippensburg University of Pennsylvania
