Publication


Featured research published by Ludo Max.


Journal of Fluency Disorders | 2010

The role of motor learning in stuttering adaptation: Repeated versus novel utterances in a practice–retention paradigm

Ludo Max; Caitlin J. Baldwin

Most individuals who stutter become more fluent during repeated oral readings of the same material. This adaptation effect may reflect motor learning associated with repeated practice of speech motor sequences. We tested this hypothesis with a paradigm that used two integrated approaches to identify the role of motor learning in stuttering adaptation: to distinguish practice effects from situation effects, the texts contained both repeated and novel sentences; to differentiate learning effects from temporary performance effects, stuttering frequency was determined for both the initial adaptation readings and retention tests after 2 h and 24 h. Average group data for 7 stuttering individuals who showed adaptation indicate that (a) both repeated and novel sentences resulted in decreased stuttering frequency across five readings in the initial session, but the decrease was larger for repeated than for novel sentences; (b) after 2 h, stuttering frequency for both types of sentences was again similar, but with additional readings the repeated sentences once again showed larger improvements in fluency; (c) after 24 h, prior fluency improvements for the novel sentences had dissipated whereas retention was observed for the repeated sentences. These findings - supporting the hypothesis that motor learning plays a role in stuttering adaptation - were representative for most, but not all, individual subjects. Subjects whose data did not follow the group trend and showed comparable retention for repeated and novel sentences may adapt primarily on the basis of non-motor mechanisms. Alternatively, those subjects may in fact show more substantial generalization of motor learning effects to previously unpracticed movement sequences. EDUCATIONAL OBJECTIVES After reading this article, the reader will be able to: (1) summarize previous research on stuttering adaptation; (2) define motor learning and describe its essential characteristics; and (3) discuss why the results from this and previous studies suggest that stuttering adaptation may be a result of motor learning.


Journal of Speech Language and Hearing Research | 2014

Control and Prediction Components of Movement Planning in Stuttering Versus Nonstuttering Adults

Ayoub Daliri; Roman A. Prokopenko; J. Randall Flanagan; Ludo Max

PURPOSE Stuttering individuals show speech and nonspeech sensorimotor deficiencies. To perform accurate movements, the sensorimotor system needs to generate appropriate control signals and correctly predict their sensory consequences. Using a reaching task, we examined the integrity of these control and prediction components separately for movements unrelated to the speech motor system. METHOD Nine stuttering and nine nonstuttering adults made fast reaching movements to visual targets while sliding an object under the index finger. To quantify control, we determined initial direction error and end point error. To quantify prediction, we calculated the correlation between vertical and horizontal forces applied to the object, an index of how well vertical force (preventing slip) anticipated direction-dependent variations in horizontal force (moving the object). RESULTS Directional and end point error were significantly larger for the stuttering group. Both groups performed similarly in scaling vertical force with horizontal force. CONCLUSIONS The stuttering group's reduced reaching accuracy suggests limitations in generating control signals for voluntary movements, even for nonorofacial effectors. Typical scaling of vertical force with horizontal force suggests an intact ability to predict the consequences of planned control signals. Stuttering may be associated with generalized deficiencies in planning control signals rather than predicting the consequences of those signals.
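
For readers who want the prediction index made concrete, a minimal sketch follows (illustrative NumPy code with hypothetical variable names, not the authors' analysis scripts): the index is simply the correlation between simultaneously sampled vertical and horizontal force traces.

    import numpy as np

    def prediction_index(vertical_force, horizontal_force):
        """Pearson correlation between the vertical (slip-preventing) and
        horizontal (object-moving) force traces; a higher value means the
        vertical force anticipated the horizontal force demands more closely."""
        v = np.asarray(vertical_force, dtype=float)
        h = np.asarray(horizontal_force, dtype=float)
        return np.corrcoef(v, h)[0, 1]

    # Synthetic illustration only: a vertical force that tracks the horizontal profile.
    t = np.linspace(0.0, 1.0, 500)
    horizontal = np.abs(np.sin(2.0 * np.pi * t))
    vertical = 0.8 * horizontal + 0.05 * np.random.randn(500)
    print(prediction_index(vertical, horizontal))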


Neuroscience Letters | 2015

Feedback delays eliminate auditory-motor learning in speech production

Ludo Max; Derek G. Maffett

Neurologically healthy individuals use sensory feedback to alter future movements by updating internal models of the effector system and environment. For example, when visual feedback about limb movements or auditory feedback about speech movements is experimentally perturbed, the planning of subsequent movements is adjusted - i.e., sensorimotor adaptation occurs. A separate line of studies has demonstrated that experimentally delaying the sensory consequences of limb movements causes the sensory input to be attributed to external sources rather than to one's own actions. Yet similar feedback delays have remarkably little effect on visuo-motor adaptation (although the rate of learning varies, the amount of adaptation is only moderately affected with delays of 100-200 ms, and adaptation still occurs even with a delay as long as 5000 ms). Thus, limb motor learning remains largely intact even in conditions where error assignment favors external factors. Here, we show a fundamentally different result for sensorimotor control of speech articulation: auditory-motor adaptation to formant-shifted feedback is completely eliminated with delays of 100 ms or more. Thus, for speech motor learning, real-time auditory feedback is critical. This novel finding informs theoretical models of human motor control in general and speech motor control in particular, and it has direct implications for the application of motor learning principles in the habilitation and rehabilitation of individuals with various sensorimotor speech disorders.
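
To make concrete what delaying auditory feedback by 100 ms involves in a real-time system, the toy delay-line sketch below buffers incoming audio blocks and releases them 100 ms later (block size and sample rate are illustrative assumptions, not the equipment or software used in the study).

    import numpy as np

    def make_delay_line(delay_ms, sample_rate):
        """Return a function that delays successive audio blocks by delay_ms."""
        n_delay = int(round(delay_ms * sample_rate / 1000.0))  # 100 ms -> 4410 samples at 44.1 kHz
        buffer = np.zeros(n_delay)

        def process(block):
            nonlocal buffer
            combined = np.concatenate([buffer, block])
            out = combined[:len(block)]       # samples that are now delay_ms old
            buffer = combined[len(block):]    # keep the remainder for the next call
            return out

        return process

    # Feed microphone blocks in, get the delayed signal out (synthetic example).
    delay = make_delay_line(delay_ms=100, sample_rate=44100)
    delayed_block = delay(np.random.randn(512))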


Journal of Speech Language and Hearing Research | 2014

Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking During Speech and Nonspeech Motor Tasks

Yongqiang Feng; Ludo Max

PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes.
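
As a rough illustration of how such accuracy and precision values are computed, the sketch below treats accuracy as the root-mean-square of the per-sample Euclidean error between tracked and reference marker positions, and precision as the standard deviation of those errors (the data layout and function name are assumptions, not the authors' code).

    import numpy as np

    def tracking_accuracy_precision(tracked, reference):
        """tracked, reference: (n_samples, n_dims) marker coordinates in mm.
        Returns (RMSE, SD) of the per-sample Euclidean error."""
        tracked = np.asarray(tracked, dtype=float)
        reference = np.asarray(reference, dtype=float)
        errors = np.linalg.norm(tracked - reference, axis=1)  # per-sample error (mm)
        rmse = np.sqrt(np.mean(errors ** 2))   # accuracy
        sd = np.std(errors, ddof=1)            # precision
        return rmse, sd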


Frontiers in Human Neuroscience | 2016

Modulation of Auditory Responses to Speech vs. Nonspeech Stimuli during Speech Movement Planning

Ayoub Daliri; Ludo Max

Previously, we showed that the N100 amplitude in long latency auditory evoked potentials (LLAEPs) elicited by pure tone probe stimuli is modulated when the stimuli are delivered during speech movement planning as compared with no-speaking control conditions. Given that we probed the auditory system only with pure tones, it remained unknown whether the nature and magnitude of this pre-speech auditory modulation depends on the type of auditory stimulus. Thus, here, we asked whether the effect of speech movement planning on auditory processing varies depending on the type of auditory stimulus. In an experiment with nine adult subjects, we recorded LLAEPs that were elicited by either pure tones or speech syllables when these stimuli were presented prior to speech onset in a delayed-response speaking condition vs. a silent reading control condition. Results showed no statistically significant difference in pre-speech modulation of the N100 amplitude (early stages of auditory processing) for the speech stimuli as compared with the nonspeech stimuli. However, the amplitude of the P200 component (later stages of auditory processing) showed a statistically significant pre-speech modulation that was specific to the speech stimuli only. Hence, the overall results from this study indicate that, immediately prior to speech onset, modulation of the auditory system has a general effect on early processing stages but a speech-specific effect on later processing stages. This finding is consistent with the hypothesis that pre-speech auditory modulation may play a role in priming the auditory system for its role in monitoring auditory feedback during speech production.
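
For readers unfamiliar with ERP analysis, the sketch below shows one common way to quantify a component amplitude such as the N100 or P200: average the trials and take the largest deflection within a latency window (the windows in the comments are conventional ranges assumed for illustration, not the values used in the study).

    import numpy as np

    def component_amplitude(epochs, times, window):
        """epochs: (n_trials, n_samples) baseline-corrected EEG at one electrode;
        times: (n_samples,) in seconds; window: (start, end) in seconds.
        Returns the peak amplitude of the averaged waveform within the window."""
        erp = np.mean(epochs, axis=0)                       # average across trials
        mask = (times >= window[0]) & (times <= window[1])
        idx = np.argmax(np.abs(erp[mask]))                  # largest deflection in the window
        return erp[mask][idx]

    # Conventional windows (assumptions): N100 ~ 0.08-0.12 s, P200 ~ 0.15-0.25 s
    # n100 = component_amplitude(epochs, times, (0.08, 0.12))
    # p200 = component_amplitude(epochs, times, (0.15, 0.25))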


Speech Communication | 2011

Detecting anticipatory effects in speech articulation by means of spectral coefficient analyses

Yongqiang Feng; Grace Hao; Steve An Xue; Ludo Max

Few acoustic studies have attempted to examine anticipatory effects in the earliest part of the release of stop consonants. We investigated the ability of spectral coefficients to reveal anticipatory coarticulation in the burst and early aspiration of stops in monosyllables. Twenty American English speakers produced stop (/k,t,p/) - vowel (/æ,i,o/) - stop (/k,t,p/) sequences in two phrase positions. The first four spectral coefficients (mean, standard deviation, skewness, kurtosis) were calculated for one window centered on the burst of the onset consonant and two subsequent, non-overlapping windows. All coefficients showed an influence of vowel-to-consonant anticipatory coarticulation. Which onset consonant showed the strongest vowel effect depended on the specific coefficient under consideration. A context-dependent consonant-to-consonant anticipatory effect was observed for onset /p/. Findings demonstrate that spectral coefficients can reveal subtle anticipatory adjustments as early as the burst of stop consonants. Different results for the four coefficients suggest that comprehensive spectral analyses offer advantages over other approaches. Studies using these techniques may expose previously unobserved articulatory adjustments among phonetic contexts or speaker populations.
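
The four spectral coefficients referred to here are the first four moments of a short-time power spectrum treated as a distribution over frequency. A minimal sketch of that computation follows; the windowing and preprocessing choices are simplifying assumptions rather than the exact procedure used in the study.

    import numpy as np

    def spectral_moments(segment, sample_rate):
        """First four spectral moments (mean, SD, skewness, kurtosis) of a short
        windowed segment, using its power spectrum as a distribution over frequency."""
        segment = np.asarray(segment, dtype=float) * np.hamming(len(segment))
        spectrum = np.abs(np.fft.rfft(segment)) ** 2
        freqs = np.fft.rfftfreq(len(segment), d=1.0 / sample_rate)
        p = spectrum / np.sum(spectrum)                        # normalize to a distribution
        mean = np.sum(freqs * p)                               # 1st moment: spectral mean
        sd = np.sqrt(np.sum(((freqs - mean) ** 2) * p))        # 2nd: spectral SD
        skew = np.sum(((freqs - mean) ** 3) * p) / sd ** 3     # 3rd: skewness
        kurt = np.sum(((freqs - mean) ** 4) * p) / sd ** 4     # 4th: kurtosis
        return mean, sd, skew, kurt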


Language and Speech | 2018

Spectral Coefficient Analyses of Word-Initial Stop Consonant Productions Suggest Similar Anticipatory Coarticulation for Stuttering and Nonstuttering Adults

Santosh Maruthy; Yongqiang Feng; Ludo Max

A longstanding hypothesis about the sensorimotor mechanisms underlying stuttering suggests that stuttered speech dysfluencies result from a lack of coarticulation. Formant-based measures of either the stuttered or fluent speech of children and adults who stutter have generally failed to obtain compelling evidence in support of the hypothesis that these individuals differ in the timing or degree of coarticulation. Here, we used a sensitive acoustic technique, spectral coefficient analyses, that allowed us to compare stuttering and nonstuttering speakers with regard to vowel-dependent anticipatory influences as early as the onset burst of a preceding voiceless stop consonant. Eight adults who stutter and eight matched adults who do not stutter produced C1VC2 words, and the first four spectral coefficients were calculated for one analysis window centered on the burst of C1 and two subsequent windows covering the beginning of the aspiration phase. Findings confirmed that the combined use of four spectral coefficients is an effective method for detecting the anticipatory influence of a vowel on the initial burst of a preceding voiceless stop consonant. However, the observed patterns of anticipatory coarticulation showed no statistically significant differences, or trends toward such differences, between the stuttering and nonstuttering groups. Combining the present results for fluent speech in one given phonetic context with prior findings from both stuttered and fluent speech in a variety of other contexts, we conclude that there is currently no support for the hypothesis that the fluent speech of individuals who stutter is characterized by limited coarticulation.


Language, cognition and neuroscience | 2018

Adaptation in Mandarin tone production with pitch-shifted auditory feedback: influence of tonal contrast requirements

Yongqiang Feng; Yan Xiao; Yonghong Yan; Ludo Max

We investigated Mandarin speakers’ control of lexical tone production with F0-perturbed auditory feedback. Subjects produced high level (T1), mid rising (T2), low dipping (T3), and high falling (T4) tones in conditions with (a) no perturbation, (b) T1 shifted down, (c) T1 shifted down and T3 shifted up, or (d) T1 shifted down and T3 shifted up but without producing other tones. Speakers and new subjects also completed a tone identification task with unaltered and F0-perturbed productions. With only T1 perturbed down, speakers adapted by raising F0 relative to the no-perturbation condition. With simultaneous T1 down and T3 up perturbations, no T1 adaptation occurred, and T3 adaptation occurred only if T2 was also produced. Identification accuracy with stimuli representing adapted productions was comparable to baseline, but with simulated non-adapted productions it was reduced for T2 and T3. Thus, Mandarin speakers’ adaptation to F0 perturbations is linguistically constrained and serves to maintain tone contrast.


Brain and Language | 2013

Computational modeling of stuttering caused by impairments in a basal ganglia thalamo-cortical circuit involved in syllable selection and initiation

Oren Civier; Daniel Bullock; Ludo Max; Frank H. Guenther


Brain and Language | 2015

Modulation of auditory processing during speech movement planning is limited in adults who stutter

Ayoub Daliri; Ludo Max

Collaboration


Dive into Ludo Max's collaborations.

Top Co-Authors

Ayoub Daliri
University of Washington

Yongqiang Feng
Chinese Academy of Sciences

Yan Xiao
Beijing Jiaotong University

Yonghong Yan
Chinese Academy of Sciences