Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jennell Vick is active.

Publication


Featured research published by Jennell Vick.


Ear and Hearing | 2001

Language-specific, hearing-related changes in vowel spaces: a preliminary study of English- and Spanish-speaking cochlear implant users.

Joseph S. Perkell; William A. Numa; Jennell Vick; Harlan Lane; Thomas J. Balkany; John Gould

Objective: This study investigates the role of hearing in vowel productions of postlingually deafened cochlear implant users. Two hypotheses are tested that derive from the view that vowel production is influenced by competing demands of intelligibility for the listener and least effort in the speaker: 1) Hearing enables a cochlear implant user to produce vowels distinctly from one another; without hearing, the speaker may give more weight to economy of effort, leading to reduced vowel separation. 2) Speakers may need to produce vowels more distinctly from one another in a language with a relatively “crowded” vowel space, such as American English, than in a language with relatively few vowels, such as Spanish. Thus, when switching between hearing and non-hearing states, English speakers may show a tradeoff between vowel distinctiveness and least effort, whereas Spanish speakers may not.

Design: To test the prediction that there will be a reduction of average vowel spacing (AVS) (average intervowel distance in the F1–F2 plane) with interrupted hearing for English-speaking cochlear implant users, but no systematic change in AVS for Spanish cochlear implant users, vowel productions of seven English-speaking and seven Spanish-speaking cochlear implant users, who had been using their implants for at least 1 yr, were recorded when their implant speech processors were turned off and on several times in two sessions.

Results: AVS was consistently larger for the English speakers with hearing than without hearing. The magnitude and direction of AVS change was more variable for the Spanish speakers, both within and between subjects.

Conclusion: Vowel distinctiveness was enhanced with the provision of some hearing in the language group with a more crowded vowel space but not in the language group with fewer vowels. The view that speakers seek to minimize effort while maintaining the distinctiveness of acoustic goals receives some support.
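
As defined above, AVS is simply the mean pairwise distance between vowel categories in the F1–F2 plane. A minimal sketch of that computation, with invented placeholder formant values rather than the study's measurements:

```python
# Hypothetical sketch: average vowel spacing (AVS) as the mean pairwise
# Euclidean distance between vowel-category means in the F1-F2 plane.
# The formant values below are invented placeholders, not study data.
from itertools import combinations
import math

# Mean (F1, F2) in Hz per vowel category for one speaker (placeholders).
vowel_means = {
    "i": (280, 2250),
    "ae": (660, 1720),
    "a": (710, 1100),
    "u": (310, 870),
}

def avs(means: dict[str, tuple[float, float]]) -> float:
    """Mean Euclidean distance over all vowel pairs in the F1-F2 plane."""
    pairs = list(combinations(means.values(), 2))
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)

print(f"AVS = {avs(vowel_means):.1f} Hz")
```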


Ear and Hearing | 2001

Changes in speech intelligibility of postlingually deaf adults after cochlear implantation.

John Gould; Harlan Lane; Jennell Vick; Joseph S. Perkell; Melanie L. Matthies; Majid Zandipour

Objective: This study examines changes in the intelligibility of CVC words spoken by postlingually deafened adults after they have had 6 to 12 mo of experience with a cochlear implant. The hypothesis guiding the research is that the intelligibility of these speakers will improve after extended use of a cochlear implant. The paper also describes changes in CVC word intelligibility analyzed by phoneme class and by features.

Design: The speech of eight postlingually deaf adults was recorded before activation of the speech processors of their cochlear implants and at 6 mo and 1 yr after activation. Seventeen listeners with no known impairment of hearing completed a word identification task while listening to each implant user’s speech in noise. The percent information transmitted by the speakers in their pre- and postactivation recordings was measured for 11 English consonants and eight vowels separately.

Results: An overall improvement in word intelligibility was observed: seven of the eight speakers showed improvement in vowel intelligibility and six speakers showed improvement in consonant intelligibility. However, the intelligibility of specific consonant and vowel features varied greatly across speakers.

Conclusions: Extended use of a cochlear implant by postlingually deafened adults tends to enhance their intelligibility.
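
The percent-information-transmitted measure is conventionally computed from a stimulus-response confusion matrix as mutual information relative to stimulus entropy (Miller & Nicely, 1955). A hedged sketch using a toy confusion matrix, not the study's data:

```python
# Sketch of the percent-information-transmitted measure from a phoneme
# confusion matrix; the matrix is an invented toy example, not study data.
import numpy as np

def percent_info_transmitted(confusions: np.ndarray) -> float:
    """Mutual information between stimulus and response, as a percentage
    of stimulus entropy. Rows = stimuli, columns = responses."""
    p = confusions / confusions.sum()          # joint probabilities
    px = p.sum(axis=1, keepdims=True)          # stimulus marginals
    py = p.sum(axis=0, keepdims=True)          # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    mutual_info = terms.sum()
    stim_entropy = -(px * np.log2(px)).sum()
    return 100.0 * mutual_info / stim_entropy

# Toy 3-consonant confusion matrix (counts of listener responses).
toy = np.array([[40, 8, 2],
                [6, 42, 2],
                [3, 5, 42]])
print(f"{percent_info_transmitted(toy):.1f}% information transmitted")
```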


IEEE Transactions on Visualization and Computer Graphics | 2013

Physics-Based Deformable Tongue Visualization

Yin Yang; Xiaohu Guo; Jennell Vick; Luis G. Torres; Thomas F. Campbell

In this paper, a physics-based framework is presented to visualize the human tongue deformation. The tongue is modeled with the Finite Element Method (FEM) and driven by the motion capture data gathered during speech production. Several novel deformation visualization techniques are presented for in-depth data analysis and exploration. To reveal the hidden semantic information of the tongue deformation, we present a novel physics-based volume segmentation algorithm. This is accomplished by decomposing the tongue model into segments based on its deformation pattern with the computation of deformation subspaces and fitting the target deformation locally at each segment. In addition, the strain energy is utilized to provide an intuitive low-dimensional visualization for the high-dimensional sequential motion. Energy-interpolation-based morphing is also equipped to effectively highlight the subtle differences of the 3D deformed shapes without any visual occlusion. Our experimental results and analysis demonstrate the effectiveness of this framework. The proposed methods, though originally designed for the exploration of the tongue deformation, are also valid for general deformation analysis of other shapes.
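
As an illustration of the kind of per-element strain energy such a framework can accumulate, here is a minimal sketch for a single tetrahedral finite element under the St. Venant-Kirchhoff material model; this is our own simplified example, not the authors' implementation, and the material constants are arbitrary:

```python
# Illustrative sketch (not the paper's code): strain energy of one
# tetrahedral element, the kind of scalar that can summarize deformation
# frame by frame for low-dimensional visualization.
import numpy as np

def tet_strain_energy(rest: np.ndarray, deformed: np.ndarray,
                      lam: float = 1e4, mu: float = 1e4) -> float:
    """rest, deformed: 4x3 arrays of tetrahedron vertex positions."""
    Dm = (rest[1:] - rest[0]).T          # rest-shape edge matrix
    Ds = (deformed[1:] - deformed[0]).T  # deformed edge matrix
    F = Ds @ np.linalg.inv(Dm)           # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(3))      # Green strain tensor
    psi = mu * np.sum(E * E) + 0.5 * lam * np.trace(E) ** 2  # energy density
    volume = abs(np.linalg.det(Dm)) / 6.0
    return psi * volume

rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
deformed = rest.copy()
deformed[3, 2] = 1.2                     # stretch the apex vertex
print(f"strain energy: {tet_strain_energy(rest, deformed):.2f}")
```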


Journal of Speech Language and Hearing Research | 2014

Data-Driven Subclassification of Speech Sound Disorders in Preschool Children

Jennell Vick; Thomas F. Campbell; Lawrence D. Shriberg; Jordan R. Green; Klaus Truemper; Heather Leavy Rusiewicz; Christopher A. Moore

Purpose: The purpose of the study was to determine whether distinct subgroups of preschool children with speech sound disorders (SSD) could be identified using a subgroup discovery algorithm (SUBgroup discovery via Alternate Random Processes, or SUBARP). Of specific interest was finding evidence of a subgroup of SSD exhibiting performance consistent with atypical speech motor control.

Method: Ninety-seven preschool children with SSD completed speech and nonspeech tasks. Fifty-three kinematic, acoustic, and behavioral measures from these tasks were input to SUBARP.

Results: Two distinct subgroups were identified from the larger sample. The 1st subgroup (76%; population prevalence estimate = 67.8%-84.8%) did not have characteristics that would suggest atypical speech motor control. The 2nd subgroup (10.3%; population prevalence estimate = 4.3%-16.5%) exhibited significantly higher variability in measures of articulatory kinematics and poor ability to imitate iambic lexical stress, suggesting atypical speech motor control. Both subgroups were consistent with classes of SSD in the Speech Disorders Classification System (SDCS; Shriberg et al., 2010a).

Conclusion: Characteristics of children in the larger subgroup were consistent with the proportionally large SDCS class termed speech delay; characteristics of children in the smaller subgroup were consistent with the SDCS subtype termed motor speech disorder-not otherwise specified. The authors identified candidate measures to identify children in each of these groups.
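
SUBARP's internals are not reproduced here. As a generic illustration of the kind of score a subgroup-discovery algorithm can optimize, weighted relative accuracy (WRAcc) rewards candidate subgroups that are both large and enriched for a target property; the counts below are invented, not the study's:

```python
# Generic illustration only (not SUBARP itself): weighted relative
# accuracy scores a subgroup as coverage times the lift in target rate.
def wracc(n_total: int, n_target: int, n_sub: int, n_sub_target: int) -> float:
    """WRAcc = P(subgroup) * (P(target | subgroup) - P(target))."""
    coverage = n_sub / n_total
    lift = n_sub_target / n_sub - n_target / n_total
    return coverage * lift

# Invented counts: 97 children, 12 flagged overall for a target property;
# a candidate subgroup of 10 children contains 8 of them.
print(f"WRAcc = {wracc(97, 12, 10, 8):.3f}")
```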


Journal of Neurophysiology | 2012

Distinct developmental profiles in typical speech acquisition

Jennell Vick; Thomas F. Campbell; Lawrence D. Shriberg; Jordan R. Green; Hervé Abdi; Heather Leavy Rusiewicz; Lakshmi Venkatesh; Christopher A. Moore

Three- to five-year-old children produce speech that is characterized by a high level of variability within and across individuals. This variability, which is manifest in speech movements, acoustics, and overt behaviors, can be input to subgroup discovery methods to identify cohesive subgroups of speakers or to reveal distinct developmental pathways or profiles. This investigation characterized three distinct groups of typically developing children and provided normative benchmarks for speech development. These speech development profiles, identified among 63 typically developing preschool-aged speakers (ages 36-59 mo), were derived from the children's performance on multiple measures. The profiles were obtained by submitting to a k-means cluster analysis 72 measures spanning three levels of speech analysis: behavioral (e.g., task accuracy, percentage of consonants correct), acoustic (e.g., syllable duration, syllable stress), and kinematic (e.g., variability of movements of the upper lip, lower lip, and jaw). Two of the discovered group profiles were distinguished by measures of variability but not by phonemic accuracy; the third group of children was characterized by relatively low phonemic accuracy but not by an increase in measures of variability. Analyses revealed that, of the original 72 measures, 8 key measures were sufficient to best distinguish the 3 profile groups.
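
A minimal sketch of the analysis style described above, assuming scikit-learn: standardize a speakers-by-measures matrix and partition it with k-means at k = 3. Random data stand in for the 63 x 72 matrix, which is not reproduced here:

```python
# Sketch: z-score a measures matrix and run k-means with k = 3.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(63, 72))            # placeholder: speakers x measures

X_z = StandardScaler().fit_transform(X)  # put measures on a common scale
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_z)
print("profile sizes:", np.bincount(km.labels_))
```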


Journal of the Acoustical Society of America | 2017

Measuring progress during practice: Motion analysis throughout visual biofeedback treatment for residual speech sound errors

Rebecca Mental; Holle Carey; Gregory S. Lee; Michael J. Hodge; Jennell Vick

The question of why some individuals make progress in speech therapy while others do not remains largely unanswered. Treatment delivery method could be a factor; individuals whose speech sounds have not improved through traditional therapy may be more responsive to alternative forms of treatment, such as visual biofeedback. The present study utilized visual biofeedback in the form of Opti-Speech, which uses real-time, three-dimensional streaming data from the Wave EMA system to create an avatar of a participant’s tongue. Participants included two adult females with residual /r/ errors. One participant demonstrated marked improvement during and after treatment, while the other exhibited little perceptual change. It is possible that a more flexible motor system (i.e., one that shows more variability as a new skill is being learned) is more conducive to the acquisition of new speech sound movements than a more rigid system. Kinematic data were analyzed from each session, including duration, maximum displacem...
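
Two of the kinematic measures named above, duration and maximum displacement, reduce to simple operations on a sensor trajectory. A sketch on a synthetic 3D trace, with an assumed 100 Hz sampling rate (our assumption, not a specification from the study):

```python
# Sketch: duration and maximum displacement from one sensor's 3D trajectory.
import numpy as np

fs = 100.0                                     # samples per second (assumed)
t = np.arange(0, 0.5, 1 / fs)
traj = np.column_stack([np.sin(2 * np.pi * t),  # toy x, y, z in mm
                        np.zeros_like(t),
                        0.5 * t])

duration_s = len(traj) / fs
displacement = np.linalg.norm(traj - traj[0], axis=1)  # distance from start
print(f"duration: {duration_s:.2f} s, "
      f"max displacement: {displacement.max():.2f} mm")
```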


Journal of the Acoustical Society of America | 2017

Seeing is treating: 3D electromagnetic midsagittal articulography (EMA) visual biofeedback for the remediation of residual speech errors

Jennell Vick; Rebecca Mental; Holle Carey; Gregory S. Lee

Production distortions or errors on the sounds /s/ and /r/ are among the most resistant to remediation with traditional speech therapies, even after years of weekly treatment sessions (e.g., Gibbon et al., 1996; McAuliffe & Cornwell, 2008; McLeod, Roberts, & Sita, 2006). In this study, we report on the results of treating residual speech errors in older children and adults with a new visual biofeedback treatment called Opti-Speech. Opti-Speech uses streaming positional data from the Wave EMA device to animate real-time motion of a tongue avatar on a screen. Both the clinician and the client can visualize movements as they occur relative to target shapes, set by the clinician, intended to guide the client to produce distortion-free and accurate speech sounds. Analyses of positional data and associated kinematics were completed during baseline, treatment, and follow-up phases for four participants, two who produced pre-treatment residual errors on /s/, and two with residual errors on /r/. Measures included ...
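
The paper does not specify the target geometry; assuming, purely for illustration, a spherical target around a clinician-set point, a per-frame "hit" test is a single distance comparison:

```python
# Sketch of the virtual-target idea under our own assumption that a target
# is a sphere: a frame hits the target when the sensor lies inside it.
import numpy as np

target_center = np.array([10.0, -4.0, 22.0])  # hypothetical mm coordinates
target_radius = 3.0                           # hypothetical tolerance, mm

def in_target(sensor_xyz: np.ndarray) -> bool:
    return np.linalg.norm(sensor_xyz - target_center) <= target_radius

print(in_target(np.array([11.0, -3.0, 21.0])))  # True: within 3 mm
print(in_target(np.array([0.0, 0.0, 0.0])))     # False
```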


Journal of the Acoustical Society of America | 2016

Plasticity of the internal model for speech sounds: Articulatory changes during intensive visual biofeedback treatment

Jennell Vick; Rebecca Mental; Holle Carey; Nolan Schreiber; Andrew Barnes; Gregory S. Lee

Visual biofeedback, commonly delivered by a display of ultrasound of tongue position, has demonstrated effectiveness for treating residual speech errors in older children and adults. It can be challenging to make kinematic measures during treatment without the use of a head stabilizing system, however, and it is likewise not possible to measure the changes in speech motor control that accompany improvements in speech sound production. Opti-Speech, a visual biofeedback treatment software that uses EMA to provide positional data, has the benefit of providing a steady stream of position data throughout baseline and treatment sessions. In this study, Opti-Speech treatment was provided with an intensive schedule (2x/day for 5 days) to two adolescent males with persistent speech errors (i.e., lateralized /s/). Marked improvements in /s/ accuracy and quality were noted over the course of treatment and are reported in another paper (Mental et al., this meeting). Kinematic measures were made of tongue position thr...
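
One variability metric common in the speech kinematics literature (though not necessarily the one used in this study) is the spatiotemporal index (STI; Smith et al., 1995): repetitions of an utterance are amplitude- and time-normalized, and the pointwise standard deviations across them are summed. A sketch:

```python
# Sketch of the spatiotemporal index on synthetic repetitions; lower STI
# means more stable, less variable movement across repetitions.
import numpy as np

def sti(trials: list[np.ndarray], n_points: int = 50) -> float:
    norm = []
    for y in trials:
        z = (y - y.mean()) / y.std()                     # amplitude-normalize
        x_old = np.linspace(0, 1, len(z))                # time-normalize
        norm.append(np.interp(np.linspace(0, 1, n_points), x_old, z))
    return float(np.std(np.vstack(norm), axis=0).sum())  # summed pointwise SDs

rng = np.random.default_rng(1)
reps = []
for _ in range(10):
    n = int(rng.integers(80, 120))                       # varying durations
    reps.append(np.sin(np.linspace(0, np.pi, n)) + rng.normal(0, 0.05, n))
print(f"STI = {sti(reps):.2f}")
```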


Journal of the Acoustical Society of America | 2016

Changes in lateralized /s/ production after treatment with opti-speech visual biofeedback

Rebecca Mental; Holle Carey; Nolan Schreiber; Andrew Barnes; Gregory S. Lee; Jennell Vick

Opti-Speech is visual biofeedback software for the treatment of speech sound placement errors. It utilizes the Northern Digital Wave platform for electromagnetic articulography. Opti-Speech tracks the movement of five sensors on a talker’s tongue to animate a 3D tongue avatar that moves in real time with the talker’s own tongue. The avatar is viewable from multiple angles, and virtual targets can be created to guide the participant’s speech movements. Opti-Speech has found initial success in small feasibility studies (Katz and Mehta, 2015; Vick et al., 2016). The two participants described in this study are part of a larger-scale clinical trial to evaluate efficacy. Both adolescent males presented with the lateralized /s/ speech error. Their intensive biofeedback treatment schedule included two one-hour sessions per day over five sequential days. Data analyses include acoustic measures (spectral mean and kurtosis of the /s/) and trained perceptual judgments of speech sound accuracy on a 16-item probe list f...
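
Spectral mean and kurtosis are the first and (standardized) fourth moments of the spectrum treated as a probability distribution over frequency. A sketch on a synthetic noise frame standing in for an /s/ segment:

```python
# Sketch: spectral mean and kurtosis of a frame, computed from a synthetic
# noise burst rather than real speech.
import numpy as np

fs = 22050
rng = np.random.default_rng(2)
frame = rng.normal(size=2048)                  # stand-in for an /s/ frame

spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
freqs = np.fft.rfftfreq(len(frame), d=1 / fs)
p = spectrum / spectrum.sum()                  # treat spectrum as a pmf

mean = (freqs * p).sum()                       # spectral mean, Hz
var = ((freqs - mean) ** 2 * p).sum()
kurt = ((freqs - mean) ** 4 * p).sum() / var**2  # spectral kurtosis
print(f"spectral mean = {mean:.0f} Hz, kurtosis = {kurt:.2f}")
```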


Journal of the Acoustical Society of America | 2016

MATLAB visualization program for analysis of C3D motion capture data of speech and language

Andrew Barnes; Rebecca Mental; Brooke Macnamara; Jennell Vick

Investigation of the underlying physiology of speech and language motor control often includes analyses of kinematic parameters derived from fleshpoint tracking. A MATLAB program has been developed to visualize, boundary-mark, and analyze aspects of motion capture data in the C3D file format. C3D is the biomechanics data standard for binary 3D data that is widely used in many industries and in motion research. Included in the format are marker names, arrays of position data, and analog inputs. While this format is highly versatile, it is difficult to compare marker positions without a visualization program. The developed program allows for flexible comparison of any fleshpoint markers, across any set of repetitions. It also allows for easy visualization of the differences between varying data conditions. Data analysis is done on the fly for information such as joint angles formed by three markers, the distance between any two markers, and variability analyses of any selected data. Data are output in...
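
The program itself is MATLAB; purely for illustration, here is how two of those on-the-fly analyses, the angle formed at a middle marker and the distance between two markers, look in Python on placeholder coordinates:

```python
# Sketch of two marker analyses on invented coordinates (not program code).
import numpy as np

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two markers."""
    return float(np.linalg.norm(a - b))

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at marker b (degrees) formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

a, b, c = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([1., 1, 0])
print(distance(a, b))        # 1.0
print(joint_angle(a, b, c))  # 90.0
```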

Collaboration


Dive into Jennell Vick's collaborations.

Top Co-Authors

Joseph S. Perkell
Massachusetts Institute of Technology

Majid Zandipour
Massachusetts Institute of Technology

Melanie L. Matthies
Massachusetts Institute of Technology

Mark Tiede
Massachusetts Institute of Technology

Harlan Lane
Northeastern University

Ellen Stockmann
Massachusetts Institute of Technology

Margaret Denny
Massachusetts Institute of Technology