Publication


Featured research published by Tanner Sorensen.


arXiv: Methodology | 2016

Bayesian linear mixed models using Stan: A tutorial for psychologists, linguists, and cognitive scientists

Tanner Sorensen; Sven Hohenstein; Shravan Vasishth

With the arrival of the R packages nlme and lme4, linear mixed models (LMMs) have come to be widely used in experimentally-driven areas like psychology, linguistics, and cognitive science. This tutorial provides a practical introduction to fitting LMMs in a Bayesian framework using the probabilistic programming language Stan. We choose Stan (rather than WinBUGS or JAGS) because it provides an elegant and scalable framework for fitting models in most of the standard applications of LMMs. We ease the reader into fitting increasingly complex LMMs, first using a two-condition repeated measures self-paced reading study, followed by a more complex 2×2 repeated measures factorial design that can be generalized to much more complex designs.
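
For orientation, the class of models the tutorial works up to can be written down compactly. The following is a generic sketch of a varying-intercepts, varying-slopes LMM with crossed subject and item random effects (not a formula reproduced from the paper), for a response y from subject i on item j in condition x_{ij}:

y_{ij} = \alpha + u_{0i} + w_{0j} + (\beta + u_{1i} + w_{1j}) x_{ij} + \varepsilon_{ij},
(u_{0i}, u_{1i}) \sim \mathcal{N}(0, \Sigma_u), \quad (w_{0j}, w_{1j}) \sim \mathcal{N}(0, \Sigma_w), \quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2).

In the Bayesian setting, priors are additionally placed on \alpha, \beta, \sigma, \Sigma_u, and \Sigma_w.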


Journal of the Acoustical Society of America | 2017

Test–retest repeatability of human speech biomarkers from static and real-time dynamic magnetic resonance imaging

Johannes Töger; Tanner Sorensen; Krishna Somandepalli; Asterios Toutios; Sajan Goud Lingala; Shrikanth Narayanan; Krishna S. Nayak

Static anatomical and real-time dynamic magnetic resonance imaging (RT-MRI) of the upper airway is a valuable method for studying speech production in research and clinical settings. The test-retest repeatability of quantitative imaging biomarkers is an important parameter, since it limits the effect sizes and intragroup differences that can be studied. Therefore, this study aims to present a framework for determining the test-retest repeatability of quantitative speech biomarkers from static MRI and RT-MRI, and apply the framework to healthy volunteers. Subjects (n = 8, 4 females, 4 males) are imaged in two scans on the same day, including static images and dynamic RT-MRI of speech tasks. The inter-study agreement is quantified using intraclass correlation coefficient (ICC) and mean within-subject standard deviation (σe). Inter-study agreement is strong to very strong for static measures (ICC: min/median/max 0.71/0.89/0.98, σe: 0.90/2.20/6.72 mm), poor to strong for dynamic RT-MRI measures of articulator motion range (ICC: 0.26/0.75/0.90, σe: 1.6/2.5/3.6 mm), and poor to very strong for velocities (ICC: 0.21/0.56/0.93, σe: 2.2/4.4/16.7 cm/s). In conclusion, this study characterizes repeatability of static and dynamic MRI-derived speech biomarkers using state-of-the-art imaging. The introduced framework can be used to guide future development of speech biomarkers. Test-retest MRI data are provided free for research use.
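
For intuition about the agreement statistics used here, the sketch below computes a one-way random-effects ICC and a within-subject standard deviation from two scans per subject. It is a minimal illustration under common definitions of these quantities, not the authors' analysis code.

```python
import numpy as np

def test_retest_agreement(scan1, scan2):
    """Agreement between two repeated measurements of one speech biomarker.

    scan1, scan2: sequences with one value per subject (same order in both
    scans). Returns (icc, sigma_e): the one-way random-effects ICC(1,1) and
    the within-subject standard deviation (one common definition of the
    mean within-subject standard deviation).
    """
    x = np.column_stack([scan1, scan2])            # subjects x scans
    n, k = x.shape
    grand_mean = x.mean()
    subject_means = x.mean(axis=1)

    # One-way ANOVA mean squares: between subjects and within subjects.
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((x - subject_means[:, None]) ** 2) / (n * (k - 1))

    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    sigma_e = np.sqrt(ms_within)
    return icc, sigma_e

# Made-up values for illustration (8 subjects, two scans of one biomarker, mm).
icc, sigma_e = test_retest_agreement(
    [42.1, 39.8, 45.0, 41.2, 38.7, 44.1, 40.5, 43.3],
    [41.8, 40.2, 44.6, 41.9, 38.1, 44.8, 40.0, 42.7])
```

Applied per biomarker (for instance a constriction degree or a peak velocity), this yields numbers on the same scale as the ICC and σe values reported above.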


Ecological Psychology | 2016

The Gesture as an Autonomous Nonlinear Dynamical System

Tanner Sorensen; Adamantios I. Gafos

We propose a theory of how the speech gesture determines change in a functionally relevant variable of vocal tract state (e.g., constriction degree). A core postulate of the theory is that the gesture determines how the variable evolves in time independent of any executive timekeeper. That is, the theory involves intrinsic timing of speech gestures. We compare the theory against others in which an executive timekeeper determines change in vocal tract state. Theories that employ an executive timekeeper have been proposed to correct for disparities between theoretically predicted and experimentally observed velocity profiles. Such theories of extrinsic timing make the gesture a nonautonomous dynamical system. For a nonautonomous dynamical system, the change in state depends not just on the state but also on time. We show that this nonautonomous extension makes surprisingly weak kinematic predictions both qualitatively and quantitatively. We propose instead that the gesture is a theoretically simpler nonlinear autonomous dynamical system. For the proposed nonlinear autonomous dynamical system, the change in state depends nonlinearly on the state and does not depend on time. This new theory provides formal expression to the notion of intrinsic timing. Furthermore, it predicts experimentally observed relations among kinematic variables.
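
The contrast at issue can be stated in one line. Writing z for a tract variable such as constriction degree, an intrinsically timed (autonomous) gesture satisfies \dot{z} = f(z), with no explicit dependence on time, whereas extrinsic-timing accounts make the system nonautonomous, \dot{z} = f(z, t). The familiar linear point-attractor model, m\ddot{z} + b\dot{z} + k(z - z_0) = 0, is autonomous but linear; the proposal here keeps autonomy while letting the dynamics depend nonlinearly on the state. (This restates the abstract; the paper's specific nonlinear form is not reproduced.)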


Journal of the Acoustical Society of America | 2017

Tracking developmental changes in articulatory strategy during childhood

Tanner Sorensen; Asterios Toutios; Louis Goldstein; Shrikanth Narayanan

During development, children learn how to coordinate movements of the speech articulators in order to optimally achieve motor goals. It has been shown that variability in these coordinative patterns, or articulatory strategies, decreases over the course of childhood before ultimately stabilizing at adult-like levels. For example, the jaw becomes more tightly coordinated with the tongue and lips. Recent advances in real-time magnetic resonance imaging (rt-MRI) and analysis provide a means to characterize such articulatory strategies by quantifying how much the jaw, tongue, lips, velum, and pharynx contribute to constrictions of the vocal tract during speech. The articulators are segmented in reconstructed rt-MRI and constriction degrees are measured as the linear distance between opposing structures (e.g., tongue and palate). Change in constriction degree over time is decomposed into articulator contributions to characterize articulatory strategy. In this pilot study, we obtain quantitative biomarkers of articulatory strategies from a 10-year-old participant and compare them against those of 8 healthy adult participants. The study quantifies the difference between child and adult articulatory strategies in terms of how much each articulator contributes to constrictions of the vocal tract during speech and indicates how the articulator movements are coordinated with each other in time.
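
The decomposition mentioned in the abstract can be sketched as follows (an illustrative form, not the paper's exact parameterization). If a constriction degree CD depends on articulator parameters q_1, ..., q_m for the jaw, tongue, lips, velum, and pharynx, then the change over a time step splits into articulator contributions as

\Delta CD \approx \sum_{j=1}^{m} \frac{\partial CD}{\partial q_j} \Delta q_j,

and the relative size of each term (for example, the jaw's share of a bilabial closure) is the kind of quantitative index of articulatory strategy compared between the child and the adult participants.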


conference of the international speech communication association | 2016

Articulatory Synthesis Based on Real-Time Magnetic Resonance Imaging Data.

Asterios Toutios; Tanner Sorensen; Krishna Somandepalli; Rachel Alexander; Shrikanth Narayanan

This paper presents a methodology for articulatory synthesis of running speech in American English driven by real-time magnetic resonance imaging (rtMRI) mid-sagittal vocal-tract data. At the core of the methodology is a time-domain simulation of the propagation of sound in the vocal tract developed previously by Maeda. The first step of the methodology is the automatic derivation of air-tissue boundaries from the rtMRI data. These articulatory outlines are then modified in a systematic way in order to introduce additional precision in the formation of consonantal vocal-tract constrictions. Other elements of the methodology include a previously reported set of empirical rules for setting the time-varying characteristics of the glottis and the velopharyngeal port, and a revised sagittal-to-area conversion. Results are promising towards the development of a full-fledged text-to-speech synthesis system leveraging directly observed vocal-tract dynamics.
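
One stage of such a pipeline, converting mid-sagittal cross-distances to cross-sectional areas before acoustic simulation, is commonly modeled as a power law. The sketch below uses the generic form A = alpha * d**beta with placeholder coefficients; it illustrates the idea only and is not the revised conversion developed in the paper.

```python
import numpy as np

def sagittal_to_area(midsagittal_distance_cm, alpha=1.5, beta=1.4):
    """Map mid-sagittal cross-distances (cm) to cross-sectional areas (cm^2).

    Generic power-law model A = alpha * d**beta. In practice the coefficients
    vary along the tract (lips, hard palate, pharynx); the constants here are
    placeholders for illustration only.
    """
    d = np.asarray(midsagittal_distance_cm, dtype=float)
    return alpha * np.power(d, beta)

# Example: distances sampled at a few points from glottis to lips (made up).
areas = sagittal_to_area([0.4, 0.9, 1.3, 0.2])
```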


conference of the international speech communication association | 2016

Characterizing Vocal Tract Dynamics Across Speakers Using Real-Time MRI.

Tanner Sorensen; Asterios Toutios; Louis Goldstein; Shrikanth Narayanan

Real-time magnetic resonance imaging (rtMRI) provides information about the dynamic shaping of the vocal tract during speech production and valuable data for creating and testing models of speech production. In this paper, we use rtMRI videos to develop a dynamical system in the framework of Task Dynamics which controls vocal tract constrictions and induces deformation of the air-tissue boundary. This is the first task dynamical system explicitly derived from speech kinematic data. Simulation identifies differences in articulatory strategy across speakers (n = 18), specifically in the relative contribution of articulators to vocal tract constrictions.
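
For intuition about what such a dynamical system computes, the sketch below integrates a single critically damped second-order constriction task toward its target, the standard point-attractor form used in Task Dynamics. Parameter values are made up; this is not the model fitted in the paper.

```python
import numpy as np

def simulate_constriction(z0, target, omega=10.0, dt=0.005, steps=200):
    """Integrate one critically damped second-order constriction task.

    z'' = -2*omega*z' - omega**2 * (z - target); z0 and target are
    constriction degrees (e.g., in mm). Parameter values are placeholders,
    not values estimated from the rtMRI data.
    """
    z, v = float(z0), 0.0
    trajectory = []
    for _ in range(steps):
        a = -2.0 * omega * v - omega ** 2 * (z - target)   # acceleration
        v += a * dt                                        # Euler integration
        z += v * dt
        trajectory.append(z)
    return np.array(trajectory)

# Example: lip aperture closing from 12 mm toward full closure (0 mm).
traj = simulate_constriction(z0=12.0, target=0.0)
```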


Machine Translation | 2018

The ELISA Situation Frame extraction for low resource languages pipeline for LoReHLT’2016

Nikolaos Malandrakis; Anil Ramakrishna; Victor R. Martinez; Tanner Sorensen; Dogan Can; Shrikanth Narayanan

This paper describes the Situation Frame extraction pipeline developed by team ELISA as a part of the DARPA Low Resource Languages for Emergent Incidents program. Situation Frames are structures describing humanitarian needs, including the type of need and the location affected by it. Situation Frames need to be extracted from text or speech audio in a low resource scenario where little data, including no annotated data, are available for the target language. Our Situation Frame pipeline is the final step of the overall ELISA processing pipeline and accepts as inputs the outputs of the ELISA machine translation and named entity recognition components. The inputs are processed by a combination of neural networks to detect the types of needs mentioned in each document and a second post-processing step connects needs to locations. The resulting Situation Frame system was used during the first yearly evaluation on extracting Situation Frames from text, producing encouraging results and was later successfully adapted to the speech audio version of the same task.
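
The final post-processing step, pairing detected need types with locations from named entity recognition, can be illustrated with a toy sketch. The data structures and the pairing heuristic below are hypothetical, not the ELISA code, and the neural need-type classifiers themselves are not shown.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SituationFrame:
    need_type: str   # e.g., "water", "medical", "shelter"
    location: str    # location entity from the NER output, or "" if none

def link_needs_to_locations(need_types: List[str],
                            locations: List[str]) -> List[SituationFrame]:
    """Pair each detected need type with the document's location mentions.

    Deliberately simple heuristic: every need is paired with every location
    found in the same document; with no locations, the need is emitted
    without one.
    """
    if not locations:
        return [SituationFrame(n, "") for n in need_types]
    return [SituationFrame(n, loc) for n in need_types for loc in locations]

# Example: need types from the classifiers, locations from NER (made up).
frames = link_needs_to_locations(["water", "medical"], ["Monrovia"])
```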


Journal of the Acoustical Society of America | 2017

Indexing tongue profile narrowing for English lateral consonants using 3D volumetric MR imaging

Mairym Llorens; Dani Byrd; Nancy Vazquez; Louis Goldstein; Tanner Sorensen; Asterios Toutios; Shrikanth Narayanan

Production of lateral consonants in many languages involves separate but coordinated tongue tip and tongue rear actions—raising of the tongue tip and retraction of the tongue body. Given these gestures and the presence of lateral airflow, it has been speculated that horizontal (i.e., side-to-side) narrowing of the tongue may occur during production of these laterals, either as a passive result of the anterior-posterior lingual stretching or as an actively controlled movement. This study uses 3D volumetric MR scans of speakers producing a variety of sustained English sounds to examine horizontal tongue width as a function of consonant laterality/centrality and as a function of vowel height in front vowel contexts. Multiplanar reconstruction of volumetric data for each token and subject permitted imaging of the static postures on the axial plane. We determine a protocol for identifying a specific oblique axial slice in terms of mid-sagittal anatomical landmarks that allows for a stable and informative index...
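
As an illustration of the kind of index such a protocol yields, the sketch below measures the left-right tongue extent on a chosen slice of a segmented volume. The helper and its inputs are hypothetical; it is not the study's measurement code, and it assumes the oblique slice has already been resampled to a 2-D mask.

```python
import numpy as np

def tongue_width_mm(tongue_mask_slice, pixel_spacing_mm):
    """Left-right tongue extent on one (oblique) axial slice.

    tongue_mask_slice: 2-D boolean array from a segmented 3-D volume, with
    columns running left-right. Returns the maximum left-right extent in mm.
    """
    mask = np.asarray(tongue_mask_slice, dtype=bool)
    cols = np.any(mask, axis=0)          # columns that contain tongue tissue
    if not cols.any():
        return 0.0
    idx = np.flatnonzero(cols)
    return float((idx[-1] - idx[0] + 1) * pixel_spacing_mm)
```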


conference of the international speech communication association | 2017

Database of Volumetric and Real-Time Vocal Tract MRI for Speech Science.

Tanner Sorensen; Zisis Iason Skordilis; Asterios Toutios; Yoon-Chul Kim; Yinghua Zhu; Jangwon Kim; Adam C. Lammert; Vikram Ramanarayanan; Louis Goldstein; Dani Byrd; Krishna S. Nayak; Shrikanth Narayanan



conference of the international speech communication association | 2017

VCV Synthesis Using Task Dynamics to Animate a Factor-Based Articulatory Model.

Rachel Alexander; Tanner Sorensen; Asterios Toutios; Shrikanth Narayanan


Collaboration


Dive into Tanner Sorensen's collaborations.

Top Co-Authors

Shrikanth Narayanan, University of Southern California
Asterios Toutios, University of Southern California
Louis Goldstein, University of Southern California
Dani Byrd, University of Southern California
Krishna S. Nayak, University of Southern California
Krishna Somandepalli, University of Southern California
Adam C. Lammert, University of Southern California
Anil Ramakrishna, University of Southern California