
Publication


Featured research published by Elliot Saltzman.


Journal of Mathematical Psychology | 1979

Levels of sensorimotor representation

Elliot Saltzman

Problems of planning coordinated sensorimotor actions are examined within a cohesive framework based on seven levels of movement representation (conceptual, environmental spatial, effector, body spatial, joint motion, joint torque, and muscle) and the relationships existing among these levels. Emphasis is placed on discussions of the spatial motion, joint motion, and joint torque levels in the context of mathematical treatments of the degrees of freedom problem in motor control and the incorporation of environmentally supplied forces into movement planning procedures. Implications of this approach for understanding the nature of skilled actions are discussed, and applications of the mathematical treatments for simulation and experimental studies of coordinated movement are suggested.


The Journal of Neuroscience | 2007

Action Representation of Sound: Audiomotor Recognition Network While Listening to Newly Acquired Actions

Amir Lahav; Elliot Saltzman; Gottfried Schlaug

The discovery of audiovisual mirror neurons in monkeys gave rise to the hypothesis that premotor areas are inherently involved not only when observing actions but also when listening to action-related sound. However, the whole-brain functional formation underlying such “action–listening” is not fully understood. In addition, previous studies in humans have focused mostly on relatively simple and overexperienced everyday actions, such as hand clapping or door knocking. Here we used functional magnetic resonance imaging to ask whether the human action-recognition system responds to sounds found in a more complex sequence of newly acquired actions. To address this, we chose a piece of music as a model set of acoustically presentable actions and trained non-musicians to play it by ear. We then monitored brain activity in subjects while they listened to the newly acquired piece. Although subjects listened to the music without performing any movements, activation was found bilaterally in the frontoparietal motor-related network (including Broca's area, the premotor region, the intraparietal sulcus, and the inferior parietal region), consistent with neural circuits that have been associated with action observations, and may constitute the human mirror neuron system. Presentation of the practiced notes in a different order activated the network to a much lesser degree, whereas listening to equally familiar but motorically unknown music did not activate this network. These findings support the hypothesis of a “hearing–doing” system that is highly dependent on the individual's motor repertoire, gets established rapidly, and consists of Broca's area as its hub.


Journal of Phonetics | 2003

The elastic phrase: modeling the dynamics of boundary-adjacent lengthening

Dani Byrd; Elliot Saltzman

This work examines the relation between phrasal structure and the control and coordination of articulation within a dynamical systems model of speech production. In this context, we review how speakers modulate the spatiotemporal organization of articulatory gestures as a function of their phrasal position. We present computational simulations that capture several important qualitative properties of these phrase boundary effects, such as prosodically induced local slowing. This slowing is generated by dynamical effects on the activation timecourse of articulatory gestures and is controlled by prosodic gestures or p-gestures, which share much with the familiar dynamical description of constriction gestures. Prosodic gestures, however, function at boundaries purely to temporally stretch or shrink gestural activation trajectories. This modulation of the “clock-rate” that controls the temporal unfolding of an utterance near junctures is such that the clock slows increasingly as the boundary is approached and speeds up again as the boundary recedes. Viewing phrase boundaries as warping the temporal fabric of an utterance represents a promising confluence of the fields of prosody and of speech dynamics.
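The clock-rate warping idea can be illustrated with a minimal numerical sketch. The Gaussian rate dip, the function name, and all parameter values below are illustrative assumptions for this listing, not the paper's actual p-gesture formulation:

```python
import math

def wall_duration(clock_units, t0, boundary, strength=0.6, width=0.2, dt=0.001):
    """Wall-clock time needed to accumulate `clock_units` of gestural
    'clock' time when the clock rate dips near a phrase boundary.
    The dip is a toy Gaussian centred on the boundary."""
    t, acc = t0, 0.0
    while acc < clock_units:
        # Clock runs at full rate far from the boundary, slower near it.
        rate = 1.0 - strength * math.exp(-((t - boundary) ** 2) / (2 * width ** 2))
        acc += rate * dt
        t += dt
    return t - t0
```

A gesture whose activation interval spans the boundary (e.g., starting at t0 = -0.5 with the boundary at 0) takes longer in wall-clock time than an identical gesture far from any boundary, reproducing the qualitative boundary-adjacent lengthening the abstract describes.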


Language and Speech | 1993

Coordination and Coarticulation in Speech Production

Carol A. Fowler; Elliot Saltzman

In this article, we consider the concepts of coordination and coarticulation in speech production in the context of a task-dynamic model. Coordination reflects the transient establishment of constrained relationships among articulators that jointly produce linguistically significant actions of the vocal tract – that is, phonetic gestures – in a flexible, context-sensitive manner. We ascribe the need for these constraints in part to the requirement of coarticulatory overlap in speech production. Coarticulation reflects temporally staggered activation of coordinative constraints for different phonetic gestures. We suggest that the anticipatory coarticulatory field for a gesture is more limited than look-ahead models have suggested, consistent with the idea that anticipatory coarticulation is the onset of activation of coordinative constraints for a forthcoming gesture. Finally, we ascribe much of the context-sensitivity in the anticipatory or carryover fields of a gesture (variation due to “coarticulation resistance”) to low-level (below the speech plan) interactions among the coordinative constraints for temporally overlapping gestures.


Journal of Experimental Psychology: Human Perception and Performance | 1991

Steady-state and Perturbed Rhythmical Movements: A Dynamical Analysis

Bruce A. Kay; Elliot Saltzman; J. A. S. Kelso

This study examined rhythmic finger movements in the steady state and when momentarily perturbed in order to derive their qualitative dynamical properties. Movement frequency, amplitude, and peak velocity were stable under perturbation, signaling the presence of an attractor, and the topological dimensionality of that attractor was approximately equal to one. The strength of the attractor was constant with increasing movement frequency, and the Fourier spectra of the steady-state trials showed an alternating harmonic pattern. These results are consistent with a previously derived nonlinear oscillator model. However, the oscillation was phase advanced by perturbation overall, and a consistent phase-dependent, phase-shift pattern occurred, which is inconsistent with the model. The overall phase advance also shows that any central pattern generator responsible for generating the rhythm must be nontrivially modulated by the limb being controlled.


Human Movement Science | 2000

Task-dynamics of gestural timing: Phase windows and multifrequency rhythms

Elliot Saltzman; Dani Byrd

In this paper, we explore the hypothesis that intergestural phasing relations are implemented via coupling terms in a nonlinear dynamical systems model. Specifically, we describe recent computational developments of the task-dynamic model of gestural patterning (e.g., E. Saltzman, J. A. S. Kelso, Psychological Review 94 (1987) 84–106; E. Saltzman, K. G. Munhall, Ecological Psychology 1 (1989) 333–382) that are focused on modeling the timing of rhythmic action units. First, we explore the possibility of attractor states for intergestural phasing that are characterized as ranges or phase windows (D. Byrd, Phonology 13 (1996) 139–169), and contrast this behavior with standard models that display punctate relative phasing. It is argued that the phase window approach can provide flexible control of the relative timing of articulatory gestures, allowing constrained variability in intergestural timing as a function of linguistic and para-linguistic factors. Second, we discuss how this extension of the task-dynamic model has been adapted for modeling the production of multifrequency rhythms (speech or bimanual). This work explores the control of the frequency- and phase-locking characteristics of coupled limit cycle oscillators by examining how desired frequency ratio, intrinsic frequency detuning, and coupling asymmetries interact in creating observed rhythmic patterns. Using this method, details of the resultant transient and steady-state trajectories of phase and amplitude are generated that are not available using models derived with averaging techniques.
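The interaction of frequency detuning and coupling in phase-locking can be sketched numerically. The two-oscillator sinusoidal (Kuramoto-style) coupling, the function name, and the parameter values below are illustrative assumptions, not the task-dynamic model's actual equations:

```python
import math

def simulate_coupled_phases(w1=1.5, w2=1.0, k=0.5, dt=0.01, steps=5000):
    """Euler-integrate two coupled phase oscillators with intrinsic
    frequencies w1, w2 and symmetric coupling strength k.

    Returns the final relative phase, wrapped to [-pi, pi]."""
    th1, th2 = 0.0, 0.0
    for _ in range(steps):
        d1 = w1 + k * math.sin(th2 - th1)
        d2 = w2 + k * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    # Wrap the relative phase via atan2 to avoid unwinding full cycles.
    return math.atan2(math.sin(th1 - th2), math.cos(th1 - th2))
```

With detuning w1 - w2 = 0.5 and k = 0.5, the relative phase settles at asin(0.5/(2*0.5)) = pi/6 rad: a punctate phase-locked attractor of the kind the phase-window proposal generalizes to a range.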


Journal of the Acoustical Society of America | 1996

Accurate recovery of articulator positions from acoustics: New conclusions based on human data

John Hogden; Anders Löfqvist; Vince Gracco; Igor Zlokarnik; Philip E. Rubin; Elliot Saltzman

Vocal tract models are often used to study the problem of mapping from the acoustic transfer function to the vocal tract area function (inverse mapping). Unfortunately, results based on vocal tract models are strongly affected by the assumptions underlying the models. In this study, the mapping from acoustics (digitized speech samples) to articulation (measurements of the positions of receiver coils placed on the tongue, jaw, and lips) is examined using human data from a single speaker: Simultaneous acoustic and articulator measurements made for vowel-to-vowel transitions, /g/ closures, and transitions into and out of /g/ closures. Articulator positions were measured using an EMMA system to track coils placed on the lips, jaw, and tongue. Using these data, look-up tables were created that allow articulator positions to be estimated from acoustic signals. On a data set not used for making look-up tables, correlations between estimated and actual coil positions of around 94% and root-mean-squared errors around 2 mm are common for coils on the tongue. An error source evaluation shows that estimating articulator positions from quantized acoustics gives root-mean-squared errors that are typically less than 1 mm greater than the errors that would be obtained from quantizing the articulator positions themselves. This study agrees with and extends previous studies of human data by showing that for the data studied, speech acoustics can be used to accurately recover articulator positions.
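The look-up-table idea can be shown in miniature. Below is a 1-D sketch assuming synthetic data and a simple binned codebook in place of the paper's actual acoustic quantization; all names and values are illustrative:

```python
def build_lookup(acoustic, articulator, n_bins=8):
    """Bin a 1-D acoustic feature and store the mean articulator
    position observed in each bin (a toy stand-in for a codebook)."""
    lo, hi = min(acoustic), max(acoustic)
    width = (hi - lo) / n_bins or 1.0
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for a, x in zip(acoustic, articulator):
        i = min(int((a - lo) / width), n_bins - 1)
        sums[i] += x
        counts[i] += 1
    means = [s / c if c else None for s, c in zip(sums, counts)]
    return lo, width, means

def estimate(table, a):
    """Estimate an articulator position from a new acoustic value."""
    lo, width, means = table
    i = min(max(int((a - lo) / width), 0), len(means) - 1)
    return means[i]
```

On data with a smooth acoustics-to-articulation relation, estimates from such a table track the true positions with an error governed by bin width, which is the intuition behind the paper's quantization-error analysis.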


Journal of Motor Behavior | 1992

Skill Acquisition and Development: The Roles of State-, Parameter-, and Graph-Dynamics

Elliot Saltzman; Kevin G. Munhall

The development of motor skills can be portrayed as a dynamical process that involves three types of dynamics: state dynamics, parameter dynamics, and graph dynamics. The time scales associated with each type of dynamics are discussed, and an outline is provided of the role played by each type in the developing organism. In particular, the role of parameter dynamics and graph dynamics in producing qualitative, bifurcational changes in behavior is described. It is concluded that all three types of dynamics are required for a complete description of skill acquisition and development.


Journal of the Acoustical Society of America | 2004

TADA: An enhanced, portable Task Dynamics model in MATLAB

Hosung Nam; Louis Goldstein; Elliot Saltzman; Dani Byrd

A portable computational system called TADA was developed for the Task Dynamic model of speech motor control [Saltzman and Munhall, Ecol. Psychol. 1, 333–382 (1989)]. The model maps from a set of linguistic gestures, specified as activation functions with corresponding constriction goal parameters, to time functions for a set of model articulators. The original Task Dynamic code was ported to the (relatively) platform‐independent MATLAB environment and includes a MATLAB version of the Haskins articulatory synthesizer, so that articulator motions computed by the Task Dynamic model can be used to generate sound. Gestural scores can now be edited graphically and the effects of gestural score changes on the model's output evaluated. Other new features of the system include: (1) A graphical user interface that displays the input gestural scores, output time functions of constriction goal variables and articulators, and an animation of the resulting vocal‐tract motion; (2) Integration of the Task Dynamic model w...


Ecological Psychology | 2013

A Tutorial on Multifractality, Cascades, and Interactivity for Empirical Time Series in Ecological Science

Damian G. Kelty-Stephen; Kinga Palatinus; Elliot Saltzman; James A. Dixon

Interactivity is a central theme of ecological psychology. According to Gibsonian views, behavior is the emergent property of interactions between organism and environment. Hence, an important challenge for ecological psychology has been to identify physical principles that provide an empirical window into interactivity. We suspect that multifractality, a concept from statistical physics, may be helpful in this regard, and we offer this article as a tutorial on multifractality with 2 main goals. First, we aim to describe multifractality with a series of simple, concrete, but progressively more elaborate examples that will incrementally elucidate the relationship between multifractality and interactivity. Second, we aim to describe a direct estimation method for computing the multifractal spectrum (e.g., Chhabra & Jensen, 1989), presenting it as an alternative that avoids the pitfalls of more popular methods and that may address more appropriately the measurements traditionally taken by ecological psychologists. In sum, this tutorial aims to unpack the theoretical background for an analytical method allowing rigorous tests of interactivity in a variety of empirical settings.
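The direct (q-weighted) estimation of the multifractal spectrum can be sketched at a single scale on an exactly known measure. This is a minimal illustration of the Chhabra & Jensen idea: a real analysis would regress these quantities across multiple box sizes, and the test measure (a binomial cascade) and parameter values are assumptions of this sketch:

```python
import math

def binomial_cascade(p=0.6, depth=10):
    """Exact binomial multiplicative cascade: at each level every
    cell splits its mass into fractions p and 1 - p."""
    probs = [1.0]
    for _ in range(depth):
        probs = [w for x in probs for w in (x * p, x * (1 - p))]
    return probs  # 2**depth box probabilities summing to 1

def chhabra_jensen(probs, q):
    """One point (alpha(q), f(q)) of the multifractal spectrum via
    direct q-weighted estimation at a single box size eps = 1/len(probs)."""
    log_eps = math.log(1.0 / len(probs))
    z = sum(pi ** q for pi in probs)
    mu = [pi ** q / z for pi in probs]          # q-reweighted measure
    alpha = sum(m * math.log(pi) for m, pi in zip(mu, probs)) / log_eps
    f = sum(m * math.log(m) for m in mu) / log_eps
    return alpha, f
```

Positive q emphasizes the densest boxes (small alpha), negative q the sparsest (large alpha), so sweeping q traces out the spectrum's width; f(0) recovers the support dimension (1 for this interval measure).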

Collaboration

Elliot Saltzman's top co-authors:

Louis Goldstein (University of Southern California)
Dani Byrd (University of Southern California)
J. A. S. Kelso (Florida Atlantic University)
Mark Tiede (Massachusetts Institute of Technology)