
Publication


Featured research published by Mark A. Randolph.


Journal of the Acoustical Society of America | 1992

Effects of postvocalic voicing on the time course of vowels and diphthongs

Jan P. H. van Santen; John Coleman; Mark A. Randolph

Studies of the effects of contextual factors on segmental duration have been mostly restricted to overall duration, and relatively few have paid attention to the internal time course of speech segments. However, fine‐grained temporal analysis of speech segments is hindered by within‐speaker variability and the unreliability of manual segmentation. In this paper, using new methods for reducing statistical ‘‘noise’’ in acoustic trajectories [mappings of the time domain into an appropriate acoustic parameter space, such as formants, cepstra, etc.], analyses are presented of the effects of postvocalic voicing on the internal time course of selected vowels and diphthongs preceding voiced and voiceless consonants in contrasting minimal word pairs [e.g., ‘‘mend’’ versus ‘‘meant’’]. These methods include the computation of the centroid of a set of acoustic trajectories, the average of a set of time warps among minimal word pairs, and various smoothing techniques. The centroid trajectories of a minima...
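The centroid-of-trajectories idea can be sketched as below: align two acoustic trajectories by dynamic time warping, then average the aligned values. This is an illustrative reconstruction, not the authors' code; `dtw_path` and `centroid` are hypothetical names, and real use would operate on multi-dimensional formant or cepstral vectors rather than scalars.

```python
# Hedged sketch: DTW alignment between two 1-D acoustic trajectories
# (e.g., F1 tracks), then a "centroid" trajectory formed by averaging
# the two trajectories along the alignment path.

def dtw_path(a, b):
    """Return an alignment path [(i, j), ...] minimizing summed |a[i] - b[j]|."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    # Backtrack from (n, m), preferring the diagonal on ties.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
        if step == cost[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == cost[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return list(reversed(path))

def centroid(a, b):
    """Average two trajectories along their DTW alignment."""
    return [(a[i] + b[j]) / 2.0 for i, j in dtw_path(a, b)]
```

For identical inputs the centroid reproduces the trajectory itself; a set of trajectories could be reduced pairwise in the same fashion.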


International Conference on Acoustics, Speech, and Signal Processing | 2000

A hidden Markov model based visual speech synthesizer

Jay J. Williams; Aggelos K. Katsaggelos; Mark A. Randolph

This paper describes a hidden Markov model (HMM) based visual synthesizer designed to assist persons with impaired hearing. This synthesizer builds on results in the area of audio-visual speech recognition. We describe how a correlation HMM can be used to integrate independent acoustic and visual HMMs for speech-to-visual synthesis. Our results show that an HMM correlating model can significantly improve synchronization errors versus techniques which compensate for rate differences through scaling.
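The integration idea can be caricatured as follows: for each state of the decoded acoustic HMM, pick the visual (viseme) state with the highest learned co-occurrence score. This is a deliberate simplification of a true correlation HMM, which also models visual-state dynamics; the function name and the score table below are invented for illustration.

```python
# Illustrative simplification of acoustic-to-visual integration:
# per-frame argmax over a learned acoustic->visual co-occurrence
# ("correlation") table. Scores are invented, not from the paper.

def acoustic_to_visual(acoustic_states, co_occurrence):
    """Map a sequence of acoustic states to visual states, frame by frame."""
    return [max(co_occurrence[s], key=co_occurrence[s].get)
            for s in acoustic_states]

# Hypothetical table: a bilabial stop favors a closed-lips viseme,
# an open vowel favors an open-lips viseme.
SCORES = {
    "b":  {"lips_closed": 0.90, "lips_open": 0.10},
    "aa": {"lips_closed": 0.05, "lips_open": 0.95},
}
```

A model of this kind, extended with transition probabilities over the visual states, is what lets synthesis stay synchronized with the audio rather than relying on rate scaling.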


Journal of the Acoustical Society of America | 1994

Speech analysis based on a model of articulatory behavior

Mark A. Randolph

A major problem in articulatory‐based speech recognition is the need for the recognizer to transform an input sequence of acoustic observations into a set of parameters describing the behavior of the articulatory system during utterance production. This acoustic‐to‐articulatory transformation is nonunique. In addition, an economical computational framework is needed to describe and recognize patterns of articulatory behavior obtained from acoustic analysis. In this paper, a speech analysis system based on a generative model of articulatory behavior is presented. Articulatory codebooks represent the acoustic‐to‐articulatory relation. During speech analysis, ambiguity is effectively resolved with the aid of a finite‐state grammar of articulatory behavior. The grammar, in the form of a finite‐state acceptor, is used to constrain codebook search. This analysis procedure is part of a word recognition system in which the finite‐state grammar of articulatory behavior can be derived automatically from an analysis of phone...
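The grammar-constrained codebook search can be sketched as below: each frame yields several candidate articulatory labels (the codebook's nonuniqueness), and a finite-state acceptor, represented here simply as a set of allowed transitions, prunes label sequences to grammatical ones. The function name and the toy grammar are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: resolve acoustic-to-articulatory ambiguity by keeping
# only label sequences the finite-state grammar accepts.

def constrained_decode(candidates, allowed):
    """Return the first label sequence consistent with the grammar, or None.

    candidates: list (one entry per frame) of candidate articulatory labels.
    allowed: set of (previous_label, next_label) transitions.
    """
    paths = [[c] for c in candidates[0]]
    for frame in candidates[1:]:
        paths = [p + [c] for p in paths for c in frame
                 if (p[-1], c) in allowed]
    return paths[0] if paths else None
```

A practical decoder would score paths (e.g., Viterbi over codebook distances) rather than enumerate them, but the pruning role of the acceptor is the same.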


Journal of the Acoustical Society of America | 1985

The application of a hierarchical classification technique to speech analysis

Mark A. Randolph

In this paper we describe a hierarchical classification technique based on CART (Classification and Regression Trees, see Breiman et al., 1984) and its application to the task of speech analysis. Our investigation is motivated by two reasons. First, we believe that this technique provides an intuitive mechanism for quantifying the acoustic cues of phonetic contrasts. Second, the technique can potentially help us develop classifiers that are useful for automatic speech recognition. Towards these goals, we have added a number of features to the basic CART algorithm, and have expanded it into an exploratory data analysis tool. For example, CART uses a predetermined criterion for partitioning the feature space. We have added the capability for users to manually perform this partitioning. In addition, we have implemented a number of statistical functions for univariate and multivariate data analysis, and added graphical facilities for viewing data in different ways. Most importantly, we have integrated CART wi...
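The core CART step, searching one acoustic feature for the threshold that best separates two phonetic classes, can be sketched as below. This is a minimal illustration of the standard Gini-impurity split, not the paper's extended tool; function names are hypothetical.

```python
# Hedged sketch of a single CART split: find the threshold on one
# feature that minimizes the weighted Gini impurity of the two
# resulting partitions.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Return (threshold, weighted_impurity) for the best binary split."""
    best = (None, float("inf"))
    for t in sorted(set(values))[:-1]:          # candidate thresholds
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best
```

Applied recursively, splits like this grow the tree; the threshold chosen at each node is what makes the acoustic cue for a phonetic contrast explicit and inspectable.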


Journal of the Acoustical Society of America | 1986

The influence of phonetic context on the acoustical properties of stops

Mark A. Randolph; Victor W. Zue

In American English, stop consonants may be released, unreleased, or deleted (e.g., the phoneme /t/ in “tea,” “basketball,” and “softly,” respectively). The particular acoustical realization is almost obligatory in some environments and highly variable in others. The purpose of our study was to quantify the influence of context, including syllable structure, on the acoustical properties of stop consonants. Our database consisted of some 5200 stops collected from 1000 sentences. Phonemic transcriptions, including lexical stress and syllable markers, were provided and aligned with the speech waveforms. The stops were grouped into categories corresponding to their position within syllables (e.g., syllable‐initial‐singleton, syllable‐final‐affix, etc.) and marked according to their local phonemic context. Segment durations were measured and the stops were classified as released, unreleased, or deleted on the basis of their duration and voice onset time (VOT). In the analysis of these data, including the examin...
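A decision rule in the spirit of the released/unreleased/deleted classification might look like the sketch below. The threshold values (zero closure, 10 ms VOT) are invented for illustration; the paper derives its categories from measured duration and VOT, not from these numbers.

```python
# Toy classification rule, illustrative thresholds only.

def classify_stop(closure_ms, vot_ms):
    """Label a stop token from its closure duration and voice onset time."""
    if closure_ms <= 0:
        return "deleted"      # no measurable closure interval
    if vot_ms < 10:
        return "unreleased"   # closure present, but no audible release burst
    return "released"
```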


Journal of the Acoustical Society of America | 1982

Synthesis of continuous speech by concatenation of isolated words

Mark A. Randolph; Victor W. Zue

This paper reports a feasibility study of synthesizing continuous speech by concatenation of modified isolated word templates. Continuous speech obtained simply by word template concatenation has several inherent problems. It cannot account for the prosodic features normally found in continuous speech, nor can it account for coarticulation: the natural transitions that occur at word boundaries. In an effort to alleviate the first problem, the synthesizer is provided with information describing the duration, the fundamental frequency (f0) contour, and the energy contour of each sentence. The synthesizer draws from a dictionary of word templates each stored as a sequence of LPC parameters. Before concatenation, the time scales of the synthesis templates are nonlinearly warped using the alignment path obtained from a level building connected speech recognition algorithm. In addition, the f0 and energy contours of the word templates are modified. To account for coarticulation, the various parameters are smo...
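The nonlinear time-warping step can be sketched as below: project a stored word template (a list of per-frame LPC parameter vectors) onto the target sentence time scale using an alignment path such as one produced by a level-building recognizer. The function name and the hold-previous fill rule are illustrative assumptions.

```python
# Hedged sketch: warp a word template onto a target time scale using
# an alignment path of (template_index, target_index) pairs.

def warp_template(frames, path, target_len):
    """Map template frames onto target frame slots via the alignment path."""
    out = [None] * target_len
    for i, j in path:
        out[j] = frames[i]          # if several template frames map to one
                                    # target slot, the last mapping wins
    for j in range(target_len):     # hold the previous frame over unmapped
        if out[j] is None:          # slots; assumes the path covers slot 0
            out[j] = out[j - 1]
    return out
```

Repeated or held frames correspond to locally stretching the template; skipped template frames compress it, which is how the word's time scale is bent to the target prosody before f0 and energy are modified.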


Archive | 2004

Voice browser dialog enabler for a communication system

James C. Ferrans; Jonathan R. Engelsma; Michael Pearce; Mark A. Randolph; Jerome O. Vogedes


Archive | 2003

Dialog recognition and control in a voice browser

James C. Ferrans; Jonathan R. Engelsma; Michael Pearce; Mark A. Randolph; Jerome O. Vogedes


Journal of the Acoustical Society of America | 2007

Method and apparatus to facilitate correlating symbols to sounds

Changxue Ma; Mark A. Randolph


Archive | 2002

Method and apparatus for speech detection using time-frequency variance

Changxue Ma; Mark A. Randolph

Collaboration


Dive into Mark A. Randolph's collaborations.

Top Co-Authors

Victor W. Zue

Massachusetts Institute of Technology

Hong C. Leung

Massachusetts Institute of Technology
