Joseph D. W. Stephens
North Carolina Agricultural and Technical State University
Publications
Featured research published by Joseph D. W. Stephens.
Memory & Cognition | 2002
Kenneth J. Malmberg; Mark Steyvers; Joseph D. W. Stephens; Richard M. Shiffrin
Rare words are usually better recognized than common words, a finding in recognition memory known as the word-frequency effect. Some theories predict the word-frequency effect because they assume that rare words consist of more distinctive features than do common words (e.g., Shiffrin & Steyvers, 1997, REM theory). In this study, recognition memory was tested for words that vary in the commonness of their orthographic features, and we found that recognition was best for words made up of primarily rare letters. In addition, a mirror effect was observed: Words with rare letters had a higher hit rate and a lower false-alarm rate than did words with common letters. We also found that normative word frequency affects recognition independently of letter frequency. Therefore, the distinctiveness of a word's orthographic features is one, but not the only, factor necessary to explain the word-frequency effect.
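As an illustrative aside (not from the article): the mirror effect can be expressed in signal-detection terms, where sensitivity d' rises when hits go up and false alarms go down. The rates below are hypothetical, chosen only to show the computation.

```python
# Hypothetical illustration of the mirror effect in signal-detection terms.
# Rare-letter words show a higher hit rate AND a lower false-alarm rate,
# so sensitivity (d') increases on both ends. The rates here are made up.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

rare_letters = d_prime(hit_rate=0.85, fa_rate=0.15)    # ~2.07
common_letters = d_prime(hit_rate=0.75, fa_rate=0.25)  # ~1.35
print(rare_letters, common_letters)
```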
Journal of the Acoustical Society of America | 2003
Joseph D. W. Stephens; Lori L. Holt
A discrimination paradigm was used to detect the influence of phonetic context on speech (experiment 1a) and nonspeech (experiment 1b) stimuli. Results of experiment 1a were consistent with the previously observed phonetic context effect of liquid consonants (/l/ and /r/) on subsequent stop consonant (/g/ and /d/) perception. Experiment 1b demonstrated a context effect of liquid consonants on subsequent nonspeech sounds that were spectrally similar to the stop consonants. The results are consistent with findings that implicate spectral contrast in phonetic context effects.
Speech Communication | 2011
Joseph D. W. Stephens; Lori L. Holt
Linear predictive coding (LPC) analysis was used to create morphed natural tokens of English voiced stop consonants ranging from /b/ to /d/ and /d/ to /g/ in four vowel contexts (/i/, /æ/, /a/, /u/). Both vowel-consonant-vowel (VCV) and consonant-vowel (CV) stimuli were created. A total of 320 natural-sounding acoustic speech stimuli were created, comprising 16 stimulus series. A behavioral experiment demonstrated that the stimuli varied perceptually from /b/ to /d/ to /g/, and provided useful reference data for the ambiguity of each token. Acoustic analyses indicated that the stimuli compared favorably to standard characteristics of naturally-produced consonants, and that the LPC morphing procedure successfully modulated multiple acoustic parameters associated with place of articulation. The entire set of stimuli is freely available on the Internet (http://www.psy.cmu.edu/~lholt/php/StephensHoltStimuli.php) for use in research applications.
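A minimal sketch of the general LPC-morphing idea follows, under stated assumptions: it does not reproduce the article's actual pipeline, and naive interpolation of LPC coefficients can produce unstable filters (practical systems often interpolate in the line spectral frequency domain instead). Frame selection, windowing, and overlap-add are omitted.

```python
# Hedged sketch of LPC-based morphing between two speech frames.
# NOT the authors' procedure: naive coefficient interpolation as shown
# can yield unstable filters; this only conveys the idea of blending two
# spectral envelopes while reusing one token's excitation signal.
import numpy as np
import librosa
from scipy.signal import lfilter

def morph_frame(frame_a, frame_b, alpha, order=12):
    """Blend the spectral envelopes of two frames; alpha in [0, 1]."""
    a = librosa.lpc(frame_a, order=order)   # LPC polynomial of token A
    b = librosa.lpc(frame_b, order=order)   # LPC polynomial of token B
    mixed = (1 - alpha) * a + alpha * b     # naive interpolation (see note)
    residual = lfilter(a, [1.0], frame_a)   # inverse-filter A to its excitation
    return lfilter([1.0], mixed, residual)  # re-synthesize with blended envelope
```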
Attention Perception & Psychophysics | 2005
Lori L. Holt; Joseph D. W. Stephens; Andrew J. Lotto
Fowler, Brown, and Mann (2000) have reported a visually moderated phonetic context effect in which a video disambiguates an acoustically ambiguous precursor syllable, which, in turn, influences perception of a subsequent syllable. In the present experiments, we explored this finding and the claims that stem from it. Experiment 1 failed to replicate Fowler et al. with novel materials modeled after the original study, but Experiment 2 successfully replicated the effect, using Fowler et al.’s stimulus materials. This discrepancy was investigated in Experiments 3 and 4, which demonstrate that variation in visual information concurrent with the test syllable is sufficient to account for the original results. Fowler et al.’s visually moderated phonetic context effect appears to have been a demonstration of audiovisual interaction between concurrent stimuli, and not an effect whereby preceding visual information elicits changes in the perception of subsequent speech sounds.
Journal of the Acoustical Society of America | 2010
Joseph D. W. Stephens; Lori L. Holt
Visual information from a speaker's face profoundly influences auditory perception of speech. However, relatively little is known about the extent to which visual influences may depend on experience, and the extent to which new sources of visual speech information can be incorporated in speech perception. In the current study, participants were trained on completely novel visual cues for phonetic categories. Participants learned to accurately identify phonetic categories based on novel visual cues. These newly-learned visual cues influenced identification responses to auditory speech stimuli, but not to the same extent as visual cues from a speaker's face. The novel methods and results of the current study raise theoretical questions about the nature of information integration in speech perception, and open up possibilities for further research on learning in multimodal perception, which may have applications in improving speech comprehension among the hearing-impaired.
Psychology and Aging | 2018
Joseph D. W. Stephens; Amy A. Overman
In this article, we apply the REM model (Shiffrin & Steyvers, 1997) to age differences in associative memory. Using Criss and Shiffrin’s (2005) associative version of REM, we show that in a task with pairs repeated across 2 study lists, older adults’ reduced benefit of pair repetition can be produced by a general reduction in the diagnosticity of information stored in memory. This reduction can be modeled similarly well by reducing the overall distinctiveness of memory features, or by reducing the accuracy of memory encoding. We report a new experiment in which pairs are repeated across 3 study lists and extend the model accordingly. Finally, we extend the model to previously reported data using the same task paradigm, in which the use of a high-association strategy introduced proactive interference effects in young adults but not older adults. Reducing the diagnosticity of information in memory also reduces the proactive interference effect. Taken together, the modeling and empirical results reported here are consistent with the claim that some age differences that appear to be specific to associative information can be produced via general degradation of information stored in memory. The REM model provides a useful framework for examining age differences in memory as well as harmonizing seemingly conflicting prior modeling approaches for the associative deficit.
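To make the two manipulations concrete, here is a simplified sketch of REM's familiarity computation (after Shiffrin & Steyvers, 1997): feature values are drawn from a geometric distribution with parameter g (distinctiveness) and copied correctly with probability c (encoding accuracy), and recognition odds are the mean likelihood ratio across stored traces. The reduced model and parameter values are illustrative only.

```python
# Simplified sketch of REM's familiarity calculation (Shiffrin & Steyvers,
# 1997). g sets feature distinctiveness (values drawn from a geometric
# distribution); c sets encoding accuracy. The full model has more detail;
# the default parameter values here are illustrative, not the article's.
import numpy as np

def trace_likelihood(probe, trace, c=0.7, g=0.4):
    """Likelihood ratio that this stored trace is a copy of the probe."""
    lam = 1.0
    for p, t in zip(probe, trace):
        if t == 0:                              # feature never stored
            continue
        if t == p:                              # stored value matches probe
            base_rate = g * (1 - g) ** (p - 1)  # P(value) under geometric(g)
            lam *= (c + (1 - c) * base_rate) / base_rate
        else:                                   # stored value mismatches
            lam *= 1 - c
    return lam

def familiarity(probe, traces, **kw):
    """Odds the probe is 'old': mean likelihood ratio over all traces."""
    return np.mean([trace_likelihood(probe, t, **kw) for t in traces])
```

In these terms, lowering g makes matching feature values less diagnostic, while lowering c makes stored copies less reliable; per the abstract, either degradation reproduces older adults' reduced benefit of pair repetition.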
Psychonomic Bulletin & Review | 2017
Amy A. Overman; Alison G. Richard; Joseph D. W. Stephens
Self-generation of information during memory encoding has large positive effects on subsequent memory for items, but mixed effects on memory for contextual information associated with items. A processing account of generation effects on context memory (Mulligan in Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 838–855, 2004; Mulligan, Lozito, & Rosner in Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(4), 836–846, 2006) proposes that these effects depend on whether the generation task causes any shift in processing of the type of context features for which memory is being tested. Mulligan and colleagues have used this account to predict various negative effects of generation on context memory, but the account also predicts positive generation effects under certain circumstances. The present experiment provided a critical test of the processing account by examining how generation affected memory for auditory rather than visual context. Based on the processing account, we predicted that generation of rhyme words should enhance processing of auditory information associated with the words (i.e., voice gender), whereas generation of antonym words should have no effect. These predictions were confirmed, providing support to the processing account.
Journal of the Acoustical Society of America | 2006
Joseph D. W. Stephens; Lori L. Holt
The integration of information across modalities is a key component of behavior in everyday settings. The current study examined the extent to which experience drives multimodal speech integration. Two groups of participants were trained on combinations of speech sounds with corresponding videos of an animated robot, whose movements and features bore no resemblance to speech articulators. Participants’ identification of acoustically presented consonants was influenced by simultaneous presentation of learned visual stimuli in a manner that reflected the correlation structure of auditory and visual cues in training. The influence of novel non-face visual cues on speech perception developed over the course of training, suggesting that experience altered the perceptual mechanisms used in combining this cross‐modal information. Pairings of auditory and visual cues given to two groups of participants resulted in patterns of bimodal perception that differed in systematic ways. Perceptual integration of the newly...
Journal of the Acoustical Society of America | 2002
Joseph D. W. Stephens; Lori L. Holt
Data from Japanese quail suggest that the effect of preceding liquids (/l/ or /r/) on response to subsequent stops (/g/ or /d/) arises from general auditory processes sensitive to the spectral structure of sound [A. J. Lotto, K. R. Kluender, and L. L. Holt, J. Acoust. Soc. Am. 102, 1134–1140 (1997)]. If spectral content is key, appropriate nonspeech sounds should influence perception of speech sounds and vice versa. The former effect has been demonstrated [A. J. Lotto and K. R. Kluender, Percept. Psychophys. 60, 602–619 (1998)]. The current experiment investigated the influence of speech on the perception of nonspeech sounds. Nonspeech stimuli were 80‐ms chirps modeled after the F2 and F3 transitions in /ga/ and /da/. F3 onset was increased in equal steps from 1800 Hz (/ga/ analog) to 2700 Hz (/da/ analog) to create a ten‐member series. During AX discrimination trials, listeners heard chirps that were three steps apart on the series. Each chirp was preceded by a synthesized /al/ or /ar/. Results showed co...
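As an illustrative reconstruction only (the synthesis is not described beyond the abstract): the ten onset frequencies can be generated as a linear series, and scipy's chirp gives a swept-sine stand-in for a single formant transition. The sample rate, the shared offset frequency, and the omission of the F2 component are assumptions.

```python
# Illustrative sketch, not the original stimulus code: a ten-member series
# of 80-ms swept sinusoids whose onset frequency rises in equal steps from
# 1800 Hz (/ga/ analog) to 2700 Hz (/da/ analog). The common offset
# frequency, sample rate, and missing F2 component are assumptions.
import numpy as np
from scipy.signal import chirp

SR = 22050                               # sample rate (assumed)
DUR = 0.08                               # 80-ms chirps (from the abstract)
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

f3_onsets = np.linspace(1800, 2700, 10)  # equal steps, ten members
OFFSET_HZ = 2500                         # assumed shared offset frequency

series = [chirp(t, f0=f0, t1=DUR, f1=OFFSET_HZ, method="linear")
          for f0 in f3_onsets]
```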
Journal of Memory and Language | 2012
Dahee Kim; Joseph D. W. Stephens; Mark A. Pitt