
Publication


Featured research published by Navin Viswanathan.


Psychonomic Bulletin & Review | 2009

A critical examination of the spectral contrast account of compensation for coarticulation

Navin Viswanathan; Carol A. Fowler; James S. Magnuson

Vocal tract gestures for adjacent phones overlap temporally, rendering the acoustic speech signal highly context dependent. For example, following a segment with an anterior place of articulation, a posterior segment’s place of articulation is pulled frontward, and listeners’ category boundaries shift appropriately. Some theories assume that listeners perceptually attune or compensate for coarticulatory context. An alternative is that shifts result from spectral contrast. Indeed, shifts occur when speech precursors are replaced by pure tones, frequency matched to the formant offset at the assumed locus of contrast (Lotto & Kluender, 1998). However, tone analogues differ from natural formants in several ways, raising the possibility that conditions for contrast may not exist in natural speech. When we matched tones to natural formant intensities and trajectories, boundary shifts diminished. When we presented only the critical spectral region of natural speech tokens, no compensation was observed. These results suggest that conditions for spectral contrast do not exist in typical speech.


Quarterly Journal of Experimental Psychology | 2009

Sentence comprehension affects the dynamics of bimanual coordination: Implications for embodied cognition

Annie J. Olmstead; Navin Viswanathan; Karen A. Aicher; Carol A. Fowler

Recent work in embodied cognition has demonstrated that language comprehension involves the motor system (e.g., Glenberg & Kaschak, 2002). Such findings are often attributed to mechanisms involving simulations of linguistically described events (Barsalou, 1999; Fischer & Zwaan, 2008). We propose that research paradigms in which simulation is the central focus need to be augmented with paradigms that probe the organization of the motor system during language comprehension. The use of well-studied motor tasks may be appropriate to this endeavour. To this end, we present a study in which participants perform a bimanual rhythmic task (Kugler & Turvey, 1987) while judging the plausibility of sentences. We show that the dynamics of the bimanual task differ when participants judge sentences describing performable actions as opposed to sentences describing events that are not performable. We discuss the general implications of our results for accounts of embodied cognition.


Quarterly Journal of Experimental Psychology | 2014

The role of speech-specific properties of the background in the irrelevant sound effect

Navin Viswanathan; Josh Dorsi; Stephanie George

The irrelevant sound effect (ISE) is the finding that serial recall performance is impaired under complex auditory backgrounds, such as speech, compared with white noise or silence. Several findings have demonstrated that the ISE occurs with nonspeech backgrounds and that the changing-state complexity of the background stimuli is critical to the ISE. In a pair of experiments, we investigate whether speech-like qualities of the irrelevant background have an effect beyond their changing-state complexity. We do so by using two kinds of transformations of speech with identical changing-state complexity: one that preserved speech-like information (sinewave speech and fully reversed sinewave speech) and another in which this information was distorted (two selectively reversed sinewave speech conditions). Our results indicate that even when changing-state complexity is held constant, sinewave speech conditions in which speech-like interformant relationships are disrupted produce less ISE than those in which these relationships are preserved. This indicates that speech-like properties of the background are important for the ISE beyond their changing-state complexity.


Frontiers in Psychology | 2013

Comparison of native and non-native phone imitation by English and Spanish speakers

Annie J. Olmstead; Navin Viswanathan; M. Pilar Aivar; Sarath Manuel

Experiments investigating phonetic convergence in conversation often focus on interlocutors with similar phonetic inventories. Extending these experiments to those with dissimilar inventories requires understanding the capacity of speakers to imitate native and non-native phones. In the present study, we tested native Spanish and native English speakers to determine whether imitation of non-native tokens differs qualitatively from imitation of native tokens. Participants imitated a [ba]–[pa] continuum that varied in VOT from −60 ms (prevoiced, Spanish [b]) to +60 ms (long lag, English [p]) such that the continuum consisted of some tokens that were native to Spanish speakers and some that were native to English speakers. Analysis of the imitations showed two critical results. First, both groups of speakers demonstrated sensitivity to VOT differences in tokens that fell within their native regions of the VOT continuum (prevoiced region for Spanish and long lag region for English). Second, neither group of speakers demonstrated such sensitivity to VOT differences among tokens that fell in their non-native regions of the continuum. These results show that, even in an intentional imitation task, speakers cannot accurately imitate non-native tokens, but are clearly flexible in producing native tokens. Implications of these findings are discussed with reference to the constraints on convergence in interlocutors from different linguistic backgrounds.


Attention Perception & Psychophysics | 2018

Comparing speech and nonspeech context effects across timescales in coarticulatory contexts

Navin Viswanathan; Damian G. Kelty-Stephen

Context effects are ubiquitous in speech perception and reflect the ability of human listeners to successfully perceive highly variable speech signals. In the study of how listeners compensate for coarticulatory variability, past studies have taken the similar effects of speech contexts and tone analogues of speech as strong support for speech-neutral, general auditory mechanisms of compensation for coarticulation. In this manuscript, we revisit compensation for coarticulation by replacing standard button-press responses with mouse-tracking responses and examining both standard geometric measures of uncertainty and newer information-theoretic measures that separate fast from slow mouse movements. We found that when our analyses were restricted to end-state responses, tone and speech contexts appeared to produce similar effects. However, a more detailed time-course analysis revealed systematic differences between speech and tone contexts such that listeners’ responses to speech contexts, but not to tone contexts, changed across the experimental session. Analyses of the time course of effects within trials using mouse tracking indicated that speech contexts elicited fewer x-position flips but more area under the curve (AUC) and maximum deviation (MD), and they did so in the slower portions of mouse-tracking movements. Our results indicate critical differences between the time courses of speech and nonspeech context effects and suggest that general auditory explanations, motivated by their apparent similarity, should be reexamined.


Journal of the Acoustical Society of America | 2016

Spatially separating language masker from target results in spatial and linguistic masking release

Navin Viswanathan; Kostas Kokkinakis; Brittany T. Williams

Several studies demonstrate that in complex auditory scenes, speech recognition is improved when the competing background and target speech differ linguistically. However, such studies typically utilize spatially co-located speech sources which may not fully capture typical listening conditions. Furthermore, co-located presentation may overestimate the observed benefit of linguistic dissimilarity. The current study examines the effect of spatial separation on linguistic release from masking. Results demonstrate that linguistic release from masking does extend to spatially separated sources. The overall magnitude of the observed effect, however, appears to be diminished relative to the co-located presentation conditions.


Quarterly Journal of Experimental Psychology | 2018

The role of speech fidelity in the irrelevant sound effect: Insights from noise-vocoded speech backgrounds

Josh Dorsi; Navin Viswanathan; Lawrence D. Rosenblum; James W. Dias

The Irrelevant Sound Effect (ISE) is the finding that background sound impairs accuracy for visually presented serial recall tasks. Among various auditory backgrounds, speech typically acts as the strongest distractor. Based on the changing-state hypothesis, speech is a disruptive background because it is more complex than other nonspeech backgrounds. In the current study, we evaluate an alternative explanation by examining whether the speech-likeness of the background (speech fidelity) contributes, beyond signal complexity, to the ISE. We did this by using noise-vocoded speech as a background. In Experiment 1, we varied the complexity of the background by manipulating the number of vocoding channels. Results indicate that the ISE increases with the number of channels, suggesting that more complex signals produce greater ISEs. In Experiment 2, we varied complexity and speech fidelity independently. At each channel level, we selectively reversed a subset of channels to design a low-fidelity signal that was equated in overall complexity. Experiment 2 results indicated that speech-like noise-vocoded speech produces a larger ISE than selectively reversed noise-vocoded speech. Finally, in Experiment 3, we evaluated the locus of the speech-fidelity effect by assessing the distraction produced by these stimuli in a missing-item task. In this task, even though noise-vocoded speech disrupted task performance relative to silence, neither its complexity nor speech fidelity contributed to this effect. Together, these findings indicate a clear role for speech fidelity of the background beyond its changing-state quality and its attention capture potential.


Psychonomic Bulletin & Review | 2018

Lexical exposure to native language dialects can improve non-native phonetic discrimination

Annie J. Olmstead; Navin Viswanathan

Nonnative phonetic learning is an area of great interest for language researchers, learners, and educators alike. In two studies, we examined whether nonnative phonetic discrimination of Hindi dental and retroflex stops can be improved by exposure to lexical items bearing the critical nonnative stops. We extend the lexical retuning paradigm of Norris, McQueen, and Cutler (Cognitive Psychology, 47, 204–238, 2003) by having naive American English (AE)-speaking participants perform a pretest-training-posttest procedure. They performed an AXB discrimination task with the Hindi retroflex and dental stops before and after transcribing naturally produced words from an Indian English speaker that either contained these tokens or not. Only those participants who heard words with the critical nonnative phones improved in their posttest discrimination. This finding suggests that exposure to nonnative phones in native lexical contexts supports learning of difficult nonnative phonetic discrimination.


Journal of the Acoustical Society of America | 2018

Does it have to be correct?: The effect of uninformative feedback on non-native phone discrimination

Annie J. Olmstead; Navin Viswanathan

Learning the phonetic inventory of a non-native language requires perceptual adjustment to non-native phones that sometimes belong to a single category in the learner’s native language. For example, English native speakers often struggle to learn the distinction between the Hindi phonemes [ʈ] and [t] that are both categorized as [t] in American English (AE). Olmstead and Viswanathan (2017) showed that AE listeners’ discrimination of these non-native phones could be improved using short exposure to naturally produced Indian English (IE) words that contained the target contrast. In the current study, we examine how feedback affects this lexical retuning effect. Specifically, we set up feedback schedules that either reinforce the consistent mapping of these consonants onto the AE speaker’s existing [t] and [θ] or that reinforce an inconsistent mapping. If the consistency of this mapping in IE is paramount to improving phonetic discrimination, then reinforcing it should strengthen the effect and providing a...


Journal of the Acoustical Society of America | 2018

Evaluating mechanisms underlying nonspeech context effects in coarticulatory compensation

Navin Viswanathan

Human speech listeners overcome variability in the speech signal due to different speakers, rates, phonetic contexts, etc., by demonstrating context-appropriate perceptual shifts. From a general auditory perspective, perceptual systems heighten contrastive spectral and temporal properties of the acoustic signal to help listeners cope with variability (Diehl et al., 2004). In this study, I focus on a spectral contrast account of perceptual coarticulatory compensation and evaluate the claim that listeners cope with coarticulation by tracking spectral averages across multiple segments (Holt, 2005) while ignoring spectral variability (Holt, 2006). In Experiment 1, using tone analogue contexts (Lotto & Kluender, 1998), I created multi-tone contexts such that the global spectral average of each trial was pitted against the frequency of the final tone that immediately preceded the target speech. Interestingly, nonspeech context effects were determined by the frequency of the final tone rather than the global trial av...

Collaboration


Dive into Navin Viswanathan's collaborations.

Top Co-Authors

Carol A. Fowler

University of Connecticut

Josh Dorsi

University of California

James W. Dias

University of California
