Publication


Featured research published by Radhika Aravamudhan.


Otology & Neurotology | 2014

Preservation of auditory brainstem response thresholds after cochleostomy and titanium microactuator implantation in the lateral wall of cat scala tympani.

S. George Lesinski; Jessica Prewitt; Victor Bray; Radhika Aravamudhan; Oscar A. Bermeo Blanco; Brenda L. Farmer-Fedor; Jonette A. Ward

Hypothesis The safety of implanting a titanium microactuator into the lateral wall of cat scala tympani was assessed by comparing preoperative and postoperative auditory brainstem response (ABR) thresholds over 1 to 3 months. Background The safety of directly stimulating cochlear perilymph with an implantable hearing system requires maintaining preoperative hearing levels. This cat study is an essential step in the development of the next generation of fully implantable hearing devices for humans. Methods Following GLP surgical standards, a 1-mm cochleostomy was drilled into the lateral wall of the scala tympani, and a nonfunctioning titanium anchor/microactuator assembly was inserted in 8 cats. The scala media was damaged in 1 cat. ABR thresholds with click and 4- and 8-kHz stimuli were measured preoperatively and compared with postoperative thresholds at 1, 2, and 3 months. Nonimplanted-ear thresholds were also measured to establish the criterion for statistically significant threshold shifts (>28.4 dB). Two audiologists independently interpreted thresholds. Results Postoperatively, the 7 cats implanted in the scala tympani demonstrated no significant ABR threshold shift for the click stimulus; one showed significant shifts for the 4- and 8-kHz stimuli. The eighth cat, with surgical damage to the scala media, maintained a stable click threshold but showed a significant shift for the 4- and 8-kHz stimuli. Conclusion This cat study provides no evidence of worsening hearing thresholds after fenestration of the scala tympani and insertion of a titanium anchor/microactuator, provided there is no surgical trauma to the scala media and the implanted device is securely anchored in the cochleostomy. These 2 issues have been resolved in the development of a fully implantable hearing system for humans. The long-term hearing stability (combined with histologic studies) reaffirms that the microactuator is well tolerated by the cat cochlea.
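
As a rough illustration of the threshold comparison described above, the sketch below flags any stimulus whose postoperative ABR threshold rose by more than the 28.4 dB criterion quoted in the abstract. The threshold values, dictionary layout, and helper function are invented for the example and are not the study's data.

```python
# Minimal sketch: compare pre- and post-operative ABR thresholds per stimulus.
PRE = {"click": 35, "4kHz": 30, "8kHz": 40}        # hypothetical pre-op thresholds (dB)
POST_3MO = {"click": 40, "4kHz": 65, "8kHz": 75}   # hypothetical 3-month post-op thresholds (dB)
CRITERION_DB = 28.4                                # significance criterion quoted in the abstract

def significant_shifts(pre, post, criterion=CRITERION_DB):
    """Return the stimuli whose post-op threshold rose by more than the criterion."""
    return {stim: post[stim] - pre[stim]
            for stim in pre
            if post[stim] - pre[stim] > criterion}

print(significant_shifts(PRE, POST_3MO))   # with these placeholder numbers: {'4kHz': 35, '8kHz': 35}
```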


Journal of the Acoustical Society of America | 2004

Perceptual overshoot in listeners with cochlear implants

Radhika Aravamudhan; Andrew J. Lotto

Perceptual overshoot (PO), a phenomenon in which the boundary for perceived vowel categories shifts as a result of preceding formant transitions, has been demonstrated previously with synthetic vowels in CV contexts in normal‐hearing individuals. In one of the previous studies by the author, the same phenomenon was tested with sinewave analogs that mimicked the synthetic vowels. The results demonstrated that PO could be elicited for sinewave analogs after training on categorization of sinewave steady states. These findings suggest that PO may be partly mediated by general processes in the auditory system. In the current study, subjects with cochlear implants were presented with synthetic vowel and sinewave continua with and without transitions. It was predicted that PO would be greatly diminished or absent for this population because of the degraded nature of the spectral input. Implications of the results for theories of speech perception will be presented.
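
A common way to quantify a boundary shift of this kind is to fit a psychometric function to the categorization responses and compare the 50% crossover points between conditions. The sketch below (Python with SciPy) does this for made-up response proportions; the continuum steps, proportions, and logistic form are illustrative assumptions, not the study's stimuli or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: proportion of one category response along the continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def boundary(steps, proportions):
    """Fit a logistic and return the 50% crossover (the category boundary)."""
    (x0, _k), _ = curve_fit(logistic, steps, proportions, p0=[np.mean(steps), 1.0])
    return x0

steps = np.arange(1, 8)                                             # 7-step continuum (illustrative)
isolated = np.array([0.02, 0.05, 0.15, 0.45, 0.80, 0.95, 0.98])     # made-up response proportions
with_transition = np.array([0.05, 0.10, 0.35, 0.70, 0.92, 0.98, 0.99])

shift = boundary(steps, isolated) - boundary(steps, with_transition)
print(f"Boundary shift (perceptual overshoot): {shift:.2f} continuum steps")
```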


Journal of the Acoustical Society of America | 2010

Presence of preceding sound affects the neural representation of speech sounds: Behavioral data.

Kathy M. Carbonell; Radhika Aravamudhan; Andrew J. Lotto

Traditionally, context‐sensitive speech perception has been demonstrated by eliciting shifts in target sound categorization through manipulation of the phonemic/spectral content of surrounding context. For example, changing the third formant frequency of a preceding context (from /al/ to /ar/) can result in significant shifts in target categorization (from /ga/ to /da/). However, it is probable that the most salient difference in context is between the presence or absence of any other sound. The question becomes whether this large change in context has substantial effects on target categorization as well. In the current study, participants were asked to categorize members of a series of syllables varying from /ga/ to /da/ presented in isolation or following /al/, /ar/, or /a/. The typical shifts in categorization were obtained for /al/ versus /ar/ contexts, but the shift in response between isolated presentation and any of the audible context conditions was much larger (with more /da/ responses in isolat...


Journal of the Acoustical Society of America | 2005

Phonetic context effects in adult listeners with cochlear implants

Radhika Aravamudhan; Andrew J. Lotto

From previous studies it is known that normal‐hearing (NH) listeners have the ability to compensate for the acoustic variability present in speech through context‐dependent perception of speech sounds. One question of practical and theoretical interest is whether listeners with cochlear implants (CI) also show context‐dependent speech perception. Because of the limited spectral resolution of the input, the representation of speech for CI listeners may differ from that for NH listeners, which may interfere with perceptual compensation. In a test of this prediction, adult postlingually deafened CI listeners did not demonstrate the contrastive context effects elicited in NH listeners for either /da/–/ga/ targets and /al/–/ar/ contexts or V targets and /b—b/–/d—d/ contexts. In contrast, as predicted by the good temporal resolution of the CI signal, CI listeners showed normal effects of vowel length on preceding glide‐stop categorization. CI simulations with NH listeners were also performed for some of these context ...


Journal of the Acoustical Society of America | 2003

Perceptual overshoot with speech and nonspeech sounds

Radhika Aravamudhan; John W. Hawks

One of the basic quests in speech perception research has been to identify the differences or similarities in the mechanisms involved in the perception of speech and nonspeech sounds. The current study addresses these differences by comparing perceptual overshoot for synthetic vowels and for sinewave acoustic replicas of those vowels. Lindblom and Studdert‐Kennedy (1967) demonstrated that the perceptual boundary for steady state vowels differs from that for vowels in a CV context with an F2 transition; they called this phenomenon perceptual compensation, or perceptual overshoot. In the current study the perceptual boundaries for synthetic steady state vowels, steady state sinewave acoustic replicas of vowels, a vowel in the CV context with an F2 transition, and sinewave acoustic replicas of vowels in the CV context are compared. The results will be discussed in the poster. For the Speech Communication Best Student Paper Award.
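
A sinewave acoustic replica of this sort is typically built by replacing each formant with a single tone at the formant frequency, removing the harmonic structure of natural or synthetic speech. The sketch below shows one minimal way to do that in Python; the sampling rate, formant values, amplitudes, and the linear F2 glide are assumptions made for illustration, not the stimuli used in the study.

```python
import numpy as np

FS = 16000          # sampling rate (Hz); illustrative only
DUR = 0.25          # 250-ms steady-state portion

def sinewave_replica(formants_hz, amplitudes, fs=FS, dur=DUR):
    """Build a sinewave analog: one tone per formant, no harmonic structure."""
    t = np.arange(int(fs * dur)) / fs
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(formants_hz, amplitudes))

# Steady-state analog roughly in the /i/ region (hypothetical formant values)
steady = sinewave_replica([300, 2300, 3000], [1.0, 0.5, 0.25])

# CV-like analog: let the second tone glide, mimicking an F2 transition into the vowel
t = np.arange(int(FS * DUR)) / FS
f2_glide = np.linspace(1800, 2300, t.size)         # linear F2 trajectory (assumed)
phase = 2 * np.pi * np.cumsum(f2_glide) / FS        # integrate frequency to get the glide's phase
cv_like = (np.sin(2 * np.pi * 300 * t)
           + 0.5 * np.sin(phase)
           + 0.25 * np.sin(2 * np.pi * 3000 * t))
```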


Journal of the Acoustical Society of America | 2010

Presence of preceding sound affects the neural representation of speech sounds: Frequency following response data.

Radhika Aravamudhan; Kathy M. Carbonell; Andrew J. Lotto

A substantial body of literature has focused on context effects in speech perception in which manipulation of the phonemic or spectral content of preceding sounds (e.g., /al/ versus /ar/) results in a shift in the perceptual categorization of a target syllable (e.g., /da/ versus /ga/). In a previous study utilizing the frequency‐following response (FFR) to measure neural correlates of these context effects [R. Aravamudhan, J. Acoust. Soc. Am. 126, 2204], it was noted that the representation of target formant trajectories was much weaker when the stimulus was presented in isolation versus following some type of context. To examine this effect explicitly, a series of syllables varying from /da/ to /ga/ was presented to listeners either in isolation or following the syllables /a/, /al/, or /ar/ (with a 50‐ms silent gap between context and target). FFR measures were obtained from EEG recordings while participants listened passively. The resulting narrow‐band spectrograms over the grand averages demonstrated t...
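
In broad strokes, an FFR analysis of this kind averages many single-sweep EEG epochs and then computes a narrow-band spectrogram of the grand average. The sketch below illustrates the idea with placeholder random data; the sampling rate, epoch length, sweep count, and window settings are assumptions, not the recording parameters of the study.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 8000   # assumed EEG sampling rate for the FFR recordings

# trials: (n_sweeps, n_samples) array of single-sweep FFR epochs; placeholder random data here
rng = np.random.default_rng(0)
trials = rng.normal(size=(2000, int(0.3 * FS)))

# Averaging across sweeps pulls the phase-locked FFR out of the background EEG noise.
grand_average = trials.mean(axis=0)

# Narrow-band spectrogram: a long analysis window trades time resolution for fine
# frequency resolution, which is what makes formant-related energy visible in the FFR.
freqs, times, power = spectrogram(grand_average, fs=FS, nperseg=512, noverlap=480)
```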


Journal of the Acoustical Society of America | 2009

Neural representation of speech sounds: Study using frequency following response.

Radhika Aravamudhan

The neural encoding of an acoustic signal begins in the auditory nerve and travels through the auditory brainstem to the auditory cortex. Previous studies have used nonspeech signals such as tones and clicks to evaluate the integrity and synchrony of the auditory pathway, and most prior research has relied on behavioral methods to examine how an acoustic signal is perceived. Understanding how we hear a signal, however, depends in part on how that signal is represented along the auditory pathway. Many studies have examined the acoustic parameters that influence the perception of speech, but few have focused on how these acoustic changes are represented in the auditory pathway. Because the representation of the signal in the auditory pathway is crucial to how the signal is perceived, studies relating changes in the input acoustics to changes in the neural representation are essential. In the current project the neural repre...


Journal of the Acoustical Society of America | 2007

Phonetic context effects in normal‐hearing listeners using acoustic simulations of cochlear implant signal

Radhika Aravamudhan; Andrew J. Lotto

Recent work has indicated that the perception of speech sounds shifts as a function of preceding and following context and that these shifts are due in part to the particular spectral makeup of the context sounds [Holt et al., J. Acoust. Soc. Am. 108, 710–722 (2000)]. Aravamudhan and Lotto [J. Acoust. Soc. Am. 118, 1962–1963 (2005)] studied these context effects in listeners with cochlear implants (CI), and the results demonstrated absent or abnormal context‐dependent speech perception. The lack of normal context effects in CI users may have practical implications for situations in which there is substantial coarticulation (e.g., nonlaboratory speech) or talker variability (e.g., switching between multiple speakers). Since the representation of speech in CI listeners differs from that in NH listeners, the current study was designed to investigate the effects of processing normal signals through a CI processor with NH listeners. In particular, we investigated how the spectral resolution of implant input could limit...
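
Acoustic CI simulations for NH listeners are commonly built with a noise-band vocoder: the signal is split into a few frequency channels, each channel's temporal envelope is extracted, and the envelopes modulate band-limited noise carriers. The sketch below is a minimal Python version of that generic technique; the channel count, filter orders, and cutoff values are illustrative assumptions, and it is not the processor used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(signal, fs, n_channels=4, lo=100.0, hi=7000.0):
    """Crude noise-band vocoder of the kind used for acoustic CI simulations."""
    edges = np.geomspace(lo, hi, n_channels + 1)      # log-spaced channel edges (assumed)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for low, high in zip(edges[:-1], edges[1:]):
        band = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfiltfilt(band, signal)))          # channel envelope
        smooth = butter(2, 160.0, btype="lowpass", fs=fs, output="sos")
        envelope = sosfiltfilt(smooth, envelope)                       # smooth the envelope
        carrier = sosfiltfilt(band, rng.normal(size=signal.size))      # band-limited noise carrier
        out += envelope * carrier
    return out

# Example use (speech_samples is a float array sampled above 14 kHz):
#   vocoded = noise_vocoder(speech_samples, fs=16000)
```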


Journal of the Acoustical Society of America | 2005

The influence of stress on some acoustic correlates to the stop voicing distinction in French

Nassima Abdelli‐Beruh; Radhika Aravamudhan

This study examined how monolingual French speakers produced the stop voicing distinction in stressed and unstressed syllable‐initial stops. Syllables were embedded in sentences. Voicing‐related differences in the durations of VOT, closure, and vowel were calculated and analyzed as a function of stress (stress on the target syllable, stress on the syllable preceding the target syllable). Percentages of closures with voicing were tallied as a function of the voicing category of the stops and the stress condition. Results from an ANOVA showed that the absolute durations were smaller in the unstressed than in the stressed condition. The magnitudes of the voicing‐related duration differences in VOT, closure, and vowel were also influenced by stress.
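
The duration analysis described above amounts to comparing measurements across stress conditions with an ANOVA. The sketch below runs a one-way ANOVA on hypothetical VOT values with SciPy; the numbers are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical VOT measurements (ms), not the study's data, grouped by stress condition.
vot_stressed   = np.array([32, 35, 30, 38, 34, 36])
vot_unstressed = np.array([24, 27, 22, 29, 25, 26])

f_stat, p_value = f_oneway(vot_stressed, vot_unstressed)
print(f"Effect of stress on VOT: F = {f_stat:.2f}, p = {p_value:.3f}")
```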


Journal of the Acoustical Society of America | 2005

The influence of stress on the /d/–/t/ distinction in French

Nassima Abdelli‐Beruh; Radhika Aravamudhan

This study examined how monolingual French speakers produced the /d/–/t/ distinction in stressed and unstressed syllable‐initial stops preceded by a voiceless phone (/s/). Syllables were embedded in sentences. Sentence durations and voicing‐related differences in durations of preceding vowel, /s/, stop closure, and VOT were calculated and analyzed as a function of the stress condition separately for each speaker (stressed syllables spoken at normal speaking rate, unstressed syllables produced at normal speaking rate). Preliminary analyses reveal that the vowel and the voiceless fricative preceding the unstressed target syllables were longer than the vowel and the fricative preceding the stressed target syllables. Closure durations were also longer in the unstressed condition than in the stressed condition. However, voicing‐related duration differences were not systematically affected by stress. Finally, the voicing of /s/ (/s/ before /d/) and of /d/ closures, which occurred frequently in the stress condit...
