Publication


Featured research published by Janet C. Rutledge.


IEEE Engineering in Medicine and Biology Magazine | 1996

Reducing correlated noise in digital hearing aids

Nathaniel A. Whitmal; Janet C. Rutledge; Jonathan Cohen

The intelligibility of speech in communication systems is generally reduced by interfering noise. This interference, which can take the form of environmental noise, reverberation, competing speech, or electronic channel noise, reduces intelligibility by masking the signal of interest. The reduction in intelligibility is particularly troublesome for listeners with hearing impairments, who have greater difficulty understanding speech in the presence of noise than do normal-hearing listeners. Numerous digital signal processing (DSP)-based speech enhancement systems have been proposed to improve intelligibility in the presence of noise. Several of these systems have difficulty distinguishing between noise and consonants, and consequently attenuate both. Other methods, which use imprecise estimates of the noise, create audible artifacts that further mask consonants. The objective of the present study is to develop a new noise-reduction method that can reduce additive noise without impairing intelligibility. The new method could be used to improve intelligibility in a wide variety of applications, with special attention given to digital hearing aids and other portable communication systems (e.g., cellular telephones). Here, the authors present a new wavelet-based method for reducing correlated noise in noisy speech signals. The authors provide background information on the intelligibility problem and on previous attempts to address it. A theoretical framework is then proposed for reduction of correlated noise, along with some preliminary experimental results.


IEEE Transactions on Signal Processing | 1993

Wavelet analysis in recruitment of loudness compensation

Laura A. Drake; Janet C. Rutledge; Jonathan Cohen

Wavelet-based multiband dynamic range compression is developed to compensate for a common hearing impairment known as recruitment of loudness. The algorithm combines standard compression with intensity-level dependent gain calculation. Complexity and performance are similar to traditional techniques. Further, the methods established can be applied to the more adaptive wavelet packets and local cosine bases which model the speech signal more closely.
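The multiband structure described above can be illustrated with a minimal sketch: split the signal into a few frequency bands, form a short-time level estimate in each band, and apply a level-dependent gain before recombining the bands. This is a hypothetical Butterworth-filterbank version, not the paper's wavelet implementation; the band edges, threshold, compression ratio, and time constant are placeholder values.

```python
# Minimal multiband dynamic range compression sketch (hypothetical parameters,
# Butterworth filterbank rather than the paper's wavelet filterbank).
import numpy as np
from scipy.signal import butter, sosfilt

def compress_band(band, fs, threshold_db=-40.0, ratio=3.0, tau=0.010):
    """Static compression above threshold_db using a one-pole envelope tracker."""
    alpha = np.exp(-1.0 / (tau * fs))              # envelope smoothing coefficient
    env = np.zeros(len(band))
    acc = 0.0
    for n, mag in enumerate(np.abs(band)):
        acc = alpha * acc + (1.0 - alpha) * mag
        env[n] = acc
    level_db = 20.0 * np.log10(env + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = (1.0 / ratio - 1.0) * over_db        # reduce gain above the knee
    return band * 10.0 ** (gain_db / 20.0)

def multiband_compress(x, fs, edges=(100, 500, 2000, 6000)):
    """Split x into bands (placeholder edges, assumes fs > 12 kHz), compress, sum."""
    x = np.asarray(x, dtype=float)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out += compress_band(sosfilt(sos, x), fs)
    return out
```

In a recruitment-of-loudness fitting, the threshold and ratio would normally differ per band to match the listener's loudness growth; the fixed values here are only placeholders.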


multimedia signal processing | 1997

Frame rate and viseme analysis for multimedia applications

Jay J. Williams; Janet C. Rutledge; Dean C. Garstecki; Aggelos K. Katsaggelos

In the future, multimedia technology will be able to provide video frame rates equal to or better than 30 frames per second (fps). Until that time, the hearing-impaired community will be using band-limited communication systems over unshielded twisted-pair copper wiring. As a result, multimedia communication systems will use a coder/decoder (CODEC) to compress the video and audio signals for transmission. For these systems to be usable by the hearing-impaired community, the algorithms within the CODEC have to be designed to account for the perceptual boundaries of the hearing impaired. We investigate the perceptual boundaries of speech reading and multimedia technology, which are the constraints that affect speech-reading performance. We analyze and draw conclusions on the relationship between viseme groupings, accuracy of viseme recognition, and presentation rate. These results are critical in the design of multimedia systems for the hearing impaired.


international conference on acoustics, speech, and signal processing | 1996

Multichannel dynamic range compression for music signals

Jon C. Schmidt; Janet C. Rutledge

Dynamic range compression reduces the level differences between the high- and low-intensity portions of an audio signal. Applications include hearing aids and professional audio. Many applications strive for transparent compression, where the output signal is to be as perceptually similar to the original signal as possible. Most compression algorithms are very similar, with variations existing primarily in the implementation details of a common set of parameters. This research effort introduces several fundamentally new approaches to dynamic range compression, including a critical-band multichannel structure, attack and release rates, a level-estimate mode control, and a normalization of the level estimates across the frequency bands. It also applies multidimensional scaling techniques to the analysis of subject testing performed using musical stimuli. This analysis allows us to evaluate the interactions of multiple compression parameters and to formulate a much-needed metric pertaining to the degree of compression achieved.
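The attack and release behavior mentioned above can be sketched as a level estimator whose smoothing coefficient depends on whether the input envelope is rising or falling. The time constants below are illustrative defaults, not values from the paper.

```python
import numpy as np

def level_estimate(x, fs, attack_ms=5.0, release_ms=80.0):
    """Envelope tracker with separate attack/release time constants (illustrative)."""
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
    env = np.zeros(len(x))
    acc = 0.0
    for n, mag in enumerate(np.abs(x)):
        a = a_att if mag > acc else a_rel      # fast rise, slow decay
        acc = a * acc + (1.0 - a) * mag
        env[n] = acc
    return env
```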


multimedia signal processing | 1998

Frame Rate and Viseme Analysis for Multimedia Applications to Assist Speechreading

Jay J. Williams; Janet C. Rutledge; Aggelos K. Katsaggelos; Dean C. Garstecki

Current video conference and phone systems do not provide the necessary temporal resolution and motion for speechreading. In this paper, the perceptual boundaries that affect speechreading performance are investigated. Analysis of the relationships between viseme groupings, accuracy of viseme recognition, and presentation frame rate is presented based on the results of subject testing. Results reveal a minimum frame rate of 10 frames per second (fps) for distinguishing viseme groupings. Confusion analysis results demonstrate the importance of the tongue and teeth oral features for speechreading. These results are critical to the design of speech-assisted video systems to enhance speechreading for individuals with impaired hearing.


international conference on acoustics, speech, and signal processing | 1995

Wavelet-based noise reduction

Nathaniel A. Whitmal; Janet C. Rutledge; Jonathan Cohen

A novel method for enhancement of noisy speech is presented. Frames of speech samples are split into low and high frequency bands and projected onto a library of bases consisting of local trigonometric functions and wavelet packets. Coefficients thought to represent only the speech are selected by means of the minimum description length (MDL) criterion, and used to synthesize an estimate of the original speech. A tracking algorithm uses MDL values to choose between the MDL processor and alternate processors which reject audible artifacts. Preliminary results indicate that the new algorithm may be useful in applications requiring a single-microphone noise reduction system for speech.
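A simplified version of the coefficient-selection idea can be sketched as follows: expand a noisy frame in an orthonormal basis, keep the k largest-magnitude coefficients, and choose k with an MDL-style cost that trades the number of retained coefficients against the residual energy. The sketch below substitutes a single DCT basis and a common 1.5·k·log N penalty for the paper's best-basis search over wavelet packets and local trigonometric bases, so it illustrates the selection principle rather than the authors' algorithm.

```python
import numpy as np
from scipy.fft import dct, idct

def mdl_denoise_frame(frame):
    """Keep the k largest DCT coefficients, choosing k by an MDL-style cost."""
    N = len(frame)
    c = dct(np.asarray(frame, dtype=float), norm="ortho")
    order = np.argsort(np.abs(c))[::-1]            # coefficients by decreasing magnitude
    energy = np.cumsum(c[order] ** 2)
    total = energy[-1]
    best_k, best_cost = 0, np.inf
    for k in range(1, N):
        resid = max(total - energy[k - 1], 1e-12)  # energy of the discarded coefficients
        cost = 1.5 * k * np.log(N) + 0.5 * N * np.log(resid / N)
        if cost < best_cost:
            best_k, best_cost = k, cost
    kept = np.zeros(N)
    kept[order[:best_k]] = c[order[:best_k]]
    return idct(kept, norm="ortho")
```

Applied frame by frame, this keeps the strong, speech-like coefficients and discards the many small coefficients that a broadband noise floor produces.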


Academic Medicine | 2012

PROMISE: Maryland's Alliance for Graduate Education and the Professoriate Enhances Recruitment and Retention of Underrepresented Minority Graduate Students

Renetta G. Tull; Janet C. Rutledge; Frances D. Carter; Jordan E. Warnick

PROMISE: Maryland's Alliance for Graduate Education and the Professoriate (AGEP), sponsored by the National Science Foundation, is a consortium designed to increase the number of underrepresented minority (URM) PhDs in science, technology, engineering, and mathematics fields who will pursue academic careers. A strength of PROMISE is its alliance infrastructure, which connects URM graduate students on different campuses through centralized programming for the three research universities in Maryland: the University of Maryland Baltimore County (the lead institution in the alliance), the University of Maryland College Park, and the University of Maryland Baltimore (UMB). PROMISE initiatives cover graduate student recruitment, retention, community building, PhD completion, and transition to careers. Although it is not a fellowship, PROMISE offers professional development and skill-building programs that provide academic and personal support for URM students on all three campuses. PROMISE on UMB's campus includes the School of Medicine, which sponsors tricampus programs that promote health and wellness to accompany traditional professional development programs; PROMISE is atypical among AGEP alliances in including a medical school. The PROMISE programs serve as interventions that reduce isolation and facilitate degree completion among diverse students on each campus. This article describes details of the PROMISE AGEP and presents suggestions for replicating professional development programs for URMs in biomedical, MD/master's, and MD/PhD programs on other campuses.


international conference on acoustics, speech, and signal processing | 1995

Synthesizing styled speech using the Klatt synthesizer

Janet C. Rutledge; Kathleen E. Cummings; Daniel Lambert; Mark A. Clements

This paper reports the implementation of high-quality synthesis of speech with varying speaking styles using the Klatt (1980) synthesizer. The work builds on previously reported research which determined that the glottal waveforms of various styles of speech are significantly and identifiably different. Given the parameter tracks that control the synthesis of a normal version of an utterance, the parameters that control known acoustic correlates of speaking style are varied appropriately, relative to normal, to synthesize styled speech. In addition to the parameters that control the glottal waveshape, phoneme duration, phoneme intensity, and pitch contour are also varied. Listening tests were performed demonstrating that the synthetic speech is perceptibly and appropriately styled and that it sounds natural; the results are presented.
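As a purely illustrative sketch of the "vary relative to normal" step described above, the fragment below scales hypothetical per-phoneme duration, intensity, and pitch (F0) tracks by style-dependent factors before resynthesis. The style names, factor values, and function names are made up for illustration, and the Klatt synthesizer itself is not shown.

```python
import numpy as np

# Hypothetical style factors (not values from the paper).
STYLE_FACTORS = {
    "slow":  {"duration": 1.3, "intensity_db": -2.0, "f0": 0.9},
    "angry": {"duration": 0.8, "intensity_db": +4.0, "f0": 1.2},
}

def style_parameter_tracks(duration_ms, intensity_db, f0_hz, style):
    """Return styled copies of per-phoneme duration, intensity, and F0 tracks."""
    f = STYLE_FACTORS[style]
    return (np.asarray(duration_ms, dtype=float) * f["duration"],
            np.asarray(intensity_db, dtype=float) + f["intensity_db"],
            np.asarray(f0_hz, dtype=float) * f["f0"])
```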


EURASIP Journal on Advances in Signal Processing | 2009

A computational auditory scene analysis-enhanced beamforming approach for sound source separation

Laura A. Drake; Janet C. Rutledge; Jiucai Zhang; Aggelos K. Katsaggelos

Hearing aid users have difficulty hearing target signals, such as speech, in the presence of competing signals or noise. Most solutions proposed to date enhance or extract target signals from background noise and interference based on either location attributes or source attributes. Location attributes typically involve arrival angles at a microphone array. Source attributes include characteristics that are specific to a signal, such as fundamental frequency, or statistical properties that differentiate signals. This paper describes a novel approach to sound source separation, called computational auditory scene analysis-enhanced beamforming (CASA-EB), that achieves increased separation performance by combining the complementary techniques of CASA (a source attribute technique) with beamforming (a location attribute technique), complementary in the sense that they use independent attributes for signal separation. CASA-EB performs sound source separation by temporally and spatially filtering a multichannel input signal, and then grouping the resulting signal components into separated signals, based on source and location attributes. Experimental results show increased signal-to-interference ratio with CASA-EB over beamforming or CASA alone.
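The beamforming (location-attribute) half of the approach can be illustrated with a minimal frequency-domain delay-and-sum beamformer for a uniform linear array. This is a generic sketch, not the paper's CASA-EB implementation; the microphone spacing, look direction, and function name are assumptions, and the CASA grouping stage is omitted.

```python
import numpy as np

def delay_and_sum(frames, fs, mic_spacing=0.04, angle_deg=30.0, c=343.0):
    """Steer a uniform linear array toward angle_deg and average the channels.

    frames: array of shape (num_mics, num_samples), one row per microphone.
    """
    num_mics, n = frames.shape
    # Relative arrival delays for a plane wave from angle_deg (far-field model).
    delays = np.arange(num_mics) * mic_spacing * np.sin(np.radians(angle_deg)) / c
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(frames, axis=1)
    # Phase-align each channel to the look direction, then average across mics.
    aligned = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n)
```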


Journal of the Acoustical Society of America | 1996

Cepstral analysis of "cold-speech" for speaker recognition: A second look

Renetta G. Tull; Janet C. Rutledge; Charles R. Larson

Speech affected by colds continues to be an issue in speaker recognition technology. This study is a continuation of the "cold-affected" speech project [R. G. Tull and J. C. Rutledge, J. Acoust. Soc. Am. 99, 2549(A) (1996)], which compared "cold-speech" and normal/healthy speech of a male subject to analyze differences in vowel durations and mel-cepstral coefficients. This new study analyzes the speech of two additional male subjects during a cold and after a cold to test speaker intrasession variability ("cold-speech" within the same recording session, normal speech within the same session) and speaker intersession variability ("cold-speech" versus normal speech on different days). The sentence used for recording is a recitation of the numbers "1 2 3 4 5 6 7 8 9 10." The lower-order mel-cepstral coefficients are chosen as parameters (independent variable) in this study. The research for this study shows that there are patterns in the coefficients (c2 and c3) of "cold-speech" that are different from the patterns...
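As a rough modern illustration of the analysis described above, the snippet below extracts mel-cepstral coefficients from a "cold" and a healthy recording of the digit sentence and compares the mean values of the lower-order coefficients c2 and c3. The use of librosa, the 16 kHz sample rate, and the file names are assumptions for illustration; they are not the tooling or data of the 1996 study.

```python
import numpy as np
import librosa

def lower_order_cepstra(path, n_mfcc=13):
    """Return the per-frame MFCC matrix for a recording (rows are c0, c1, c2, ...)."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

# Hypothetical file names for the digit sentence "1 2 3 4 5 6 7 8 9 10".
cold = lower_order_cepstra("digits_cold.wav")
healthy = lower_order_cepstra("digits_healthy.wav")

for idx in (2, 3):  # the lower-order coefficients discussed in the abstract
    print(f"c{idx}: cold mean = {cold[idx].mean():+.2f}, "
          f"healthy mean = {healthy[idx].mean():+.2f}")
```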

Collaboration


Dive into Janet C. Rutledge's collaboration.

Top Co-Authors

Mark A. Clements

Georgia Institute of Technology
