Publication


Featured research published by Eric Lindemann.


Journal of the Acoustical Society of America | 2002

Musical synthesizer capable of expressive phrasing

Eric Lindemann

The present invention describes a device and methods for synthesizing a musical audio signal. The invention includes a device for storing a collection of sound segments taken from idiomatic musical performances. Some of these sound segments include transitions between musical notes such as the slur from the end of one note to the beginning of the next. Much of the complexity and expressivity in musical phrasing is associated with the complex behavior of these transition segments. The invention further includes a device for generating a sequence of sound segments in response to an input control sequence--e.g. a MIDI sequence. The sound segments are associated with musical gesture types. The gesture types include attack, release, transition, and sustain. The sound segments are further associated with musical gesture subtypes. Large upward slur, small upward slur, large downward slur, and small downward slur are examples of subtypes of the transition gesture type. Event patterns in the input control sequence lead to the generation of a sequence of musical gesture types and subtypes, which in turn leads to the selection of a sequence of sound segments. The sound segments are combined to form an audio signal and played out by a sound segment player. The sound segment player pitch-shifts and intensity-shifts the sound segments in response to the input control sequence.
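
The abstract above describes gesture-based segment selection at a high level. As a rough illustration only, the Python sketch below maps a toy note sequence to attack/sustain/transition/release labels of the kind that would index a segment library; the NoteEvent structure, the 4-semitone slur threshold, and all names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of gesture labeling for segment selection; names and
# thresholds are illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int      # MIDI note number
    velocity: int   # MIDI velocity (unused here, kept for realism)
    legato: bool    # True if this note is slurred from the previous one

def transition_subtype(prev_pitch: int, next_pitch: int) -> str:
    """Classify a slur by direction and size of the pitch interval."""
    interval = next_pitch - prev_pitch
    size = "large" if abs(interval) > 4 else "small"   # 4-semitone threshold is arbitrary
    direction = "upward" if interval > 0 else "downward"
    return f"{size} {direction} slur"

def gestures_for_phrase(notes: list) -> list:
    """Map a note sequence to gesture labels (attack, sustain, transition, release)
    that would be used to look up pre-recorded sound segments."""
    gestures = ["attack", "sustain"]
    for prev, cur in zip(notes, notes[1:]):
        if cur.legato:
            gestures.append("transition: " + transition_subtype(prev.pitch, cur.pitch))
        else:
            gestures.extend(["release", "attack"])
        gestures.append("sustain")
    gestures.append("release")
    return gestures

phrase = [NoteEvent(60, 90, False), NoteEvent(67, 80, True), NoteEvent(65, 70, True)]
print(gestures_for_phrase(phrase))
# ['attack', 'sustain', 'transition: large upward slur', 'sustain',
#  'transition: small downward slur', 'sustain', 'release']
```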


Journal of the Acoustical Society of America | 2002

Encoding and synthesis of tonal audio signals using dominant sinusoids and a vector-quantized residual tonal signal

Eric Lindemann

Tonal audio signals can be modeled as a sum of sinusoids with time-varying frequencies, amplitudes, and phases. An efficient encoder and synthesizer of tonal audio signals is disclosed. The encoder determines time-varying frequencies, amplitudes, and, optionally, phases for a restricted number of dominant sinusoid components of the tonal audio signal to form a dominant sinusoid parameter sequence. These components are removed from the tonal audio signal to form a residual tonal signal. The residual tonal signal is encoded using a residual tonal signal encoder (RTSE). In one embodiment, the RTSE generates a vector quantization codebook (VQC) and residual codebook sequence (RCS). The VQC may contain time-domain residual waveforms selected from the residual tonal signal, synthetic time-domain residual waveforms with magnitude spectra related to the residual tonal signal, magnitude spectrum encoding vectors, or a combination of time-domain waveforms and magnitude spectrum encoding vectors. The tonal audio signal synthesizer uses a sinusoidal oscillator bank to synthesize a set of dominant sinusoid components from the dominant sinusoid parameter sequence generated during encoding. In one embodiment, a residual tonal signal is synthesized using a VQC and RCS generated by the RTSE during encoding. If the VQC includes time-domain waveforms, an interpolating residual waveform oscillator may be used to synthesize the residual tonal signal. The synthesized dominant sinusoids and synthesized residual tonal signal are summed to form the synthesized tonal audio signal.
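
For orientation, here is a minimal NumPy sketch of the dominant-sinusoid step: pick the strongest spectral peaks of one frame and subtract them to leave a residual. The peak picking, function names, and parameters are invented for illustration; the patent's encoder (peak tracking across frames, windowing and phase handling, the RTSE and its codebooks) is not reproduced.

```python
# Illustrative sketch only: extract dominant sinusoids from one analysis frame
# and form a residual. A real encoder would window the frame, interpolate peak
# frequencies, and track partials across frames; none of that is shown here.
import numpy as np

def dominant_sinusoids(frame, sample_rate, num_partials=8):
    """Return (freqs, amps, phases) of the strongest spectral peaks in one frame."""
    spectrum = np.fft.rfft(frame)
    mags = np.abs(spectrum)
    # Keep only local maxima so one partial does not claim adjacent bins.
    is_peak = (mags[1:-1] > mags[:-2]) & (mags[1:-1] > mags[2:])
    candidates = np.where(is_peak)[0] + 1
    peak_bins = np.sort(candidates[np.argsort(mags[candidates])[-num_partials:]])
    freqs = peak_bins * sample_rate / len(frame)
    amps = 2.0 * mags[peak_bins] / len(frame)
    phases = np.angle(spectrum[peak_bins])
    return freqs, amps, phases

def residual_signal(frame, sample_rate, freqs, amps, phases):
    """Subtract re-synthesized dominant partials; what remains is the residual
    tonal signal that would then be vector-quantized."""
    t = np.arange(len(frame)) / sample_rate
    dominant = sum(a * np.cos(2 * np.pi * f * t + p)
                   for f, a, p in zip(freqs, amps, phases))
    return frame - dominant

# Toy usage with bin-centered partials so the estimates come out clean.
sr, n = 16000, 1600
t = np.arange(n) / sr
tone = np.cos(2 * np.pi * 440 * t) + 0.3 * np.cos(2 * np.pi * 880 * t)
freqs, amps, phases = dominant_sinusoids(tone, sr, num_partials=2)
res = residual_signal(tone, sr, freqs, amps, phases)
print(np.round(freqs, 1), np.round(amps, 2))   # ~[440. 880.] and ~[1. 0.3]
```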


Journal of the Acoustical Society of America | 1996

Time varying frequency‐domain Wiener filtering based on binaural cues: A method of hearing aid noise reduction

Eric Lindemann

Environmental noises such as cocktail party babble, traffic, and restaurant clatter are a continuing source of irritation to hearing aid wearers. One approach to reducing this noise is to provide the wearer with a directional field of hearing. Sounds coming from the back and sides are attenuated, while sounds coming from the front, or look direction, are unattenuated. Several approaches to designing such systems have been attempted: directional microphones, passive microphone arrays, and classical adaptive filter microphone array beamformers. These techniques are compromised by poor performance in reverberant environments, undesirable packaging constraints, and large computational requirements. This paper describes an efficient algorithm for a two‐microphone directional hearing system based on time‐varying frequency‐domain Wiener filtering. The frequency‐dependent SNR estimates necessary to determine the time‐varying frequency‐dependent attenuation of the Wiener filter are based on interaural time‐of‐arri...
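
As a point of reference, the classical frequency-domain Wiener gain G(f) = SNR(f) / (1 + SNR(f)) can be sketched in a few lines. The sketch below assumes a per-bin SNR estimate is already available; the paper's actual contribution, deriving that estimate from interaural cues, is not shown here, and all names are illustrative.

```python
# Minimal sketch of a per-bin Wiener gain applied in the frequency domain.
# The SNR values are placeholders; in the paper they are estimated from
# binaural (interaural) cues, which this sketch does not attempt to reproduce.
import numpy as np

def wiener_gain(snr_linear):
    """Classical Wiener gain per frequency bin: G = SNR / (1 + SNR)."""
    return snr_linear / (1.0 + snr_linear)

def denoise_frame(frame, snr_per_bin):
    """Attenuate one frame bin-by-bin and return the time-domain result."""
    spectrum = np.fft.rfft(frame)
    return np.fft.irfft(spectrum * wiener_gain(snr_per_bin), n=len(frame))

# Toy usage: heavily attenuate bins where the (assumed) SNR is poor.
frame = np.random.randn(256)
snr = np.full(129, 0.1)        # 256-point rfft -> 129 bins; SNR of 0.1 everywhere
snr[10:20] = 10.0              # pretend the target speech dominates these bins
out = denoise_frame(frame, snr)
print(frame.shape, out.shape)
```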


Journal of the Acoustical Society of America | 1994

Evaluation of a prototype beamforming binaural hearing aid

Mark Terry; Chris Schweitzer; Eric Lindemann; John L. Melanson

A computerized version of the California consonant test [Terry et al., Ear Hear. 13, 70–79 (1992)] was used to evaluate a wearable digital beam‐forming binaural hearing aid. The binaural aid uses an analysis–synthesis method where phase and magnitude cues, derived from microphone placement at the ears, are used to attenuate sounds from locations other than the 0‐deg azimuth position. Normal hearing subjects, with their left ear occluded to reduce binaural cues, were used together with hearing impaired subjects for evaluation. Speech at 0 azimuth was presented via loudspeaker at 50 dBA. In the noise condition four talker babble at 35‐deg azimuth (54 dBA) was mixed with the target speech. The hearing aid was programmed to give a basic 6‐dB/oct preemphasis. Overall gain for the hearing impaired subjects was initially adjusted to give approximately a 70% intelligibility score for the speech alone condition. Subjects were instructed to maintain body and head position while responding to word choices shown on a...


Journal of the Acoustical Society of America | 2011

Virtual instrument player using probabilistic gesture grammar

Eric Lindemann

Expressive instruments such as violin require subtle control gestures: variations in bow velocity and pressure over the course of a note, changes in vibrato depth and speed, portamento gestures, etc. Music synthesizers that attempt to emulate these instruments are often driven from keyboard controllers or directly from score editor or sequencer software. These synthetic control sources do not generally provide the level of detailed control required by expressive instruments. Even if a synthesizer offers a rich set of realistic continuous control inputs, the effort and skill required by the user to manage these controls, e.g., by drawing expression and vibrato controller curves in a MIDI sequencer, is considerable. Often the user would prefer to treat the synthesizer as a combination instrument and virtual player. The virtual player receives limited score-like control input and infers the more detailed gestural control. This presentation describes a virtual player mechanism that employs a probabilistic not...
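
To make the idea of a probabilistic gesture grammar concrete, here is a small hypothetical sketch in which each note's control gesture is sampled from a distribution conditioned on its score-level context. The contexts, gesture labels, and probabilities are invented for illustration and are not the presentation's actual grammar.

```python
# Hedged sketch of the "virtual player" idea: sample a control gesture for each
# note from a context-conditioned distribution. All entries below are invented.
import random

# Toy P(gesture | score-level context).
GESTURE_GRAMMAR = {
    "long note, phrase end":  {"slow wide vibrato": 0.6, "straight tone": 0.3, "late vibrato onset": 0.1},
    "short note, mid phrase": {"light detache": 0.7, "accented attack": 0.3},
    "large upward interval":  {"portamento into note": 0.5, "clean slur": 0.5},
}

def choose_gesture(context: str, rng: random.Random) -> str:
    """Sample one gesture label for a note given its score-level context."""
    labels, weights = zip(*GESTURE_GRAMMAR[context].items())
    return rng.choices(labels, weights=weights, k=1)[0]

rng = random.Random(0)
for ctx in ("large upward interval", "short note, mid phrase", "long note, phrase end"):
    print(ctx, "->", choose_gesture(ctx, rng))
```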


Journal of the Acoustical Society of America | 1993

Release‐from‐masking effects provided by a hearing aid digital signal processor

Michael Grim; Christopher Schweitzer; Eric Lindemann; Richard H. Sweetman

Release‐from‐masking effects provided by a digital hearing aid signal processor utilizing multiple‐microphone inputs were evaluated with three measures of speech recognition. Speech recognition measures were monosyllabic word recognition score, reaction time, and subjective rating of the intelligibility of selected passages of continuous discourse. Microphones, placed on KEMAR, recorded speech stimuli embedded in cafeteria noise yielding S/N ratios of 0 and 8 dB SPL. Half of the speech‐in‐noise stimuli were processed through the hearing aid digital signal processor while half remained unprocessed. The hearing aid processor utilizes a technology similar to adaptive‐beamforming to reduce the masking effects of background noise on speech recognition. Unprocessed and processed speech‐in‐noise stimuli were presented to 10 normally hearing subjects, 10 hearing‐impaired subjects, and 10 hearing‐impaired individuals fit with linear amplification. Comparison of word recognition scores, reaction times, and intellig...


Journal of the Acoustical Society of America | 2006

Digital hearing aid system

Eric Lindemann; John L. Melanson; Nikolai Bisgaard


Journal of the Acoustical Society of America | 1996

Binaural hearing aid

Eric Lindemann; John L. Melanson


Archive | 2001

Digital wireless loudspeaker system

Eric Lindemann; John L. Melanson; Jason Lee Carlson; James M. Kates


Journal of the Acoustical Society of America | 1995

Hearing aid with in situ testing capability

Eric Lindemann; John L. Melanson

Collaboration


Dive into Eric Lindemann's collaborations.

Top Co-Authors

James M. Kates

University of Colorado Boulder
