Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Wieslaw Woszczyk is active.

Publication


Featured research published by Wieslaw Woszczyk.


Journal of the Acoustical Society of America | 2005

Sound-field reproduction in-room using optimal control techniques: Simulations in the frequency domain

Philippe-Aubert Gauthier; Alain Berry; Wieslaw Woszczyk

This paper describes the simulations and results obtained when applying optimal control to progressive sound-field reproduction (mainly for audio applications) over an area using multiple monopole loudspeakers. The model simulates a reproduction system that operates either in free field or in a closed space approaching a typical listening room, and is based on optimal control in the frequency domain. This rather simple approach is chosen for the purpose of physical investigation, especially in terms of the configurations of sensing microphones and reproduction loudspeakers. Other issues of interest concern the comparison with wave-field synthesis and the control mechanisms. The results suggest that in-room reproduction of a sound field using active control can be achieved with a residual normalized squared error significantly lower than open-loop wave-field synthesis in the same situation. Active reproduction techniques have the advantage of automatically compensating for the room's natural dynamics. For the considered cases, the simulations show that optimal control results are not sensitive (in terms of reproduction error) to wall absorption in the reproduction room. A special surrounding configuration of sensors is introduced for a sensor-free listening area in free field.
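As an illustration of the frequency-domain optimal control idea above, here is a minimal sketch. The geometry (8 loudspeakers, 16 sensors on circles), frequency, and regularization weight are all illustrative assumptions, not values from the paper; the target is a free-field plane wave reproduced by Tikhonov-regularized least squares.

```python
import numpy as np

c, f = 343.0, 500.0            # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c          # wavenumber

def circle(n, radius):
    a = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return radius * np.stack([np.cos(a), np.sin(a)], axis=1)

# Hypothetical geometry: sources on a 1.5 m circle, sensors on a 0.5 m circle.
sources, sensors = circle(8, 1.5), circle(16, 0.5)

# Free-field monopole transfer matrix Z[m, s] = exp(-j k r) / (4 pi r).
r = np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=2)
Z = np.exp(-1j * k * r) / (4 * np.pi * r)

# Target: a unit plane wave travelling along +x, sampled at the sensors.
p_target = np.exp(-1j * k * sensors[:, 0])

# Tikhonov-regularized least squares gives the optimal source strengths.
lam = 1e-3
q = np.linalg.solve(Z.conj().T @ Z + lam * np.eye(len(sources)),
                    Z.conj().T @ p_target)

# Normalized squared reproduction error at the sensor positions.
err = np.linalg.norm(Z @ q - p_target) ** 2 / np.linalg.norm(p_target) ** 2
print(f"normalized squared error: {err:.4f}")
```

Because the regularized solution can never do worse than driving no sources at all, the normalized error is guaranteed to stay below 1.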


Workshop on Applications of Signal Processing to Audio and Acoustics | 2005

A "Tonmeister" approach to the positioning of sound sources in a multichannel audio system

Jonas Braasch; Wieslaw Woszczyk

In this paper, the virtual implementation of classic microphone techniques is described. The system is presently being used to address an array of 24 ribbon loudspeakers. In the newly designed virtual environment, the microphones, with adjustable directivity patterns and axis orientations, can be spatially placed as desired. The system architecture was designed to comply with the augmented ITU surround-sound loudspeaker placement and to create sound imagery similar to that associated with standard sound recording practice. The audio environment is used with spot-mic recordings in two-way Internet audio transmissions to avoid feedback loops and to provide dynamic placement for a number of sources.
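The adjustable directivity patterns mentioned above can be sketched with the standard first-order microphone model; the function name and geometry here are hypothetical, not from the paper. The pattern gain is a + (1 - a)cos(theta), where a = 1 gives an omni, a = 0.5 a cardioid, and a = 0 a figure-eight.

```python
import numpy as np

# Hypothetical first-order virtual microphone:
# gain(theta) = a + (1 - a) * cos(theta); a = 1 omni, 0.5 cardioid, 0 figure-8.
def virtual_mic_gain(src_pos, mic_pos, mic_axis, a=0.5):
    v = np.asarray(src_pos, float) - np.asarray(mic_pos, float)
    cos_theta = v @ mic_axis / (np.linalg.norm(v) * np.linalg.norm(mic_axis))
    return a + (1 - a) * cos_theta

axis = np.array([1.0, 0.0])                         # mic points along +x
on_axis = virtual_mic_gain([2, 0], [0, 0], axis)    # source on axis
rear = virtual_mic_gain([-2, 0], [0, 0], axis)      # source behind a cardioid
print(on_axis, rear)
```

A cardioid passes the on-axis source at full gain and rejects the source directly behind it, which is exactly the property exploited when virtual microphones are aimed to avoid feedback paths.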


Journal of the Acoustical Society of America | 2005

Wave field synthesis, adaptive wave field synthesis and ambisonics using decentralized transformed control: Potential applications to sound field reproduction and active noise control

Philippe-Aubert Gauthier; Alain Berry; Wieslaw Woszczyk

Sound field reproduction finds applications in listening to prerecorded music or in synthesizing virtual acoustics. The objective is to recreate a sound field in a listening environment. Wave field synthesis (WFS) is a known open‐loop technology which assumes that the reproduction environment is anechoic. Classical WFS, therefore, does not perform well in a real reproduction space such as a room. Previous work has suggested that it is physically possible to reproduce a progressive wave field in an in-room situation using active control approaches. In this paper, a formulation of adaptive wave field synthesis (AWFS) introduces practical possibilities for an adaptive sound field reproduction combining WFS and active control (with WFS departure penalization) with a limited number of error sensors. AWFS includes WFS and closed‐loop "Ambisonics" as limiting cases. This leads to the modification of the multichannel filtered‐reference least‐mean‐square (FXLMS) and the filtered‐error LMS (FELMS) adaptive algorithms for...
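The FXLMS algorithm named above can be sketched in its single-channel form (the multichannel formulation in the paper is more elaborate). The secondary and primary paths below are made-up toy impulse responses, chosen so the problem stays causal; the control filter adapts using the reference signal filtered through the secondary path.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed secondary path (control loudspeaker to error sensor) and primary
# (disturbance) path; both are illustrative, not from the paper.
sec_path = np.array([0.0, 0.8, 0.3])
pri_path = np.array([0.0, 0.0, 0.5, -0.4, 0.2])

L, mu, n_steps = 8, 0.01, 20000
x = rng.standard_normal(n_steps)                 # reference signal
d = np.convolve(x, pri_path)[:n_steps]           # disturbance at the sensor
xf = np.convolve(x, sec_path)[:n_steps]          # filtered reference

w = np.zeros(L)                                  # adaptive control filter
xbuf, fbuf = np.zeros(L), np.zeros(L)
ybuf = np.zeros(len(sec_path))
sq_err = np.empty(n_steps)
for n in range(n_steps):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    fbuf = np.roll(fbuf, 1); fbuf[0] = xf[n]
    y = w @ xbuf                                 # control-source signal
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[n] - sec_path @ ybuf                   # residual at the error sensor
    w += mu * e * fbuf                           # FXLMS update
    sq_err[n] = e * e

early, late = sq_err[:500].mean(), sq_err[-500:].mean()
print(f"mean squared error: first 500 = {early:.4f}, last 500 = {late:.6f}")
```

The residual error falls by orders of magnitude once the control filter has learned to push the disturbance through the secondary path with opposite sign.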


IEEE Global Conference on Consumer Electronics | 2013

Rendering an immersive sound field using a virtual height loudspeaker: Effect of height-related room impulse responses

Sungyoung Kim; Doyuen Ko; Wieslaw Woszczyk; Hiraku Okumura

In order to provide consumers with a more enhanced and immersive sound experience, most of the new multichannel reproduction formats highlight the significance of height-related information. In this paper, we investigated the influence of height-related room impulse responses when reproduced via various "height loudspeakers," including a virtual loudspeaker. Test participants listened to the corresponding sound fields and rated their perceived quality in terms of spaciousness and integrity. The results showed that perceived quality was affected by the height loudspeaker positions and by the height signals, i.e., the specific room impulse response coupled with the virtual-loudspeaker rendering process.


Journal of the Acoustical Society of America | 2006

Acoustic rendering of a virtual environment based on virtual microphone control and binaural room scanning

Jonas Braasch; William L. Martens; Wieslaw Woszczyk

Binaural room scanning (BRS) is a convolution technique that utilizes a set of spatially indexed impulse responses that are selected in response to listener head movements during virtual acoustic rendering, such that the direction of sonic elements is automatically updated according to the angle determined by a head tracker. Since the room impulse responses have to be measured for only a few loudspeaker positions, a very successful application of BRS is the simulation of a control room. In order to create a flexible headphone‐based virtual environment, it is proposed to simulate the input signals for the BRS system using virtual microphone control (ViMiC). ViMiC renders a virtual environment by simulating the output signals of virtual microphones that can be used to address a surround loudspeaker system. The advantages of this approach are twofold. First, the measured impulse responses of the BRS system ensure high spatial density in overall reflection patterns, avoiding the typical gaps in between the ea...
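The BRS mechanism described above, selecting a spatially indexed impulse-response pair according to the tracked head angle and convolving, can be sketched as follows. The 30-degree measurement grid, the synthetic decaying "BRIRs," and the nearest-neighbour selection are all illustrative assumptions; real systems interpolate and crossfade between responses.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fake spatially indexed BRIR set: one (left, right) pair per measured angle.
angles = np.arange(0, 360, 30)                    # measured head orientations
brirs = {a: rng.standard_normal((2, 256)) * np.exp(-np.arange(256) / 64)
         for a in angles}                          # synthetic decaying BRIRs

def render(signal, head_angle):
    # Pick the measured orientation nearest the head-tracker reading
    # (circular distance), then convolve for each ear.
    nearest = min(angles, key=lambda a: abs((a - head_angle + 180) % 360 - 180))
    h = brirs[nearest]
    return np.stack([np.convolve(signal, h[ch]) for ch in (0, 1)]), nearest

sig = rng.standard_normal(1024)
out, used = render(sig, 47.0)      # head tracker reports 47 degrees
print(out.shape, used)
```

A reading of 47 degrees snaps to the 60-degree measurement, the closest entry on the 30-degree grid.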


Journal of the Acoustical Society of America | 2005

The effects of acoustical treatment on lateralization of low‐frequency sources

Timothy J. Ryan; William L. Martens; Wieslaw Woszczyk

Recently, the standard of using a single low‐frequency driver in stereophonic sound reproduction systems has come into question. Though it is accepted that lateral discrimination and localization of signals are possible well into the subwoofer frequency range, the use of multiple subwoofers in small reverberant rooms remains of questionable value. While inter‐aural level differences (ILDs) are negligible at low frequencies, source lateralization is possible at low frequencies by virtue of inter‐aural time differences (ITDs). But when such reproduction is attempted in small rooms, strong early reflections and resonances associated with room modes can cause erroneous ITD information to be detected by a listener, thereby compromising a listener's ability to accurately locate the source of a low‐frequency sound. Acoustical treatment can be employed to reduce the level of early reflections and low‐frequency ringing associated with sharp resonant modes in small rooms. Such acoustical treatment often results in m...
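For a sense of the ITD magnitudes at stake, the classic Woodworth spherical-head approximation (an assumption here, not a model used in the paper) gives the interaural time difference as a function of source azimuth:

```python
import math

# Woodworth's spherical-head approximation:
# ITD = (r / c) * (theta + sin(theta)), azimuth theta in radians,
# with an assumed head radius of 8.75 cm.
def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    th = math.radians(azimuth_deg)
    return head_radius / c * (th + math.sin(th))

print(itd_woodworth(0), itd_woodworth(90))
```

A source straight ahead yields zero ITD, while a source at 90 degrees yields roughly 650 microseconds, which is the kind of cue an erroneous early reflection can corrupt.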


Journal of the Acoustical Society of America | 2004

Capturing the acoustic response of historical spaces for interactive music performance and recording

Wieslaw Woszczyk; William L. Martens

Performers engaged in musical recording while they are located in relatively dry recording studios generally find their musical performance facilitated when they are provided with synthetic reverberation. This well-established practice is extended in the project described here to include highly realistic virtual acoustic recreation of original rooms in which Haydn taught his students to play pianoforte. The project has two primary components, the first of which is to capture for posterity the acoustic response of such historical rooms that may no longer be available or functional for performance. The project's second component is to reproduce as accurately as possible the virtual acoustic interactions between a performer and the re‐created acoustic space, as performers, during their performance, move relative to their instrument and the boundaries of the surrounding enclosure. In the first of two presentations on this ongoing project, the method for measurement of broadband impulse responses for these histori...
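A common way to capture broadband room impulse responses, shown here only as a general technique (the paper's exact measurement chain is not specified in this abstract), is exponential sine-sweep deconvolution: play a sweep, record the room's answer, and convolve with the time-reversed, amplitude-compensated sweep. Sample rate, sweep band, and the sparse toy "room" below are all illustrative.

```python
import numpy as np

fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs
f0, f1 = 50.0, 3000.0
K = dur / np.log(f1 / f0)
sweep = np.sin(2 * np.pi * f0 * K * (np.exp(t / K) - 1))   # exponential sweep

# A made-up sparse "room" response: direct sound plus two echoes.
h_true = np.zeros(400)
h_true[[0, 120, 300]] = [1.0, 0.5, 0.25]
recorded = np.convolve(sweep, h_true)

# Farina-style inverse filter: time-reversed sweep, attenuated toward low
# frequencies to flatten the sweep's pink energy distribution.
inv = sweep[::-1] * np.exp(-t / K)
h_est = np.convolve(recorded, inv)
h_est /= np.max(np.abs(h_est))

peak = int(np.argmax(np.abs(h_est)))    # direct sound lands near len(sweep) - 1
echo = abs(h_est[peak + 120])           # first echo, roughly 0.5 after scaling
print(peak, round(echo, 2))
```

The deconvolved response recovers the direct-sound arrival and the relative echo amplitudes, which is what makes the captured responses usable for later convolution-based rendering.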


Journal of the Acoustical Society of America | 2004

A multifilter approach to acoustic echo cancellation

John Usher; Wieslaw Woszczyk; Jeremy R. Cooperstock

Hands‐free teleconferencing is increasingly frequent today. An important design consideration for any such communication tool that uses high‐quality audio is the return echo caused by the acoustic coupling between the loudspeakers and microphones at each end of the conference. An echo‐suppression filter (ESF) reduces the level of this return echo, increasing speech intelligibility. A new ESF has been designed based on a block frequency‐domain adaptive filter using the well‐known least‐mean‐square (LMS) criterion. There are two important coefficients in LMS adaptive filters which affect how an ESF adapts to changing acoustic conditions at each end of the conference, such as double‐talk conditions and moving electroacoustic transducers. Previous approaches to similar ESFs have used either a single or double pair of these coefficients, whereas the new model typically uses ten. The performance of single, double, and multifilter architectures was compared. Performance was evaluated using both empirical measurements and subjective listening tests. Speech and music were used as the stimuli for a two‐way teleconferencing experiment. The new filter performed better than the single‐ and two‐filter ESF designs, especially in conferencing conditions with frequent double talk, and the new ESF can be optimized to suit different acoustic situations.
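A toy illustration of the multifilter idea (the paper's actual block frequency-domain design is more elaborate): several NLMS echo cancellers with different step sizes run in parallel on the same far-end signal, and whichever has the smallest smoothed error drives the output. The echo path, filter length, and step sizes below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 8000, 16
far = rng.standard_normal(N)                          # far-end (loudspeaker) signal
echo_path = rng.standard_normal(L) * np.exp(-np.arange(L) / 4)
mic = np.convolve(far, echo_path)[:N]                 # microphone hears the echo

mus = [0.05, 0.2, 0.8]                                # candidate step sizes
ws = [np.zeros(L) for _ in mus]                       # one filter per step size
err_avg = [1.0] * len(mus)                            # smoothed squared errors
best_errs = []
buf = np.zeros(L)
for n in range(N):
    buf = np.roll(buf, 1); buf[0] = far[n]
    norm = buf @ buf + 1e-8
    es = []
    for i, mu in enumerate(mus):
        e = mic[n] - ws[i] @ buf                      # residual echo, filter i
        ws[i] += mu * e * buf / norm                  # NLMS update
        err_avg[i] = 0.99 * err_avg[i] + 0.01 * e * e
        es.append(e)
    best_errs.append(es[int(np.argmin(err_avg))] ** 2)   # pick best filter

residual = float(np.mean(best_errs[-500:]))
print(f"residual echo power (last 500 samples): {residual:.2e}")
```

Running several step sizes at once is what lets the canceller stay aggressive when the echo path changes yet remain stable during double talk; here the bank converges and the selected residual echo becomes negligible.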


Journal of the Acoustical Society of America | 2004

Virtual acoustic reproduction of historical spaces for interactive music performance and recording

William L. Martens; Wieslaw Woszczyk

For the most authentic and successful musical result, a performer engaged in recording pianoforte pieces of Haydn needs to hear the instrument as it would have sounded in historically typical room reverberation, such as that of the original rooms in which Haydn taught his students to play pianoforte. After capturing the acoustic response of such historical rooms, as described in the companion presentation, there remains the problem of how best to reproduce the virtual acoustical response of the room as a performer moves relative to the instrument and the room's boundaries. This can be done with a multichannel loudspeaker array enveloping the performer, interactively presenting simulated indirect sound to generate a sense of presence in the previously captured room. The resulting interaction between live musical instrument performance and the sound of the virtual room can be captured both binaurally for the performer's subsequent evaluation, readjusted to provide the most desirable acoustic feedback to th...


Journal of the Acoustical Society of America | 2016

Virtual acoustics in multimedia production—Beyond enhancing the acoustics of concert halls

Wieslaw Woszczyk

The paper describes a range of applications of virtual acoustics, the rendering of artificial acoustic spaces, which allow musicians to interact with ambient spaces created in real time. With a number of loudspeakers suitably distributed within a physical enclosure, such a projection system can be used to introduce a range of sound fields, which may effectively transform the acoustic environment to become a creative partner in multimedia production. A necessary component in this system is a low-latency, high-resolution multichannel convolution engine that converts a live audio signal into a structured ambient response, creating a scene in real time. Scenes can be changed to suit various goals of production and sonic narration. A number of techniques have been used to capture and modify impulse responses including temporal segmentation, shaping of magnitude envelope, noise reduction, spectral enrichment, time shifting and alignment, and parallel and sequential convolution. With these methods, artists may i...
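Low-latency convolution engines of the kind described above are usually built on uniformly partitioned FFT convolution: the impulse response is split into equal blocks, each input block's spectrum is pushed into a frequency-domain delay line, and one multiply-accumulate per partition produces the output. This is a sketch of the general technique, not the engine's actual implementation; the block size is illustrative.

```python
import numpy as np

def partitioned_convolve(signal, ir, block=64):
    """Uniformly partitioned overlap-add FFT convolution (sketch)."""
    n_parts = -(-len(ir) // block)                       # ceil(len(ir) / block)
    parts = [np.fft.rfft(ir[i*block:(i+1)*block], 2*block)
             for i in range(n_parts)]                    # partition spectra
    fdl = [np.zeros(block + 1, dtype=complex) for _ in range(n_parts)]
    n_blocks = -(-len(signal) // block) + n_parts        # extra blocks flush the line
    out = np.zeros((n_blocks + 1) * block)
    for b in range(n_blocks):
        x = np.zeros(block)
        chunk = signal[b*block:(b+1)*block]
        x[:len(chunk)] = chunk
        fdl = [np.fft.rfft(x, 2*block)] + fdl[:-1]       # frequency-domain delay line
        acc = np.zeros(block + 1, dtype=complex)
        for spec, part in zip(fdl, parts):
            acc += spec * part                           # multiply-accumulate
        y = np.fft.irfft(acc, 2*block)
        out[b*block:(b+2)*block] += y                    # overlap-add
    return out[:len(signal) + len(ir) - 1]

rng = np.random.default_rng(4)
x, h = rng.standard_normal(1000), rng.standard_normal(200)
err = np.max(np.abs(partitioned_convolve(x, h) - np.convolve(x, h)))
print(err)   # matches direct convolution up to float rounding
```

The appeal of this structure for live use is that latency is one block rather than the full impulse-response length, while each block still costs only one FFT pair plus the per-partition products.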

Collaboration


Dive into Wieslaw Woszczyk's collaborations.

Top Co-Authors


Jonas Braasch

Rensselaer Polytechnic Institute


Alain Berry

Université de Sherbrooke
