Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jonas Braasch is active.

Publication


Featured research published by Jonas Braasch.


Computer Music Journal | 2008

A loudspeaker-based projection technique for spatial music applications using virtual microphone control

Jonas Braasch; Nils Peters; Daniel L. Valente

This article describes a new sound-projection system for multichannel loudspeaker setups that has been developed by the authors. The system, called Virtual Microphone Control (ViMiC), is based on the simulation of microphone techniques and acoustic enclosures. In auditory virtual environments (AVEs), it is often required to position an anechoic point source in three-dimensional space. When sources in such applications are to be displayed using multichannel loudspeaker reproduction systems, the processing is typically based upon simple amplitude-panning laws. With an adequate loudspeaker setup, this approach allows relatively accurate positioning of spatial images in the horizontal plane, but it lacks the flexibility many composers of computer music would like to have. This article describes an alternative approach based on an array of virtual microphones. In the newly designed environment, the microphones, with adjustable directivity patterns and axis orientations, can be spatially placed as desired. Each virtual microphone signal is then fed to a separate (real) loudspeaker for sound projection. The system architecture was designed for maximum flexibility in the creation of spatial imagery. Despite its flexibility, the system is intuitive to use because it is based on the geometrical and physical principles of microphone techniques. It is also consistent with the expectations of audio engineers to create sound imagery similar to that associated with standard sound-recording practice, but it goes beyond the original concept by allowing strategic violations of physically possible parameters; namely, new supernatural microphone directivity patterns can be implemented in the ViMiC system. This article begins with a review of various microphone techniques on which the ViMiC system relies and of alternative sound-projection techniques. Next, the fundamental physical concepts on which the ViMiC system is based are described. In the following section, the software implementation of the system is outlined, with a focus on strategies to keep processor load and system latency low. The article concludes with a description of several projects that involved the ViMiC system. (Computer Music Journal, 32:3, pp. 55-71, Fall 2008. © 2008 Massachusetts Institute of Technology.)
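The virtual-microphone idea can be sketched in a few lines: a first-order directivity pattern g(θ) = a + (1 − a)·cos θ (a = 1 omnidirectional, a = 0.5 cardioid, a = 0 figure-of-eight), combined with 1/r distance attenuation, gives the gain each virtual microphone, and hence each loudspeaker feed, applies to a point source. This is an illustrative sketch under simplified assumptions (free field, no delay or air absorption), not the ViMiC implementation; the function name and geometry are hypothetical.

```python
import numpy as np

def virtual_mic_gain(src_pos, mic_pos, mic_axis, pattern=0.5):
    """Gain of a first-order virtual microphone for a point source.

    pattern: 1.0 = omnidirectional, 0.5 = cardioid, 0.0 = figure-of-eight.
    Values outside [0, 1] would give the 'supernatural' patterns the
    article mentions. (Hypothetical sketch, not the ViMiC code.)
    """
    v = np.asarray(src_pos, float) - np.asarray(mic_pos, float)
    r = np.linalg.norm(v)                       # source-microphone distance
    axis = np.asarray(mic_axis, float)
    cos_theta = float(v @ axis) / (r * np.linalg.norm(axis))
    directivity = pattern + (1.0 - pattern) * cos_theta
    return directivity / max(r, 1e-6)           # 1/r distance attenuation

# One gain per virtual microphone -> one loudspeaker feed each.
# Cardioid facing a source at distance 2: directivity 1, gain 0.5
print(virtual_mic_gain((2, 0), (0, 0), (1, 0)))   # -> 0.5
```

A cardioid facing away from the source (axis (-1, 0)) nulls it completely, which is exactly the behaviour a recording engineer would expect from the real microphone technique being simulated.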


Ergonomics | 2010

Toward orthogonal non-individualised head-related transfer functions for forward and backward directional sound: cluster analysis and an experimental study

R.H.Y. So; B. Ngan; Andrew Horner; Jonas Braasch; Jens Blauert; K.L. Leung

Individualised head-related transfer functions (HRTFs) have been shown to accurately simulate forward and backward directional sounds. This study explores directional simulation for non-individualised HRTFs by determining orthogonal HRTFs for listeners to choose between. Using spectral features previously shown to aid forward–backward differentiation, 196 non-individualised HRTFs were clustered into six orthogonal groups and the centre HRTF of each group was selected as representative. An experiment with 15 listeners was conducted to evaluate the benefits of choosing between six centre-front and six centre-back directional sounds rather than the single front/back sounds produced by MIT-KEMAR HRTFs. Sound localisation error was significantly reduced by 22% and 65% of listeners reduced their front–back confusion rates. The significant reduction was maintained when the number of HRTFs was reduced from six to five. This represents a preliminary success in bridging the gap between individual and non-individual HRTFs for applications such as spatial surround sound systems. Statement of Relevance: Due to different pinna shapes, directional sound stimuli generated by non-individualised HRTFs suffer from serious front–back confusion. The reported work demonstrates a way to reduce front–back confusion for centre-back sounds generated from non-individualised HRTFs.
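The clustering step described above could be sketched as follows: cluster one feature vector per HRTF with a plain k-means and pick, for each cluster, the member closest to its centre as the representative HRTF. The feature vectors here are random stand-ins, since the study's actual spectral features are not reproduced; this is a hedged sketch, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in: one 12-dimensional spectral-feature vector per
# HRTF (the study used features that aid front-back differentiation).
features = rng.normal(size=(196, 12))

def kmeans(X, k=6, iters=50):
    """Plain Lloyd's k-means; keeps a centre unchanged if its cluster
    empties during an iteration."""
    centres = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centres[j] = members.mean(0)
    return centres, labels

centres, labels = kmeans(features)

# Representative HRTF per cluster: the member closest to its centre.
reps = []
for j in range(6):
    idx = np.where(labels == j)[0]
    if len(idx):
        d = ((features[idx] - centres[j]) ** 2).sum(-1)
        reps.append(int(idx[np.argmin(d)]))
```

In the listening test, each listener would then choose among these few representative HRTFs instead of searching all 196.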


Conference on Automation Science and Engineering | 2014

Jamster: A mobile dual-arm assistive robot with Jamboxx control

Andrew Cunningham; William Keddy-Hector; Utkarsh Sinha; David Whalen; Daniel Kruse; Jonas Braasch; John T. Wen

This paper presents a mobile assistive robot that may be controlled by a mobility-disabled user to perform a variety of tasks. Recent research in manipulation-based assistive robotics tends to focus on creating an autonomous robotic assistant. In contrast, our approach allows the user to manually control the robot assistant while providing guidance to the user's task execution based on sensor measurements. Our implementation involves the integration of a dual-arm Baxter robot by Rethink Robotics mounted on a power wheelchair commanded by various inputs, including a sip-puff device called the Jamboxx. We call this system the Jamster. The Baxter is lightweight, can operate off the wheelchair battery through an inverter, and is safer to operate around humans than traditional industrial robots. A graphical user interface with camera feedback and a graphical rendition of the robot enables the user to drive the wheelchair and command the robot to perform simple tasks remotely. As an initial test of this system, we set up a simple task involving driving the Jamster to a shelf to pick up a peanut butter jar and transport it to another location, to be performed with the Jamboxx only, without using one's hands. Such a task is challenging if not impossible for severely mobility-impaired individuals. This work represents the first step towards our ultimate vision of a robotic assistant that could seamlessly work with mobility-disabled individuals to effectively perform daily tasks and thus improve their quality of life.


Archive | 2013

A Binaural Model that Analyses Acoustic Spaces and Stereophonic Reproduction Systems by Utilizing Head Rotations

Jonas Braasch; Sam Clapp; A. Parks; T. Pastore; Ning Xiang

It is well known that head rotations are instrumental in resolving front/back confusions in human sound localization. A mechanism for a binaural model is proposed here to extend current cross-correlation models to compensate for head rotations. The algorithm tracks sound sources in the head-related coordinate system, HRCS, as well as in the room-related coordinate system, RRCS. It is also aware of the current head position within the room. The sounds are positioned in space using an HRTF catalog at \(1^{\circ }\) azimuthal resolution. The position of the sound source is determined through the interaural cross-correlation, ICC, functions across several auditory bands that are mapped to functions of azimuth and superposed. The maxima of the cross-correlation functions determine the position of the sound source. Unfortunately, two peaks usually occur, one at or near the correct location and the second at the front/back reversed position. When the model is programmed to virtually turn its head, the degree-based cross-correlation functions are shifted with the current head angle to match the RRCS. During this procedure, the ICC peak for the correct hemisphere will prevail if integrated over time for the duration of the head rotation, whereas the front/back reversed peak will average out.
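The head-rotation mechanism described above can be illustrated with a toy model: a head-referred azimuth profile with two equal peaks (the true position and its front/back mirror), rotated back into room coordinates for each head angle and integrated over the rotation. The true peak stays put and accumulates; the mirrored peak smears out. All numbers (peak width, rotation range) are illustrative choices, not the paper's parameters.

```python
import numpy as np

az = np.arange(360)                      # azimuth grid, 1 degree steps

def icc_profile(src_az_head):
    """Toy head-referred cross-correlation profile: equal Gaussian peaks
    at the true azimuth and its front/back mirror (the ambiguity)."""
    prof = np.zeros(360)
    for peak in (src_az_head % 360, (180 - src_az_head) % 360):
        d = np.abs(az - peak)
        d = np.minimum(d, 360 - d)       # circular distance
        prof += np.exp(-0.5 * (d / 5.0) ** 2)
    return prof

src_room = 30                            # true azimuth, room coordinates
accum = np.zeros(360)
for head in range(45):                   # virtual head rotation
    prof = icc_profile(src_room - head)  # head-referred profile (HRCS)
    accum += np.roll(prof, head)         # shift into room coords (RRCS)

estimate = int(np.argmax(accum))         # mirror peak has averaged out
print(estimate)                          # -> 30
```

After integration over the rotation, the accumulated profile has a single dominant maximum at the true room-referred azimuth, which is the front/back disambiguation the chapter describes.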


Journal of the Acoustical Society of America | 2015

Tuning the cognitive environment: Sound masking with “natural” sounds in open-plan offices

Alana Gloria Deloach; Jeff P. Carter; Jonas Braasch

Despite the gain in popularity of open-plan office design and the engineering efforts to achieve acoustical comfort for building occupants, a majority of workers still report dissatisfaction with their workplace environment. Office acoustics influence organizational effectiveness, efficiency, and satisfaction through meeting appropriate requirements for speech privacy and ambient sound levels. Implementing a sound masking system is one tried-and-true method of achieving privacy goals. Although each sound masking system is tuned for its specific environment, the signal, random steady-state electronic noise, has remained the same for decades. This session explores how "natural" sounds may be used as an alternative to this standard masking signal employed so ubiquitously in sound masking systems in the contemporary office environment. As an unobtrusive background sound, possessing the appropriate spectral characteristics, this proposed use of "natural" sounds for masking challenges the convention that masking soun...


Modern Acoustics and Signal Processing | 2013

An introduction to binaural processing

Armin Kohlrausch; Jonas Braasch; Dorothea Kolossa; Jens Blauert

The binaural auditory system performs a number of astonishing functions, such as precise localization of sound sources, analysis of auditory scenes, segregation of auditory streams, providing situational awareness in reflective environments, suppression of reverberance, noise and coloration, enhancement of desired talkers over undesired ones, providing spatial impression and the sense of immersion. These functions are of profound interest for technological application and, hence, the subject of increasing engineering efforts. Generic application areas for binaural algorithms are, among others, aural virtual environments, hearing aids, assessment of product-sound quality, room acoustics, speech technology, audio technology, robotic ears, and tools for research into auditory physiology and aural perception. This introductory chapter starts with a discussion of the performance of binaural hearing and then lists relevant areas for technological application. After a short presentation of the physiological background, signal-processing algorithms as applied to binaural modeling are described. These signal-processing algorithms are manifold, but can be roughly divided into localization models and detection models. Both approaches are discussed in some detail. The chapter is meant to serve as an introduction to the main body of the book.


Journal of the Acoustical Society of America | 2013

A precedence effect model to simulate localization dominance using an adaptive, stimulus parameter-based inhibition process

Jonas Braasch

A number of precedence-effect models have been developed to simulate the robust localization performance of humans in reverberant conditions. Although they are able to reduce reverberant information for many conditions, they tend to fail for ongoing stimuli with truncated on/offsets, a condition human listeners master when localizing a sound source in the presence of a reflection, according to a study by Dizon and Colburn [J. Acoust. Soc. Am. 119, 2947-2964 (2006)]. This paper presents a solution for this condition by using an autocorrelation mechanism to estimate the delay and amplitude ratio between the leading and lagging signals. An inverse filter is then used to eliminate the lag signal, before it is localized with a standard localization algorithm. The current algorithm can operate on top of a basic model of the auditory periphery (gammatone filter bank, half-wave rectification) to simulate psychoacoustic data by Braasch et al. [Acoust. Sci. Tech. 24, 293-303 (2003)] and Dizon and Colburn. The model performs robustly with these on/offset truncated and interaural level difference based stimuli and is able to demonstrate the Haas effect.
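The lead/lag estimation and inverse filtering described above can be sketched for the simplest case: a white-noise lead with a single delayed, attenuated copy. For white x, the normalized autocorrelation of the mixture at the lag delay equals r/(1 + r²), which can be inverted to recover the amplitude ratio r; the lag is then removed with a recursive (inverse comb) filter. A simplified illustration under these assumptions, not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=20000)               # stand-in for the lead signal
true_delay, true_ratio = 25, 0.6         # a single reflection (the lag)
y = x.copy()
y[true_delay:] += true_ratio * x[:-true_delay]

# 1) Estimate delay and amplitude ratio from the autocorrelation of y.
ac = np.array([y[:len(y) - k] @ y[k:] for k in range(100)])
d_hat = int(np.argmax(ac[1:])) + 1       # skip the zero-lag peak
c = ac[d_hat] / ac[0]                    # = r / (1 + r^2) for white x
r_hat = (1 - np.sqrt(1 - 4 * c * c)) / (2 * c)

# 2) Inverse-filter the lag away before running a standard localizer.
x_hat = y.copy()
for n in range(d_hat, len(y)):
    x_hat[n] -= r_hat * x_hat[n - d_hat]
```

With the lag removed, x_hat is close to the original lead signal, so a conventional cross-correlation localizer operating on it is no longer pulled toward the reflection.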


Journal of the Acoustical Society of America | 2012

Echo thresholds for reflections from acoustically diffusive architectural surfaces

Philip W. Robinson; Andreas Walther; Christof Faller; Jonas Braasch

When sound reflects from an irregular architectural surface, it spreads spatially and temporally. Extensive research has been devoted to prediction and measurement of diffusion, but less has focused on its perceptual effects. This paper examines the effect of temporal diffusion on echo threshold. There are several notable differences between the waveform of a reflection identical to the direct sound and one from an architectural surface. The onset and offset are damped and the energy is spread in time; hence, the reflection response has a lower peak amplitude, and is decorrelated from the direct sound. The perceptual consequences of these differences are previously undocumented. Echo threshold tests are conducted with speech and music signals, using direct sound and a simulated reflection that is either identical to the direct sound or has various degrees of diffusion. Results indicate that for a speech signal, diffuse reflections are less easily detectable as a separate auditory event than specular reflections of the same total energy. For a music signal, no differences are observed between the echo thresholds for reflections with and without temporal diffusion. Additionally, echo thresholds are found to be shorter for speech than for music, and shorter for spatialized than for diotic presentation of signals.


International Conference on Digital Signal Processing | 2011

Binaural signal processing

Jens Blauert; Jonas Braasch

The human binaural system has a number of astonishing capabilities that are essential for the formation of our aural worlds. To mimic these, signal-processing models of the binaural system have been built and are constantly improved. In the following, the current state of the art of these models will be reviewed and their potential with respect to practical application will be discussed. Further, trends for future developments in these models will be considered. In this context, reference will be given to activities of AABBA, an open international circle of researchers with a special interest in the application of binaural models. Generic application areas for binaural models are, among others, aural virtual environments, hearing aids, assessment of product-sound quality, room acoustics, speech technology, audio technology, acoustic surveillance, robotic ears and tools for research into auditory physiology and perception.


Journal of New Music Research | 2013

Electro/Acoustic Improvisation and Deeply Listening Machines

Doug Van Nort; Pauline Oliveros; Jonas Braasch

In this paper we discuss our approach to designing improvising music systems whose intelligence is centred around careful listening, particularly to qualities related to timbre and texture. Our interest lies in systems that can make contextual decisions based on the overall character of the sound field, as well as the specific shape and contour created by each player. We describe the history and paradigm of ‘expanded instrument’ systems, which has led to one instrumental system (GREIS) focused on manual sculpting of sound with machine assistance, and one improvising system (FILTER) which introduces the ability to listen, recognize and transform a performer’s sound in a contextually relevant fashion. We describe the different modules of these improvising performance systems, as well as specific musical performances as examples of their use. We also describe our free improvisation trio, in order to describe the musical context that situates and informs our research.

Collaboration


Dive into Jonas Braasch's collaborations.

Top Co-Authors

Ning Xiang (Rensselaer Polytechnic Institute)
Pauline Oliveros (Rensselaer Polytechnic Institute)
M. Torben Pastore (Rensselaer Polytechnic Institute)
Nikhil Deshpande (Rensselaer Polytechnic Institute)
Doug Van Nort (Rensselaer Polytechnic Institute)