
Publications


Featured research published by Joachim Thiemann.


EURASIP Journal on Advances in Signal Processing | 2016

Speech enhancement for multimicrophone binaural hearing aids aiming to preserve the spatial auditory scene

Joachim Thiemann; Menno Müller; Daniel Marquardt; Simon Doclo; Steven van de Par

Modern binaural hearing aids utilize multimicrophone speech enhancement algorithms to enhance signals in terms of signal-to-noise ratio, but they may distort the interaural cues that allow the user to localize sources, in particular suppressed interfering sources or background noise. In this paper, we present a novel algorithm that enhances the target signal while aiming to maintain the correct spatial rendering of both the target signal and the background noise. We use a bimodal approach, in which a signal-to-noise ratio (SNR) estimator controls a binary decision mask that switches between the output signals of a binaural minimum variance distortionless response (MVDR) beamformer and scaled reference microphone signals. We show that the proposed selective binaural beamformer (SBB) can enhance the target signal while maintaining the overall spatial rendering of the acoustic scene.
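
A minimal sketch of the switching step described above may help make the idea concrete. This is not the authors' implementation: the MVDR outputs, reference microphone signals, per-bin SNR estimate, threshold, and noise scaling gain are all assumed inputs chosen for illustration.

import numpy as np

def selective_binaural_beamformer(mvdr_left, mvdr_right,
                                  ref_left, ref_right,
                                  snr_estimate_db,
                                  threshold_db=0.0, noise_gain=0.3):
    # All inputs are STFT matrices (frequency bins x frames) of equal shape.
    # Where the estimated SNR indicates the target dominates, pass the binaural
    # MVDR output; elsewhere, pass scaled reference microphone signals so the
    # spatial rendering of the background is preserved.
    target_dominant = snr_estimate_db > threshold_db   # binary decision mask
    out_left = np.where(target_dominant, mvdr_left, noise_gain * ref_left)
    out_right = np.where(target_dominant, mvdr_right, noise_gain * ref_right)
    return out_left, out_right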


International Workshop on Machine Learning for Signal Processing | 2013

An experimental comparison of source separation and beamforming techniques for microphone array signal enhancement

Joachim Thiemann; Emmanuel Vincent

We consider the problem of separating one or more speech signals from a noisy background. Although blind source separation (BSS) and beamforming techniques have both been exploited in this context, the former have typically been applied to small microphone arrays and the latter to larger arrays. In this paper, we provide an experimental comparison of established beamforming and post-filtering techniques on the one hand and modern BSS techniques involving advanced spectral models on the other hand. We analyze the results as a function of the number of microphones, the number of speakers, and the input signal-to-noise ratio (iSNR) with respect to multichannel real-world environmental noise recordings. The comparison shows that, provided a suitable post-filter or spectral model is chosen, beamforming performs similarly to BSS on average in the single-speaker case, while BSS outperforms beamforming in the two-speaker case. Crucially, this holds independently of the number of microphones.


European Signal Processing Conference | 2015

Features for speaker localization in multichannel bilateral hearing aids

Joachim Thiemann; Simon Doclo; Steven van de Par

Modern hearing aids often contain multiple microphones to enable the use of spatial filtering techniques for signal enhancement. To steer the spatial filtering algorithm, it is necessary to localize sources of interest, which can be achieved using computational auditory scene analysis (CASA). In this article, we describe a CASA system using a binaural auditory processing model that has been extended to six channels to allow reliable localization in both azimuth and elevation, thus also distinguishing between front and back. The features used to estimate the direction are one level difference and five inter-microphone time differences of arrival (TDOAs). Initial experiments show the localization errors that can be expected with this set of features on a typical multichannel hearing aid in anechoic conditions with diffuse noise.
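
The feature set lends itself to a short sketch: five TDOAs of the non-reference microphones relative to a reference channel, plus one broadband level difference between an assumed left and right microphone pair. The channel assignment, the GCC-PHAT estimator, and the frame-based processing are assumptions for illustration, not details taken from the paper.

import numpy as np

def gcc_phat_tdoa(x, ref, fs, max_lag_s=1e-3):
    # PHAT-weighted generalized cross-correlation; the peak within a
    # physically plausible lag range gives the TDOA in seconds.
    n = 2 * max(len(x), len(ref))
    X, R = np.fft.rfft(x, n), np.fft.rfft(ref, n)
    cross = X * np.conj(R)
    cross /= np.abs(cross) + 1e-12
    cc = np.fft.irfft(cross, n)
    max_lag = int(max_lag_s * fs)
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    return (np.argmax(np.abs(cc)) - max_lag) / fs

def localization_features(frame, fs):
    # frame: (6, n_samples) block of hearing-aid microphone signals.
    # Channel 0 is taken as the reference; channels 0 and 3 are assumed to be
    # the left and right front microphones for the level difference.
    tdoas = [gcc_phat_tdoa(frame[ch], frame[0], fs) for ch in range(1, 6)]
    level_diff = 10.0 * np.log10((np.mean(frame[0] ** 2) + 1e-12)
                                 / (np.mean(frame[3] ** 2) + 1e-12))
    return np.array(tdoas + [level_diff])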


International Conference on Multimedia and Expo | 2017

Real-time implementation of a GMM-based binaural localization algorithm on a VLIW-SIMD processor

Christopher Seifert; Joachim Thiemann; Lukas Gerlach; Tobias Volkmar; Guillermo Payá-Vayá; Holger Blume; Steven van de Par

Localization algorithms have become of considerable interest for robot audition, acoustic navigation, teleconferencing, speaker localization, and many other applications over the last decade. In this paper, we present a real-time implementation of a Gaussian mixture model (GMM) based probabilistic sound source localization algorithm for a low-power VLIW-SIMD processor for hearing devices. The algorithm has been proven to allow for robust localization of multiple sound sources simultaneously in reverberant and noisy environments. Real-time computation for audio frames of 512 samples at 16 kHz was achieved by introducing algorithmic optimizations and hardware customizations. To the best of our knowledge, this is the first real-time capable implementation of a computationally complex GMM-based sound source localization algorithm on a low-power processor. The resulting estimated core area, excluding memory, in a 40 nm low-power TSMC technology is 188,511 µm².
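
The probabilistic core of such an algorithm can be sketched in a few lines: each candidate direction is represented by a Gaussian mixture over binaural features, per-frame log-likelihoods are accumulated over an audio block, and the result is normalized to a probability per direction. The trained mixtures and the feature extraction are assumed to be given; this floating-point sketch is not the optimized fixed-point implementation described in the paper.

import numpy as np
from scipy.stats import multivariate_normal

def direction_posteriors(features, gmms):
    # features: (n_frames, n_dims) binaural feature vectors for one block.
    # gmms: dict mapping a candidate direction (e.g. azimuth in degrees) to a
    # list of (weight, mean, covariance) mixture components.
    log_lik = np.zeros((features.shape[0], len(gmms)))
    for d, components in enumerate(gmms.values()):
        per_component = np.stack(
            [np.log(w) + multivariate_normal.logpdf(features, mean=m, cov=c)
             for w, m, c in components], axis=1)
        log_lik[:, d] = np.logaddexp.reduce(per_component, axis=1)
    total = log_lik.sum(axis=0)                       # sum over frames
    posterior = np.exp(total - np.logaddexp.reduce(total))
    return dict(zip(gmms.keys(), posterior))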


International Workshop on Machine Learning for Signal Processing | 2016

Speaker tracking for hearing aids

Joachim Thiemann; Jörg Lücke; Steven van de Par

Modern multi-microphone hearing aids employ spatial filtering algorithms capable of enhancing speakers from one direction whilst suppressing interfering speakers from other directions. In this context, it is useful to track moving speakers in the acoustic space by linking disjoint speech segments. Since the identity of the speakers is not known beforehand, the system must match short speech segments without a specific speaker model or prior knowledge of the speech content, while ignoring changes in acoustic conditions. In this paper, we present a method that matches each speech segment to non-specific speaker models, thereby obtaining an activation pattern, and then compares the patterns of disjoint speech segments to each other. The proposed method has low computational complexity and a small memory footprint, and uses mel-frequency cepstral coefficients (MFCCs) and Gaussian mixture models (GMMs). We find that, when using MFCCs as acoustic features, the proposed speaker tracking method is robust to changes in the acoustic environment, provided that sufficiently large segments of speech are available.
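
A compact sketch of the matching idea: score a segment's MFCCs against a bank of generic speaker GMMs to obtain an activation pattern, then compare two disjoint segments by the similarity of their patterns. The GMM bank, the MFCC extraction, and the use of cosine similarity are assumptions made for illustration rather than details from the paper.

import numpy as np
from sklearn.mixture import GaussianMixture  # generic speaker models, assumed pre-trained

def activation_pattern(mfcc_frames, gmm_bank):
    # mfcc_frames: (n_frames, n_ceps) MFCCs of one speech segment.
    # gmm_bank: list of fitted GaussianMixture models, one per generic
    # (non-specific) speaker model; score() returns the mean log-likelihood.
    return np.array([gmm.score(mfcc_frames) for gmm in gmm_bank])

def same_speaker_score(pattern_a, pattern_b):
    # Cosine similarity of mean-removed activation patterns; a higher value
    # suggests two disjoint segments stem from the same speaker.
    a = pattern_a - pattern_a.mean()
    b = pattern_b - pattern_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))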


International Conference on Electronics, Circuits, and Systems | 2016

Customized high performance low power processor for binaural speaker localization

N Behmann; Christopher Seifert; Guillermo Payá-Vayá; Holger Blume; Pekka Jääskeläinen; Joonas Multanen; Heikki Kultala; Jarmo Takala; Joachim Thiemann; S. van de Par

One of the key problems for hearing-impaired persons is the cocktail party scenario, in which a bilateral conversation is surrounded by other speakers and noise sources. State-of-the-art beamforming techniques are able to segregate specific sound sources from the environment, provided that the position of the speaker is known. The speaker position can be estimated in the frontal azimuth plane with a probabilistic localization algorithm operating on the binaural microphone input of the two-eared hearing aid system. However, binaural speaker localization requires computationally complex audio processing and filtering. This high computational complexity, combined with the low energy budget imposed by the battery constraints of hearing aid devices, presents an implementation challenge. This paper proposes a customized, C-programmable processor design that implements the speaker localization algorithm and fulfills the challenging requirements of this usage context. Compared to a VLIW-based processor design with similar basic computational resources and no special instructions, the proposed processor reaches a 151x speed-up. For a 28 nm standard CMOS technology, a power consumption of 12 mW (at 50 MHz) and a silicon area of 0.3 mm2 are estimated. This is the first publication of a realistic programmable processing architecture for probabilistic binaural speaker localization or a comparably complex algorithm for hearing aid devices. The algorithms supported by previously proposed implementations are approximately 15x less computationally demanding.


European Signal Processing Conference | 2014

A binaural hearing aid speech enhancement method maintaining spatial awareness for the user

Joachim Thiemann; Menno Müller; Steven van de Par


Archive | 2013

Sound source separation method

Gérald Kergourlay; Johann Citerin; Eric Nguyen; Lionel Le Scolan; Joachim Thiemann; Emmanuel Vincent; Nancy Bertin; Frédéric Bimbot


40th Annual German Congress on Acoustics (DAGA 2014) | 2014

Spatial properties of the DEMAND noise recordings

Joachim Thiemann; Emmanuel Vincent; Steven van de Par

Collaboration


Dive into Joachim Thiemann's collaboration.

Top Co-Authors

Simon Doclo (University of Oldenburg)
Heikki Kultala (Tampere University of Technology)
Jarmo Takala (Tampere University of Technology)
Joonas Multanen (Tampere University of Technology)
Pekka Jääskeläinen (Tampere University of Technology)