Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Matthias Zöhrer is active.

Publication


Featured research published by Matthias Zöhrer.


IEEE Transactions on Audio, Speech, and Language Processing | 2015

Representation learning for single-channel source separation and bandwidth extension

Matthias Zöhrer; Robert Peharz; Franz Pernkopf

In this paper, we use deep representation learning for model-based single-channel source separation (SCSS) and artificial bandwidth extension (ABE). Both tasks are ill-posed and require source-specific prior knowledge. In addition to well-known generative models such as restricted Boltzmann machines and higher-order contractive autoencoders, two recently introduced deep models, namely generative stochastic networks (GSNs) and sum-product networks (SPNs), are used for learning spectrogram representations. For SCSS we evaluate the deep architectures on data of the 2nd CHiME speech separation challenge and provide results for a speaker-dependent, a speaker-independent, a matched-noise, and an unmatched-noise task. GSNs obtain the best PESQ and overall perceptual scores on average in all four tasks. Similarly, frame-wise GSNs reconstruct the missing frequency bands in ABE best, measured in frequency-domain segmental SNR, and significantly outperform SPNs embedded in hidden Markov models as well as the other representation models.
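The frequency-domain segmental SNR used as the ABE metric above can be sketched as follows; the function name, the frame layout, and the per-frame clamping range are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def segmental_snr(ref_frames, est_frames, eps=1e-10):
    """Frequency-domain segmental SNR: per-frame SNR over magnitude
    spectra, averaged across frames.

    ref_frames, est_frames: arrays of shape (n_frames, n_bins) holding
    reference and estimated magnitude spectra.
    """
    noise = ref_frames - est_frames
    num = np.sum(ref_frames ** 2, axis=1)
    den = np.sum(noise ** 2, axis=1) + eps
    snr_db = 10.0 * np.log10(num / den + eps)
    # Clamp each frame's SNR to a typical [-10, 35] dB range before
    # averaging, so silent or perfect frames do not dominate the mean.
    return float(np.mean(np.clip(snr_db, -10.0, 35.0)))
```

Clamping per frame before averaging is a common convention for segmental SNR; without it a single near-perfect frame would push the mean arbitrarily high.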


International Conference on Acoustics, Speech, and Signal Processing | 2015

Representation models in single channel source separation

Matthias Zöhrer; Franz Pernkopf

Model-based single-channel source separation (SCSS) is an ill-posed problem requiring source-specific prior knowledge. In this paper, we use representation learning and compare general stochastic networks (GSNs), Gauss-Bernoulli restricted Boltzmann machines (GBRBMs), conditional Gauss-Bernoulli restricted Boltzmann machines (CGBRBMs), and higher-order contractive autoencoders (HCAEs) for modeling the source-specific knowledge. In particular, these models learn a mapping from speech-mixture spectrogram representations to single-source spectrogram representations, i.e., we apply them as filters for the speech mixture. At test time, the individual source spectrograms of both models are inferred, and from them the softmask for re-synthesis of the time signals is determined. We evaluate the deep architectures on data of the 2nd CHiME speech separation challenge and provide results for a speaker-dependent, a speaker-independent, a matched-noise, and an unmatched-noise task. Our experiments show the best PESQ and overall perceptual scores on average for GSNs in all four tasks.
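The softmask re-synthesis step described above can be sketched as a Wiener-like ratio mask built from the two inferred source spectrograms and applied to the mixture; the function and variable names are illustrative assumptions:

```python
import numpy as np

def softmask_separation(mix_mag, s1_hat, s2_hat, eps=1e-10):
    """Soft time-frequency mask from two estimated source spectrograms,
    applied to the mixture magnitude spectrogram.

    mix_mag, s1_hat, s2_hat: magnitude spectrograms of shape
    (n_bins, n_frames). Returns the two masked mixture spectrograms,
    which are then combined with the mixture phase for re-synthesis.
    """
    mask1 = s1_hat / (s1_hat + s2_hat + eps)   # ratio (soft) mask
    mask2 = 1.0 - mask1                        # masks sum to one
    return mask1 * mix_mag, mask2 * mix_mag
```

Because the two masks sum to one at every time-frequency bin, the two masked spectrograms add back up to the mixture exactly.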


IEEE Automatic Speech Recognition and Understanding Workshop | 2015

Multi-channel speech processing architectures for noise robust speech recognition: 3rd CHiME challenge results

Lukas Pfeifenberger; Tobias Schrank; Matthias Zöhrer; Martin Hagmüller; Franz Pernkopf

Recognizing speech under noisy conditions is an ill-posed problem. The CHiME 3 challenge targets robust speech recognition in realistic environments such as streets, buses, cafés, and pedestrian areas. We study variants of beamformers used for pre-processing multi-channel speech recordings. In particular, we investigate three variants of generalized sidelobe canceller (GSC) beamformers, i.e., GSC with a sparse blocking matrix (BM), GSC with an adaptive BM (ABM), and GSC with minimum variance distortionless response (MVDR) and ABM. Furthermore, we apply several post-filters to further enhance the speech signal. We introduce MaxPower postfilters and deep neural postfilters (DPFs). DPFs outperformed our baseline systems significantly when measured in overall perceptual score (OPS) and perceptual evaluation of speech quality (PESQ). In particular, DPFs achieved an average relative improvement of 17.54% in OPS and 18.28% in PESQ compared to the CHiME 3 baseline. DPFs also achieved the best WER when combined with an ASR engine on simulated development and evaluation data, i.e., 8.98% and 10.82% WER. The proposed MaxPower beamformer achieved the best overall WER on CHiME 3 real development and evaluation data, i.e., 14.23% and 22.12%, respectively.
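The GSC structure studied above (a fixed beamformer path, a blocking matrix that suppresses the target, and an adaptive noise canceller) can be sketched per frequency bin as follows; the function name and the example blocking matrix are illustrative assumptions, not the challenge implementation:

```python
import numpy as np

def gsc_frame(x, w_fixed, B, w_anc):
    """One GSC output sample for a single frequency bin and frame.

    x:       (n_mics,) mic signals for this bin/frame.
    w_fixed: (n_mics,) fixed beamformer weights (e.g. delay-and-sum).
    B:       (n_mics-1, n_mics) blocking matrix whose rows are
             orthogonal to the steering direction, so the target
             speech is blocked from the lower path.
    w_anc:   (n_mics-1,) adaptive noise-canceller weights.
    """
    d = np.vdot(w_fixed, x)       # upper path: beamformed target + noise
    u = B @ x                     # lower path: noise references only
    return d - np.vdot(w_anc, u)  # subtract the estimated noise leakage
```

Since the blocking matrix annihilates signals arriving from the steering direction, the adaptive weights can only cancel noise, not the target speech.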


International Conference on Acoustics, Speech, and Signal Processing | 2017

DNN-based speech mask estimation for eigenvector beamforming

Lukas Pfeifenberger; Matthias Zöhrer; Franz Pernkopf

In this paper, we present an optimal multi-channel Wiener filter, which consists of an eigenvector beamformer and a single-channel postfilter. We show that both components solely depend on a speech presence probability, which we learn using a deep neural network, consisting of a deep autoencoder and a softmax regression layer. To prevent the DNN from learning specific speaker and noise types, we do not use the signal energy as input feature, but rather the cosine distance between the dominant eigenvectors of consecutive frames of the power spectral density of the noisy speech signal. We compare our system against the BeamformIt toolkit, and state-of-the-art approaches such as the front-end of the best system of the CHiME 3 challenge. We show that our system yields superior results, both in terms of perceptual speech quality and classification error.
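The energy-independent input feature described above, the cosine distance between dominant eigenvectors of consecutive PSD frames, can be sketched as follows; the function names are illustrative assumptions:

```python
import numpy as np

def dominant_eigvec(psd):
    """Dominant eigenvector of a Hermitian PSD matrix (n_mics x n_mics).
    np.linalg.eigh returns eigenvalues in ascending order, so the last
    column of the eigenvector matrix is the dominant one."""
    _, vecs = np.linalg.eigh(psd)
    return vecs[:, -1]

def eigvec_cosine_distance(psd_prev, psd_cur):
    """Cosine distance between the dominant eigenvectors of two
    consecutive frames' PSD matrices. Scaling either PSD leaves the
    eigenvectors unchanged, which makes the feature independent of
    signal energy."""
    v1 = dominant_eigvec(psd_prev)
    v2 = dominant_eigvec(psd_cur)
    # eigh returns unit-norm eigenvectors; the absolute value removes
    # the sign/phase ambiguity of eigenvectors.
    return 1.0 - np.abs(np.vdot(v1, v2))
```

The distance is near 0 when the dominant spatial direction is stable across frames and approaches 1 when it changes abruptly.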


Conference of the International Speech Communication Association | 2016

Manual versus Automated: The Challenging Routine of Infant Vocalisation Segmentation in Home Videos to Study Neuro(mal)development.

Florian B. Pokorny; Robert Peharz; Wolfgang Roth; Matthias Zöhrer; Franz Pernkopf; Peter B. Marschik; Björn W. Schuller



neural information processing systems | 2014

General Stochastic Networks for Classification

Matthias Zöhrer; Franz Pernkopf


Conference of the International Speech Communication Association | 2014

Single channel source separation with general stochastic networks.

Matthias Zöhrer; Franz Pernkopf


Conference of the International Speech Communication Association | 2015

On representation learning for artificial bandwidth extension.

Matthias Zöhrer; Robert Peharz; Franz Pernkopf


Conference of the International Speech Communication Association | 2017

Virtual Adversarial Training and Data Augmentation for Acoustic Event Detection with Gated Recurrent Neural Networks.

Matthias Zöhrer; Franz Pernkopf


International Conference on Acoustics, Speech, and Signal Processing | 2018

Resource Efficient Deep Eigenvector Beamforming.

Matthias Zöhrer; Lukas Pfeifenberger; Günther Schindler; Holger Fröning; Franz Pernkopf

Collaboration


Dive into Matthias Zöhrer's collaborations.

Top Co-Authors

Franz Pernkopf
Graz University of Technology

Lukas Pfeifenberger
Graz University of Technology

Robert Peharz
Graz University of Technology

Elmar Messner
Graz University of Technology

Martin Hagmüller
Graz University of Technology

Peter B. Marschik
Medical University of Graz

Tobias Schrank
Graz University of Technology

Wolfgang Roth
Graz University of Technology