Publications


Featured research published by Jan Schlüter.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Improved musical onset detection with Convolutional Neural Networks

Jan Schlüter; Sebastian Böck

Musical onset detection is one of the most elementary tasks in music analysis, but still only solved imperfectly for polyphonic music signals. Interpreted as a computer vision problem in spectrograms, Convolutional Neural Networks (CNNs) seem to be an ideal fit. On a dataset of about 100 minutes of music with 26k annotated onsets, we show that CNNs outperform the previous state-of-the-art while requiring less manual preprocessing. Investigating their inner workings, we find two key advantages over hand-designed methods: Using separate detectors for percussive and harmonic onsets, and combining results from many minor variations of the same scheme. The results suggest that even for well-understood signal processing tasks, machine learning can be superior to knowledge engineering.
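
To make the setup concrete, here is a minimal sketch of such an onset-detection CNN: a small convolutional network classifying short spectrogram excerpts as onset or non-onset. The layer sizes, the 80-band input, and the 15-frame context are illustrative assumptions, not the exact architecture from the paper.

```python
# Hypothetical sketch: a small CNN that classifies spectrogram excerpts
# (bands x frames) as "onset" vs. "no onset". Sizes are assumptions.
import torch
import torch.nn as nn

class OnsetCNN(nn.Module):
    def __init__(self, n_bands=80, context=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 10, kernel_size=(3, 7)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(3, 1)),
            nn.Conv2d(10, 20, kernel_size=(3, 3)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(3, 1)),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            n_feats = self.features(torch.zeros(1, 1, n_bands, context)).numel()
        self.classifier = nn.Sequential(
            nn.Linear(n_feats, 256), nn.ReLU(),
            nn.Linear(256, 1),  # one onset-probability logit per excerpt
        )

    def forward(self, x):  # x: (batch, 1, bands, frames)
        h = self.features(x)
        return self.classifier(h.flatten(1))
```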


International Conference on Multimedia Retrieval | 2013

A naive mid-level concept-based fusion approach to violence detection in Hollywood movies

Bogdan Ionescu; Jan Schlüter; Ionut Mironica; Markus Schedl

In this paper, we approach the issue of violence detection in typical Hollywood productions. Given the high variability in appearance of violent scenes in movies, training a classifier to predict violent frames directly from visual and/or auditory features seems rather difficult. Instead, we propose a different perspective that relies on fusing mid-level concept predictions that are inferred from low-level features. This is achieved by employing a bank of multi-layer perceptron classifiers featuring a dropout training scheme. Experimental validation conducted in the context of the Violent Scenes Detection task of the MediaEval 2012 Multimedia Benchmark Evaluation shows the potential of this approach, which ranked first among 34 other submissions in terms of precision and F1-score.
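
A hedged sketch of the fusion idea described above: a bank of multi-layer perceptrons with dropout maps low-level features to mid-level concept scores, and a final classifier fuses those scores into a violence prediction. All dimensions, the dropout rate, and the concept count are illustrative assumptions.

```python
# Hypothetical sketch of mid-level concept fusion: one MLP per concept
# predicts the concept from low-level features; a final MLP fuses the
# concept scores into a violent/non-violent decision.
import torch
import torch.nn as nn

def make_mlp(n_in, n_hidden, n_out, p_drop=0.5):
    # Multi-layer perceptron with a dropout training scheme.
    return nn.Sequential(
        nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(n_hidden, n_out), nn.Sigmoid(),
    )

n_features, n_concepts = 512, 10  # assumed sizes
concept_bank = nn.ModuleList(
    [make_mlp(n_features, 128, 1) for _ in range(n_concepts)]
)
fusion = make_mlp(n_concepts, 32, 1)

def predict_violence(x):  # x: (batch, n_features)
    concept_scores = torch.cat([mlp(x) for mlp in concept_bank], dim=1)
    return fusion(concept_scores)  # (batch, 1) violence probability
```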


ACM Multimedia | 2016

madmom: A New Python Audio and Music Signal Processing Library

Sebastian Böck; Filip Korzeniowski; Jan Schlüter; Florian Krebs; Gerhard Widmer

In this paper, we present madmom, an open-source audio processing and music information retrieval (MIR) library written in Python. madmom features a concise, NumPy-compatible, object-oriented design with simple calling conventions and sensible default values for all parameters, which facilitates fast prototyping of MIR applications. Prototypes can be seamlessly converted into callable processing pipelines through madmom's concept of Processors, callable objects that run transparently on multiple cores. Processors can also be serialised, saved, and re-run to allow results to be easily reproduced anywhere. Apart from low-level audio processing, madmom puts emphasis on musically meaningful high-level features. Many of these incorporate machine learning techniques, and madmom provides a module that implements some methods commonly used in MIR, such as hidden Markov models and neural networks. Additionally, madmom comes with several state-of-the-art MIR algorithms for onset detection, beat, downbeat and meter tracking, tempo estimation, and chord recognition. These can easily be incorporated into bigger MIR systems or run as stand-alone programs.
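
As a usage illustration, the following sketch follows madmom's documented Processor pattern for beat tracking; the audio filename is a placeholder, and class names should be checked against the installed version's documentation.

```python
# Beat tracking with madmom's Processor pattern: a neural network computes
# a beat activation function, and a DBN decodes beat times from it.
from madmom.features.beats import RNNBeatProcessor, DBNBeatTrackingProcessor

act = RNNBeatProcessor()('some_audio.wav')  # placeholder file: frame-wise activations
proc = DBNBeatTrackingProcessor(fps=100)    # decoder at 100 frames per second
beats = proc(act)                           # beat times in seconds
```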


European Signal Processing Conference | 2015

Music boundary detection using neural networks on spectrograms and self-similarity lag matrices

Thomas Grill; Jan Schlüter

The first step of understanding the structure of a music piece is to segment it into formative parts. A recently successful method for finding segment boundaries employs a Convolutional Neural Network (CNN) trained on spectrogram excerpts. While setting a new state of the art, it often misses boundaries defined by non-local musical cues, such as segment repetitions. To account for this, we propose a refined variant of self-similarity lag matrices representing long-term relationships. We then demonstrate different ways of fusing this feature with spectrogram excerpts within a CNN, resulting in a boundary recognition performance superior to the previous state of the art. We assume that the integration of more features in a similar fashion would improve the performance even further.
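
To clarify the proposed input feature, here is a minimal sketch of a plain self-similarity lag matrix: for every frame, the similarity to each of the preceding max_lag frames. Cosine similarity and this raw, unrefined formulation are assumptions; the paper proposes a refined variant.

```python
# A minimal self-similarity lag matrix (SSLM) sketch in NumPy.
import numpy as np

def self_similarity_lag_matrix(features, max_lag):
    """features: (n_frames, n_dims) array of per-frame feature vectors."""
    # L2-normalize so the dot product equals cosine similarity.
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    n_frames = len(normed)
    sslm = np.zeros((n_frames, max_lag))
    for lag in range(1, max_lag + 1):
        # Similarity between frame t and frame t - lag, for all valid t.
        sims = np.sum(normed[lag:] * normed[:-lag], axis=1)
        sslm[lag:, lag - 1] = sims
    return sslm  # (n_frames, max_lag), long-term relationships per frame
```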


European Signal Processing Conference | 2017

Two convolutional neural networks for bird detection in audio signals

Thomas Grill; Jan Schlüter

We present and compare two approaches to detecting the presence of bird calls in audio recordings using convolutional neural networks on mel spectrograms. In a signal processing challenge using environmental recordings from three very different sources, only two of them available for supervised training, we obtained an Area Under Curve (AUC) measure of 89% on the hidden test set, higher than any other contestant's. By comparing multiple variations of our systems, we find that despite very different architectures, both approaches can be tuned to perform equally well. Further improvements will likely require a radically different approach to dealing with the discrepancy between data sources.
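
A brief sketch of the shared preprocessing and the evaluation metric: log-mel spectrograms as network input and area under the ROC curve as the challenge measure. librosa and scikit-learn stand in here for whatever tooling the authors actually used; all parameter values are assumptions.

```python
# Hypothetical preprocessing and evaluation for bird detection.
import librosa
import numpy as np
from sklearn.metrics import roc_auc_score

def mel_spectrogram(path, sr=22050, n_mels=80):
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return np.log1p(mel)  # log-compressed, shape (n_mels, n_frames)

# After a CNN produces one bird-presence score per recording:
labels = np.array([0, 1, 1, 0, 1])          # ground truth (toy values)
scores = np.array([0.1, 0.8, 0.6, 0.3, 0.9])  # predicted probabilities
print(roc_auc_score(labels, scores))        # the challenge's AUC measure
```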


International Journal of Multimedia Information Retrieval | 2018

End-to-end cross-modality retrieval with CCA projections and pairwise ranking loss

Matthias Dorfer; Jan Schlüter; Andreu Vall; Filip Korzeniowski; Gerhard Widmer

Cross-modality retrieval encompasses retrieval tasks where the fetched items are of a different type than the search query, e.g., retrieving pictures relevant to a given text query. The state-of-the-art approach to cross-modality retrieval relies on learning a joint embedding space of the two modalities, where items from either modality are retrieved using nearest-neighbor search. In this work, we introduce a neural network layer based on canonical correlation analysis (CCA) that learns better embedding spaces by analytically computing projections that maximize correlation. In contrast to previous approaches, the CCA layer allows us to combine existing objectives for embedding space learning, such as pairwise ranking losses, with the optimal projections of CCA. We show the effectiveness of our approach for cross-modality retrieval on three different scenarios (text-to-image, audio-sheet-music and zero-shot retrieval), surpassing both Deep CCA and a multi-view network using freely learned projections optimized by a pairwise ranking loss, especially when little training data is available (the code for all three methods is released at: https://github.com/CPJKU/cca_layer).
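
The core of the proposed layer can be sketched analytically: whiten both views' covariances, then take the SVD of the whitened cross-covariance to obtain the maximally correlating projections. This NumPy sketch omits the differentiability and regularization details handled in the released code; the regularization constant is an assumption.

```python
# Analytic CCA sketch: projections maximizing correlation between views.
import numpy as np

def cca_projections(X, Y, reg=1e-4):
    """X: (n, dx), Y: (n, dy), both mean-centered over the batch."""
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root of a symmetric positive definite matrix.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    Cxx_i, Cyy_i = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, S, Vt = np.linalg.svd(Cxx_i @ Cxy @ Cyy_i)
    k = min(X.shape[1], Y.shape[1])
    A = (Cxx_i @ U)[:, :k]     # projection matrix for the first view
    B = (Cyy_i @ Vt.T)[:, :k]  # projection matrix for the second view
    return X @ A, Y @ B        # maximally correlated embeddings
```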


IEEE Transactions on Audio, Speech, and Language Processing | 2018

Online, Loudness-Invariant Vocal Detection in Mixed Music Signals

Bernhard Lehner; Jan Schlüter; Gerhard Widmer

Singing voice detection, also referred to as vocal detection (VD), aims at automatically identifying the regions in a music recording where at least one person sings. It is highly challenging due to the timbral and expressive richness of the human singing voice, as well as the practically endless variety of interfering instrumental accompaniment. Additionally, certain instruments have an inherent risk of being misclassified as vocals due to similarities in the sound production system. In this paper, we present a machine learning approach, based on our previous work on VD, that is specifically designed to deal with these challenging conditions. The contribution of this paper is threefold. First, we present a new method for VD that passes a compact set of features to a long short-term memory (LSTM) recurrent neural network classifier and obtains state-of-the-art results. Second, we thoroughly evaluate the proposed method along with related approaches to probe the methods' weaknesses; to allow for such a thorough evaluation, we make a curated collection of datasets available to the research community. Finally, we focus on a specific problem that had not been discussed in the literature so far, precisely because limited evaluations had not revealed it: the lack of loudness invariance. We discuss the implications of utilizing loudness-related features and show that our method successfully deals with this problem thanks to the specific set of features it uses.
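
An illustrative sketch of such a classifier: an LSTM recurrent network mapping a compact per-frame feature sequence to frame-wise vocal probabilities, written here in PyTorch. The feature dimensionality and network size are assumptions, not the paper's configuration.

```python
# Hypothetical frame-wise vocal detector: compact features in, LSTM,
# sigmoid output per frame.
import torch
import torch.nn as nn

class VocalDetector(nn.Module):
    def __init__(self, n_features=30, n_hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x):  # x: (batch, n_frames, n_features)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # (batch, n_frames)

# Example: score 100 frames of 30 assumed loudness-invariant features.
model = VocalDetector()
probs = model(torch.randn(1, 100, 30))  # frame-wise singing probability
```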


Archive | 2013

Roadmap for Music Information ReSearch

Xavier Serra; Michela Magas; Emmanouil Benetos; Magdalena Chudy; Simon Dixon; Arthur Flexer; Emilia Gómez; Fabien Gouyon; Perfecto Herrera; Sergi Jordà; Oscar Paytuvi; Geoffroy Peeters; Jan Schlüter; Hugues Vinet; Gerhard Widmer


International Society for Music Information Retrieval Conference | 2014

Boundary Detection in Music Structure Analysis Using Convolutional Neural Networks

Karen Ullrich; Jan Schlüter; Thomas Grill


International Society for Music Information Retrieval Conference | 2015

Exploring Data Augmentation for Improved Singing Voice Detection with Neural Networks

Jan Schlüter; Thomas Grill

Collaboration


Dive into Jan Schlüter's collaborations.

Top Co-Authors

Gerhard Widmer, Johannes Kepler University of Linz
Markus Schedl, Johannes Kepler University of Linz
Thomas Grill, Austrian Research Institute for Artificial Intelligence
Bogdan Ionescu, Politehnica University of Bucharest
Ionut Mironica, Politehnica University of Bucharest
Arthur Flexer, Austrian Research Institute for Artificial Intelligence
Filip Korzeniowski, Johannes Kepler University of Linz
Matthias Dorfer, Johannes Kepler University of Linz
Sebastian Böck, Johannes Kepler University of Linz
Andreu Vall, Johannes Kepler University of Linz