
Publication


Featured research published by Aren Jansen.


International Conference on Acoustics, Speech, and Signal Processing | 2006

Intrinsic Fourier Analysis on the Manifold of Speech Sounds

Aren Jansen; Partha Niyogi

Recently, there has been much interest in geometrically motivated dimensionality reduction algorithms. These algorithms exploit low-dimensional manifold structure in certain natural datasets to reduce dimensionality while preserving categorical content. This paper has two goals: (i) to motivate the existence of a low-dimensional curved manifold structure for voiced speech sounds, and (ii) to present a new intrinsic (manifold-based) spectrogram technique founded on the existence of this manifold structure. We find that the intrinsic representation allows phonetic distinction in fewer dimensions than required by a traditional spectrogram.
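The intrinsic representation here builds on graph-Laplacian methods: an eigenbasis is derived from a neighborhood graph over the data rather than from a fixed Fourier basis. Below is a minimal sketch of that ingredient; the synthetic data, function name, and parameter choices are illustrative, not the paper's.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenbasis(X, k=10, n_basis=8):
    """First n_basis nontrivial eigenvectors of the normalized graph Laplacian."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-D2 / np.median(D2))          # Gaussian affinities
    far = np.argsort(D2, axis=1)[:, k + 1:]  # keep self + k nearest neighbors
    for i in range(len(X)):
        W[i, far[i]] = 0.0
    W = np.maximum(W, W.T)                   # symmetrize the k-NN graph
    d = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))
    _, vecs = eigh(L)                        # eigenvalues in ascending order
    return vecs[:, 1:n_basis + 1]            # drop the constant eigenvector

X = np.random.randn(200, 13)                 # stand-in for MFCC-like frames
print(laplacian_eigenbasis(X).shape)         # (200, 8): intrinsic coordinates
```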


IEEE Automatic Speech Recognition and Understanding Workshop | 2011

Efficient spoken term discovery using randomized algorithms

Aren Jansen; Benjamin Van Durme

Spoken term discovery is the task of automatically identifying words and phrases in speech data by searching for long repeated acoustic patterns. Initial solutions relied on exhaustive dynamic time warping-based searches across the entire similarity matrix, a method whose scalability is ultimately limited by the O(n²) nature of the search space. Recent strategies have attempted to improve search efficiency by using either unsupervised or mismatched-language acoustic models to reduce the complexity of the feature representation. Taking a completely different approach, this paper investigates the use of randomized algorithms that operate directly on the raw acoustic features to produce sparse approximate similarity matrices in O(n) space and O(n log n) time. We demonstrate that these techniques facilitate spoken term discovery performance capable of outperforming a model-based strategy in the zero resource setting.
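The core randomized ingredient is locality-sensitive hashing: random projections turn each frame into a short binary signature, and sorting those signatures surfaces candidate matches without ever materializing the full similarity matrix. A minimal sketch of that idea follows; dimensions and signature length are illustrative, not the paper's RAILS configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((10_000, 39))    # stand-in acoustic features

# Cosine LSH: 32 random hyperplanes give each frame a 32-bit signature.
planes = rng.standard_normal((39, 32))
bits = (frames @ planes > 0).astype(np.uint64)
sigs = (bits << np.arange(32, dtype=np.uint64)).sum(axis=1)

# Sorting the signatures costs O(n log n) and places similar frames in
# adjacent positions, so only neighboring pairs need exact comparison.
order = np.argsort(sigs)
candidates = [(int(a), int(b)) for a, b in zip(order[:-1], order[1:])
              if sigs[a] == sigs[b]]          # verify these with true cosine
print(len(candidates), "candidate frame pairs")
```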


International Conference on Acoustics, Speech, and Signal Processing | 2013

A summary of the 2012 JHU CLSP workshop on zero resource speech technologies and models of early language acquisition

Aren Jansen; Emmanuel Dupoux; Sharon Goldwater; Mark Johnson; Sanjeev Khudanpur; Kenneth Church; Naomi H. Feldman; Hynek Hermansky; Florian Metze; Richard C. Rose; Michael L. Seltzer; Pascal Clark; Ian McGraw; Balakrishnan Varadarajan; Erin Bennett; Benjamin Börschinger; Justin Chiu; Ewan Dunbar; Abdellah Fourtassi; David F. Harwath; Chia-ying Lee; Keith Levin; Atta Norouzian; Vijayaditya Peddinti; Rachael Richardson; Thomas Schatz; Samuel Thomas

We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding zero resource (unsupervised) speech technologies and related models of early language acquisition. Centered around the tasks of phonetic and lexical discovery, we consider unified evaluation metrics, present two new approaches for improving speaker independence in the absence of supervision, and evaluate the application of Bayesian word segmentation algorithms to automatic subword unit tokenizations. Finally, we present two strategies for integrating zero resource techniques into supervised settings, demonstrating the potential of unsupervised methods to improve mainstream technologies.


International Conference on Acoustics, Speech, and Signal Processing | 2015

Unsupervised neural network based feature extraction using weak top-down constraints

Herman Kamper; Micha Elsner; Aren Jansen; Sharon Goldwater

Deep neural networks (DNNs) have become a standard component in supervised ASR, used in both data-driven feature extraction and acoustic modelling. Supervision is typically obtained from a forced alignment that provides phone class targets, requiring transcriptions and pronunciations. We propose a novel unsupervised DNN-based feature extractor that can be trained without these resources in zero-resource settings. Using unsupervised term discovery, we find pairs of isolated word examples of the same unknown type; these provide weak top-down supervision. For each pair, dynamic programming is used to align the feature frames of the two words. Matching frames are presented as input-output pairs to a deep autoencoder (AE) neural network. Using this AE as feature extractor in a word discrimination task, we achieve 64% relative improvement over a previous state-of-the-art system, 57% improvement relative to a bottom-up trained deep AE, and come to within 23% of a supervised system.
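The correspondence autoencoder at the heart of this method trains on aligned frame pairs: the network reads a frame from one word example and must reconstruct the matching frame from the other. A hedged PyTorch sketch, with layer sizes and data stand-ins that are ours rather than the paper's:

```python
import torch
import torch.nn as nn

# Encoder/decoder around a small hidden layer whose activations become
# the new feature representation.
cae = nn.Sequential(nn.Linear(39, 100), nn.Tanh(), nn.Linear(100, 39))
opt = torch.optim.Adam(cae.parameters(), lr=1e-3)

# In the real system these pairs come from DTW-aligning two discovered
# examples of the same unknown word; random tensors stand in here.
x = torch.randn(1024, 39)   # frames from example A
y = torch.randn(1024, 39)   # aligned frames from example B

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(cae(x), y)  # reconstruct the partner frame
    loss.backward()
    opt.step()

features = torch.tanh(cae[0](x))  # hidden activations as extracted features
```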


IEEE Transactions on Audio, Speech, and Language Processing | 2009

Point Process Models for Spotting Keywords in Continuous Speech

Aren Jansen; Partha Niyogi

We investigate the hypothesis that the linguistic content underlying human speech may be coded in the pattern of timings of various acoustic "events" (landmarks) in the speech signal. This hypothesis is supported by several strands of research in the fields of linguistics, speech perception, and neuroscience. In this paper, we put these scientific motivations to the test by formulating a point process-based computational framework for the task of spotting keywords in continuous speech. We find that even with a noisy and extremely sparse phonetic landmark-based point process representation, keywords can be spotted with accuracy levels comparable to recently studied hidden Markov model-based keyword spotting systems. We show that the performance of our keyword spotting system in the high-precision regime is better predicted by the median duration of the keyword rather than simply the number of its constituent syllables or phonemes. When we are confronted with very few (in the extreme case, zero) examples of the keyword in question, we find that constructing a keyword detector from its component syllable detectors provides a viable approach.
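One way to see the point-process framing: landmark timings inside a candidate window are scored under a keyword-specific inhomogeneous Poisson model against a background model. The sketch below is a toy illustration with made-up rates and landmark times, not the paper's trained detectors.

```python
import numpy as np

def poisson_loglik(times, rate_fn, T, n_grid=400):
    """Log-likelihood of event times under an inhomogeneous Poisson process on [0, T]."""
    grid = np.linspace(0.0, T, n_grid)
    expected = np.sum(rate_fn(grid)) * (T / n_grid)   # Riemann approximation
    return np.sum(np.log(rate_fn(np.asarray(times)))) - expected

T = 0.4                                               # window length (seconds)
landmarks = np.array([0.05, 0.21, 0.33])              # detected landmark times

keyword_rate = lambda t: 1.0 + 40.0 * np.exp(-((t - 0.2) / 0.05) ** 2)
background_rate = lambda t: np.full_like(t, 8.0, dtype=float)

score = (poisson_loglik(landmarks, keyword_rate, T)
         - poisson_loglik(landmarks, background_rate, T))
print("detection score:", score)  # higher => timings look keyword-like
```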


International Conference on Acoustics, Speech, and Signal Processing | 2013

Weak top-down constraints for unsupervised acoustic model training

Aren Jansen; Samuel Thomas; Hynek Hermansky

Typical supervised acoustic model training relies on strong top-down constraints provided by dynamic programming alignment of the input observations to phonetic sequences derived from orthographic word transcripts and pronunciation dictionaries. This paper investigates a much weaker form of top-down supervision for use in place of transcripts and dictionaries in the zero resource setting. Our proposed constraints, which can be produced using recent spoken term discovery systems, come in the form of pairs of isolated word examples that share the same unknown type. For each pair, we perform a dynamic programming alignment of the acoustic observations of the two constituent examples, generating an inventory of cross-speaker frame pairs that each provide evidence that the same subword unit model should account for them. We find these weak top-down constraints are capable of improving model speaker independence by up to 57% relative over bottom-up training alone.
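The constraint-generation step is a standard DTW alignment of two discovered word examples, emitting matched frame index pairs. A self-contained sketch follows (quadratic pure-Python DTW for clarity; the data dimensions are illustrative):

```python
import numpy as np

def dtw_pairs(A, B):
    """Return the list of (i, j) aligned frame indices between sequences A and B."""
    n, m = len(A), len(B)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(A[i - 1] - B[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrace the optimal warping path.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return pairs[::-1]

A = np.random.randn(50, 39)    # example A of a discovered word (frames x dims)
B = np.random.randn(62, 39)    # example B of the same (unknown) word
constraints = dtw_pairs(A, B)  # cross-speaker frame pairs for model training
```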


IEEE Automatic Speech Recognition and Understanding Workshop | 2013

Fixed-dimensional acoustic embeddings of variable-length segments in low-resource settings

Keith Levin; Katharine Henry; Aren Jansen; Karen Livescu

Measures of acoustic similarity between words or other units are critical for segmental exemplar-based acoustic models, spoken term discovery, and query-by-example search. Dynamic time warping (DTW) alignment cost has been the most commonly used measure, but it has well-known inadequacies. Some recently proposed alternatives require large amounts of training data. In the interest of finding more efficient, accurate, and low-resource alternatives, we consider the problem of embedding speech segments of arbitrary length into fixed-dimensional spaces in which simple distances (such as cosine or Euclidean) serve as a proxy for linguistically meaningful (phonetic, lexical, etc.) dissimilarities. Such embeddings would enable efficient audio indexing and permit application of standard distance learning techniques to segmental acoustic modeling. In this paper, we explore several supervised and unsupervised approaches to this problem and evaluate them on an acoustic word discrimination task. We identify several embedding algorithms that match or improve upon the DTW baseline in low-resource settings.
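A common simple instance of such an embedding (offered here as an illustration, not necessarily one of the paper's methods) is uniform downsampling: sample a fixed number of frames from the segment and flatten, so that cosine distance replaces a DTW alignment. A minimal sketch under that assumption:

```python
import numpy as np

def downsample_embed(frames, n=10):
    """Map a (T, d) segment to a fixed (n * d,) vector by uniform frame sampling."""
    idx = np.linspace(0, len(frames) - 1, n).round().astype(int)
    return frames[idx].ravel()

def cosine_distance(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

seg1 = np.random.randn(47, 39)   # two variable-length "word" segments
seg2 = np.random.randn(80, 39)
d = cosine_distance(downsample_embed(seg1), downsample_embed(seg2))
print(d)   # a fixed-cost proxy for DTW alignment cost
```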


International Conference on Acoustics, Speech, and Signal Processing | 2015

Segmental acoustic indexing for zero resource keyword search

Keith Levin; Aren Jansen; Benjamin Van Durme

The task of zero resource query-by-example keyword search has received much attention in recent years as the speech technology needs of the developing world grow. These systems traditionally rely upon dynamic time warping (DTW) based retrieval algorithms with runtimes that are linear in the size of the search collection. As a result, their scalability substantially lags that of their supervised counterparts, which take advantage of efficient word-based indices. In this paper, we present a novel audio indexing approach called Segmental Randomized Acoustic Indexing and Logarithmic-time Search (S-RAILS). S-RAILS generalizes the original frame-based RAILS methodology to word-scale segments by exploiting a recently proposed acoustic segment embedding technique. By indexing word-scale segments directly, we avoid higher cost frame-based processing of RAILS while taking advantage of the improved lexical discrimination of the embeddings. Using the same conversational telephone speech benchmark, we demonstrate major improvements in both speed and accuracy over the original RAILS system.
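The logarithmic-time ingredient comes from keeping LSH signatures of the indexed segment embeddings in sorted order, so a query can be located by binary search and rescored against only a small neighborhood. A toy sketch of that lookup; the signature length, embedding dimension, and rescoring window are illustrative, not the S-RAILS settings.

```python
import bisect
import numpy as np

rng = np.random.default_rng(1)
planes = rng.standard_normal((50, 16))   # 50-dim embeddings -> 16-bit keys

def signature(embedding):
    bits = (embedding @ planes > 0).astype(int)
    return int("".join(map(str, bits)), 2)

# Index: sorted signatures of 100k word-segment embeddings.
index = sorted(signature(rng.standard_normal(50)) for _ in range(100_000))

query = signature(rng.standard_normal(50))
pos = bisect.bisect_left(index, query)        # O(log n) lookup
candidates = index[max(0, pos - 5): pos + 5]  # rescore only this window
```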


Spoken Language Technology Workshop | 2014

A keyword search system using open source software

Jan Trmal; Guoguo Chen; Daniel Povey; Sanjeev Khudanpur; Pegah Ghahremani; Xiaohui Zhang; Vimal Manohar; Chunxi Liu; Aren Jansen; Dietrich Klakow; David Yarowsky; Florian Metze

Provides an overview of a speech-to-text (STT) and keyword search (KWS) system architecture built primarily on top of the Kaldi toolkit and expands on a few highlights. The system was developed as part of the research efforts of the Radical team while participating in the IARPA Babel program. Our aim was to develop a general system pipeline that could be easily and rapidly deployed in any language, independently of the language script and the phonological and linguistic features of the language.


IEEE Transactions on Audio, Speech, and Language Processing | 2016

Unsupervised word segmentation and lexicon discovery using acoustic word embeddings

Herman Kamper; Aren Jansen; Sharon Goldwater

In settings where only unlabeled speech data is available, speech technology needs to be developed without transcriptions, pronunciation dictionaries, or language modelling text. A similar problem is faced when modeling infant language acquisition. In these cases, categorical linguistic structure needs to be discovered directly from speech audio. We present a novel unsupervised Bayesian model that segments unlabeled speech and clusters the segments into hypothesized word groupings. The result is a complete unsupervised tokenization of the input speech in terms of discovered word types. In our approach, a potential word segment (of arbitrary length) is embedded in a fixed-dimensional acoustic vector space. The model, implemented as a Gibbs sampler, then builds a whole-word acoustic model in this space while jointly performing segmentation. We report word error rates in a small-vocabulary connected digit recognition task by mapping the unsupervised decoded output to ground truth transcriptions. The model achieves around 20% error rate, outperforming a previous HMM-based system by about 10% absolute. Moreover, in contrast to the baseline, our model does not require a pre-specified vocabulary size.
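To make the Gibbs-sampling idea concrete, the heavily simplified sketch below resamples a single word boundary given fixed whole-word "models" in an embedding space. The real model jointly learns segmentations and clusters; the embedding, models, and candidate boundaries here are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
means = rng.standard_normal((5, D))          # 5 hypothesized word-type models

def embed(frames):                           # stand-in acoustic word embedding
    return frames.mean(axis=0)

def seg_loglik(frames):
    e = embed(frames)                        # best word type explains the segment
    return float(np.max(-0.5 * np.sum((means - e) ** 2, axis=1)))

utt = rng.standard_normal((40, D))           # one utterance of 40 frames

# Gibbs step for one interior boundary: consider removing it (one segment)
# or placing it at each candidate position (two segments).
options = [None] + list(range(5, 36, 5))
scores = np.array([
    sum(seg_loglik(utt[s:e]) for s, e in
        ([(0, 40)] if b is None else [(0, b), (b, 40)]))
    for b in options
])
probs = np.exp(scores - scores.max())
probs /= probs.sum()
boundary = options[rng.choice(len(options), p=probs)]
print("sampled boundary:", boundary)
```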

Collaboration


Dive into Aren Jansen's collaborations.

Top Co-Authors

Keith Kintzley, Johns Hopkins University
Emmanuel Dupoux, École Normale Supérieure