
Publication


Featured research published by Keith Levin.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2013

A summary of the 2012 JHU CLSP workshop on zero resource speech technologies and models of early language acquisition

Aren Jansen; Emmanuel Dupoux; Sharon Goldwater; Mark Johnson; Sanjeev Khudanpur; Kenneth Church; Naomi H. Feldman; Hynek Hermansky; Florian Metze; Richard C. Rose; Michael L. Seltzer; Pascal Clark; Ian McGraw; Balakrishnan Varadarajan; Erin Bennett; Benjamin Börschinger; Justin Chiu; Ewan Dunbar; Abdellah Fourtassi; David F. Harwath; Chia-ying Lee; Keith Levin; Atta Norouzian; Vijayaditya Peddinti; Rachael Richardson; Thomas Schatz; Samuel Thomas

We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding zero resource (unsupervised) speech technologies and related models of early language acquisition. Centered around the tasks of phonetic and lexical discovery, we consider unified evaluation metrics, present two new approaches for improving speaker independence in the absence of supervision, and evaluate the application of Bayesian word segmentation algorithms to automatic subword unit tokenizations. Finally, we present two strategies for integrating zero resource techniques into supervised settings, demonstrating the potential of unsupervised methods to improve mainstream technologies.


IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) | 2013

Fixed-dimensional acoustic embeddings of variable-length segments in low-resource settings

Keith Levin; Katharine Henry; Aren Jansen; Karen Livescu

Measures of acoustic similarity between words or other units are critical for segmental exemplar-based acoustic models, spoken term discovery, and query-by-example search. Dynamic time warping (DTW) alignment cost has been the most commonly used measure, but it has well-known inadequacies. Some recently proposed alternatives require large amounts of training data. In the interest of finding more efficient, accurate, and low-resource alternatives, we consider the problem of embedding speech segments of arbitrary length into fixed-dimensional spaces in which simple distances (such as cosine or Euclidean) serve as a proxy for linguistically meaningful (phonetic, lexical, etc.) dissimilarities. Such embeddings would enable efficient audio indexing and permit application of standard distance learning techniques to segmental acoustic modeling. In this paper, we explore several supervised and unsupervised approaches to this problem and evaluate them on an acoustic word discrimination task. We identify several embedding algorithms that match or improve upon the DTW baseline in low-resource settings.
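The core contrast can be sketched in a few lines: DTW alignment cost between variable-length sequences versus cosine distance between fixed-dimensional embeddings. The downsampling embedding below is one simple unsupervised baseline of the kind this line of work evaluates; the function names and the choice of k = 10 resampled frames are illustrative, not taken from the paper.

```python
import numpy as np

def dtw_cost(x, y):
    """Length-normalized DTW alignment cost between two variable-length
    feature sequences x (n, d) and y (m, d), with Euclidean local cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = np.linalg.norm(x[i - 1] - y[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

def downsample_embed(x, k=10):
    """Map a variable-length sequence to a fixed-dimensional unit vector by
    uniformly resampling k frames and flattening (a simple unsupervised
    baseline; the paper compares several more sophisticated approaches)."""
    idx = np.linspace(0, len(x) - 1, k).round().astype(int)
    v = x[idx].ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def cosine_dist(u, v):
    """Cosine distance between two unit vectors."""
    return 1.0 - float(u @ v)
```

The downsampling scheme is far simpler than the embeddings studied in the paper, but it already shows the shape of the interface: any such embedding reduces word discrimination to a single fixed-dimensional vector distance.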


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2015

Segmental acoustic indexing for zero resource keyword search

Keith Levin; Aren Jansen; Benjamin Van Durme

The task of zero resource query-by-example keyword search has received much attention in recent years as the speech technology needs of the developing world grow. These systems traditionally rely upon dynamic time warping (DTW) based retrieval algorithms with runtimes that are linear in the size of the search collection. As a result, their scalability substantially lags that of their supervised counterparts, which take advantage of efficient word-based indices. In this paper, we present a novel audio indexing approach called Segmental Randomized Acoustic Indexing and Logarithmic-time Search (S-RAILS). S-RAILS generalizes the original frame-based RAILS methodology to word-scale segments by exploiting a recently proposed acoustic segment embedding technique. By indexing word-scale segments directly, we avoid higher cost frame-based processing of RAILS while taking advantage of the improved lexical discrimination of the embeddings. Using the same conversational telephone speech benchmark, we demonstrate major improvements in both speed and accuracy over the original RAILS system.
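A toy version of the indexing idea, assuming sign-random-projection (hyperplane) hashing over precomputed segment embeddings: embeddings that are close in cosine distance tend to collide in at least one hash table, so a query inspects a few small buckets rather than scanning the whole collection. The actual S-RAILS system uses a sorted-signature scheme with logarithmic-time lookups; this class and its parameters are illustrative only.

```python
import numpy as np

class SignLSHIndex:
    """Toy sign-random-projection index over fixed-dimensional segment
    embeddings: each of several hash tables keys vectors by the sign
    pattern of random projections, so cosine-similar embeddings tend to
    share a bucket and queries touch only those buckets."""

    def __init__(self, dim, n_bits=16, n_tables=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.standard_normal((n_bits, dim))
                       for _ in range(n_tables)]
        self.tables = [dict() for _ in range(n_tables)]
        self.vectors = []

    def _key(self, planes, v):
        # bit signature: which side of each random hyperplane v falls on
        return tuple((planes @ v > 0).tolist())

    def add(self, v):
        v = np.asarray(v, dtype=float)
        i = len(self.vectors)
        self.vectors.append(v)
        for planes, table in zip(self.planes, self.tables):
            table.setdefault(self._key(planes, v), []).append(i)
        return i

    def query(self, q, top=1):
        q = np.asarray(q, dtype=float)
        # gather candidates from the query's bucket in every table
        cand = set()
        for planes, table in zip(self.planes, self.tables):
            cand.update(table.get(self._key(planes, q), []))
        # rank the (few) candidates by cosine similarity
        def cos(i):
            v = self.vectors[i]
            return float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-12)
        return sorted(cand, key=cos, reverse=True)[:top]
```

Because ranking happens only over bucket candidates, search cost is governed by bucket sizes rather than the collection size, which is the property that lets segment-level indexing outpace frame-based DTW retrieval.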


IEEE Transactions on Signal Processing | 2017

Laplacian Eigenmaps From Sparse, Noisy Similarity Measurements

Keith Levin; Vince Lyzinski

Manifold learning and dimensionality reduction techniques are ubiquitous in science and engineering, but can be computationally expensive procedures when applied to large datasets or when similarities are expensive to compute. To date, little work has been done to investigate the tradeoff between computational resources and the quality of learned representations. We present both theoretical and experimental explorations of this question. In particular, we consider Laplacian eigenmaps embeddings based on a kernel matrix, and explore how the embeddings behave when this kernel matrix is corrupted by occlusion and noise. Our main theoretical result shows that under modest noise and occlusion assumptions, we can (with high probability) recover a good approximation to the Laplacian eigenmaps embedding based on the uncorrupted kernel matrix. Our results also show how regularization can aid this approximation. Experimentally, we explore the effects of noise and occlusion on Laplacian eigenmaps embeddings of two real-world datasets, one from speech processing and one from neuroscience, as well as a synthetic dataset.
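A minimal sketch of the setting, assuming the standard symmetric normalized-Laplacian formulation: compute the embedding from a kernel matrix, and corrupt the kernel by occlusion (symmetrically zeroing a random fraction of off-diagonal entries). The helper names are illustrative; the paper's regularization and recovery analysis are not reproduced here.

```python
import numpy as np

def laplacian_eigenmaps(K, d):
    """Laplacian eigenmaps embedding from a symmetric kernel (similarity)
    matrix K: form the symmetric normalized Laplacian and return the
    eigenvectors of its d smallest nontrivial eigenvalues."""
    K = np.asarray(K, dtype=float)
    deg = K.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(len(K)) - D_inv_sqrt @ K @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return vecs[:, 1:d + 1]             # skip the trivial constant direction

def occlude(K, frac, rng):
    """Corrupt K by symmetrically zeroing a random fraction of its
    off-diagonal entries -- the sparse-observation regime studied here."""
    n = len(K)
    mask = np.triu(rng.random((n, n)) < frac, k=1)
    Kc = K.copy()
    Kc[mask] = 0.0
    Kc[mask.T] = 0.0
    return Kc
```

On a clean two-block kernel, the first nontrivial eigenvector separates the blocks by sign; the paper's question is how much of that structure survives when the kernel passed to the embedding is the occluded, noisy one.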


Conference on Algorithms and Discrete Applied Mathematics | 2017

Accurate Low-Space Approximation of Metric k-Median for Insertion-Only Streams

Vladimir Braverman; Harry Lang; Keith Levin

We present a low-constant approximation for metric k-median on an insertion-only stream of n points using O(ε⁻³ k log n) space. In particular, we present a streaming (O(ε⁻³ k log n), 2 + ε)-bicriterion solution that reports cluster weights. It is well known that running an offline algorithm on this bicriterion solution yields a (17.66 + ε)-approximation.
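To illustrate what a bicriterion sketch looks like (more centers than k, plus per-center weights that an offline k-median algorithm can consume afterwards), here is a simple threshold-doubling greedy. It is not the paper's algorithm and carries no formal guarantee; it only conveys the shape of the object such a streaming algorithm returns.

```python
def stream_centers(stream, k, dist):
    """Insertion-only sketch that maintains a small weighted center set by
    threshold doubling: a point joins its nearest center when close enough,
    otherwise opens a new center; on overflow the radius doubles and nearby
    centers merge, summing their weights. Running an offline k-median
    algorithm on the returned weighted centers then yields the final k
    centers. (Toy greedy only: at most 4k centers and no approximation
    bound, unlike the paper's O(eps^-3 k log n)-space algorithm.)"""
    max_centers = 4 * k
    centers = []   # [point, weight] pairs; every center is a stream point
    r = 0.0        # current attachment/merge radius
    for p in stream:
        if centers:
            j = min(range(len(centers)), key=lambda i: dist(p, centers[i][0]))
            if dist(p, centers[j][0]) <= r:
                centers[j][1] += 1          # absorb p into its nearest center
                continue
        centers.append([p, 1])
        while len(centers) > max_centers:
            if r == 0.0:                    # first overflow: start from the
                r = min(dist(a, b)          # closest pair of centers
                        for i, (a, _) in enumerate(centers)
                        for b, _ in centers[i + 1:])
            else:
                r *= 2.0
            merged = []
            for c, w in centers:
                for m in merged:
                    if dist(c, m[0]) <= r:  # fold c into an earlier center
                        m[1] += w
                        break
                else:
                    merged.append([c, w])
            centers = merged
    return centers
```

The returned pairs are exactly the "bicriterion solution that reports cluster weights": weights conserve the stream size, so a weighted offline solver sees a faithful compressed summary of the data.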


Journal of Machine Learning Research | 2016

On the consistency of the likelihood maximization vertex nomination scheme: bridging the gap between maximum likelihood estimation and graph matching

Vince Lyzinski; Keith Levin; Donniell E. Fishkind; Carey E. Priebe


Conference of the International Speech Communication Association (Interspeech) | 2017

Query-by-Example Search with Discriminative Neural Acoustic Word Embeddings

Shane Settle; Keith Levin; Herman Kamper; Karen Livescu


Symposium on Discrete Algorithms (SODA) | 2016

Clustering problems on sliding windows

Vladimir Braverman; Harry Lang; Keith Levin; Morteza Monemizadeh


Journal of Machine Learning Research | 2018

Statistical Inference on Random Dot Product Graphs: a Survey

Avanti Athreya; Donniell E. Fishkind; Minh Tang; Carey E. Priebe; Youngser Park; Joshua T. Vogelstein; Keith Levin; Vince Lyzinski; Yichen Qin; Daniel L. Sussman


Foundations of Software Technology and Theoretical Computer Science (FSTTCS) | 2015

Clustering on Sliding Windows in Polylogarithmic Space

Vladimir Braverman; Harry Lang; Keith Levin; Morteza Monemizadeh

Collaboration


Dive into Keith Levin's collaborations.

Top Co-Authors

Vince Lyzinski (Johns Hopkins University)
Harry Lang (Johns Hopkins University)
Avanti Athreya (Johns Hopkins University)
Minh Tang (Johns Hopkins University)
Pascal Clark (Johns Hopkins University)