Network


Latest external collaborations at the country level.

Hotspot


Research topics in which David F. Harwath is active.

Publication


Featured research published by David F. Harwath.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2013

A summary of the 2012 JHU CLSP workshop on zero resource speech technologies and models of early language acquisition

Aren Jansen; Emmanuel Dupoux; Sharon Goldwater; Mark Johnson; Sanjeev Khudanpur; Kenneth Church; Naomi H. Feldman; Hynek Hermansky; Florian Metze; Richard C. Rose; Michael L. Seltzer; Pascal Clark; Ian McGraw; Balakrishnan Varadarajan; Erin Bennett; Benjamin Börschinger; Justin Chiu; Ewan Dunbar; Abdellah Fourtassi; David F. Harwath; Chia-ying Lee; Keith Levin; Atta Norouzian; Vijayaditya Peddinti; Rachael Richardson; Thomas Schatz; Samuel Thomas

We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding zero resource (unsupervised) speech technologies and related models of early language acquisition. Centered around the tasks of phonetic and lexical discovery, we consider unified evaluation metrics, present two new approaches for improving speaker independence in the absence of supervision, and evaluate the application of Bayesian word segmentation algorithms to automatic subword unit tokenizations. Finally, we present two strategies for integrating zero resource techniques into supervised settings, demonstrating the potential of unsupervised methods to improve mainstream technologies.


IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) | 2015

Deep multimodal semantic embeddings for speech and images

David F. Harwath; James R. Glass

In this paper, we present a model which takes as input a corpus of images with relevant spoken captions and finds a correspondence between the two modalities. We employ a pair of convolutional neural networks to model visual objects and speech signals at the word level, and tie the networks together with an embedding and alignment model which learns a joint semantic space over both modalities. We evaluate our model using image search and annotation tasks on the Flickr8k dataset, which we augmented by collecting a corpus of 40,000 spoken captions using Amazon Mechanical Turk.
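
The core idea, two modality-specific encoders mapped into a shared embedding space and trained with a ranking objective, can be sketched roughly as follows; the layer sizes, loss, and names below are illustrative placeholders, not the paper's exact architecture:

```python
# Minimal dual-encoder sketch in PyTorch (hypothetical layer sizes;
# the paper's exact networks and training objective differ in detail).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageBranch(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        # Small CNN standing in for the visual network.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, images):                  # (B, 3, H, W)
        h = self.conv(images).flatten(1)        # (B, 128)
        return F.normalize(self.proj(h), dim=-1)

class SpeechBranch(nn.Module):
    def __init__(self, embed_dim=512, n_mels=40):
        super().__init__()
        # 1-D convolutions over a spectrogram stand in for the speech network.
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, spectrograms):            # (B, n_mels, T)
        h = self.conv(spectrograms).flatten(1)  # (B, 128)
        return F.normalize(self.proj(h), dim=-1)

def ranking_loss(img_emb, aud_emb, margin=1.0):
    """Hinge ranking loss pulling matched image/caption pairs together."""
    sim = img_emb @ aud_emb.t()                 # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)               # matched pairs on the diagonal
    cost = (margin + sim - pos).clamp(min=0)    # penalize impostors within the margin
    cost.fill_diagonal_(0)
    return cost.mean()
```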


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2013

Zero resource spoken audio corpus analysis

David F. Harwath; Timothy J. Hazen; James R. Glass

Zero-resource speech processing involves the automatic analysis of a collection of speech data in a completely unsupervised fashion without the benefit of any transcriptions or annotations of the data. In this paper, our zero-resource system seeks to automatically discover important words, phrases and topical themes present in an audio corpus. This system employs a segmental dynamic time warping (S-DTW) algorithm for acoustic pattern discovery in conjunction with a probabilistic model which treats the topic and pseudo-word identity of each discovered pattern as hidden variables. By applying an Expectation-Maximization (EM) algorithm, our system estimates the latent probability distributions over the pseudo-words and topics associated with the discovered patterns. Using this information, we produce acoustic summaries of the dominant topical themes of the audio document collection.
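
The distance computation at the heart of acoustic pattern discovery is dynamic time warping between feature sequences. Below is a minimal plain-DTW sketch; the paper's segmental DTW additionally applies band constraints and searches for locally aligned subsequences, which this sketch omits:

```python
# Minimal dynamic time warping between two acoustic feature sequences
# (e.g., MFCC frames). Feature values here are random placeholders.
import numpy as np

def dtw_distance(x, y):
    """x: (N, D), y: (M, D) feature matrices; returns a length-normalized alignment cost."""
    n, m = len(x), len(y)
    # Frame-to-frame Euclidean distances.
    dist = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(
                acc[i - 1, j],      # insertion
                acc[i, j - 1],      # deletion
                acc[i - 1, j - 1],  # match
            )
    return acc[n, m] / (n + m)

# Example: two random "utterances" of MFCC-like features.
a = np.random.randn(80, 13)
b = np.random.randn(95, 13)
print(dtw_distance(a, b))
```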


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2012

Topic identification based extrinsic evaluation of summarization techniques applied to conversational speech

David F. Harwath; Timothy J. Hazen

Document summarization algorithms are most commonly evaluated according to the intrinsic quality of the summaries they produce. An alternate approach is to examine the extrinsic utility of a summary, measured by the ability of the summary to aid a human in the completion of a specific task. In this paper, we use topic identification as a proxy for relevancy determination in the context of an information retrieval task, and a summary is deemed effective if it enables a user to determine the topical content of a retrieved document. We utilize Amazon's Mechanical Turk service to perform a large-scale human study contrasting four different summarization systems applied to conversational speech from the Fisher Corpus. We show that these results appear to be correlated with the performance of an automated topic identification system, and argue that this automated system can act as a low-cost proxy for a human evaluation during the development stages of a summarization system.
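
The final comparison amounts to checking whether automatic topic-identification accuracy tracks human topic-identification accuracy across summarizers. A rough sketch, using placeholder accuracy values rather than the paper's results:

```python
# Sketch of the extrinsic comparison described above: for each summarization
# system, compare how well humans and an automatic classifier identify the
# topic of a document from its summary. All numbers are placeholders.
import numpy as np

systems = ["system_A", "system_B", "system_C", "system_D"]
human_topic_id_acc = np.array([0.62, 0.71, 0.55, 0.68])   # placeholder values
auto_topic_id_acc = np.array([0.58, 0.69, 0.51, 0.66])    # placeholder values

# A high correlation suggests the automatic system can stand in for human
# judges when ranking summarizers during development.
r = np.corrcoef(human_topic_id_acc, auto_topic_id_acc)[0, 1]
print(f"correlation across {len(systems)} systems: {r:.2f}")
```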


IEEE Transactions on Audio, Speech, and Language Processing | 2016

On the Use of Acoustic Unit Discovery for Language Recognition

Stephen Shum; David F. Harwath; Najim Dehak; James R. Glass

In this paper, we explore the use of large-scale acoustic unit discovery for language recognition. The deep neural network-based approaches that have achieved recent success in this task require transcribed speech and pronunciation dictionaries, which may be limited in availability and expensive to obtain. We aim to replace the need for such supervision via the unsupervised discovery of acoustic units. In this work, we present a parallelized version of a Bayesian nonparametric model from previous work and use it to learn acoustic units from a few hundred hours of multilingual data. These unit (or senone) sequences are then used as targets to train a deep neural network-based i-vector language recognition system. We find that a score-level fusion of our unsupervised system with an acoustic baseline can shrink the gap significantly between the baseline and a supervised benchmark system built using transcribed English. Subsequent experiments also show that an improved acoustic representation of the data can yield substantial performance gains and that language specificity is important for discovering meaningful acoustic units. We validate the generalizability of our proposed approach by presenting state-of-the-art results that exhibit similar trends on the NIST Language Recognition Evaluations from 2011 and 2015.
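
Score-level fusion of the two systems can be as simple as a weighted combination of their per-language scores. The sketch below uses placeholder scores and a hand-set weight; a real system would calibrate the fusion on held-out data:

```python
# Illustrative score-level fusion of an unsupervised-unit language-recognition
# system with an acoustic baseline. Scores and the weight are placeholders.
import numpy as np

def fuse_scores(unsup_scores, baseline_scores, weight=0.5):
    """Each array has shape (n_trials, n_languages) of log-likelihood-like scores."""
    return weight * unsup_scores + (1.0 - weight) * baseline_scores

# Placeholder scores for 3 trials over 4 target languages.
unsup = np.random.randn(3, 4)
base = np.random.randn(3, 4)
fused = fuse_scores(unsup, base, weight=0.6)
predicted_language = fused.argmax(axis=1)   # highest-scoring language per trial
print(predicted_language)
```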


Spoken Language Technology Workshop (SLT) | 2016

Look, listen, and decode: Multimodal speech recognition with images

Felix Sun; David F. Harwath; James R. Glass

In this paper, we introduce a multimodal speech recognition scenario, in which an image provides contextual information for a spoken caption to be decoded. We investigate a lattice rescoring algorithm that integrates information from the image at two different points: the image is used to augment the language model with the most likely words, and to rescore the top hypotheses using a word-level RNN. This rescoring mechanism decreases the word error rate by 3 absolute percentage points, compared to a baseline speech recognizer operating with only the speech recording.
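
The second integration point, re-ranking the recognizer's top hypotheses with an image-conditioned word-level model, can be sketched as a score interpolation; the scorer, weights, and example hypotheses below are placeholders, not the paper's setup:

```python
# Sketch of hypothesis rescoring: interpolate the lattice score with a score
# from a (stand-in) image-conditioned language model, then re-rank.
from typing import Callable, List, Tuple

def rescore_hypotheses(
    hypotheses: List[Tuple[str, float]],        # (text, lattice log-score) pairs
    image_lm_score: Callable[[str], float],     # log-prob of text given the image
    alpha: float = 0.7,                         # interpolation weight (placeholder)
) -> List[Tuple[str, float]]:
    rescored = [
        (text, alpha * lattice_score + (1.0 - alpha) * image_lm_score(text))
        for text, lattice_score in hypotheses
    ]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# Toy usage with a stand-in image-conditioned scorer.
hyps = [("a dog runs on the beach", -12.3), ("a dog runs on the bench", -12.1)]
scorer = lambda text: 0.0 if "beach" in text else -5.0  # pretend the image shows a beach
print(rescore_hypotheses(hyps, scorer)[0][0])
```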


European Conference on Computer Vision (ECCV) | 2018

Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input

David F. Harwath; Adria Recasens; Dídac Surís; Galen Chuang; Antonio Torralba; James R. Glass

In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets demonstrating that our models implicitly learn semantically-coupled object and word detectors.
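
One way such localizations can emerge is from a similarity map that compares every spatial cell of an image feature map with every frame of an audio feature sequence. The sketch below uses placeholder shapes and omits the feature extractors, so it should not be read as the paper's exact model:

```python
# Illustrative audio-visual similarity map and pooled retrieval score.
import torch

def similarity_map(image_feats, audio_feats):
    """
    image_feats: (H, W, D) spatial feature map from an image network.
    audio_feats: (T, D) frame-level features from a speech network.
    Returns an (H, W, T) tensor of dot-product similarities; high values link
    an image region to an audio segment, which the retrieval objective
    implicitly encourages the networks to produce.
    """
    H, W, D = image_feats.shape
    flat = image_feats.reshape(H * W, D)          # (H*W, D)
    sims = flat @ audio_feats.t()                 # (H*W, T)
    return sims.reshape(H, W, -1)

def pair_score(image_feats, audio_feats):
    # Pool the map into one retrieval score: max over image locations per
    # audio frame, then mean over frames.
    m = similarity_map(image_feats, audio_feats)  # (H, W, T)
    return m.amax(dim=(0, 1)).mean()

img = torch.randn(14, 14, 512)   # placeholder image feature map
aud = torch.randn(128, 512)      # placeholder audio feature sequence
print(pair_score(img, aud))
```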


Neural Information Processing Systems (NIPS) | 2016

Unsupervised Learning of Spoken Language with Visual Context

David F. Harwath; Antonio Torralba; James R. Glass


Meeting of the Association for Computational Linguistics (ACL) | 2017

Learning Word-Like Units from Joint Audio-Visual Analysis

David F. Harwath; James R. Glass


Conference of the International Speech Communication Association (Interspeech) | 2014

Choosing Useful Word Alternates for Automatic Speech Recognition Correction Interfaces

David F. Harwath; Alexander Gruenstein; Ian McGraw

Collaboration


An overview of David F. Harwath's collaborations.

Top Co-Authors

James R. Glass (Massachusetts Institute of Technology)
Ian McGraw (Massachusetts Institute of Technology)
Antonio Torralba (Massachusetts Institute of Technology)
Chia-ying Lee (Massachusetts Institute of Technology)
Florian Metze (Carnegie Mellon University)
Galen Chuang (Massachusetts Institute of Technology)
Justin Chiu (Carnegie Mellon University)
Timothy J. Hazen (Massachusetts Institute of Technology)