Publication


Featured research published by Donald Hindle.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 1999

SCAN: designing and evaluating user interfaces to support retrieval from speech archives

Steve Whittaker; Julia Hirschberg; John Choi; Donald Hindle; Fernando Pereira; Amit Singhal

Previous examinations of search in textual archives have assumed that users first retrieve a ranked set of documents relevant to their query, and then visually scan through these documents to identify the information they seek. While document scanning is possible in text, it is much more laborious in speech archives, due to the inherently serial nature of speech. Yet, in developing tools for speech access, little attention has so far been paid to users’ problems in scanning and extracting information from within “speech documents”. We demonstrate the extent of these problems in two user studies. We show that users experience severe problems with local navigation when extracting relevant information from within “speech documents”. Based on these results, we propose a new user interface (UI) design paradigm: What You See Is (Almost) What You Hear (WYSIAWYH), a multimodal method for accessing speech archives. This paradigm presents a visual analogue to the underlying speech, enabling visual scanning for effective local navigation. We empirically evaluate a UI based on this paradigm, comparing our WYSIAWYH UI with a visual “tape recorder” in relevance ranking, fact-finding, and summarization tasks involving broadcast news data. Our findings indicate that an interface supporting local navigation multimodally helps relevance ranking and fact-finding, but not summarization. We analyze the reasons for system success and identify outstanding research issues in UI design for speech archives.
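As a rough illustration of the WYSIAWYH idea, the sketch below pairs transcript paragraphs with their time spans so that selecting a paragraph jumps audio playback to that span instead of forcing serial listening. The Segment structure, the sample data, and the play_span() stub are hypothetical stand-ins for ASR output and an audio player, not part of the SCAN system itself.

# Minimal sketch of the WYSIAWYH idea: a visual transcript whose paragraphs
# are time-aligned to the underlying audio, so selecting a paragraph plays
# the corresponding span.  All names and data here are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    start_s: float      # start time of this paragraph in the audio (seconds)
    end_s: float        # end time
    text: str           # ASR transcript shown to the user

def play_span(audio_file: str, start_s: float, end_s: float) -> None:
    # Placeholder for an audio player call.
    print(f"playing {audio_file} from {start_s:.1f}s to {end_s:.1f}s")

def on_paragraph_click(audio_file: str, segments: List[Segment], index: int) -> None:
    # Local navigation: the visual analogue (a transcript paragraph)
    # resolves directly to a time span in the speech document.
    seg = segments[index]
    play_span(audio_file, seg.start_s, seg.end_s)

if __name__ == "__main__":
    segments = [
        Segment(0.0, 12.5, "In tonight's headlines ..."),
        Segment(12.5, 30.0, "The markets closed higher ..."),
    ]
    on_paragraph_click("newscast.wav", segments, 1)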


International Conference on Acoustics, Speech, and Signal Processing | 1998

Full expansion of context-dependent networks in large vocabulary speech recognition

Mehryar Mohri; Michael Riley; Donald Hindle; Andrej Ljolje; Fernando Pereira

We combine our earlier approach to context-dependent network representation with our algorithm for determinizing weighted networks to build optimized networks for large-vocabulary speech recognition, combining an n-gram language model, a pronunciation dictionary, and context-dependency modeling. While fully-expanded networks have been used before in restrictive settings (medium vocabulary or no cross-word contexts), we demonstrate that our network determinization method makes it practical to use fully-expanded networks also in large-vocabulary recognition with full cross-word context modeling. For the DARPA North American Business News (NAB) task, we give network sizes, recognition speeds, and accuracies using bigram and trigram grammars with vocabulary sizes ranging from 10,000 to 160,000 words. With our construction, the fully-expanded NAB context-dependent networks contain only about twice as many arcs as the corresponding language models. Interestingly, we also find that, with these networks, real-time word accuracy is improved by increasing the vocabulary size and n-gram order.
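As a toy illustration of static full expansion (not the paper's weighted finite-state transducer construction), the sketch below expands a word-level bigram path into a phone-level path using a pronunciation dictionary, ahead of decoding rather than lazily during it. The lexicon, bigram probabilities, and words are invented for illustration.

# Toy illustration of expanding a word-level bigram graph into a phone-level
# network using a pronunciation lexicon.  The real system composes and
# determinizes weighted transducers with full cross-word context-dependency;
# this sketch keeps only the "full expansion" idea.  All data is made up.

import math

LEXICON = {                      # word -> phone sequence (hypothetical)
    "news": ["n", "uw", "z"],
    "today": ["t", "ah", "d", "ey"],
}
BIGRAM = {                       # (previous word, word) -> probability (hypothetical)
    ("<s>", "news"): 0.6,
    ("news", "today"): 0.3,
}

def expand(prev: str, word: str):
    """Return the phone sequence and negative-log-probability cost for one bigram arc."""
    cost = -math.log(BIGRAM[(prev, word)])
    return LEXICON[word], cost

if __name__ == "__main__":
    # Fully expand the path "<s> news today" into phones up front.
    path, total = [], 0.0
    for prev, word in [("<s>", "news"), ("news", "today")]:
        phones, cost = expand(prev, word)
        path.extend(phones)
        total += cost
    print(path, round(total, 3))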


Human Factors in Computing Systems | 1998

What can I say?: evaluating a spoken language interface to Email

Marilyn A. Walker; Jeanne Fromer; Giuseppe Di Fabbrizio; Craig Mestel; Donald Hindle

This paper presents experimental results comparing two different designs for a spoken language interface to email. We compare a mixed-initiative dialogue style, in which users can flexibly control the dialogue, to a system-initiative dialogue style, in which the system controls the dialogue. Our results show that even though the mixed-initiative system is more efficient, as measured by number of turns or elapsed time to complete a set of email tasks, users prefer the system-initiative interface. We posit that these preferences arise from the fact that the system-initiative interface is easier to learn and more predictable.
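The contrast between the two dialogue styles can be caricatured as follows; the prompts, the toy command format, and the email task are invented, and this is not the paper's actual dialogue manager.

# Caricature of the two dialogue styles compared in the study.  In the
# system-initiative style the system asks one question per turn; in the
# mixed-initiative style the user can issue a full command in one turn.

def system_initiative(answers):
    """System controls the dialogue: a fixed sequence of prompts."""
    turns = 0
    plan = {}
    for prompt, key in [("Which folder?", "folder"), ("Which message?", "message")]:
        print(f"SYSTEM: {prompt}")
        plan[key] = answers[key]
        turns += 1
    return plan, turns

def mixed_initiative(utterance):
    """User controls the dialogue: parse a single free-form command."""
    # e.g. "read message 2 in folder inbox"
    tokens = utterance.split()
    plan = {"message": tokens[2], "folder": tokens[-1]}
    return plan, 1

if __name__ == "__main__":
    print(system_initiative({"folder": "inbox", "message": "2"}))
    print(mixed_initiative("read message 2 in folder inbox"))

The turn counts returned by the two functions mirror the efficiency difference the abstract reports: the mixed-initiative path completes the same toy task in fewer turns.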


International Conference on Human Language Technology Research | 2001

SCANMail: audio navigation in the voicemail domain

Michiel Bacchiani; Julia Hirschberg; Aaron E. Rosenberg; Steve Whittaker; Donald Hindle; Philip L. Isenhour; Matt Jones; Litza A. Stark; Gary Zamchick

This paper describes SCANMail, a system that allows users to browse and search their voicemail messages by content through a GUI. Content-based navigation is realized through automatic speech recognition, information retrieval, information extraction, and human-computer interaction technology. In addition to the browsing and querying functionality, acoustics-based caller ID technology is used to propose caller names from existing caller acoustic models trained from user feedback. The GUI browser also provides a note-taking capability. In a user study comparing SCANMail to a regular voicemail interface, SCANMail performed better on both objective measures (time to solution and solution quality) and subjective measures.
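A minimal sketch of the content-based search component described above, assuming hypothetical transcripts and a plain inverted index; the real system adds information extraction, acoustic caller identification, and a full GUI.

# Schematic of content-based voicemail search in the spirit of SCANMail:
# ASR transcripts of messages feed a small inverted index, so a text query
# can return matching messages.  Transcripts and message ids are invented.

from collections import defaultdict

def build_index(transcripts):
    """Map each term to the set of message ids whose transcript contains it."""
    index = defaultdict(set)
    for msg_id, text in transcripts.items():
        for term in text.lower().split():
            index[term].add(msg_id)
    return index

def search(index, query):
    """Return ids of messages containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    hits = set(index.get(terms[0], set()))
    for term in terms[1:]:
        hits &= index.get(term, set())
    return hits

if __name__ == "__main__":
    transcripts = {
        "msg1": "hi this is pat calling about the budget meeting",
        "msg2": "your car is ready for pickup",
    }
    index = build_index(transcripts)
    print(search(index, "budget meeting"))   # {'msg1'}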


Conference of the International Speech Communication Association | 1997

Evaluating competing agent strategies for a voice email agent.

Marilyn A. Walker; Donald Hindle; Jeanne Fromer; Giuseppe Di Fabbrizio; Craig Mestel


Archive | 2000

The AT&T LVCSR-2000 System

Andrej Ljolje; Donald Hindle; Michael Riley; Richard Sproat


Conference of the International Speech Communication Association | 2001

SCANMail: Browsing and Searching Speech Data by Content

Julia Hirschberg; Michiel Bacchiani; Donald Hindle; Philip L. Isenhour; Aaron E. Rosenberg; Litza A. Stark; Larry Stead; Steve Whittaker; Gary Zamchick


Archive | 1999

Finding Information in Audio: A New Paradigm for Audio Browsing and Retrieval

Julia Hirschberg; Steve Whittaker; Donald Hindle; Fernando Pereira; Amit Singhal


Text REtrieval Conference | 1997

AT&T at TREC-6: SDR track

Amit Singhal; John Choi; Donald Hindle; Fernando Pereira


Archive | 2007

Spoken Content-Based Audio Navigation (SCAN)

John Choi; Donald Hindle; Julia Hirschberg; Fernando Pereira; Amit Singhal; Steve Whittaker
