Publication


Featured research published by David G. Novick.


IEEE Transactions on Speech and Audio Processing | 1995

The challenge of spoken language systems: Research directions for the nineties

Ron Cole; L. Hirschman; L. Atlas; M. Beckman; Alan W. Biermann; M. Bush; Mark A. Clements; L. Cohen; Oscar N. Garcia; B. Hanson; Hynek Hermansky; S. Levinson; Kathleen R. McKeown; Nelson Morgan; David G. Novick; Mari Ostendorf; Sharon L. Oviatt; Patti Price; Harvey F. Silverman; J. Spitz; Alex Waibel; Cliff Weinstein; Stephen A. Zahorian; Victor W. Zue

A spoken language system combines speech recognition, natural language processing, and human interface technology. It functions by recognizing the person's words, interpreting the sequence of words to obtain a meaning in terms of the application, and providing an appropriate response back to the user. Potential applications of spoken language systems range from simple tasks, such as retrieving information from an existing database (traffic reports, airline schedules), to interactive problem-solving tasks involving complex planning and reasoning (travel planning, traffic routing), to support for multilingual interactions. We examine eight key areas in which basic research is needed to produce spoken language systems: (1) robust speech recognition; (2) automatic training and adaptation; (3) spontaneous speech; (4) dialogue models; (5) natural language response generation; (6) speech synthesis and speech generation; (7) multilingual systems; and (8) interactive multimodal systems. In each area, we identify key research challenges, the infrastructure needed to support research, and the expected benefits. We conclude by reviewing the need for multidisciplinary research, for development of shared corpora and related resources, for computational support, and for rapid communication among researchers. The successful development of this technology will increase accessibility of computers to a wide range of users, will facilitate multinational communication and trade, and will create new research specialties and jobs in this rapidly expanding area.


International Conference on Design of Communication | 2007

Usability inspection methods after 15 years of research and practice

Tasha Hollingsed; David G. Novick

Usability inspection methods, such as heuristic evaluation, the cognitive walkthrough, formal usability inspections, and the pluralistic usability walkthrough, were introduced fifteen years ago. Since then, these methods, analyses of their comparative effectiveness, and their use have evolved in different ways. In this paper, we track the fortunes of the methods and analyses, looking at which led to use and to further research, and which led to relative methodological dead ends. Heuristic evaluation and the cognitive walkthrough appear to be the most actively used and researched techniques. The pluralistic walkthrough remains a recognized technique, although not the subject of significant further study. Formal usability inspections appear to have been incorporated into other techniques or largely abandoned in practice. We conclude with lessons for practitioners and suggestions for future research.


International Conference on Spoken Language Processing | 1996

Coordinating turn-taking with gaze

David G. Novick; Brian Hansen; Karen Ward

Explores the role of gaze in coordinating turn-taking in mixed-initiative conversation, and specifically how gaze indicators might be usefully modeled in computational dialogue systems. We analyzed about 20 minutes of videotape of eight dialogues by four pairs of subjects performing a simple face-to-face cooperative laboratory task. We extend previous studies by explicating gaze patterns in face-to-face conversations, formalizing the most frequent pattern as a computational model of turn-taking, and testing the model through an agent-based simulation. Prior simulations of conversational control acts relied on abstract speech-act representations of control. This study advances the computational account of dialogue through simulation of the direct physical expression of gaze to coordinate conversational turns.
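
The gaze-mediated turn exchange lends itself to a small simulation. The sketch below is not the authors' model; it is a minimal illustration, with assumed gaze probabilities, of the general kind of pattern the paper formalizes: the speaker tends to gaze away mid-turn, then gazes at the listener to signal that the turn is available.

```python
import random

def simulate_turns(n_turns=6, seed=1):
    """Alternate turns between two agents, using gaze as the turn signal.
    Probabilities and behavior here are illustrative assumptions."""
    rng = random.Random(seed)
    speaker, listener = "A", "B"
    for _ in range(n_turns):
        # Mid-turn, the speaker tends to gaze away (e.g., while planning speech).
        mid_gaze = "away" if rng.random() < 0.7 else "at listener"
        # Approaching the end of the turn, the speaker gazes at the listener,
        # offering the turn; mutual gaze at the boundary licenses the exchange.
        print(f"{speaker}: speaks (mid-turn gaze: {mid_gaze}; "
              f"end of turn: gazes at {listener})")
        speaker, listener = listener, speaker

simulate_turns()
```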


Intelligent Virtual Agents | 2007

A Computational Model of Culture-Specific Conversational Behavior

Dusan Jan; David Herrera; Bilyana Martinovski; David G. Novick; David R. Traum

This paper presents a model for simulating cultural differences in the conversational behavior of virtual agents. The model provides parameters for differences in proxemics, gaze, and overlap in turn-taking. We present a review of the literature on these factors and report the results of a study in which native speakers of North American English, Mexican Spanish, and Arabic were asked to rate, with respect to their own culture, the realism of simulations generated from different cultural parameter settings.
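
Such a parameterization can be pictured as a small data structure. The sketch below is a minimal illustration, not the paper's implementation; the field names, cultures, and numbers are assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass
class CultureParams:
    # All fields and values are illustrative assumptions.
    interpersonal_distance_m: float  # preferred conversational distance
    gaze_at_partner_prob: float      # fraction of time gazing at the partner
    overlap_tolerance: float         # 0 = never overlap, 1 = overlap freely

# Hypothetical settings for two simulated cultures (not the paper's values).
CULTURES = {
    "culture_a": CultureParams(interpersonal_distance_m=1.2,
                               gaze_at_partner_prob=0.4,
                               overlap_tolerance=0.1),
    "culture_b": CultureParams(interpersonal_distance_m=0.8,
                               gaze_at_partner_prob=0.6,
                               overlap_tolerance=0.5),
}

def may_start_in_overlap(params: CultureParams, urgency: float) -> bool:
    """An agent begins speaking before the current speaker finishes
    only when its urgency exceeds the culture's overlap threshold."""
    return urgency > 1.0 - params.overlap_tolerance
```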


International Conference on Design of Communication | 2006

Why don't people read the manual?

David G. Novick; Karen Ward

Few users of computer applications seek help from the documentation. This paper reports the results of an empirical study of why this is so and examines how, in real work, users solve their usability problems. Based on in-depth interviews with 25 subjects representing a varied cross-section of users, we find that users do avoid using both paper and online help systems. Few users have paper manuals for the most heavily used applications, but none complained about their lack. Online help is more likely to be consulted than paper manuals, but users are equally likely to report that they solve their problem by asking a colleague or experimenting on their own. Users cite difficulties in navigating the help systems, particularly difficulties in finding useful search terms, and disappointment in the level of explanation found.


International Conference on Spoken Language Processing | 1996

Building 10,000 spoken dialogue systems

Stephen Sutton; David G. Novick; Ron Cole; Pieter Vermeulen; J.H. de Villiers; Johan Schalkwyk; Mark A. Fanty

Spoken dialogue systems are not yet ubiquitous. But with an easy enough development tool, at a low enough cost, and on portable enough software, advances in spoken dialogue technology could soon enable the rapid development of 10,000 or more spoken dialogue systems for a wide variety of applications. To achieve this goal, we propose a toolkit approach for research and development of spoken dialogue systems. The paper presents the CSLU Toolkit, which integrates spoken dialogue technology with an easy-to-use interface. The toolkit supports rapid prototyping, iterative design, empirical evaluation, training of specialized speech recognizers, and tools for conducting research to improve the underlying technology. We describe the toolkit with an emphasis on graphical creation of spoken dialogue systems; the transition of the toolkit into the user community; and research directed toward improvements in the toolkit.
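
To suggest what graphical creation of a spoken dialogue system produces underneath, here is a minimal sketch of a finite-state dialogue expressed as data. This is not the CSLU Toolkit's actual API or file format; the states, prompts, and vocabulary are invented for illustration, and a real system would plug in a speech recognizer where keyboard input stands in below.

```python
# A toy finite-state dialogue: each state has a prompt and word-triggered
# transitions ("*" is a catch-all). Everything here is illustrative.
DIALOGUE = {
    "greet":   {"prompt": "Welcome. Say 'weather' or 'traffic'.",
                "next": {"weather": "weather", "traffic": "traffic"}},
    "weather": {"prompt": "Which city?", "next": {"*": "done"}},
    "traffic": {"prompt": "Which highway?", "next": {"*": "done"}},
    "done":    {"prompt": "Goodbye.", "next": {}},
}

def run(dialogue, recognize=input):
    """Walk the dialogue graph; 'recognize' stands in for a recognizer."""
    state = "greet"
    while True:
        node = dialogue[state]
        print(node["prompt"])
        if not node["next"]:            # terminal state
            return
        word = recognize("> ").strip().lower()
        # Unrecognized input falls back to "*" or re-prompts the same state.
        state = node["next"].get(word) or node["next"].get("*", state)

# run(DIALOGUE)  # interactive demo
```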


Speech Communication | 1997

Experiments with a spoken dialogue system for taking the US census

Ron Cole; David G. Novick; Pieter Vermeulen; Stephen Sutton; Mark A. Fanty; L.F.A. Wessels; J.H. de Villiers; Johan Schalkwyk; Brian Hansen; D. Burnett

This paper reports the results of the development, deployment and testing of a large spoken-language dialogue application for use by the general public. We built an automated spoken questionnaire for the US Bureau of the Census. In the project's first phase, the basic recognizers and dialogue system were developed using 4000 calls. In the second phase, the system was adapted to meet Census Bureau requirements and deployed in the Bureau's 1995 national test of new technologies. In the third phase, we refined the system and showed empirically that an automated spoken questionnaire could successfully collect and recognize census data, and that subjects preferred the spoken system to written questionnaires. Our large data collection effort and two subsequent field tests showed that, when questions are asked correctly, the answers contain information within the desired response categories about 99% of the time.


Information Technology & People | 1995

Conversational effectiveness in multimedia communications

Catherine R. Marshall; David G. Novick

Reports a laboratory experiment, conducted at the Oregon Graduate Institute, that compared three different communications modalities (face-to-face, audio-only, and audio and video) across two co-operative tasks, which can be characterized as visual and non-visual. In each task, effectiveness varied as a significant function of modality. However, the directions of these functions were opposite. That is, for the visual task, conversants were more effective in the face-to-face and audio-and-video modalities than in the audio-only modality; for the non-visual task, conversants were more effective in the audio-only modality than in the face-to-face modality. Additional analysis of the non-visual tasks suggests that modality affects the extent to which asymmetry of knowledge results in asymmetry of influence between conversants.


Meeting of the Association for Computational Linguistics | 1994

An empirical model of acknowledgment for spoken-language systems

David G. Novick; Stephen Sutton

We refine and extend prior views of the description, purposes, and contexts-of-use of acknowledgment acts through empirical examination of the use of acknowledgments in task-based conversation. We distinguish three broad classes of acknowledgments (other→ackn, self→other→ackn, and self+ackn) and present a catalogue of 13 patterns within these classes that account for the specific uses of acknowledgment in the corpus.
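
To make the three classes concrete, the sketch below shows one possible reading of them over a toy sequence of dialogue acts. The act labels, the three-turn window, and the mapping to classes are simplifying assumptions, not the paper's 13-pattern catalogue.

```python
def classify(turns, i):
    """Classify the act at index i in a list of (speaker, act) pairs.

    Acts here are 'state', 'ackn', or 'state+ackn'; the mapping to the
    paper's classes is a simplified, assumed reading.
    """
    speaker, act = turns[i]
    if act == "state+ackn":
        return "self+ackn"              # ackn folded into own contribution
    if act != "ackn":
        return None
    # Speaker acknowledges the other's response to the speaker's own utterance.
    if i >= 2 and turns[i - 2][0] == speaker and turns[i - 1][0] != speaker:
        return "self->other->ackn"
    return "other->ackn"                # direct acknowledgment of the other

dialogue = [("A", "state"), ("B", "ackn"),
            ("A", "state"), ("B", "state"), ("A", "ackn")]
print([classify(dialogue, i) for i in range(len(dialogue))])
# [None, 'other->ackn', None, None, 'self->other->ackn']
```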


international conference on design of communication | 2003

Hands-free documentation

Karen Ward; David G. Novick

In this paper, we introduce an analysis of the requirements and design choices for hands-free documentation. Consulting documentation during hands-busy tasks such as cooking or car repair may require substantial interruption of the task: moving the pan off the burner and wiping hands, or crawling out from underneath the car. We review the need for hands-free documentation and explore the role of task in the use of documentation. Our central analysis examines the roles and characteristics of input and output modalities of hands-free documentation. In particular, we review the use of speech as an input modality, and then visual means and speech as possible output modalities. Finally, we discuss the implications of our analysis for the design of hands-free documentation and suggest future work. The design implications include issues of navigating through the documentation, determining the user's task and task-step, establishing mutual understanding of the state of the task, and determining when to start conveying information to the user.

Collaboration


Dive into David G. Novick's collaboration.

Top Co-Authors

Iván Gris
University of Texas at El Paso

Adriana Camacho
University of Texas at El Paso

Alex Rayon
University of Texas at El Paso

Diego A. Rivera
University of Texas at El Paso

Nigel Ward
Association for Computing Machinery