Publications
Featured research published by Daniel M. Coffman.
Journal of the Acoustical Society of America | 2007
Daniel M. Coffman; Ponani S. Gopalakrishnan; Ganesh N. Ramaswamy; Jan Kleindienst
A system and method for determining and maintaining dialog focus in a conversational speech system includes presenting a command associated with an application to a dialog manager. The application associated with the command is unknown to the dialog manager at the time the command is issued. The dialog manager determines the current context of the command by reviewing a multi-modal history of events. At least one method responsive to the command is determined based on the current context, and that method is executed against the application associated with the command.
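As a rough illustration of the dispatch pattern this abstract describes, the sketch below resolves the application in focus from a multi-modal event history before executing a command. All names and types are hypothetical; the patent does not prescribe an implementation.

```typescript
// Hypothetical sketch of dialog-focus resolution via a multi-modal
// event history; none of these names come from the patent.

interface InteractionEvent {
  timestamp: number;
  application: string;                  // application that produced the event
  kind: "speech" | "gui" | "gesture";   // modality of the event
}

interface Command {
  utterance: string;                    // e.g. "close it"
}

class DialogManager {
  private history: InteractionEvent[] = [];

  record(event: InteractionEvent): void {
    this.history.push(event);
  }

  // Determine the current context by scanning the multi-modal history,
  // then execute a method of the in-focus application for the command.
  execute(command: Command): void {
    const latest = [...this.history]
      .sort((a, b) => b.timestamp - a.timestamp)[0];
    const focusApp = latest ? latest.application : "shell";
    const method = this.resolveMethod(focusApp, command);
    method();
  }

  private resolveMethod(app: string, command: Command): () => void {
    // A real system would map utterances to application methods;
    // here we only log the dispatch decision.
    return () => console.log(`dispatching "${command.utterance}" to ${app}`);
  }
}
```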
International Conference on Mobile and Ubiquitous Systems: Networking and Services | 2008
Danny Soroker; Young Sang Paik; Yeo Song Moon; Scott McFaddin; Chandra Narayanaswami; HyunKi Jang; Daniel M. Coffman; MyungChul Lee; JongKwon Lee; Jinwoo Park
Searching and presenting rich data using mobile devices is hard given their inherent I/O limitations. One approach to alleviating these limitations is device symbiosis, whereby interaction with one's personal mobile device is augmented by additionally engaging more capable infrastructure devices, such as kiosks and displays. The Celadon framework, previously developed by our team, builds upon device symbiosis to deliver zone-based services through mobile and infrastructure devices in public spaces such as shopping malls, train stations, and theme parks. An approach to rich data visualization that is gaining wide popularity is the mashup. In this paper we describe User-Defined Mashups -- a general methodology that combines device symbiosis with the automated creation of mashups. We have applied this methodology to build a system that enables Celadon users to interact flexibly with rich zone information through their mobile devices, leveraging large public displays. Our system bridges public and personal devices, data, and services.
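A minimal sketch of the symbiosis hand-off idea, assuming hypothetical types and a crude capability heuristic; the actual Celadon interfaces are not described in this abstract.

```typescript
// Hypothetical sketch of device symbiosis: the personal device composes
// a mashup definition and hands rendering off to a zone display.

interface MashupDefinition {
  sources: string[];        // data feeds the user selected
  layout: "map" | "grid";   // how the display should arrange them
}

interface ZoneDevice {
  id: string;
  screenWidth: number;      // a crude proxy for device capability
  render(mashup: MashupDefinition): void;
}

// Prefer the largest display in the zone; fall back to the mobile
// device itself if no infrastructure device is available.
function handOff(
  mashup: MashupDefinition,
  zoneDevices: ZoneDevice[],
  self: ZoneDevice
): void {
  const target = zoneDevices.reduce(
    (best, d) => (d.screenWidth > best.screenWidth ? d : best),
    self
  );
  target.render(mashup);
}
```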
International World Wide Web Conference | 2010
Daniel M. Coffman; Danny Soroker; Chandrasekhar Narayanaswami; Aaron Zinman
As sophisticated enterprise applications move to the Web, some advanced user experiences become difficult to migrate due to prohibitively high computation, memory, and bandwidth requirements. State-dependent visualizations of large-scale data sets are particularly difficult, since a change in the client's context necessitates a change in the displayed results. This paper describes a Web architecture in which clients are served a session-specific image of the data, divided into tiles dynamically generated by the server. This set of tiles is supplemented with a corpus of metadata describing the immediate vicinity of interest; additional metadata is delivered as needed in a progressive fashion, in support and anticipation of the user's actions. We discuss how the design of this architecture was motivated by the goal of delivering a highly responsive user experience. As an example of a complete application built upon this architecture, we present OrgMaps, an interactive system for navigating hierarchical data that enables fluid, low-latency navigation of trees of hundreds of thousands of nodes on standard Web browsers using only HTML and JavaScript.
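The sketch below shows how a browser client might consume the tile-plus-metadata pattern described above. The URL shapes and field names are assumptions, not the paper's actual API.

```typescript
// Sketch of the tile + progressive-metadata pattern; all URLs and
// field names are hypothetical.

interface TileMeta {
  // Nodes lying within this tile's immediate vicinity of interest.
  nodes: { id: string; label: string; x: number; y: number }[];
}

// Fetch the session-specific tile image for one grid cell together
// with the metadata describing its vicinity.
async function loadTile(session: string, row: number, col: number) {
  const img = new Image();
  img.src = `/tiles/${session}/${row}/${col}.png`; // server-rendered tile

  const meta: TileMeta = await fetch(
    `/tiles/${session}/${row}/${col}/meta.json`
  ).then(r => r.json());

  return { img, meta };
}

// Anticipate the user's next action by prefetching the four
// neighboring tiles, keeping navigation low-latency.
function prefetchNeighbors(session: string, row: number, col: number) {
  for (const [dr, dc] of [[0, 1], [0, -1], [1, 0], [-1, 0]]) {
    void loadTile(session, row + dr, col + dc);
  }
}
```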
Intelligent User Interfaces | 1998
Candace L. Sidner; Daniel M. Coffman
First, the day of the GUI is drawing to a close. Second, many visionaries have argued that the new user interface will be a direct and delegate interface; they are wrong. The coming interface is one in which the user collaborates with the computer: the computer understands what the user is doing, can take part in those activities, and is able to respond conversationally to the user's activities. This requires an interface that not only understands the user's individual utterances but can also participate in a conversation. Because conversations are fundamentally about the purposes for which people participate in them, this new interface will also require that the machine understand and model the purposes behind conversation. During this talk we will demonstrate new interfaces, some with speech, that participate with users in collaborations about doing email. We will use these demonstrations to illustrate how conversation and tasks can play a role in user interfaces. We will also demonstrate instances where spoken conversational interaction is more efficient than GUI interaction.
Archive | 1999
Daniel M. Coffman; Liam David Comerford; Steven V. Degennaro; Edward A. Epstein; Ponani S. Gopalakrishnan; Stephane Herman Maes; David Nahamoo
Archive | 2008
Daniel M. Coffman; Herbert S. McFaddin; Chandrasekhar Narayanaswami; Danny Soroker
Archive | 2009
Daniel M. Coffman; Jonathan P. Munson; Chandrasekhar Narayanaswami; Danny Soroker; Jingtao Wang
Archive | 2001
Daniel M. Coffman; Rafah A. Hosn; Jan Kleindienst; Stephane Herman Maes; Thiruvilwamalai V. Raman
Archive | 2003
Daniel M. Coffman; Jan Kleindienst; Ganesh N. Ramaswamy
Archive | 1999
Jan Kleindienst; Ganesh N. Ramaswamy; Ponani S. Gopalakrishnan; Daniel M. Coffman