Vikram Aggarwal
IBM
Publication
Featured research published by Vikram Aggarwal.
Intelligent User Interfaces | 2005
Michelle X. Zhou; Zhen Wen; Vikram Aggarwal
To aid users in exploring large and complex data sets, we are building an intelligent multimedia conversation system. Given a user request, our system dynamically creates a multimedia response that is tailored to the interaction context. In this paper, we focus on the problem of media allocation, a process that assigns one or more media, such as graphics or speech, to best convey the intended response content. Specifically, we develop a graph-matching approach to media allocation, whose goal is to find a set of data-media mappings that maximizes the satisfaction of various allocation constraints (e.g., data-media compatibility and presentation consistency constraints). Compared to existing rule-based or plan-based approaches to media allocation, our work offers three unique contributions. First, we provide an extensible computational framework that optimizes media assignments by dynamically balancing all relevant constraints. Second, we use feature-based metrics to uniformly model various allocation constraints, including cross-content and cross-media constraints, which often require special treatment in existing approaches. Third, we further improve the quality of a response by automatically detecting and repairing undesired allocation results. We have applied our approach to two different applications, and our preliminary study has shown the promise of our work.
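The paper itself does not include code; the sketch below is only a minimal illustration of the general idea of constraint-based media allocation, framed as an exhaustive search over data-to-media assignments. The data items, media types, compatibility scores, and weights are hypothetical stand-ins for the paper's feature-based metrics.

```python
from itertools import product

# Illustrative data items and candidate media; names and scores are hypothetical.
DATA_ITEMS = ["house_price", "neighborhood_map", "agent_comment"]
MEDIA = ["graphics", "speech", "text"]

# Compatibility scores in [0, 1]: how well a medium conveys a data item
# (a toy stand-in for feature-based data-media compatibility constraints).
COMPATIBILITY = {
    ("house_price", "text"): 0.9, ("house_price", "speech"): 0.7, ("house_price", "graphics"): 0.5,
    ("neighborhood_map", "graphics"): 1.0, ("neighborhood_map", "text"): 0.2, ("neighborhood_map", "speech"): 0.1,
    ("agent_comment", "speech"): 0.8, ("agent_comment", "text"): 0.7, ("agent_comment", "graphics"): 0.2,
}

def consistency(assignment):
    """Toy presentation-consistency constraint: prefer assignments that use fewer distinct media."""
    return 1.0 / len(set(assignment.values()))

def allocation_score(assignment, w_compat=0.7, w_consist=0.3):
    """Weighted balance of compatibility and consistency constraints."""
    compat = sum(COMPATIBILITY[(d, m)] for d, m in assignment.items()) / len(assignment)
    return w_compat * compat + w_consist * consistency(assignment)

def allocate(data_items, media):
    """Exhaustively search data-to-media mappings and keep the best-scoring one."""
    best, best_score = None, float("-inf")
    for choice in product(media, repeat=len(data_items)):
        assignment = dict(zip(data_items, choice))
        score = allocation_score(assignment)
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

if __name__ == "__main__":
    mapping, score = allocate(DATA_ITEMS, MEDIA)
    print(mapping, round(score, 3))
```

The exhaustive search is only for clarity; the paper's graph-matching formulation would replace this brute-force loop for realistic problem sizes.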
User Interface Software and Technology | 2004
Michelle X. Zhou; Vikram Aggarwal
We are building a multimedia conversation system to facilitate information seeking in large and complex data spaces. To provide tailored responses to diverse user queries introduced during a conversation, we automate the generation of a system response. Here we focus on the problem of determining the data content of a response. Specifically, we develop an optimization-based approach to content selection. Compared to existing rule-based or plan-based approaches, our work offers three unique contributions. First, our approach provides a general framework that effectively addresses content selection for various interaction situations by balancing a comprehensive set of constraints (e.g., content quality and quantity constraints). Second, our method is easily extensible, since it uses feature-based metrics to systematically model selection constraints. Third, our method improves selection results by incorporating content organization and media allocation effects, which otherwise are treated separately. Preliminary studies show that our method can handle most of the user situations identified in a Wizard-of-Oz study, and achieves results similar to those produced by human designers.
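As a rough illustration of optimization-based content selection, the sketch below scores candidate content subsets against toy quality and quantity constraints and keeps the best subset. The candidate items, scores, and the quantity budget are invented for the example and are not taken from the paper.

```python
from itertools import combinations

# Hypothetical candidate content items with relevance ("quality") scores.
CANDIDATES = {
    "price_history": 0.9,
    "school_ratings": 0.7,
    "crime_stats": 0.6,
    "commute_times": 0.5,
    "tax_records": 0.3,
}

MAX_ITEMS = 3  # stand-in for a content-quantity constraint

def selection_score(subset, w_quality=0.8, w_quantity=0.2):
    """Balance total relevance against how well the subset fits the quantity budget."""
    quality = sum(CANDIDATES[c] for c in subset)
    quantity_fit = 1.0 - abs(len(subset) - MAX_ITEMS) / MAX_ITEMS
    return w_quality * quality + w_quantity * quantity_fit

def select_content():
    """Enumerate feasible subsets and return the one with the highest constraint score."""
    best, best_score = (), float("-inf")
    for k in range(1, MAX_ITEMS + 1):
        for subset in combinations(CANDIDATES, k):
            score = selection_score(subset)
            if score > best_score:
                best, best_score = subset, score
    return best, best_score

if __name__ == "__main__":
    subset, score = select_content()
    print(subset, round(score, 3))
```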
Intelligent User Interfaces | 2007
Zhen Wen; Michelle X. Zhou; Vikram Aggarwal
We are building an intelligent information system to aid users in their investigative tasks, such as detecting fraud. In such a task, users must progressively search and analyze relevant information before drawing a conclusion. In this paper, we address how to help users find relevant information during an investigation. Specifically, we present a novel approach that can improve information retrieval by exploiting a user's investigative context. Compared to existing retrieval systems, which are either context-insensitive or leverage only limited user context, our work offers two unique contributions. First, our system works with users cooperatively to build an investigative context, which is otherwise very difficult to capture by machine or human alone. Second, we develop a context-aware method that can adaptively retrieve and evaluate information relevant to an ongoing investigation. Experiments show that our approach can significantly improve the relevance of retrieved information. As a result, users can fulfill their investigative tasks more efficiently and effectively.
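One simple way to picture context-aware retrieval is to re-rank results by blending similarity to the query with similarity to an accumulated investigative context. The sketch below does this with bag-of-words cosine similarity; the blending weights and example documents are illustrative assumptions, not the paper's method.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words term counts for a short text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def context_aware_rank(query, context, documents, w_query=0.6, w_context=0.4):
    """Score each document by a weighted blend of query similarity and
    similarity to the investigative context built up so far."""
    q, c = bow(query), bow(context)
    scored = [(w_query * cosine(q, bow(d)) + w_context * cosine(c, bow(d)), d)
              for d in documents]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    docs = [
        "wire transfer flagged by the fraud monitoring system",
        "quarterly sales report for the retail division",
        "suspicious account activity linked to offshore transfer",
    ]
    context = "investigating offshore accounts and suspicious transfers"
    for score, doc in context_aware_rank("wire transfer", context, docs):
        print(round(score, 3), doc)
```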
IEEE Symposium on Information Visualization | 2005
Zhen Wen; Michelle X. Zhou; Vikram Aggarwal
We are building an intelligent multimodal conversation system to aid users in exploring large and complex data sets. To tailor responses to the diverse user queries introduced during a conversation, we automate the generation of system responses, including both spoken and visual outputs. In this paper, we focus on the problem of visual context management, a process that dynamically updates an existing visual display to effectively incorporate new information requested by subsequent user queries. Specifically, we develop an optimization-based approach to visual context management. Compared to existing approaches, which normally handle predictable visual context updates, our work offers two unique contributions. First, we provide a general computational framework that can effectively manage a visual context for the diverse, unanticipated situations encountered in a user-system conversation. Moreover, we optimize the satisfaction of both semantic and visual constraints, which otherwise are difficult to balance using simple heuristics. Second, we present an extensible representation model that uses feature-based metrics to uniformly define all constraints. We have applied our work to two different applications, and our evaluation has shown the promise of this work.
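A very reduced way to see visual context management is as a trade-off between keeping relevant items already on screen and adding newly requested ones under a display-capacity limit. The sketch below makes that trade-off with toy relevance scores and a hypothetical capacity; none of these values come from the paper.

```python
# Hypothetical items currently on screen and newly requested items;
# relevance scores and the capacity limit are illustrative stand-ins
# for the paper's semantic and visual constraint metrics.
CURRENT_DISPLAY = {"city_map": 0.4, "price_chart": 0.8}   # item -> relevance to new query
NEW_ITEMS = {"school_overlay": 0.9, "tax_table": 0.6}     # item -> relevance
SCREEN_CAPACITY = 3                                        # visual constraint: max items shown

def update_display(current, new, capacity):
    """Merge existing and new items, then keep the most relevant ones that fit,
    balancing semantic relevance against the visual capacity constraint."""
    merged = {**current, **new}
    ranked = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    keep = dict(ranked[:capacity])
    dropped = [item for item in merged if item not in keep]
    return keep, dropped

if __name__ == "__main__":
    keep, dropped = update_display(CURRENT_DISPLAY, NEW_ITEMS, SCREEN_CAPACITY)
    print("display:", keep)
    print("removed:", dropped)
```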
Visual Analytics Science and Technology | 2006
David Gotz; Michelle X. Zhou; Vikram Aggarwal
Archive | 2008
Vikram Aggarwal; Zhen Wen; Michelle X. Zhou
Intelligent User Interfaces | 2006
Michelle X. Zhou; Shimei Pan; James Shaw; Vikram Aggarwal; Zhen Wen
Archive | 2008
Vikram Aggarwal; Zhen Wen; Michelle X. Zhou
Archive | 2007
Vikram Aggarwal; Michelle X. Zhou
Archive | 2005
Vikram Aggarwal; Michelle X. Zhou; Zhen Wen