Publications
Featured research published by Thiruvilwamalai V. Raman.
International Conference on Multimedia and Expo | 2000
Stephane Herman Maes; Thiruvilwamalai V. Raman
The coming millennium will be characterized by the availability of multiple information appliances that make ubiquitous information access an accepted fact of life. The ability to access and transform information via a multiplicity of appliances, each designed to suit the user's specific usage environment, requires the exploitation of all available input and output modalities to maximize the bandwidth of man-machine communication. There will be an increasingly strong demand for devices that present the same set of functionalities when accessing and manipulating information, independently of the access device. The resulting uniform interface must be inherently multi-modal and dialog-driven. This paper addresses the challenges of coordinated, synchronized multimodal user interaction that are inherent in designing user interfaces that work across this multiplicity of information appliances. Among the key issues to be addressed are the user's ability to interact in parallel with the same information via a multiplicity of appliances and user interfaces, and the need to present a unified, synchronized view of information across the various appliances that the user deploys to interact with information. We achieve such synchronized interactions and views by adopting the well-known Model-View-Controller (MVC) design paradigm and adapting it to conversational interactions. The resulting conversational MVC (CMVC) is to be considered the key underlying principle of any conversational multi-modal application.
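To make the CMVC idea concrete, here is a minimal Java sketch, assuming a hypothetical single-field model with observer-style views; the class and method names are invented for illustration and are not the paper's API. User intent from any modality funnels through the same model update, so every registered view stays synchronized.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of conversational MVC: one modality-independent model,
// multiple modality-specific views kept in sync through a single update path.
// All names here are illustrative, not the paper's actual API.
public class ConversationalMvcSketch {

    // Each modality (GUI, voice, ...) implements its own rendering of the model.
    interface View {
        void render(String destination);
    }

    // The shared model: holds the data and notifies every registered view on change.
    static class Model {
        private final List<View> views = new ArrayList<>();
        private String destination;

        void register(View view) { views.add(view); }

        // User intent from any modality funnels into this one update, so the
        // GUI view and the voice view always present the same underlying state.
        void setDestination(String value) {
            this.destination = value;
            for (View view : views) view.render(value);
        }

        String getDestination() { return destination; }
    }

    public static void main(String[] args) {
        Model model = new Model();
        model.register(d -> System.out.println("GUI view: destination box shows \"" + d + "\""));
        model.register(d -> System.out.println("Voice view: system confirms \"Flying to " + d + "\""));

        // The user speaks a destination; both views reflect the same model update.
        model.setDestination("San Francisco");
        System.out.println("Model state: " + model.getDestination());
    }
}
```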
Component-Based Software Engineering | 2005
Rahul P. Akolkar; Tanveer A. Faruquie; Juan M. Huerta; Pankaj Kankar; Nitendra Rajput; Thiruvilwamalai V. Raman; Raghavendra Udupa; Abhishek Verma
Voice application development requires specialized speech-related skills beyond general programming ability. Encapsulating the speech-specific behavior and complexities in prepackaged, configurable User Interface (UI) components will ease and expedite voice application development. These components can be used across applications and are called Reusable Dialog Components (RDCs). In this paper we propose a programming model and framework for developing reusable dialog components. Our framework facilitates the development of voice applications via the encapsulation of interaction mechanisms, the encapsulation of best-of-breed practices (i.e., grammars, prompts, and configuration parameters), a modular design, and pluggable dialog management strategies. The framework extends the standard J2EE/JSP-based programming model to make it suitable for voice applications.
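As a rough illustration of what an RDC encapsulates, here is a minimal Java sketch; the `DialogComponent` interface and `ZipCodeRdc` class are hypothetical names, a regex stands in for a real speech grammar, and an actual component in the paper's framework would be configured within J2EE/JSP rather than plain Java.

```java
import java.util.regex.Pattern;

// A minimal sketch of a Reusable Dialog Component; the interface and class
// names are invented for illustration and are not the framework's actual API.
public class RdcSketch {

    // An RDC bundles the speech-specific pieces: prompt, grammar, acceptance check.
    interface DialogComponent {
        String prompt();                // what the system says to the caller
        boolean accepts(String input);  // does the recognized utterance match the grammar?
    }

    // A configurable, reusable component for collecting a five-digit ZIP code.
    static class ZipCodeRdc implements DialogComponent {
        // Stands in for a real speech grammar; a regex is used here for simplicity.
        private static final Pattern GRAMMAR = Pattern.compile("\\d{5}");
        private final String prompt;    // a configuration parameter of the component

        ZipCodeRdc(String prompt) { this.prompt = prompt; }

        public String prompt() { return prompt; }
        public boolean accepts(String input) { return GRAMMAR.matcher(input).matches(); }
    }

    public static void main(String[] args) {
        // The same component drops into any application with a different prompt.
        DialogComponent zip = new ZipCodeRdc("Please say your five-digit ZIP code.");
        System.out.println(zip.prompt());
        System.out.println("\"90210\" accepted? " + zip.accepts("90210"));
        System.out.println("\"hello\" accepted? " + zip.accepts("hello"));
    }
}
```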
International Conference on Multimedia and Expo | 2001
Rafah A. Hosn; Stephane Herman Maes; Thiruvilwamalai V. Raman
A user interface is a means to an end: its primary goal is to capture user intent and communicate the results of the requested computation. On today's devices, user interaction can be achieved through a multiplicity of interaction modalities, including speech and visual interfaces. As we evolve toward an increasingly connected world where we access and interact with applications through multiple devices, it becomes crucial that the various access paths to the underlying content be synchronized. This synchronization ensures that the user interacts with the same underlying content independent of the interaction modality, despite the differences in presentation that each modality might impose. It also ensures that the effect of user interaction in any given modality is reflected consistently across all available modalities. We describe an application framework that enables tightly synchronized multimodal user interaction. This framework derives its power from representing the application model in a modality-independent manner and traversing this model to produce the various synchronized multimodal views. As the user interaction proceeds, we maintain our current position in the model, update the application data as determined by user intent, and then reflect these updates in the various views being presented. We conclude the paper by outlining an example that demonstrates this tightly synchronized multimodal interaction, and describe some of the future challenges in building such multimodal frameworks.
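A minimal Java sketch of the model-traversal idea, assuming a hypothetical model represented as an ordered map of fields; the names and the simulated user input are invented for illustration, not taken from the paper. The loop position plays the role of the "current position in the model", and each iteration updates the data and re-renders every view.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal sketch of traversing a modality-independent application model.
// The model is an ordered set of fields, the loop index is the current
// interaction position, and each modality renders the same node in its own way.
public class ModelTraversalSketch {

    public static void main(String[] args) {
        // Modality-independent model: field name -> collected value (null = unfilled).
        Map<String, String> model = new LinkedHashMap<>();
        model.put("origin", null);
        model.put("destination", null);

        for (String field : model.keySet()) {   // traversal order is the dialog order
            // Each modality presents the current node in its own form.
            System.out.println("GUI:   focus the '" + field + "' input box");
            System.out.println("Voice: prompt \"Please give the " + field + "\"");

            // Simulated user intent; a real framework would take this from
            // whichever modality the user chose to answer in.
            String value = field.equals("origin") ? "New York" : "San Francisco";
            model.put(field, value);            // update the application data...

            // ...and reflect the update in every view before moving on.
            System.out.println("Both views now show " + field + " = " + value + "\n");
        }
        System.out.println("Completed model: " + model);
    }
}
```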
International Conference on Multimedia and Expo | 2001
Stephane Herman Maes; Rafah A. Hosn; Jan Kleindienst; Tomáš Macek; Thiruvilwamalai V. Raman; Ladislav Seredi
Modality: a particular type of physical interface that the user can perceive or interact with (e.g., a voice interface, or a GUI display with a keypad). Multi-modal browser: a browser that enables the user to interact with an application through different modes of interaction (typically voice and GUI). A multi-modal browser accordingly provides different modalities for input and output; ideally, it lets the user select at any time the modality that is most appropriate for a particular interaction, given that interaction and the user's situation. Thesis: by improving the user interface, we believe that multi-modal browsing will significantly accelerate the acceptance and growth of m-Commerce. Today's multiple access mechanisms offer one interaction mode per device; the PC provides a standardized rich visual interface but is not suitable for mobile use. Example dialog: "I need a direct flight from New York to San Francisco after 7:30pm today." "There are five direct flights from New York's LaGuardia airport to San Francisco after 7:30pm today: Delta flight nnn..."
CADUI | 2002
David Chamberlain; Angel Luis Diaz; Dan Gisolfi; Ravi B. Konuru; John M. Lucassen; Julie MacNaught; Stephane Herman Maes; Roland Albert Merrick; David Mundel; Thiruvilwamalai V. Raman; Shankar Ramaswamy; Thomas Schaeck; R. D. Thompson; Charles Wiecha
WSXL (Web Services Experience Language) is a web-services-centric component model for interactive web applications. WSXL is designed to achieve two main goals: to enable businesses to distribute web applications through multiple revenue channels, and to enable new services or applications to be created by leveraging existing applications across the Web. To accomplish these goals, WSXL components can be built out of three basic web service types for data, presentation, and control, the last of which is used to “wire together” the others using a declarative language based on XLink and XML Events. WSXL also introduces a new description language for adapting services to new distribution channels. WSXL is built on established and emerging open standards, and is designed to be independent of execution platform, browser, and presentation markup.
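A loose Java sketch of the data/presentation/control split described above; in WSXL itself the wiring between components is declarative (XLink and XML Events) rather than Java code, and these interfaces are invented purely for illustration.

```java
// A minimal sketch of the WSXL-style split into data, presentation, and
// control components. In WSXL the control wiring is declarative (XLink and
// XML Events), not Java; these interfaces are invented for illustration.
public class WsxlSketch {

    interface DataComponent { String fetch(String key); }

    interface PresentationComponent { void show(String content); }

    // The control component "wires together" the other two: it routes an
    // event on the presentation side to a lookup on the data side.
    static class ControlComponent {
        private final DataComponent data;
        private final PresentationComponent presentation;

        ControlComponent(DataComponent data, PresentationComponent presentation) {
            this.data = data;
            this.presentation = presentation;
        }

        void onUserEvent(String key) {            // e.g. a click or form submit
            presentation.show(data.fetch(key));
        }
    }

    public static void main(String[] args) {
        DataComponent catalog = key -> "Price of " + key + ": $9.99";
        PresentationComponent htmlChannel = content -> System.out.println("<p>" + content + "</p>");

        // The same data component could be wired to a different presentation
        // component to serve another distribution channel (e.g. WML or voice).
        new ControlComponent(catalog, htmlChannel).onUserEvent("widget-42");
    }
}
```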
Archive | 2001
David Boloker; Rafah A. Hosn; Photina Jaeyun Jang; Jan Kleindienst; Tomáš Macek; Stephane Herman Maes; Thiruvilwamalai V. Raman; Ladislav Seredi
Archive | 2001
Jaroslav Gergic; Jan Kleindienst; Stephane Herman Maes; Thiruvilwamalai V. Raman; Jan Sedivy
Archive | 2001
Jaroslav Gergic; Rafah A. Hosn; Jan Kleindienst; Stephane Herman Maes; Thiruvilwamalai V. Raman; Jan Sedivy; Ladislav Seredi
Archive | 2001
Daniel M. Coffman; Rafah A. Hosn; Jan Kleindienst; Stephane Herman Maes; Thiruvilwamalai V. Raman
Archive | 2001
Sara H. Basson; Dimitri Kanevsky; Thiruvilwamalai V. Raman