Agnes Lisowska
University of Geneva
Publications
Featured research published by Agnes Lisowska.
International Conference on Machine Learning | 2004
Agnes Lisowska; Martin Rajman; Trung H. Bui
This paper describes a multimodal dialogue driven system, ARCHIVUS, that allows users to access and retrieve the content of recorded and annotated multimodal meetings. We describe (1) a novel approach taken in designing the system given the relative inapplicability of standard user requirements elicitation methodologies, (2) the components of ARCHIVUS, and (3) the methodologies that we plan to use to evaluate the system.
Meeting of the Association for Computational Linguistics | 2006
Marita Ailomaa; Miroslav Melichar; Agnes Lisowska; Martin Rajman; Susan Armstrong
This paper presents Archivus, a multi-modal language-enabled meeting browsing and retrieval system. The prototype is in an early stage of development, and we are currently exploring the role of natural language for interacting in this relatively unfamiliar and complex domain. We briefly describe the design and implementation status of the system, and then focus on how this system is used to elicit useful data for supporting hypotheses about multimodal interaction in the domain of meeting retrieval and for developing NLP modules for this specific domain.
Human Factors in Computing Systems | 2007
Agnes Lisowska; Susan Armstrong; Mireille Bétrancourt; Martin Rajman
In this paper we discuss the problems faced when trying to design an evaluation protocol for a multimodal system using novel input modalities and in a new domain. In particular, we focus on the problem of minimizing bias towards certain modalities and interaction patterns. Such bias might be introduced by experimenters in the instructions that explain to users how the system can be used.
International Conference on Multimodal Interfaces | 2004
Agnes Lisowska
This thesis will investigate which modalities, and in which combinations, are best suited for use in a multimodal interface that allows users to retrieve the content of recorded and processed multimodal meetings. The dual role of multimodality in the system (present in both the interface and the stored data) poses additional challenges. We will extend and adapt established approaches to HCI and multimodality [2, 3] to this new domain, maintaining a strongly user-driven approach to design.
Advances in Mobile Multimedia | 2009
Pascal Bruegger; Denis Lalanne; Agnes Lisowska; Béat Hirsbrunner
This paper proposes a new approach for modelling, testing and prototyping pervasive, possibly mobile, and distributed applications. It describes a set of tools aimed at supporting designers in the conceptualisation of their application and in the software development stage, and proposes a method for checking the validity of their design. The article also presents a pervasive application implemented and evaluated using our approach. It concludes with proposed improvements toward a complete modelling, prototyping and testing framework for pervasive applications.
International Conference on Machine Learning | 2006
Agnes Lisowska; Susan Armstrong
In this paper we discuss the results of user-based experiments to determine whether multimodal input to an interface for browsing and retrieving multimedia meetings gives users added value in their interactions. We focus on interaction with the Archivus interface using mouse, keyboard, voice and touchscreen input. We find that voice input in particular appears to give added value, especially when used in combination with more familiar modalities such as the mouse and keyboard. We conclude with a discussion of some of the contributing factors to these findings and directions for future work.
International Conference on Computational Linguistics | 2008
Elisabeth Kron; Manny Rayner; Marianne Santaholma; Pierrette Bouillon; Agnes Lisowska
We present an overview of the development environment for Regulus, an Open Source platform for construction of grammar-based speech-enabled systems, focussing on recent work whose goal has been to introduce uniformity between text and speech views of Regulus-based applications. We argue the advantages of being able to switch quickly between text and speech modalities in interactive and offline testing, and describe how the new functionalities enable rapid prototyping of spoken dialogue systems and speech translators.
Lecture Notes in Computer Science | 2006
Jean Carletta; Simone Ashby; S. Bourban; Mike Flynn; Maël Guillemot; Thomas Hain; J. Kadlec; Vasilis Karaiskos; Wessel Kraaij; Melissa Kronenthal; Guillaume Lathoud; Mike Lincoln; Agnes Lisowska; L. McCowan; Wilfried Post; Dennis Reidsma; Pierre Wellner
Proceedings of Measuring Behavior 2005, 5th International Conference on Methods and Techniques in Behavioral Research | 2005
Iain A. McCowan; Jean Carletta; Wessel Kraaij; Simone Ashby; S. Bourban; Mike Flynn; Maël Guillemot; Thomas Hain; J. Kadlec; Vasilis Karaiskos; Melissa Kronenthal; Guillaume Lathoud; Mike Lincoln; Agnes Lisowska; Wilfried Post; Dennis Reidsma; Pierre Wellner; L. P. J. J. Noldus; F. Grieco; L. W. S. Loijens; P. H. Zimmerman
International Conference on Machine Learning | 2005
Jean Carletta; Simone Ashby; Sebastien Bourban; Mike Flynn; Maël Guillemot; Thomas Hain; Jaroslav Kadlec; Vasilis Karaiskos; Wessel Kraaij; Melissa Kronenthal; Guillaume Lathoud; Mike Lincoln; Agnes Lisowska; Iain A. McCowan; Wilfried Post; Dennis Reidsma; Pierre Wellner