Laurence Nigay
University of Glasgow
Publications
Featured research published by Laurence Nigay.
human factors in computing systems | 1993
Laurence Nigay; Joëlle Coutaz
Multimodal interaction enables the user to employ different modalities such as voice, gesture and typing for communicating with a computer. This paper presents an analysis of the integration of multiple communication modalities within an interactive system, adopting a software engineering perspective. First, the notion of “multimodal system” is clarified. We aim to show that the two main features of a multimodal system are the concurrency of processing and the fusion of input/output data. On the basis of these two features, we then propose a design space and a method for classifying multimodal systems. In the last section, we present a software architecture model of multimodal systems which supports these two salient properties: concurrency of processing and data fusion. Two multimodal systems developed in our team, VoicePaint and NoteBook, are used to illustrate the discussion.
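The two features named here induce a small classification. The Python sketch below encodes it; the four class names are the ones commonly associated with this design space (exclusive, alternate, concurrent, synergistic), and the code itself is purely illustrative, not an artifact of the paper:

```python
# Sketch of the design space: two axes ("use of modalities":
# sequential vs. parallel, and "fusion": combined vs. independent)
# yield four classes of multimodal systems.

from enum import Enum

class Use(Enum):
    SEQUENTIAL = "sequential"
    PARALLEL = "parallel"        # concurrency of processing

class Fusion(Enum):
    INDEPENDENT = "independent"
    COMBINED = "combined"        # fusion of input/output data

DESIGN_SPACE = {
    (Use.SEQUENTIAL, Fusion.INDEPENDENT): "exclusive",
    (Use.SEQUENTIAL, Fusion.COMBINED):    "alternate",
    (Use.PARALLEL,   Fusion.INDEPENDENT): "concurrent",
    (Use.PARALLEL,   Fusion.COMBINED):    "synergistic",
}

def classify(use: Use, fusion: Fusion) -> str:
    return DESIGN_SPACE[(use, fusion)]

# A system that fuses simultaneous speech and mouse input into one
# command would be classified as synergistic:
print(classify(Use.PARALLEL, Fusion.COMBINED))  # -> synergistic
```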
international conference on human-computer interaction | 1995
Joëlle Coutaz; Laurence Nigay; Daniel Salber; Ann Blandford; Jon May; Richard M. Young
We propose the CARE properties as a simple way of characterising and assessing aspects of multimodal interaction: the Complementarity, Assignment, Redundancy, and Equivalence that may occur between the interaction techniques available in a multimodal user interface. We provide a formal definition of these properties and use the notion of compatibility to show how the system CARE properties interact with user CARE-like properties in the design of a system. The discussion is illustrated with MATIS, a Multimodal Air Travel Information System.
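As a rough illustration of how such properties can be made operational, here is a Python sketch of the Equivalence and Assignment predicates. The `reach` oracle and the string-based state encoding are invented for the example; they are not the paper's formal notation:

```python
# Illustrative CARE predicates. reach(m, s, s2) is a hypothetical
# oracle saying whether modality m alone can drive the system from
# state s to state s2. Redundancy and Complementarity additionally
# constrain modalities used in parallel within a temporal window,
# omitted here for brevity.

from typing import Callable, Set

Reach = Callable[[str, str, str], bool]

def equivalence(M: Set[str], s: str, s2: str, reach: Reach) -> bool:
    # Any single modality in M suffices, and there is a real choice.
    return len(M) > 1 and all(reach(m, s, s2) for m in M)

def assignment(m: str, M: Set[str], s: str, s2: str, reach: Reach) -> bool:
    # m is the only modality in M that can reach s2 from s.
    return reach(m, s, s2) and not any(
        reach(other, s, s2) for other in M - {m})

# Toy oracle: both speech and mouse can select a city.
can = lambda m, s, s2: m in {"speech", "mouse"}
print(equivalence({"speech", "mouse"}, "idle", "city selected", can))  # True
```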
human factors in computing systems | 1995
Laurence Nigay; Joëlle Coutaz
Multimodal interactive systems support multiple interaction techniques such as the synergistic use of speech and direct manipulation. The flexibility they offer results in an increased complexity that current software tools do not address appropriately. One of the emerging technical problems in multimodal interaction is concerned with the fusion of information produced through distinct interaction techniques. In this article, we present a generic fusion engine that can be embedded in a multi-agent architecture modelling technique. We demonstrate the fruitful symbiosis of our fusion mechanism with PAC-Amodeus, our agent-based conceptual model, and illustrate the applicability of the approach with the implementation of an effective interactive system: MATIS, a Multimodal Airline Travel Information System.
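To give a feel for what such a fusion engine does, the sketch below merges two partial commands when their slots are complementary and their time stamps fall within a fusion window. Slot names and the window size are assumptions made for the illustration, not the engine's actual data structures:

```python
# Toy frame-based fusion: partial commands with typed slots are merged
# when no slot conflicts and the events are close enough in time.

from dataclasses import dataclass
from typing import Optional

WINDOW_MS = 1000  # assumed fusion window

@dataclass
class PartialCommand:
    slots: dict   # e.g. {"destination": "Boston"}
    t: int        # time stamp in ms

def fuse(a: PartialCommand, b: PartialCommand) -> Optional[PartialCommand]:
    if abs(a.t - b.t) > WINDOW_MS:
        return None                       # too far apart in time
    if set(a.slots) & set(b.slots):
        return None                       # conflicting slots: no fusion
    return PartialCommand({**a.slots, **b.slots}, min(a.t, b.t))

# Speech "flights to this city" plus a mouse click on "Boston":
speech = PartialCommand({"request": "flights"}, 120)
click = PartialCommand({"destination": "Boston"}, 450)
print(fuse(speech, click))  # -> one command carrying both slots
```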
human factors in computing systems | 1997
Gaëlle Calvary; Joëlle Coutaz; Laurence Nigay
This article reports our reflection on software architecture modelling for multi-user systems (or groupware). First, we introduce the notion of software architecture and make explicit the design steps that most software designers in HCI tend to blend in a fuzzy way. Building on general concepts and practice from mainstream software engineering, we then present a comparative analysis of the most significant architecture models developed for single- and multi-user systems. We close with the presentation of PAC*, a new architectural framework for modelling and designing the software architecture of multi-user systems. PAC* is a motivated combination of existing architectural models selected for the complementarity of their “good properties”. These include operational heuristics, such as rules for deriving agents in accordance with the task model or criteria for reasoning about replication, as well as properties such as support for style heterogeneity, portability, and reusability.
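Since PAC* builds on the PAC agent model, a minimal sketch of a PAC-style agent may help orient the reader. The three facets are standard PAC vocabulary; the code is an illustration only, not the paper's framework:

```python
# A PAC-style agent: an Abstraction facet (domain data), a
# Presentation facet (perceivable behaviour), and a Control facet
# mediating between the two and with parent/child agents.

class PACAgent:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def abstraction(self):
        """Abstraction facet: domain-specific state (stub)."""

    def presentation(self):
        """Presentation facet: rendering and input capture (stub)."""

    def control(self, event):
        """Control facet: routes events through the agent hierarchy."""
        for child in self.children:
            child.control(event)
```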
conference on computer supported cooperative work | 2002
Yann Laurillau; Laurence Nigay
In this paper we present the Clover architectural model, a new conceptual architectural model for groupware. Our model results from combining the layered approach of Dewan's generic architecture with the functional decomposition of the Clover design model. The Clover design model defines three classes of services that a groupware application may support, namely production, communication and coordination services. The three classes of services can be found in each functional layer of our model. Our model is illustrated with a working system, CoVitesse, whose software is organized according to our Clover architectural model.
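A minimal sketch of the resulting structure, assuming each functional layer is modelled as an interface exposing the three service classes; the interface itself is hypothetical, not the paper's formulation:

```python
# Every layer of the groupware architecture may expose the three
# Clover service classes: production, communication, coordination.

from abc import ABC, abstractmethod

class CloverLayer(ABC):
    @abstractmethod
    def production(self, request):
        """Produce or modify shared artifacts."""

    @abstractmethod
    def communication(self, request):
        """Exchange information between users."""

    @abstractmethod
    def coordination(self, request):
        """Coordinate the activities of the group."""

# A concrete layer implements whichever of the three services it
# supports and delegates the remainder to the layer below it.
```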
Archive | 2001
Murray Reed Little; Laurence Nigay
Nowadays, UML is the most successful model-based approach to supporting software development. However, during the evolution of UML little attention has been paid to supporting user interface design and development. In the meantime, the user interface has become a crucial part of most software projects, and the use of models to capture requirements and express solutions for its design has become a true necessity. Within the community of researchers investigating model-based approaches for interactive applications, particular attention has been paid to task models. ConcurTaskTrees is one of the most widely used notations for task modelling. This paper discusses a solution for obtaining a UML for interactive systems based on the integration of the two approaches, and why this is a desirable goal.
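To make the ConcurTaskTrees side of the integration concrete, here is a minimal task-tree data structure. The operator symbols follow common CTT usage (e.g. ">>" for enabling, "[]" for choice, "|||" for interleaving); the structure is illustrative, not any official metamodel:

```python
# A bare-bones ConcurTaskTrees node: tasks form a tree, and sibling
# tasks are related by temporal operators.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    category: str                   # "abstraction" | "user" | "interaction" | "application"
    operator: Optional[str] = None  # temporal operator linking to the next sibling
    children: List["Task"] = field(default_factory=list)

# An abstract task decomposed into two interaction tasks in sequence:
make_reservation = Task("MakeReservation", "abstraction", children=[
    Task("SelectFlight", "interaction", operator=">>"),
    Task("ConfirmBooking", "interaction"),
])
```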
international conference on multimodal interfaces | 2004
Jullien Bouchet; Laurence Nigay; Thierry Ganille
Although several real multimodal systems have been built, their development still remains a difficult task. In this paper we address this problem by describing a component-based approach, called ICARE, for rapidly developing multimodal interfaces. ICARE stands for Interaction-CARE (Complementarity, Assignment, Redundancy, Equivalence). Our component-based approach relies on two types of software components. Firstly, ICARE elementary components, comprising Device components and Interaction Language components, enable us to develop pure modalities. Secondly, Composition components define combined usages of modalities. Reusing and assembling ICARE components enables rapid development of multimodal interfaces. We have developed several multimodal systems using ICARE and we illustrate the discussion with one of them: the FACET simulator of the cockpit of the French Rafale military aircraft.
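The two component types can be pictured as the following skeleton. ICARE itself is a graphical component platform, so these interfaces are hypothetical and only mirror the roles named in the abstract:

```python
# Sketch of the ICARE component roles: elementary components (Device,
# Interaction Language) realise a pure modality; Composition
# components combine several modalities.

class Device:
    """Captures raw events from a physical device (e.g. a microphone)."""
    def read(self):
        raise NotImplementedError

class InteractionLanguage:
    """Turns raw device events into meaningful interaction events."""
    def interpret(self, raw_event):
        raise NotImplementedError

class Composition:
    """Combines events from several modalities (e.g. Complementarity)."""
    def __init__(self, *modalities):
        self.modalities = modalities

    def combine(self, events):
        raise NotImplementedError
```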
human factors in computing systems | 2004
Jullien Bouchet; Laurence Nigay
Multimodal interactive systems support multiple interaction techniques such as the synergistic use of speech, gesture and eye-gaze tracking. The flexibility they offer results in an increased complexity that current software development tools do not address appropriately. In this paper we describe a component-based approach, called ICARE, for specifying and developing multimodal interfaces. Our approach relies on two types of components: (i) elementary components that describe pure modalities and (ii) composition components (Complementarity, Redundancy and Equivalence) that enable the designer to specify combined usage of modalities. The designer graphically assembles the ICARE components and the code of the multimodal user interface is automatically generated. Although the ICARE platform is not yet fully developed, we illustrate the applicability of the approach with the implementation of two multimodal systems: MEMO, a GeoNote system, and MID, a multimodal identification interface.
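One possible reading of a Redundancy composition component, complementing the elementary-component sketch above: two modalities deliver the same command close together in time, and a single reinforced event is propagated. The event shape and window size are invented for this sketch:

```python
# Toy Redundancy composition: collapse two events carrying the same
# command within a short interval into one reinforced event.

def redundancy(event_a, event_b, window_ms=500):
    same_command = event_a["command"] == event_b["command"]
    near_in_time = abs(event_a["t"] - event_b["t"]) <= window_ms
    if same_command and near_in_time:
        return {**event_a, "confidence": "reinforced"}
    return None  # not redundant: forward both events separately

speech = {"command": "open", "t": 100}
gesture = {"command": "open", "t": 300}
print(redundancy(speech, gesture))  # -> one reinforced "open" event
```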
human-computer interaction with mobile devices and services | 2002
Emmanuel Dubois; Philip D. Gray; Laurence Nigay
In this paper we present a notation, ASUR++, for describing mobile systems that combine physical and digital entities. ASUR++ builds upon our previous notation, ASUR. Its new features are dedicated to handling the mobility of users and enable a designer to express physical relationships among the entities involved in the system. The notation and its usefulness are illustrated in the context of the design of an augmented museum gallery.
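A small typed model in the spirit of ASUR++ may clarify what "physical relationships among entities" means. The entity kinds and relation names below are simplified assumptions, not the official notation:

```python
# Entities (user, system, input/output adapters, real objects) plus
# two relation kinds: data exchange, and the physical relationships
# ASUR++ adds for mobile settings (e.g. visitor near an exhibit).

from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    kind: str   # "user" | "system" | "adapter_in" | "adapter_out" | "real_object"
    name: str

@dataclass(frozen=True)
class Relation:
    source: Entity
    target: Entity
    kind: str   # "data_exchange" | "physical_proximity"

visitor = Entity("user", "Visitor")
exhibit = Entity("real_object", "Painting")
handheld = Entity("adapter_out", "HandheldDisplay")

model = [
    Relation(visitor, exhibit, "physical_proximity"),  # mobility-aware
    Relation(handheld, visitor, "data_exchange"),      # digital info out
]
```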
Lecture Notes in Computer Science | 2000
Frédéric Vernier; Laurence Nigay
This article proposes a framework that will help analyze current and future output multimodal user interfaces. We first define an output multimodal system. We then present our framework that identifies several different combinations of modalities and their characteristics. This framework assists in the selection of the most appropriate modalities for achieving efficient multimodal presentations. The discussion is illustrated with MulTab (Multimodal Table), an output multimodal system for managing large tables of numerical data.
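The framework's starting point can be sketched as follows, assuming an output modality is modelled as a device/language pair; the schema names echo the CARE vocabulary and are assumptions of this sketch rather than the article's full taxonomy:

```python
# An output modality as a <device, language> pair, and a presentation
# as a named combination of modalities.

from dataclasses import dataclass
from typing import List

@dataclass
class OutputModality:
    device: str     # e.g. "screen", "speaker"
    language: str   # e.g. "numeric table", "earcon"

@dataclass
class Combination:
    schema: str     # e.g. "redundant", "complementary"
    modalities: List[OutputModality]

# MulTab-style example: a large numeric table shown on screen while
# salient values are redundantly signalled by sound.
table = OutputModality("screen", "numeric table")
alert = OutputModality("speaker", "earcon")
presentation = Combination("redundant", [table, alert])
```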