Publication


Featured research published by Jean-Yves Lionel Lawson.


Engineering Interactive Computing Systems | 2009

An open source workbench for prototyping multimodal interactions based on off-the-shelf heterogeneous components

Jean-Yves Lionel Lawson; Ahmad-Amr Al-Akkad; Jean Vanderdonckt; Benoît Macq

In this paper we present an extensible software workbench for supporting the effective and dynamic prototyping of multimodal interactive systems. We hypothesize that such applications are constructed by assembling several components: various, and sometimes interchangeable, input modalities; fusion-fission components; and several output modalities. Successful realization of advanced interactions benefits from early prototyping, and iterative implementation of a design requires easy integration, combination, replacement, or upgrading of components. We have designed and implemented a thin integration platform able to manage these key elements, thus providing the research community with a tool that bridges the gap in current support for implementing multimodal applications. The platform is included within a workbench offering visual editors, non-intrusive tools, components, and techniques to assemble modalities implemented in different technologies, while maintaining high performance in the integrated system.
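
As a rough illustration of the component-assembly idea in this abstract, the sketch below wires two interchangeable input modality components to a shared output component through named ports. All class and method names (Component, connect, emit, and so on) are hypothetical; the actual OpenInterface platform defines its own component model and interfaces.

```python
from typing import Callable, Dict, List


class Component:
    """A unit with named output ports that other components subscribe to."""

    def __init__(self, name: str):
        self.name = name
        self._subscribers: Dict[str, List[Callable]] = {}

    def connect(self, port: str, sink: Callable) -> None:
        """Wire this component's output port to a sink callable."""
        self._subscribers.setdefault(port, []).append(sink)

    def emit(self, port: str, data) -> None:
        """Push data to every sink connected to the given port."""
        for sink in self._subscribers.get(port, []):
            sink(data)


class SpeechInput(Component):
    def recognize(self, utterance: str) -> None:
        self.emit("command", {"modality": "speech", "value": utterance})


class GestureInput(Component):
    def detect(self, gesture: str) -> None:
        self.emit("command", {"modality": "gesture", "value": gesture})


class ConsoleOutput(Component):
    def receive(self, event) -> None:
        print(f"[{event['modality']}] {event['value']}")


# Assembly: two interchangeable input modalities feed one output component.
speech = SpeechInput("asr")
gesture = GestureInput("camera")
out = ConsoleOutput("console")
speech.connect("command", out.receive)
gesture.connect("command", out.receive)

speech.recognize("open file")   # [speech] open file
gesture.detect("swipe-left")    # [gesture] swipe-left
```

Because every component only talks through ports, either input could be replaced or upgraded without touching the rest of the assembly, which is the property the workbench is designed to exploit.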


Human Factors in Computing Systems | 2008

The OpenInterface framework: A tool for multimodal interaction

Marcos Serrano; Laurence Nigay; Jean-Yves Lionel Lawson; Andrew Ramsay; Roderick Murray-Smith; Sebastian Denef

The area of multimodal interaction has expanded rapidly, yet the implementation of multimodal systems remains a difficult task. Addressing this problem, we describe the OpenInterface (OI) framework, a component-based tool for rapidly developing multimodal input interfaces. The underlying OI conceptual component model includes both generic and tailored components. In addition, to enable rapid exploration of the multimodal design space for a given system, we need to capitalize on past experience and include a large set of multimodal interaction techniques, along with their specifications and documentation. In this work-in-progress report, we present the current state of the OI framework and two exploratory test-beds developed using the OpenInterface Interaction Development Environment.
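
The distinction between generic and tailored components lends itself to a small registry sketch. The snippet below is an assumption-laden toy, not the OI component model itself: it only shows how cataloguing components with their modalities and documentation supports browsing the design space.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class ComponentSpec:
    name: str
    kind: str                          # "generic" or "tailored" (assumed taxonomy)
    modalities: List[str] = field(default_factory=list)
    doc: str = ""                      # specification / documentation summary


REGISTRY: Dict[str, ComponentSpec] = {}


def register(spec: ComponentSpec) -> None:
    REGISTRY[spec.name] = spec


def find(modality: str, kind: Optional[str] = None) -> List[ComponentSpec]:
    """Browse the design space: components handling a given modality."""
    return [s for s in REGISTRY.values()
            if modality in s.modalities and (kind is None or s.kind == kind)]


register(ComponentSpec("speech-recognizer", "generic", ["speech"],
                       "Wraps an off-the-shelf ASR engine."))
register(ComponentSpec("map-pan-gesture", "tailored", ["gesture"],
                       "Pan gesture bound to one specific map application."))

print([s.name for s in find("speech")])   # ['speech-recognizer']
```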


Journal on Multimodal User Interfaces | 2007

Multimodal Signal Processing and Interaction for a Driving Simulator: Component-based Architecture

Alexandre Benoit; Laurent Bonnaud; Alice Caplier; Frédéric Jourde; Laurence Nigay; Marcos Serrano; Ioannis G. Damousis; Dimitrios Tzovaras; Jean-Yves Lionel Lawson

In this paper we focus on the software design of a multimodal driving simulator based on multimodal detection of the driver’s focus of attention as well as detection and prediction of the driver’s fatigue state. Capturing and interpreting the driver’s focus of attention and fatigue state is based on video data (e.g., facial expression, head movement, eye tracking). While the input multimodal interface relies on passive modalities only (also called an attentive user interface), the output multimodal user interface includes several active output modalities for presenting alert messages, including graphics and text on a mini-screen and in the windshield, sounds, speech, and vibration (vibrating steering wheel). Active input modalities are added in the meta-User Interface to let the user dynamically select the output modalities. The driving simulator serves as a case study of a software architecture based on multimodal signal processing and multimodal interaction components, considering two software platforms: OpenInterface and ICARE.
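
The meta-UI idea, letting the user choose at run time which active output modalities present alerts, can be sketched in a few lines. Everything here (the modality names, the enabled set, the alert function) is illustrative, not the simulator's actual code.

```python
from typing import Callable, Dict

# Each active output modality is reduced to a rendering function here.
OUTPUT_MODALITIES: Dict[str, Callable[[str], None]] = {
    "screen":    lambda msg: print(f"screen text: {msg}"),
    "speech":    lambda msg: print(f"spoken: {msg}"),
    "vibration": lambda msg: print(f"steering wheel vibration ({msg})"),
}

# The meta-UI would toggle this set at run time on the user's request.
enabled = {"screen", "vibration"}


def alert(message: str) -> None:
    """Present an alert through every currently enabled output modality."""
    for name in sorted(enabled):          # sorted for deterministic output
        OUTPUT_MODALITIES[name](message)


alert("driver fatigue detected")
# screen text: driver fatigue detected
# steering wheel vibration (driver fatigue detected)
```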


International Conference on Information Technology: New Generations | 2009

User-Centered Design and Fast Prototyping of an Ambient Assisted Living System for Elderly People

Suzanne Kieffer; Jean-Yves Lionel Lawson; Benoît Macq

This paper presents the Keep-In-Touch project, which aims at developing an integrated Ambient Assisted Living (AAL) solution for assisting and monitoring elderly people in their daily-life activities, supporting personal autonomy and well-being, and maintaining social cohesion. The focus of this paper is the integration of interactive user modeling and the benefit of combining a user-centered development method with fast prototyping in order to develop a solution that fits the end user. The key elements to achieve this goal are multimodality, accessibility, adaptability to user profiles and changes, and usability. We show how our approach addresses these needs.


International Conference on Multimodal Interfaces | 2009

A fusion framework for multimodal interactive applications

Hildeberto Mendonça; Jean-Yves Lionel Lawson; Olga Vybornova; Benoît Macq; Jean Vanderdonckt

This research proposes a multimodal fusion framework for high-level data fusion between two or more modalities. It takes as input low-level features extracted from different system devices, then analyses and identifies intrinsic meanings in these data. Extracted meanings are mutually compared to identify complementarities, ambiguities, and inconsistencies, so as to better understand the user's intention when interacting with the system. The whole fusion life cycle is described and evaluated in an office environment scenario, where two co-workers interact by voice and movements that may reveal their intentions. Fusion in this case focuses on combining modalities to capture context and enhance the user experience.
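
The comparison step described here, checking meanings extracted from separate modalities for complementarity, redundancy, or inconsistency, can be caricatured as follows. The Interpretation structure and the three-way rule are simplifying assumptions for illustration, not the framework's actual algorithm.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Interpretation:
    modality: str            # e.g. "voice", "movement"
    action: Optional[str]    # what the user seems to want to do
    target: Optional[str]    # what the action applies to


def fuse(a: Interpretation, b: Interpretation) -> str:
    """Classify a pair of single-modality interpretations."""
    if a.action and b.action and a.action != b.action:
        return "inconsistent: modalities disagree, ask for clarification"
    if a.action == b.action and a.target == b.target:
        return "redundant: reinforce confidence in the interpretation"
    return "complementary: merge partial meanings into one intention"


voice = Interpretation("voice", action="give", target=None)         # "give it to him"
move = Interpretation("movement", action=None, target="document")   # points at a document
print(fuse(voice, move))   # complementary: merge partial meanings into one intention
```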


International Conference on Multimodal Interfaces | 2010

Component-based high fidelity interactive prototyping of post-WIMP interactions

Jean-Yves Lionel Lawson; Mathieu Coterot; Cyril Carincotte; Benoît Macq

In order to support interactive high-fidelity prototyping of post-WIMP user interactions, we propose a multi-fidelity design method based on a unifying component-based model and supported by an advanced tool suite, the OpenInterface Platform Workbench. Our approach strives to support a collaborative (programmer-designer) and user-centered design activity. The workbench architecture allows exploration of novel interaction techniques through seamless integration and adaptation of heterogeneous components, high-fidelity rapid prototyping, and runtime evaluation and fine-tuning of designed systems. Through the iterative construction of a running example, this paper illustrates how OpenInterface leverages existing resources and fosters the creation of non-conventional interaction techniques.
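
Runtime evaluation and fine-tuning, as mentioned above, essentially means exposing a component's parameters while the prototype runs. The sketch below assumes a hypothetical tilt-to-scroll technique with two tunable properties; the real workbench presumably surfaces such properties through its visual editors rather than direct method calls.

```python
class TiltToScroll:
    """Hypothetical post-WIMP technique: device tilt drives scrolling."""

    def __init__(self):
        # Properties left tunable at run time, without recompiling.
        self.properties = {"gain": 1.0, "dead_zone": 2.0}

    def set_property(self, key: str, value: float) -> None:
        """Entry point a design-time GUI slider would call."""
        self.properties[key] = value

    def process(self, tilt_degrees: float) -> float:
        """Map a tilt angle to a scroll velocity."""
        if abs(tilt_degrees) < self.properties["dead_zone"]:
            return 0.0
        return tilt_degrees * self.properties["gain"]


technique = TiltToScroll()
print(technique.process(10.0))        # 10.0 with the default gain
technique.set_property("gain", 0.5)   # designer tweaks the gain live
print(technique.process(10.0))        # 5.0 after fine-tuning
```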


International Conference on Human-Computer Interaction | 2011

A framework to develop VR interaction techniques based on OpenInterface and AFreeCA

Diego Martínez; Jean-Yves Lionel Lawson; José Pascual Molina; Arturo S. García; Pascual González; Jean Vanderdonckt; Benoît Macq

Implementing appropriate interaction for Virtual Reality (VR) applications is one of the most challenging tasks a developer has to face, for both technical and theoretical reasons. First, from a technical point of view, the developer not only has to deal with nonstandard devices, but also has to support their use in a parallel and coordinated way, interweaving the fields of 3D and multimodal interaction. Second, from a theoretical point of view, the developer has to design the interaction almost from scratch, as no standard set of interaction techniques and interactive tasks has been identified. These factors are reflected in the absence of appropriate tools to implement VR interaction techniques. In this paper, some existing tools that aim at the development of VR interaction techniques are studied, analysing their strengths and, more specifically, their shortcomings, such as the difficulty of integrating them with an arbitrary VR platform or the absence of a strong conceptual background. A framework to implement VR interaction techniques is then described that provides the required support for multimodal interaction and draws on the lessons learned from the study of the former tools to avoid previous mistakes. Finally, the usage of the resulting framework is illustrated with the development of the interaction techniques of a sample application.


International Symposium on Multimedia | 2008

High Level Data Fusion on a Multimodal Interactive Applications Platform

Olga Vybornova; H. Mendonça; Jean-Yves Lionel Lawson; Benoît Macq

We demonstrate multimodal high-level data fusion tooling integrated into a platform aimed at the rapid development and prototyping of multimodal applications through user-centered design. The platform embeds a set of pure and combined modalities as reusable components, together with generic mechanisms for combining modalities and rich support for multimodal fusion, in order to improve human-computer interaction.


Book chapter | 2008

Software Engineering for Multimodal Interactive Systems

Laurence Nigay; Jullien Bouchet; David Juras; Benoit Mansoux; Michael Ortega; Marcos Serrano; Jean-Yves Lionel Lawson

The power and versatility of multimodal interfaces comes at the cost of increased complexity in the code to be developed. Addressing this software development problem requires tools that satisfy specific requirements, such as the fusion of data from different interaction modalities and the management of multiple processes, including support for synchronization and for race conditions between distinct interaction modalities. This chapter is a reflection on current software design methods and tools for multimodal interfaces. First we present the PAC-Amodeus software architectural model as a generic conceptual solution for addressing the challenges of data fusion and concurrent processing in multimodal interfaces. We then focus on software tools for developing multimodal interfaces. The existing tools dedicated to multimodal interfaces are currently few and limited in scope: either they address a specific technical problem, such as the fusion mechanism, the composition of several devices, or mutual disambiguation, or they are dedicated to specific interaction modalities such as gesture recognition, speech recognition, or the combined usage of speech and gesture. Two platforms, ICARE and OpenInterface, are more general and able to handle various interaction modalities. We describe these two platforms in detail and illustrate them by considering developed multimodal interfaces.
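
One requirement named above, synchronization between distinct interaction modalities, is commonly handled with temporal windows: events are fused only when they occur close enough in time. The sketch below illustrates that general idea with an assumed window size and event shape; it is not the PAC-Amodeus fusion mechanism itself.

```python
from dataclasses import dataclass

WINDOW_MS = 300   # assumed: events closer than this may belong together


@dataclass
class Event:
    modality: str
    payload: str
    timestamp_ms: int


def fuse_window(events: list) -> list:
    """Group events falling within a common temporal window."""
    events = sorted(events, key=lambda e: e.timestamp_ms)
    groups, current = [], []
    for e in events:
        # Start a new group once the window anchored at the first
        # event of the current group is exceeded.
        if current and e.timestamp_ms - current[0].timestamp_ms > WINDOW_MS:
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups


stream = [Event("speech", "put that", 100), Event("gesture", "point:A", 180),
          Event("speech", "there", 900), Event("gesture", "point:B", 950)]
for group in fuse_window(stream):
    print([f"{e.modality}:{e.payload}" for e in group])
# ['speech:put that', 'gesture:point:A']
# ['speech:there', 'gesture:point:B']
```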


Computer-Aided Engineering | 2011

Multi-domain framework for multimedia archiving using multimodal interaction

Hildeberto Mendonça; Olga Vybornova; Jean-Yves Lionel Lawson; Benoît Macq

Multimedia content is very rich in meaning, and archiving systems need to be improved to account for such richness. This research proposes archiving improvements that extend the ways of describing content and enhance user interaction with multimedia archiving systems beyond traditional text typing and mouse pointing. These improvements comprise a set of techniques to segment different kinds of media, a set of indexes to annotate the resulting segments, and extensible multimodal interaction to make multimedia archiving tasks more user-friendly.
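
The segment-level indexing described here can be made concrete with a toy structure: segments of a media item carry their own annotations, and search returns segments rather than whole files. All field and function names below are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Segment:
    media_id: str
    start_s: float                     # segment boundaries in seconds
    end_s: float
    annotations: List[str] = field(default_factory=list)


INDEX: List[Segment] = []              # one global index, for illustration


def add_segment(media_id: str, start_s: float, end_s: float,
                *labels: str) -> Segment:
    """Register a media segment together with its annotations."""
    seg = Segment(media_id, start_s, end_s, list(labels))
    INDEX.append(seg)
    return seg


def search(label: str) -> List[Segment]:
    """Retrieve matching segments, not whole files."""
    return [s for s in INDEX if label in s.annotations]


add_segment("interview-042.mp4", 30.0, 95.0, "speaker:mayor")
print([(s.media_id, s.start_s, s.end_s) for s in search("speaker:mayor")])
# [('interview-042.mp4', 30.0, 95.0)]
```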

Collaboration


Dive into Jean-Yves Lionel Lawson's collaborations.

Top Co-Authors

Benoît Macq
Université catholique de Louvain

Jean Vanderdonckt
Université catholique de Louvain

Olga Vybornova
Université catholique de Louvain

Laurence Nigay
Joseph Fourier University

Hildeberto Mendonça
Université catholique de Louvain

Suzanne Kieffer
Université catholique de Louvain

Annabelle Gouze
Université catholique de Louvain