

Publication


Featured research published by Alexandre Alapetite.


Ubiquitous Computing | 2010

Dynamic 2D-barcodes for multi-device Web session migration including mobile phones

Alexandre Alapetite

This article introduces a novel Web architecture that supports session migration in multi-device Web applications, in particular the case where a user starts a Web session on a computer and wishes to continue it on a mobile phone. The proposed solution for transferring the needed session identifiers across devices is to dynamically generate images of 2D-barcodes containing a Web address and a session ID in encoded form. 2D-barcodes are a cheap, fast, and robust approach to the problem. They are widely known and used in Japan, and are spreading in other countries. Variations on the topic are covered in the article, including a possible migration from a mobile device to a computer (the opposite direction), and between two or more mobile phones (possibly back and forth). The results show that this HCI approach is inexpensive, efficient, and works with most camera-phones on the market; the author does not see any other mature technique with such assets.
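The handoff described above works by encoding a Web address together with a session identifier into a dynamically generated 2D-barcode image. As a rough illustration only (the parameter names and the one-time token are assumptions for this sketch, not details from the paper), the URL that such a barcode would carry could be built with the Python standard library:

```python
import secrets
from urllib.parse import urlencode, urlsplit, parse_qs

def make_migration_url(base_url: str, session_id: str) -> str:
    """Build the URL that would be encoded into a 2D-barcode.

    A one-time token is appended so that a photographed barcode cannot
    be replayed indefinitely (a hypothetical hardening for this sketch,
    not necessarily part of the original architecture).
    """
    query = urlencode({"sid": session_id, "token": secrets.token_urlsafe(8)})
    return f"{base_url}?{query}"

# The receiving device scans the barcode, opens the URL, and recovers
# the session identifier from the query string:
url = make_migration_url("https://example.org/session/migrate", "abc123")
sid = parse_qs(urlsplit(url).query)["sid"][0]
```

Any barcode library could then render `url` as an image; the barcode itself is only a transport for this string.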


Conference on the Future of the Internet | 2015

ALMANAC: Internet of Things for Smart Cities

Dario Bonino; Maria Teresa Delgado Alizo; Alexandre Alapetite; Thomas Gilbert; Mathias Axling; Helene Udsen; Jose Angel Carvajal Soto; Maurizio A. Spirito

Smart cities advocate future environments where sensor pervasiveness, data delivery and exchange, and information mash-ups enable better support of every aspect of (social) life in human settlements. As this vision matures, evolves, and is shaped against several application scenarios and adoption perspectives, a common need for scalable, pervasive, flexible, and replicable infrastructures emerges. Such a need is currently fostering new design efforts to grant performance, reuse, and interoperability while avoiding the knowledge silos typical of early efforts on similar topics, e.g. automation in buildings and homes. This paper introduces a federated smart city platform (SCP) developed in the context of the ALMANAC FP7 EU project and discusses lessons learned during the first experimental application of the platform to a smart waste management scenario in a medium-sized European city. The ALMANAC SCP aims to integrate Internet of Things (IoT), capillary networks, and metro access networks to offer smart services to citizens, and thus enable smart city processes. The key element of the SCP is a middleware supporting semantic interoperability of heterogeneous resources, devices, services, and data management. The platform is built upon a dynamic federation of private and public networks, while supporting end-to-end security and privacy. Furthermore, it also enables the integration of services that, although natively external to the platform itself, enrich the set of data and information used by the Smart City applications it supports.


International Journal of Medical Informatics | 2008

Impact of noise and other factors on speech recognition in anaesthesia

Alexandre Alapetite

INTRODUCTION: Speech recognition is currently being deployed in medical and anaesthesia applications. This article is part of a project to investigate and further develop a prototype of a speech-input interface in Danish for an electronic anaesthesia patient record, to be used in real time during operations.
OBJECTIVE: The aim of the experiment is to evaluate the relative impact of several factors affecting speech recognition when used in operating rooms, such as the type or loudness of background noises, the type of microphone, the type of recognition mode (free speech versus command mode), and the type of training.
METHODS: Eight volunteers read aloud a total of about 3600 typical short anaesthesia comments to be transcribed by a continuous speech recognition system. Background noises were collected in an operating room and reproduced. A regression analysis and descriptive statistics were used to evaluate the relative effect of the various factors.
RESULTS: Some factors have a major impact, such as the words to be recognised, the type of recognition, and the participants. The type of microphone is especially significant when combined with the type of noise. While loud noises in the operating room can have a predominant effect, recognition rates for common noises (e.g. ventilation, alarms) are only slightly below rates obtained in a quiet environment. Finally, a redundant architecture succeeds in improving the reliability of the recognitions.
CONCLUSION: This study removes some uncertainties regarding the feasibility of introducing speech recognition for anaesthesia records during operations, and provides an overview of the interaction of several parameters that are traditionally studied separately.


International Journal of Medical Informatics | 2008

Speech recognition for the anaesthesia record during crisis scenarios

Alexandre Alapetite

INTRODUCTION: This article describes the evaluation of a prototype speech-input interface to an anaesthesia patient record, conducted in a full-scale anaesthesia simulator involving six doctor-nurse anaesthetist teams.
OBJECTIVE: The aims of the experiment were, first, to assess the potential advantages and disadvantages of a vocal interface compared to the traditional touch-screen and keyboard interface to an electronic anaesthesia record during crisis situations; second, to assess the usability in a realistic work environment of some speech-input strategies (a hands-free vocal interface activated by a keyword; a combination of command and free-text modes); and finally, to quantify some of the gains that the speech-input modality could provide.
METHODS: Six anaesthesia teams, each composed of one doctor and one nurse, were confronted with two crisis scenarios in a full-scale anaesthesia simulator. Each team filled in the anaesthesia record using only the traditional touch-screen and keyboard interface in one session, while in the other session they could also use the speech-input interface. Audio-video recordings of the sessions were subsequently analysed, and additional subjective data were gathered from a questionnaire. The data were analysed with a method inspired by queuing theory in order to compare the delays associated with the two interfaces and to quantify the workload inherent in memorising items to be entered into the anaesthesia record.
RESULTS: The experiment showed, on the one hand, that the traditional touch-screen and keyboard interface imposes a steadily increasing mental workload in terms of items to keep in memory until there is time to update the anaesthesia record and, on the other hand, that the speech-input interface allows anaesthetists to enter medications and observations almost simultaneously when they are given or made. The tested speech-input strategies were successful, even with the ambient noise. Speaking to the system while working appeared feasible, although improvements in speech recognition rates are needed.
CONCLUSION: A vocal interface leads to a shorter time between the events to be registered and the actual registration in the electronic anaesthesia record; this type of interface would therefore likely lead to greater accuracy of recorded items and a reduction of the mental workload associated with memorising events to be registered, especially during time-constrained situations. At the same time, current speech recognition technology and speech interfaces require user training and dedication if a speech interface is to be used successfully.
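The queuing-inspired analysis above compares the lag between a clinical event and its registration under the two interfaces. A minimal sketch of that comparison, with entirely hypothetical timing data (all numbers are invented for illustration, not taken from the study):

```python
from statistics import mean

# (event_time_s, registration_time_s) pairs; hypothetical observations.
# With the touch-screen, items wait in memory until hands are free;
# with speech, they are entered almost as soon as they occur.
touchscreen = [(10.0, 42.0), (55.0, 80.0), (90.0, 150.0)]
speech = [(10.0, 12.0), (55.0, 58.0), (90.0, 96.0)]

def mean_delay(pairs):
    """Mean lag between a clinical event and its entry in the record."""
    return mean(reg - evt for evt, reg in pairs)

d_touch = mean_delay(touchscreen)
d_speech = mean_delay(speech)
```

The study's actual method also tracked the queue of unregistered items over time, which this sketch does not attempt to reproduce.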


Nordic Conference on Human-Computer Interaction | 2012

Demo of gaze controlled flying

Alexandre Alapetite; John Paulin Hansen; I. Scott MacKenzie

The development of a control paradigm for unmanned aerial vehicles (UAVs) is a new challenge to HCI. The demo explores how to use gaze as input for locomotion in 3D. A low-cost drone is controlled by tracking the user's point of regard (gaze) on a live video stream from the UAV.


Maritime Policy & Management | 2017

Safe manning of merchant ships: an approach and computer tool

Alexandre Alapetite; Igor Kozine

ABSTRACT: In the shipping industry, staffing expenses have become a vital competition parameter. In this paper, an approach and a software tool are presented to support decisions on the staffing of merchant ships. The tool is implemented as a Web user interface that makes use of discrete-event simulation and allows estimation of the workload, and of whether different scenarios can be performed successfully, taking into account the number of crew members, watch schedules, the distribution of competencies, and other factors. The software library 'SimManning' at the core of the project is provided as open source. The tool is conceived as a support for maritime authorities, certifying bodies, and shipping companies to assess whether a ship is safely manned.
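The tool rests on discrete-event simulation. The core idea can be sketched in a few lines: keep a priority queue of the times at which each crew member next becomes free, and hand each task to the earliest available member. The task data and the metrics below are illustrative assumptions, not SimManning's actual model:

```python
import heapq

def simulate(tasks, crew):
    """Tiny discrete-event sketch: each task is (start_time, duration).

    Tasks go to whichever crew member is free first; returns the
    makespan and the longest time any task had to wait for a free
    crew member (a crude overload indicator).
    """
    free_at = [0.0] * crew  # heap: when each crew member becomes free
    heapq.heapify(free_at)
    max_wait = 0.0
    for start, duration in sorted(tasks):
        t = heapq.heappop(free_at)  # earliest available member
        begin = max(t, start)       # cannot begin before the task arises
        max_wait = max(max_wait, begin - start)
        heapq.heappush(free_at, begin + duration)
    return max(free_at), max_wait

# Hypothetical watch: three overlapping tasks, two crew members.
makespan, wait = simulate([(0, 4), (1, 3), (2, 5)], crew=2)
```

A real manning model would add watch schedules, competency constraints, and stochastic task durations on top of this skeleton.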


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

Head and gaze control of a telepresence robot with an HMD

John Paulin Hansen; Alexandre Alapetite; Martin Christen Frølund Thomsen; Zhongyu Wang; Katsumi Minakata; Guangtao Zhang

Gaze interaction with telerobots is a new opportunity for wheelchair users with severe motor disabilities. We present a video showing how head-mounted displays (HMD) with gaze tracking can be used to monitor a robot that carries a 360° video camera and a microphone. Our interface supports autonomous driving via way-points on a map, along with gaze-controlled steering and gaze typing. It is implemented with Unity, which communicates with the Robot Operating System (ROS).


Advances in Human-Computer Interaction | 2018

A Rollercoaster to Model Touch Interactions during Turbulence

Alexandre Alapetite; Emilie Møllenbach; Anders Stockmarr; Katsumi Minakata

We contribute to a project introducing the use of a large single touch-screen as a concept for future airplane cockpits. Human-machine interaction in this new type of cockpit must be optimised to cope with the different types of normal use as well as with moments of turbulence (which can occur during flights with varying degrees of severity). We propose an original experimental setup for reproducing turbulence (not limited to aviation) based on a touch-screen mounted on a rollercoaster. Participants had to repeatedly perform three basic touch interactions: a single click, a one-finger drag-and-drop, and a zoom operation involving a two-finger pinching gesture. The completion times of the different tasks, as well as the number of unnecessary interactions with the screen, constitute the collected user data. We also propose a data analysis and statistical method to combine user performance with the observed turbulence, including acceleration and jerk along the different axes. We then report some of the implications of severe turbulence for touch interaction and make recommendations as to how it can be accommodated in future design solutions.
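Jerk, one of the turbulence measures mentioned above, is the time derivative of acceleration; with discretely sampled accelerometer data it can be estimated by finite differences. A minimal sketch (the sample values and the 100 Hz rate are invented for illustration):

```python
def jerk(acc, dt):
    """Finite-difference jerk (time derivative of acceleration) along
    one axis, from acceleration samples taken every `dt` seconds."""
    return [(a1 - a0) / dt for a0, a1 in zip(acc, acc[1:])]

# Hypothetical vertical-acceleration trace (m/s^2) sampled at 100 Hz:
az = [9.8, 9.8, 12.3, 7.1, 9.8]
jz = jerk(az, dt=0.01)
peak = max(abs(j) for j in jz)  # peak jerk magnitude over the trace
```

Peaks in such a trace can then be aligned with the timestamps of failed or spurious touch interactions.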


The Internet of Things | 2016

Dynamic Bluetooth beacons for people with disabilities

Alexandre Alapetite; John Paulin Hansen

This paper focuses on digital aids for sight impairment and motor disabilities. We propose an Internet of Things (IoT) platform for discovering nearby items, getting their status, and interacting with them by e.g. voice commands or gaze gestures. The technology is based on Bluetooth Low Energy, which is included in consumer electronics such as smartphones without requiring additional hardware. The paper presents a prototype platform illustrated by concepts of use.
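The paper does not tie itself to one beacon format, but Apple's iBeacon layout is a common example of how a Bluetooth Low Energy advertisement carries a nearby item's identity. A sketch of parsing the manufacturer-specific data of such a frame (assuming the iBeacon layout purely for illustration; not necessarily the format used in the prototype):

```python
import struct
import uuid

def parse_ibeacon(mfr_data: bytes):
    """Parse the manufacturer-specific data of an iBeacon advertisement:
    2-byte company ID (little-endian), the 0x02/0x15 iBeacon marker,
    a 16-byte proximity UUID, big-endian major/minor identifiers,
    and the signed calibrated TX power at 1 m."""
    company, subtype, length = struct.unpack_from("<HBB", mfr_data, 0)
    if company != 0x004C or subtype != 0x02 or length != 0x15:
        raise ValueError("not an iBeacon frame")
    beacon_uuid = uuid.UUID(bytes=mfr_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", mfr_data, 20)
    return beacon_uuid, major, minor, tx_power

# Synthetic frame: Apple company ID, iBeacon marker, UUID, major=1,
# minor=2, TX power -59 dBm (0xC5 as a signed byte).
frame = bytes.fromhex("4c000215") + uuid.UUID(int=1).bytes + bytes.fromhex("000100 02 c5".replace(" ", ""))
u, major, minor, tx = parse_ibeacon(frame)
```

An assistive application would resolve the UUID/major/minor triple to a known item and estimate its distance from the received signal strength versus `tx`.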


Interactions (New York) | 2013

Gaze-Controlled Flying

Alexandre Alapetite; John Paulin Hansen; I. Scott MacKenzie

Demo Hour highlights new prototypes and projects that exemplify innovation and novel forms of interaction. Audrey Desjardins, Editor

Collaboration

Top co-authors of Alexandre Alapetite:

- John Paulin Hansen (Technical University of Denmark)
- Henning Boje Andersen (Technical University of Denmark)
- Dan Witzner Hansen (IT University of Copenhagen)
- Jacob Thommesen (Technical University of Denmark)
- Katsumi Minakata (Technical University of Denmark)
- Emilie Møllenbach (Technical University of Denmark)
- Dario Bonino (Istituto Superiore Mario Boella)