
Publication


Featured research published by Yves Rybarczyk.


9th International Summer Workshop on Multimodal Interfaces (eNTERFACE) | 2013

Kinect-Sign: Teaching Sign Language to “Listeners” through a Game

João Gameiro; Tiago Cardoso; Yves Rybarczyk

Sign language is the form of communication used by hearing-impaired people with others, including listeners. In most cases, impaired people have learned sign language from childhood. The problem arises when a listener comes into contact with an impaired person. For instance, if a couple has a child who is hearing impaired, the parents find it a challenge to learn sign language. In this article, a new playful approach to assist listeners in learning sign language is proposed. The proposal is a serious game composed of two modes: School-mode and Competition-mode. The first offers a virtual school where the user learns to sign letters, and the second offers an environment for applying the learned letters. Behind the scenes, the proposal contains a sign language recognition system based on three modules: 1 – the standardization of the Kinect depth camera data; 2 – a gesture library relying on the standardized data; and 3 – the real-time recognition of gestures. A prototype – Kinect-Sign – was developed and tested in a Portuguese Sign-Language school and at eNTERFACE’13, resulting in a joyful acceptance of the approach.
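The recognition pipeline the abstract describes (a library of standardized gestures matched in real time) can be sketched as a nearest-neighbour template matcher. This is an illustrative reconstruction, not the paper's actual code; the feature vectors, labels, and threshold below are hypothetical.

```python
# Hypothetical sketch of the recognition module: a gesture library of
# standardized depth-feature vectors, matched by nearest neighbour.
import math

def euclidean(a, b):
    # Distance between two feature vectors of equal length.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(sample, library, threshold=1.0):
    # Return the label of the closest stored gesture, or None if
    # nothing in the library is close enough.
    best_label, best_dist = None, float("inf")
    for label, template in library.items():
        d = euclidean(sample, template)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None

library = {"A": [0.1, 0.9, 0.2], "B": [0.8, 0.1, 0.7]}
print(recognize([0.15, 0.85, 0.25], library))  # closest to "A"
```

The threshold keeps the system from forcing a match when the user's hand shape resembles no stored letter, which matters for a learning game that must reject wrong signs.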


Procedia Technology | 2014

Kinect-Sign, Teaching Sign Language to “Listeners” through a Game

João Gameiro; Tiago Cardoso; Yves Rybarczyk

Abstract Sign language is widely used by deaf people around the globe. As with spoken languages, several sign languages exist. The way sign language is learned by deaf people may have room for improvement, but the existing learning mechanisms can be considered effective for a deaf child, for example. The problem arises for the non-deaf persons who communicate with deaf persons – the so-called listeners. If, for example, a couple has a new child who turns out to be deaf, these two persons face a challenge in learning sign language. On the one hand, they cannot stop their working life, especially because this news makes life more costly; on the other hand, the existing mechanisms target deaf persons and are not prepared for listeners. This paper proposes a new playful approach to help these listeners learn sign language. The proposal is a serious game composed of two modes: School-mode and Competition-mode. The first provides a school-like environment where the user learns the letter-signs, and the second provides an environment for testing the learned skills. Behind the scenes, the proposal is based on two phases: 1 – the creation of a gesture library, relying on the Kinect depth camera; and 2 – the real-time recognition of gestures, by comparing the depth camera information to the gestures previously stored in the library. A prototype system, supporting only the Portuguese sign language alphabet, was developed – the Kinect-Sign – and tested in a Portuguese Sign-Language school, resulting in a joyful acceptance of the approach.


Journal of Electrical and Computer Engineering | 2017

Modeling PM2.5 Urban Pollution Using Machine Learning and Selected Meteorological Parameters

Jan Kleine Deters; Rasa Zalakeviciute; Mario González; Yves Rybarczyk

Outdoor air pollution causes millions of premature deaths annually, mostly due to anthropogenic fine particulate matter (PM2.5). Quito, the capital city of Ecuador, is no exception in exceeding healthy levels of pollution. In addition to the impact of urbanization, motorization, and rapid population growth, particulate pollution is modulated by meteorological factors and geophysical characteristics, which complicate the implementation of the most advanced weather forecasting models. Thus, this paper proposes a machine learning approach based on six years of meteorological and pollution data analyses to predict the concentrations of PM2.5 from wind (speed and direction) and precipitation levels. The results of the classification model show a high reliability in the classification of low (<10 µg/m3) versus high (>25 µg/m3) and low versus moderate (10–25 µg/m3) concentrations of PM2.5. A regression analysis suggests a better prediction of PM2.5 when the climatic conditions become more extreme (strong winds or high levels of precipitation). The high correlation between estimated and real data for a time series analysis during the wet season confirms this finding. The study demonstrates that the use of statistical models based on machine learning is relevant to predict PM2.5 concentrations from meteorological data.
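The three concentration classes the study distinguishes can be written down directly. A minimal sketch of that binning, assuming the <10, 10–25, and >25 µg/m3 boundaries given in the abstract (the function name and boundary handling at exactly 10 and 25 are assumptions):

```python
# Map a PM2.5 concentration (µg/m3) to the class labels used in the
# study: low (<10), moderate (10–25), high (>25).
def pm25_class(concentration_ug_m3):
    if concentration_ug_m3 < 10:
        return "low"
    if concentration_ug_m3 <= 25:
        return "moderate"
    return "high"

print(pm25_class(8.0))   # low
print(pm25_class(17.5))  # moderate
print(pm25_class(40.0))  # high
```

A classifier trained on wind and precipitation features would predict one of these labels rather than an exact concentration, which is why the abstract reports reliability per class pair.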


9th IFIP WG 5.5 International Summer Workshop on Multimodal Interfaces, eNTERFACE 2013 | 2013

Touching Virtual Agents: Embodiment and Mind

Gijs Huisman; Merijn Bruijnes; Jan Kolkmeier; Merel Madeleine Jung; Aduén Darriba Frederiks; Yves Rybarczyk

In this paper we outline the design and development of an embodied conversational agent setup that incorporates an augmented reality screen and tactile sleeve. With this setup the agent can visually and physically touch the user. We provide a literature overview of embodied conversational agents, as well as haptic technologies, and argue for the importance of adding touch to an embodied conversational agent. Finally, we provide guidelines for studies involving the touching virtual agent (TVA) setup.


Frontiers in Psychology | 2012

Effect of Temporal Organization of the Visuo-Locomotor Coupling on the Predictive Steering

Yves Rybarczyk; Daniel Mestre

Studies on the direction of a driver’s gaze while taking a bend show that the individual looks toward the tangent-point of the inside curve. Mathematically, the direction of this point in relation to the car enables the driver to predict the curvature of the road. In the same way, when a person walking in the street turns a corner, his/her gaze anticipates the rotation of the body. A current explanation for this visuo-motor anticipation of locomotion is that the brain, engaged in a steering behavior, runs an internal model of the trajectory that anticipates the completion of the path, and not the contrary. This paper proposes to test this hypothesis by studying the effect of an artificial manipulation of the visuo-locomotor coupling on trajectory prediction. In this experiment, subjects remotely control a mobile robot with a pan-tilt camera. This experimental paradigm is chosen to manipulate, in an easy and precise way, the temporal organization of the visuo-locomotor coupling. The results show that only the visuo-locomotor coupling organized from the visual sensor to the locomotor organs enables (i) a significant smoothness of the trajectory and (ii) a velocity-curvature relationship that follows the “2/3 Power Law.” These findings are consistent with the theory of an anticipatory construction of an internal model of the trajectory. This mental representation, used by the brain as a forward prediction of the formation of the path, seems conditioned by the motor program. The overall results are discussed in terms of the sensorimotor scheme bases of predictive coding.
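The “2/3 Power Law” mentioned in the results relates movement speed to path curvature: angular velocity scales as curvature to the 2/3, which is equivalent to tangential velocity scaling as curvature to the −1/3. A minimal sketch of that relationship (the gain K is a hypothetical constant, not a value from the paper):

```python
# Velocity-curvature relationship behind the "2/3 Power Law":
# angular velocity A = K * C**(2/3), equivalently tangential
# velocity v = K * curvature**(-1/3).
def tangential_velocity(curvature, K=1.0):
    return K * curvature ** (-1.0 / 3.0)

# Tighter curves are taken more slowly:
print(tangential_velocity(8.0))  # ~0.5
print(tangential_velocity(1.0))  # 1.0
```

Checking whether recorded robot trajectories follow this relationship is one way a study can test whether movements were planned as a coherent whole rather than corrected reactively.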


World Conference on Information Systems and Technologies | 2016

3D Markerless Motion Capture: A Low Cost Approach

Yves Rybarczyk

A markerless motion capture technique is described for reconstructing three-dimensional biological motion. In the first stage of the process, an action is recorded with two CCD webcams. Then, the video is divided into frames. For each frame, the 2D coordinates of key locations (body joints) are extracted by a combination of manual identification (mouse pointing) and image processing (blob matching). Finally, an algorithm combines the X-Y coordinates from each camera view to generate a file containing the 3D coordinates of every visible point in the display. This technique has many advantages over other methods. It does not require highly specialized equipment. The computer programming uses open source software. The technology is based on an inexpensive portable device. Moreover, it can be used in different environments (indoor/outdoor) and with different living beings (human/animal). This system has already been tested in a wide range of applications, such as avatar modeling and psychophysical studies.
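Recovering a 3D coordinate from two 2D views rests on triangulation. A simplified sketch, assuming rectified stereo geometry (the paper does not specify its reconstruction algorithm, so the formula and parameter values below are illustrative only): depth Z = f·B/d, with focal length f in pixels, baseline B in metres, and disparity d in pixels.

```python
# Simplified depth recovery for a rectified two-camera setup:
# a point seen at different horizontal positions in the left and
# right images has disparity d, and its depth is Z = f * B / d.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity: no valid match")
    return f_px * baseline_m / disparity_px

# A joint with 10 px disparity, 700 px focal length, 10 cm baseline:
print(depth_from_disparity(700.0, 0.1, 10.0))  # 7.0 metres
```

The inverse relationship between disparity and depth is why cheap webcams work acceptably for near-field motion: close subjects produce large disparities, where pixel-level matching errors matter least.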


IEEE Latin America Transactions | 2016

Educative therapeutic tool to promote the empowerment of disabled people

Yves Rybarczyk; Didier Vernay

Therapeutic patient education (TPE) aims to help patients understand their disease and treatment, and to collaborate in healthcare by taking an active role in the management of a chronic disease. This transition from classical patient compliance to empowerment is a revolutionary concept in medicine. However, this consensual idea is not easy to implement. A patient with a chronic disease is surrounded by several health professionals who use their own technical language, which impedes the interdisciplinary exchange of information between specialists and from the caregiver to the patient. In order to enhance TPE, we have developed a Web-based application that permits a customized and strategic evaluation of neuromuscular deficiencies as a whole. This project's main challenge is to construct a tool that is accessible to ordinary people while still maintaining useful information for professionals. The proposed approach promotes an active involvement of the patient by allowing for self-evaluation. Results from a usability test show a high satisfaction of the participants in using the software.


EAI Endorsed Transactions on Creative Technologies | 2014

Effect of avatars and viewpoints on performance in virtual world: efficiency vs. telepresence

Yves Rybarczyk; Thereza Christina Bahia Coelho; Tiago Cardoso; R. de Oliveira

An increasing number of our interactions are mediated through e-technologies. In order to enhance the human’s feeling of presence in these virtual environments, also known as telepresence, the individual is usually embodied in an avatar. The natural adaptation capabilities of the human being, underlain by the plasticity of the body schema, make body ownership of the avatar possible, in which the user feels more like his/her virtual alter ego than himself/herself. However, this phenomenon only occurs under specific conditions. Two experiments were designed to study the human’s feeling and performance according to a scale of natural relationship between the participant and the avatar. In both experiments, the human-avatar interaction is carried out through a Natural User Interface (NUI), and the individual’s performance is assessed through a behavioural index, based on the concept of affordances, and a questionnaire of presence. The first experiment shows that the feelings of telepresence and ownership seem to be greater when the avatar’s kinematics and proportions are close to those of the user. However, the efficiency in completing the task is higher for a more mechanical and stereotypical avatar. The second experiment shows that the manipulation of the viewpoint induces a similar difference across the sessions. Results are discussed in terms of the neurobehavioural processes underlying performance in virtual worlds, which seem to be based on ownership when the virtual artefact ensures a preservation of sensorimotor contingencies, and on simple geometrical mapping when the conditions become more artificial.


9th International Summer Workshop on Multimodal Interfaces (eNTERFACE) | 2013

Body Ownership of Virtual Avatars: An Affordance Approach of Telepresence

Tiago Coelho; Rita F. de Oliveira; Tiago Cardoso; Yves Rybarczyk

Virtual environments are an increasing trend in today’s society. In this scope, the avatar is the representation of the user in the virtual world. However, this relationship lacks empirical studies regarding the nature of the interaction between avatars and human beings. For that purpose, it was studied how the avatar’s modeled morphology and dynamics affect its control by the user. An experiment was conducted to measure telepresence and ownership in participants who used a Kinect Natural User Interface (NUI). The body ownership of different avatars was assessed through a behavioral parameter, based on the concept of affordances, and a questionnaire of presence. The results show that the feelings of telepresence and ownership seem to be greater when the kinematics and the avatar’s proportions are closer to those of the user.


World Conference on Information Systems and Technologies | 2017

ePHoRt Project: A Web-Based Platform for Home Motor Rehabilitation

Yves Rybarczyk; Jan Kleine Deters; Arían Ramón Aladro Gonzalvo; Mario González; Santiago Villarreal; Danilo Esparza

ePHoRt is a project that aims to develop a web-based system for the remote monitoring of rehabilitation exercises in patients after hip replacement surgery. The tool intends to facilitate and enhance motor recovery, since patients are able to perform the therapeutic movements at home and at any time. As in any rehabilitation program, the time required to recover is significantly diminished when the individual has the opportunity to practice the exercises regularly and frequently. However, the condition of such patients makes transportation to and from medical centers difficult, and many of them cannot afford a private physiotherapist. Thus, low-cost technologies will be used to develop the platform, with the aim of democratizing its access. Taking this constraint into account, a relevant option for recording the patient’s movements is the Kinect motion capture device. The paper describes an experiment that evaluates the validity and accuracy of this visual capture system by comparison with an accelerometer sensor. The results show a significant correlation between both systems and demonstrate that the Kinect is an appropriate tool for the therapeutic purpose of the project.
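The validation the abstract reports hinges on correlating two synchronized measurement streams. A minimal sketch of that agreement measure, Pearson correlation, in pure Python (the sample Kinect and accelerometer values below are illustrative, not data from the study):

```python
# Pearson correlation between two synchronized signals, e.g. a joint
# trajectory from the Kinect versus accelerometer-derived values.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

kinect = [0.10, 0.40, 0.35, 0.80, 0.95]  # illustrative samples
accel  = [0.12, 0.38, 0.40, 0.75, 0.90]
print(round(pearson(kinect, accel), 3))
```

A correlation close to 1 between the two sensors is the kind of evidence that supports using the cheaper, contactless Kinect in place of body-worn accelerometers for home rehabilitation.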

Collaboration


Top Co-Authors

Santiago Villarreal – Universidad de las Américas Puebla
Tiago Cardoso – Universidade Nova de Lisboa
Mario González – Universidad de las Américas Puebla
Danilo Esparza – Universidad de las Américas Puebla
Patricia Acosta-Vargas – Universidad de las Américas Puebla
Rasa Zalakeviciute – Universidad de las Américas Puebla
Isabel L. Nunes – Universidade Nova de Lisboa
Sandra Sanchez-Gordon – National Technical University
Tania Calle-Jimenez – National Technical University