João Sena Esteves
University of Minho
Publications
Featured research published by João Sena Esteves.
International Conference on Industrial Technology | 2006
João Sena Esteves; Adriano Carvalho; Carlos Couto
Triangulation with active beacons is widely used in the absolute localization of mobile robots. The original generalized geometric triangulation algorithm suffers only from the restrictions common to all algorithms that perform self-localization through triangulation. However, it is unable to compute position and orientation when the robot is over the segment of the line through beacons 1 and 2 that starts at beacon 1 and does not contain beacon 2. An improved version of the algorithm allows self-localization even when the robot is over that line segment. Simulation results suggest that a robot is able to localize itself, with small position and orientation errors, over a wide region of the plane, provided measurement uncertainty is small enough.
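The geometric idea behind three-beacon triangulation can be sketched as follows. This is not the authors' generalized geometric triangulation algorithm, only a minimal circle-intersection formulation under assumed conventions (bearings in radians, counterclockwise positive, measured from the robot's heading; all names are illustrative):

```python
import math

def triangulate(b1, b2, b3, a1, a2, a3):
    """Self-localization from bearings a1..a3 to beacons b1..b3.
    Returns (x, y, theta). Assumes a non-degenerate configuration
    (the robot is not on a circle through all three beacons)."""
    def arc_center(p, q, gamma):
        # Center of the circle from which chord p-q is seen under angle
        # gamma (inscribed-angle theorem): the midpoint of p-q shifted
        # along the chord normal by (|pq|/2)*cot(gamma).
        c = 1.0 / math.tan(gamma)
        return ((p[0] + q[0]) / 2 - c * (q[1] - p[1]) / 2,
                (p[1] + q[1]) / 2 + c * (q[0] - p[0]) / 2)

    c12 = arc_center(b1, b2, a2 - a1)  # circle through b1, b2 and the robot
    c23 = arc_center(b2, b3, a3 - a2)  # circle through b2, b3 and the robot
    # Both circles pass through b2 and the robot, so the line joining the
    # centers is the perpendicular bisector of that common chord:
    # the robot is the reflection of b2 across that line.
    dx, dy = c23[0] - c12[0], c23[1] - c12[1]
    vx, vy = b2[0] - c12[0], b2[1] - c12[1]
    t = (vx * dx + vy * dy) / (dx * dx + dy * dy)
    fx, fy = c12[0] + t * dx, c12[1] + t * dy  # foot of the perpendicular
    x, y = 2 * fx - b2[0], 2 * fy - b2[1]
    theta = math.atan2(b1[1] - y, b1[0] - x) - a1
    return x, y, theta
```

With beacons at (0,0), (1,0), (0,1) and the robot at (0.25, 0.25) with heading 0, the bearings computed by `atan2` recover the pose exactly; the degenerate line-segment case discussed in the abstract corresponds to configurations where such a construction loses rank.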
Archive | 2017
Vinicius Corrêa Alves Silva; Filomena Soares; João Sena Esteves; Joana Figueiredo; Cristina P. Santos; Ana Paula da Silva Pereira
Systems and devices that can recognize human affects have been in development for a considerable time. Facial features are usually extracted using video data or a Microsoft Kinect sensor. The present paper proposes an emotion recognition system that uses the recent Intel RealSense 3D sensor, whose reliability and validity in the field of emotion recognition have not yet been studied. This preliminary work focuses on happiness and sadness. The system extracts the user's facial Action Units and head motion data. Then, it uses a Support Vector Machine to automatically classify the emotion expressed by the user. The results point out the adequacy of Intel RealSense for facial feature extraction in emotion recognition systems, as well as the importance of taking head motion into account when recognizing sadness.
International Conference on Ultra Modern Telecommunications | 2015
Filomena Soares; João Sena Esteves; Vítor Carvalho; Gil Lopes; Fabio Barbosa; Patricia Ribeiro
Human-computer interaction through gesture recognition has become an increasingly powerful technology with a wide range of applications. The aim of this research is to take advantage of these resources and tools by developing an application for gesture recognition in Portuguese Sign Language (PSL), focused on helping deaf and/or mute children, as well as hearing children, to learn PSL. PSL involves the use of hands and facial expressions to interact: the output of the communication ("talk") comes from the hands and/or body, and the input comes from visual observation ("hear"). This paper presents a serious game for children in the 1st cycle (Portuguese primary school), between the ages of 6 and 10. The game is based on a story, in order to motivate students to learn Portuguese Sign Language.
IEEE Portuguese Meeting on Bioengineering | 2017
Vinicius Corrêa Alves Silva; Filomena Soares; João Sena Esteves
The face embodies a large portion of human emotionally expressive behaviour. Moreover, facial expressions are used to display emotional states and to manage interactions, being one of the most important channels of non-verbal communication. Although recognizing and displaying emotions is an easy task for the majority of humans, it is very difficult for individuals with Autism Spectrum Disorders (ASD). The present paper summarizes work developed under a Master's Thesis in Industrial Electronics Engineering and Computers at the University of Minho, Portugal. The main goal of the work was the development and application of interactive and assistive technologies to support and promote new adaptive teaching/learning approaches for children with ASD. To this end, a system was proposed that uses the recent Intel RealSense 3D sensor to promote imitation and recognition of facial expressions, using a RoboKind Zeno R50 robot (ZECA) as a mediator in social activities. The system was first validated in the research laboratory and then tested in a school environment with typically developing children and children with ASD.
Archive | 2017
Vinicius Corrêa Alves Silva; Pedro Leite; Filomena Soares; João Sena Esteves; Sandra Costa
This work describes the design, implementation, and preliminary tests of a system that uses a humanoid robot to mimic non-standard upper-limb gestures of a human body. The final goal is to use the robot as a mediator in motor imitation activities with children with special needs, whether with cognitive or motor impairments. A Kinect sensor and the humanoid robot ZECA (a Zeno R-50 robot from Hanson RoboKind) are used to identify and mimic upper-limb gestures. The system allows direct control of the humanoid robot by the user. The proposed system was tested in a laboratory environment with adults with typical development. Furthermore, the system was tested with three children between 4 and 12 years old with motor and cognitive difficulties in a clinical-like environment. The main goal of these preliminary tests was to detect the constraints of the system.
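A typical building block in this kind of skeleton-driven mimicry is converting three tracked joint positions into a joint angle that can be sent to the robot. The sketch below is an illustrative assumption, not the paper's implementation; joint names and conventions are hypothetical:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist positions from a skeleton frame."""
    u = [a[i] - b[i] for i in range(3)]   # vector elbow -> shoulder
    v = [c[i] - b[i] for i in range(3)]   # vector elbow -> wrist
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    cos = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp rounding error
    return math.degrees(math.acos(cos))
```

A bent arm with shoulder above the elbow and wrist in front of it yields 90 degrees, while a fully extended arm yields 180; the robot-side controller would then map such angles onto its own servo ranges.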
International Conference on Ultra Modern Telecommunications | 2016
Vinicius Corrêa Alves Silva; Filomena Soares; João Sena Esteves; Joana Figueiredo; Celina Pinto Leão; Cristina P. Santos; Ana Paula da Silva Pereira
This paper presents the experimental setup and methodology for a real-time emotion recognition system, based on the recent Intel RealSense 3D sensor, to identify six emotions: happiness, sadness, anger, surprise, fear, and neutral. The process includes the construction of a database with 43 participants, based on facial feature extraction and a multiclass Support Vector Machine classifier. The system was first tested offline using a Linear kernel and a Radial Basis Function (RBF) kernel. In the offline evaluation, system performance was quantified in terms of confusion matrix, accuracy, sensitivity, specificity, Area Under the Curve, and Matthews Correlation Coefficient metrics. The RBF kernel achieved the best performance, with an average accuracy of 93.6%. Then, the real-time system was evaluated in a laboratory setup, achieving an overall accuracy of 88%. The time required for the system to perform facial expression recognition is 1–3 ms. The results, obtained by simulation and experimentally, indicate that the present system can recognize facial expressions accurately.
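The evaluation metrics listed above are all derived from the confusion matrix. As a minimal illustration (not the paper's code), the binary, one-vs-rest case per emotion class can be computed as:

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity and Matthews Correlation
    Coefficient from the cells of a binary confusion matrix."""
    total = tp + fp + fn + tn
    acc = (tp + tn) / total
    sens = tp / (tp + fn)   # true positive rate (recall)
    spec = tn / (tn + fp)   # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sens, spec, mcc
```

For example, 40 true positives, 5 false positives, 10 false negatives and 45 true negatives give accuracy 0.85, sensitivity 0.8, specificity 0.9 and an MCC of about 0.70; for the six-class problem these are computed per class against the rest and averaged.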
International Conference on Ultra Modern Telecommunications | 2016
Vinicius Corrêa Alves Silva; Filomena Soares; João Sena Esteves
Social skills are an important issue throughout human life. Therefore, systems that can synthesize emotions, such as virtual characters (avatars) and robotic platforms, are gaining special attention in the literature. In particular, such systems may be important tools for promoting social and emotional competences in children (or adults) with communication/interaction impairments. The present paper proposes a mirroring emotion system that uses the recent Intel RealSense 3D sensor along with a humanoid robot. The system extracts the user's facial Action Units (AUs) and head motion data. It then sends this information to the robot, allowing online imitation. The first tests were conducted in a laboratory environment using the software FaceReader in order to verify correct functioning. Next, a perceptual study was performed to verify the similarity between the expressions of a performer and those of the robot, using a quiz distributed to 59 respondents. Finally, the system was evaluated with typically developing children aged 6 to 9. The robot mimicked the children's emotional facial expressions. The results indicate that the present system can accurately map a user's facial expressions onto the robot online.
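In a mirroring pipeline of this kind, each tracked AU intensity must be translated into a command the robot's face actuators understand. The sketch below is a hypothetical linear mapping for illustration only; it is not the RoboKind API and the value ranges are assumptions:

```python
def au_to_servo(intensity, servo_min, servo_max):
    """Map a facial Action Unit intensity in [0, 1] onto a servo
    position range. (Hypothetical mapping, not the robot's real API.)"""
    a = min(max(intensity, 0.0), 1.0)   # clamp noisy sensor readings
    return servo_min + a * (servo_max - servo_min)
```

Per-frame, each AU extracted by the sensor would be clamped and scaled this way before being streamed to the corresponding facial servo, which is what makes the imitation run online.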
International Conference on Ultra Modern Telecommunications | 2015
Filomena Soares; João Sena Esteves; Vítor Carvalho; Carlos Moreira; Pedro Lourenço
This paper describes the preliminary study and development of a video game for learning the alphabet of Portuguese Sign Language through gestures. The Leap Motion Controller is used, allowing the detection of fingers and hands with high accuracy and resolution. The project is based on the well-known hangman game: the player inputs the gestures that represent the letters of the alphabet in the sign language. The Leap Trainer framework for gesture and pose learning and recognition was used. This framework is helpful to the development of the proposed game, since it allows the recording of a gesture and a subsequent comparison with the recorded gesture, returning a match percentage. There are many sign language learning games, but they are not interactive. The proposed game is aimed primarily at children but is also suitable for adults.
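Leap Trainer's own match score comes from its internal template-matching machinery; as a simplified illustration of how such a percentage could be produced, a candidate gesture path can be normalized for position and scale and compared point-by-point against a recorded template (all names and thresholds here are assumptions):

```python
import math

def _normalize(path):
    """Translate a 2D point path to its centroid and scale it to unit
    radius, so the comparison ignores where and how big the gesture was."""
    n = len(path)
    cx = sum(p[0] for p in path) / n
    cy = sum(p[1] for p in path) / n
    pts = [(x - cx, y - cy) for x, y in path]
    scale = max(math.hypot(x, y) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def match_percent(candidate, template):
    """Match percentage between two equal-length gesture paths:
    100 means identical shape after position/scale normalization."""
    a, b = _normalize(candidate), _normalize(template)
    d = sum(math.hypot(ax - bx, ay - by)
            for (ax, ay), (bx, by) in zip(a, b)) / len(a)
    return max(0.0, 1.0 - d) * 100.0
```

A shifted and uniformly scaled copy of a recorded gesture scores 100%, while tracing the same shape in the opposite direction scores near zero, which is the kind of threshold the game can use to accept or reject a letter.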
Archive | 2019
Vinicius Corrêa Alves Silva; Filomena Soares; João Sena Esteves; Ana Paula da Silva Pereira
In general, humans can express their intents effortlessly. On the contrary, individuals with Autism Spectrum Disorder (ASD) present impairments in this area. Researchers are employing different technological strategies to improve the emotion recognition skills of individuals with ASD. Among those technological solutions, the use of Objects based on Playware Technology (OPT) in the context of serious games is getting increasing attention. Following this trend, the present work proposes the development of an OPT module to be used as an add-on to human-robot interaction with children with ASD in emotion recognition activities. To evaluate the proposed approach, usability tests with typically developing children were conducted in a school environment. Overall, the different evaluations allow estimating how the children interacted with the OPT.
Archive | 2018
Vinicius Corrêa Alves Silva; Filomena Soares; João Sena Esteves; Ana Paula da Silva Pereira
Understanding others' intentions can be a very difficult task for some individuals, in particular individuals with Autism Spectrum Disorder (ASD). ASD is characterized by difficulties in social communication and restricted patterns of behaviour. In order to mitigate the emotion recognition impairments that individuals with ASD usually present, researchers are employing different technological strategies. Among those technological solutions, the use of assistive robots and of Objects based on Playware Technology (OPT) in the context of serious games is getting more attention. Following this trend, the present work targets a novel hybrid approach using a humanoid robot and an OPT. The proposed approach consists of a humanoid robot capable of displaying social behaviours, particularly facial expressions, and an OPT called PlayCube. The system was designed for emotion recognition activities with children with ASD. To evaluate the proposed approach, two pilot studies were performed: one with typically developing children and another with children with ASD. Overall, the different evaluations demonstrated the possible positive outcomes that this child-OPT-robot interaction can produce.