Lucas Silva Figueiredo
Federal University of Pernambuco
Publications
Featured research published by Lucas Silva Figueiredo.
Symposium on 3D User Interfaces | 2012
A. Da Gama; Thiago Chaves; Lucas Silva Figueiredo; Veronica Teichrieb
In general, the motor rehabilitation process can benefit from natural interaction based systems that measure patient performance in order to track progress over time and guide therapy. The aim of this research is therefore to analyze the use of the Kinect sensor as an interaction support tool for rehabilitation systems. The Kinect sensor provides three-dimensional information about the user's body, recognizing the skeleton and joint positions; however, it does not detect specific body movements. For this reason, the correct description of a rehabilitation movement (shoulder abduction, for instance) was implemented in a system prototype. A scoring mechanism was also developed to measure patient performance and to stimulate improvement by displaying positive feedback.
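As a concrete illustration of the kind of movement description and scoring the prototype performs, the sketch below computes a shoulder abduction angle from Kinect-style joint positions and turns it into a progress score. The joint layout, target angle, and scoring rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: shoulder abduction measured as the angle between
# the upper-arm vector and the torso's downward axis, then scored against
# a target range of motion. Joint names and scoring are assumptions.
import numpy as np

def abduction_angle(shoulder, elbow, hip):
    """Angle (degrees) between the upper arm and the downward torso axis."""
    arm = np.asarray(elbow) - np.asarray(shoulder)
    down = np.asarray(hip) - np.asarray(shoulder)
    cos = np.dot(arm, down) / (np.linalg.norm(arm) * np.linalg.norm(down))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def abduction_score(angle_deg, target_deg=90.0):
    """Score in [0, 1]: fraction of the target range of motion achieved."""
    return min(angle_deg / target_deg, 1.0)

# Made-up joint positions (meters, Kinect-style coordinates):
angle = abduction_angle(shoulder=[0.2, 1.4, 2.0],
                        elbow=[0.5, 1.35, 2.0],
                        hip=[0.2, 1.0, 2.0])
print(f"abduction: {angle:.1f} deg, score: {abduction_score(angle):.2f}")
```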
VIII Brazilian Symposium on Games and Digital Entertainment | 2009
Lucas Silva Figueiredo; João Marcelo X. N. Teixeira; Aline Cavalcanti; Veronica Teichrieb; Judith Kelner
This paper presents an open-source framework for developing guitar-based games using gesture interaction. The goal of this work was to develop a robust platform capable of providing seamless real-time interaction, intuitive playability, and coherent sound output. Each part of the proposed architecture is detailed, and a case study is performed to exemplify its ease of use. Tests were also performed to validate the proposed platform, with successful results: all subjects were able to play a simple song within a short amount of time and, most importantly, were satisfied with the experience.
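To make the architecture concrete, here is a minimal sketch of the gesture-to-sound dispatch such a framework needs: a recognized strum event is forwarded immediately to a pluggable audio back end, keeping the interaction in real time. All class and callback names are illustrative assumptions, not the framework's actual API.

```python
# Illustrative sketch only: the recognizer produces strum events and the
# engine dispatches them to whatever audio back end the game supplies.
from dataclasses import dataclass
from typing import Callable

@dataclass
class StrumEvent:
    chord: str        # e.g. "G", "Em"
    timestamp: float  # seconds since the session started

class GuitarGameEngine:
    def __init__(self, play_chord: Callable[[str], None]):
        self.play_chord = play_chord  # audio back end supplied by the game

    def on_gesture(self, event: StrumEvent) -> None:
        # Dispatch immediately so the sound output stays coherent with
        # the player's strum (the "seamless real-time" requirement).
        self.play_chord(event.chord)

engine = GuitarGameEngine(play_chord=lambda chord: print(f"play {chord}"))
engine.on_gesture(StrumEvent(chord="G", timestamp=1.25))
```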
14th Symposium on Virtual and Augmented Reality | 2012
Thiago Chaves; Lucas Silva Figueiredo; Alana Elza Fontes Da Gama; Cristiano de Araújo; Veronica Teichrieb
The computational recognition of human body gestures has been a challenge for several years. Nowadays, thanks to the development of RGB-D cameras, it is possible to acquire data that represents a human pose over time. These cameras provide only raw data, however, so identifying a specific pre-defined user movement in real time without relying on offline training remains a problem. In several cases the real-time requirement is critical, especially when a movement must be detected and analyzed continuously, as in the tracking of physiotherapeutic movements or exercises. This paper presents a simple and fast technique to recognize human movements using the data provided by an RGB-D camera. Moreover, it describes a way to identify not only whether the performed motion is valid, i.e., belongs to a set of pre-defined gestures, but also at which point the motion currently is (beginning, end, or somewhere in the middle). The precision of the proposed technique can be set to suit the needs of the application, and gesture registration is simple and fast, making it easy to set up new motions when necessary. The proposed technique has been validated through a set of tests focused on analyzing its robustness under a series of variations during the interaction, such as fast and complex gestures.
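A minimal sketch of the matching idea described above, assuming a gesture is registered as an ordered sequence of key poses: the current pose is matched to its nearest key pose within a distance tolerance (the precision knob), and the matched index gives the position within the motion (beginning, middle, or end). The pose representation and tolerance values are assumptions for illustration.

```python
import numpy as np

def match_progress(template, pose, tolerance=0.15):
    """Return progress in [0, 1] along a registered gesture, or None.

    template:  (N, D) array of key poses recorded during registration.
    pose:      (D,) current pose vector from the RGB-D skeleton.
    tolerance: max distance for a valid match (the 'precision' knob).
    """
    dists = np.linalg.norm(template - pose, axis=1)
    i = int(np.argmin(dists))
    if dists[i] > tolerance:
        return None                    # pose does not belong to the gesture
    return i / (len(template) - 1)     # 0 = beginning, 1 = end

# Toy 2-D pose space: a gesture registered as 5 key poses along a line.
template = np.linspace([0.0, 0.0], [1.0, 1.0], num=5)
print(match_progress(template, np.array([0.52, 0.48])))  # ~0.5 (middle)
print(match_progress(template, np.array([3.0, 3.0])))    # None (invalid)
```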
Human Factors in Computing Systems | 2015
Lucas Silva Figueiredo; Mariana Pinheiro; Edvar Vilar Neto; Veronica Teichrieb
In Science Fiction (Sci-Fi) movies, filmmakers try to anticipate trends and new forms of interaction, creating metaphors that allow their characters to interact with devices and futuristic environments. These devices and metaphors are worthwhile targets of research, considering they have proven useful before; for example, the Star Trek communicator (1966) resembles the Motorola StarTAC phone (1996). Moreover, the impact of these new interfaces on the audience may indicate their expectations regarding future gesture interactions. Thus, the goal of this work is to collect and expose a compilation of hand interactions in Sci-Fi movies, providing an open catalog to researchers as a resource for future discussions. Data visualization and analysis are facilitated through an open-source web application. Additionally, we classify the collected data according to a series of established criteria and our own knowledge.
International Symposium on Mixed and Augmented Reality | 2013
Lucas Silva Figueiredo; Ronaldo Dos Anjos; Jorge Eduardo Falcao Lindoso; Edvar Vilar Neto; Rafael Alves Roberto; Manoela Silva; Veronica Teichrieb
In this work in progress we address the problem of interacting with augmented objects. A bare-hand tracking technique is developed which, allied to gesture recognition heuristics, enables interaction with augmented objects in an intuitive way. The tracking algorithm uses a flock-of-features approach that tracks both hands in real time. The interaction occurs through the execution of grasp and release gestures. Physics simulation and photorealistic rendering are added to the pipeline so that the tool provides more coherent feedback, making the virtual objects look and respond more like real ones. The pipeline was tested through specific tasks designed to analyze its performance regarding ease of use, precision, and response time.
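The abstract does not spell out the grasp/release heuristics, but one plausible sketch, assuming a flock of 2D feature points tracked on the hand, is to watch the flock's spread: it contracts when the hand closes and expands when it opens, with hysteresis to avoid flickering between the two states. The thresholds below are made-up values.

```python
import numpy as np

def flock_spread(points):
    """Mean distance of the tracked hand features from their centroid."""
    pts = np.asarray(points, dtype=float)
    return float(np.mean(np.linalg.norm(pts - pts.mean(axis=0), axis=1)))

class GraspDetector:
    """Grasp when the flock contracts below a threshold; release when it
    expands past a larger one (hysteresis avoids state flicker)."""
    def __init__(self, grasp_thresh=12.0, release_thresh=20.0):
        self.grasp_thresh = grasp_thresh      # pixels, tune per camera
        self.release_thresh = release_thresh
        self.grasping = False

    def update(self, points):
        spread = flock_spread(points)
        if not self.grasping and spread < self.grasp_thresh:
            self.grasping = True
        elif self.grasping and spread > self.release_thresh:
            self.grasping = False
        return self.grasping

np.random.seed(0)
open_hand = np.random.uniform(-25, 25, size=(30, 2)) + 320
closed_hand = open_hand * 0.3 + 320 * 0.7  # features converge on a fist
detector = GraspDetector()
print(detector.update(open_hand), detector.update(closed_hand))  # False True
```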
XVIII Symposium on Virtual and Augmented Reality (SVR) | 2016
João Gabriel Abreu; João Marcelo X. N. Teixeira; Lucas Silva Figueiredo; Veronica Teichrieb
The successful recognition of sign language gestures by computer systems would greatly improve communication between deaf and hearing people. This work evaluates the use of electromyogram (EMG) data provided by the Myo armband as features for classifying 20 stationary letter gestures from the Brazilian Sign Language (LIBRAS) alphabet. Classification was performed by binary Support Vector Machines (SVMs) trained with a one-vs-all strategy. The results show that it is possible to identify the gestures, but substantial limitations were found that need to be addressed in further studies.
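A hedged sketch of the classification setup described above: binary SVMs arranged one-vs-all over EMG features. The feature choice (per-channel RMS over a window of the Myo's 8 EMG channels) and the synthetic training data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def rms_features(emg_window):
    """Root-mean-square per channel of a (samples, 8) EMG window."""
    return np.sqrt(np.mean(np.square(emg_window), axis=0))

# Synthetic stand-in data; the paper covers 20 LIBRAS letter gestures.
rng = np.random.default_rng(42)
letters = ["A", "B", "C"]
X = np.vstack([rms_features(rng.normal(loc=i, size=(50, 8)))
               for i, _ in enumerate(letters) for _ in range(30)])
y = np.repeat(letters, 30)

# One binary SVM per letter, trained one-vs-all.
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
print(clf.predict(X[:1]))  # -> ['A'] on this toy data
```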
Expert Systems with Applications | 2017
João Paulo Lima; Rafael Alves Roberto; Francisco Simões; Mozart William Santos Almeida; Lucas Silva Figueiredo; João Marcelo Teixeira; Veronica Teichrieb
Highlights: a markerless tracking system for augmented reality targeting the automotive sector; system evaluation during the Volkswagen/ISMAR Tracking Challenge 2014; additional studies in similar competition scenarios created by the authors; a system suitable for tracking vehicle exteriors and parts with high precision.
This paper presents a complete natural-feature-based tracking system that supports the creation of augmented reality applications focused on the automotive sector. The proposed pipeline encompasses scene modeling, system calibration, and tracking steps. An augmented reality application was built on top of the system for indicating the location of 3D coordinates in a given environment, which can serve many different applications in cars, such as a maintenance assistant, an intelligent manual, and many others. An analysis of the system was performed during the Volkswagen/ISMAR Tracking Challenge 2014, which aimed to evaluate state-of-the-art tracking approaches on the basis of requirements encountered in automotive industrial settings. A similar competition environment was also created by the authors in order to allow further studies. Evaluation results showed that the system allowed users to correctly identify points in tasks that involved tracking a rotating vehicle, tracking data on a complete vehicle, and tracking with high accuracy. The evaluation also clarified the applicability limits of texture-based approaches in the textureless automotive environment, a problem not frequently addressed in the literature. To the best of the authors' knowledge, this is the first work addressing the analysis of a complete tracking system for augmented reality focused on the automotive sector that could be tested and validated in a major benchmark like the Volkswagen/ISMAR Tracking Challenge, providing useful insights on the development of such expert and intelligent systems.
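The core pose estimation step of such a natural-feature tracking pipeline can be sketched with standard OpenCV calls: given 2D-3D correspondences between detected image features and the vehicle model, estimate the camera pose, then project the 3D coordinate to be indicated into the current frame. The point values and camera intrinsics below are made up for illustration; the authors' pipeline is considerably more elaborate.

```python
import cv2
import numpy as np

# Known 3-D feature points on the vehicle model (meters, model frame)...
object_pts = np.array([[0, 0, 0], [0.5, 0, 0], [0.5, 0.3, 0],
                       [0, 0.3, 0], [0.25, 0.15, 0.1], [0.1, 0.25, 0.05]],
                      dtype=np.float32)
# ...and their matched 2-D detections in the current frame (pixels).
image_pts = np.array([[320, 240], [420, 238], [421, 180],
                      [322, 182], [371, 208], [341, 190]], dtype=np.float32)

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)

# Project the 3-D coordinate to highlight (e.g. a maintenance point).
target = np.array([[0.3, 0.1, 0.05]], dtype=np.float32)
pt2d, _ = cv2.projectPoints(target, rvec, tvec, K, None)
print("draw marker at pixel:", pt2d.ravel())
```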
IEEE Symposium on Security and Privacy | 2016
Lucas Silva Figueiredo; Benjamin Livshits; David Molnar; Margus Veanes
With the rise of sensors such as the Microsoft Kinect, the Leap Motion, and hand motion sensors in phones (e.g., the Samsung Galaxy S6), gesture-based interfaces have become practical. Unfortunately, today, to recognize such gestures, applications must have access to depth and video of the user, exposing sensitive data about the user and her environment. Besides these privacy concerns, there are also security threats in sensor-based applications, such as multiple applications registering the same gesture, leading to a conflict (akin to clickjacking on the web). We address these security and privacy threats with Prepose, a novel domain-specific language (DSL) for easily building gesture recognizers, combined with a system architecture that protects privacy, security, and reliability even with untrusted applications. We run Prepose code in a trusted core and return only specific gesture events to applications. Prepose is specifically designed to enable precise and sound static analysis using SMT solvers, allowing the system to check security and reliability properties before running a gesture recognizer. We demonstrate that Prepose is expressive by creating gestures in three representative domains: physical therapy, tai chi, and ballet. We further show that runtime gesture matching in Prepose is fast, creating no noticeable lag, as measured on traces from Microsoft Kinect runs. To show that gesture checking at the time of submission to a gesture store is fast, we developed four Z3-based static analyses that test basic gesture safety and internal validity, ensure that so-called protected gestures are not overridden, and check for inter-gesture conflicts. Our static analyses scale well in practice: safety checking takes under 0.5 seconds per gesture; average validity checking time is only 188 ms; and in 97% of cases conflict detection takes under 5 seconds, with only one query taking longer than 15 seconds.
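As a toy illustration of the SMT-based conflict detection (not Prepose's actual encoding), one can express two gestures' pose constraints over a shared joint angle in Z3 and ask whether any single pose satisfies both at once; a satisfiable query means the gestures conflict.

```python
from z3 import And, Real, Solver, sat

arm_elevation = Real("arm_elevation")  # degrees; name is illustrative

# Pose constraints for two hypothetical gestures over the same joint.
wave_hello = And(arm_elevation >= 70, arm_elevation <= 110)
arms_down = And(arm_elevation >= 0, arm_elevation <= 20)

s = Solver()
s.add(wave_hello, arms_down)
if s.check() == sat:
    print("conflict: one pose triggers both gestures:", s.model())
else:
    print("no conflict between the two gestures")  # taken for this pair
```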
International Conference of Design, User Experience, and Usability | 2014
Lucas Silva Figueiredo; Edvar Vilar Neto; Ermano Arruda; João Marcelo X. N. Teixeira; Veronica Teichrieb
The goal of this work is to analyze the user experience of the motion parallax effect on common displays, such as monitors, TVs, and mobile devices. The analysis was done individually for each device and comparatively across devices, to understand the impact of such media on immersion. Moreover, we focus on understanding users' impressions of the change from the usual passive visualization paradigm to the interactive visualization made possible by the motion parallax effect.
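The rendering behind the motion parallax effect is typically head-coupled ("fish tank VR") perspective: the tracked head position defines an off-axis viewing frustum onto the fixed screen rectangle. The sketch below computes such frustum bounds; the screen dimensions and head positions are illustrative values.

```python
def off_axis_frustum(head, screen_w, screen_h, near):
    """Frustum (left, right, bottom, top) at the near plane for a head at
    (x, y, z) relative to the screen center; dimensions in meters."""
    x, y, z = head  # z = distance from the screen plane, z > near > 0
    scale = near / z
    left = (-screen_w / 2 - x) * scale
    right = (screen_w / 2 - x) * scale
    bottom = (-screen_h / 2 - y) * scale
    top = (screen_h / 2 - y) * scale
    return left, right, bottom, top

# Head centered vs. shifted 20 cm to the right: the frustum skews, which
# is what produces the parallax as the viewer moves.
print(off_axis_frustum((0.0, 0.0, 0.6), screen_w=0.52, screen_h=0.32, near=0.1))
print(off_axis_frustum((0.2, 0.0, 0.6), screen_w=0.52, screen_h=0.32, near=0.1))
```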
International Conference of Design, User Experience, and Usability | 2014
Lucas Silva Figueiredo; Mariana Pinheiro; Edvar Vilar Neto; Thiago Menezes; João Marcelo X. N. Teixeira; Veronica Teichrieb; Pedro Alessio; Daniel Queiroz de Freitas
Here we address the problem of navigating virtual environments shown on fixed displays (e.g., projections and TVs) using natural gestures. Gesture metaphors have proven to be a powerful tool for human-computer interaction, with examples ranging from smartphones to state-of-the-art projects like HoloDesk (from Microsoft Research). However, when gestures are used for navigation in virtual environments, a specific limitation arises with respect to the user's movement in real space. The gestures should give the user a way of turning the virtual camera without losing sight of the screen. Moreover, the user must be able to move long distances in the virtual environment without crossing real-world boundaries and without becoming fatigued.
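One common way to satisfy both constraints is a rate-control (joystick-like) mapping: the hand's offset from a rest position in front of the torso drives a continuous forward speed and camera yaw rate, so the user keeps facing the screen and never runs out of physical space. The sketch below assumes Kinect-style skeleton coordinates and made-up gains; it is not the authors' design.

```python
import numpy as np

def navigation_velocity(hand, torso, dead_zone=0.1, gain=2.0):
    """Map the hand's offset from the torso to (forward_speed, yaw_rate).
    Inside the dead zone nothing moves, so the user can rest without
    drifting through the virtual environment."""
    offset = np.asarray(hand) - np.asarray(torso)
    forward = offset[2]  # push the hand forward to move, pull back to reverse
    lateral = offset[0]  # move the hand sideways to turn the virtual camera

    def shaped(v):
        return 0.0 if abs(v) < dead_zone else gain * (v - np.sign(v) * dead_zone)

    return shaped(forward), shaped(lateral)

# Hand pushed 0.4 m forward and 0.25 m to the right of the torso:
print(navigation_velocity(hand=[0.25, 1.2, 0.9], torso=[0.0, 1.2, 0.5]))
# -> (0.6, 0.3): move forward while turning right, body still facing the screen
```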