Publications

Featured research published by Luis Quesada.


Advances in Human Factors and Systems Interaction. AHFE 2017. Advances in Intelligent Systems and Computing, vol 592. Springer, Cham | 2017

Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces

Gustavo López; Luis Quesada; Luis A. Guerrero

Natural User Interfaces (NUIs) are supposed to be usable by humans in an intuitive way. However, the industry's rush to deploy speech-based NUIs has taken a toll on the naturalness of such interfaces. This paper presents a usability test of the most prominent and widely used speech-based NUIs (i.e., Alexa, Siri, Cortana, and Google Assistant). A comparison of the services each one provides was also performed, considering access to music services, agendas, news, weather, to-do lists, and maps or directions, among others. The test was designed by two Human-Computer Interaction experts and executed by eight participants. Results show that even though many services are available, much remains to be done to improve the usability of these systems, especially in moving away from the traditional use of computers (applications that require parameters to function) and closer to real NUIs.


Ubiquitous Computing | 2015

Sign Language Recognition Using Leap Motion

Luis Quesada; Gustavo López; Luis A. Guerrero

Several million people around the world use signs as their main means of communication. Advances in technologies to recognize such signs will make computer-supported interpretation of sign languages possible. There are more than 137 different sign languages around the world; therefore, a system that interprets those languages could be beneficial to all, including the Deaf community. This paper presents a system for sign recognition based on a hand-tracking device called Leap Motion. The system uses a Support Vector Machine for sign classification. We performed three different evaluations of our system with over 24 people.
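
As an illustration of the classification stage described above, here is a minimal Python sketch using scikit-learn: an SVM trained on feature vectors extracted from hand-tracking frames. The feature layout and the random placeholder data are assumptions for illustration; the paper does not publish its exact features or code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features = 200, 15                # e.g. 5 fingertips x (x, y, z); hypothetical layout
X = rng.normal(size=(n_samples, n_features))   # placeholder feature vectors
y = rng.integers(0, 4, size=n_samples)         # placeholder labels for 4 signs

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf")                        # kernel choice is an assumption
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```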


Ambient Intelligence | 2017

Automatic Recognition of the American Sign Language Fingerspelling Alphabet to Assist People Living with Speech or Hearing Impairments

Luis Quesada; Gustavo López; Luis A. Guerrero

Sign languages are natural languages used mostly by deaf and hard of hearing people. Development opportunities for people with these disabilities are limited by communication barriers. Advances in technology to recognize signs and gestures will make computer-supported interpretation of sign languages possible. There are more than 137 different sign languages around the world; therefore, a system that interprets them could be beneficial to all, especially the Deaf community. This paper presents a system for sign recognition based on hand-tracking devices (Leap Motion and Intel RealSense). The system uses a Support Vector Machine for sign classification. Different evaluations of the system were performed with over 50 individuals, and remarkable recognition accuracy was achieved with selected signs (100% accuracy was achieved recognizing some signs). Furthermore, the potential of the Leap Motion and the Intel RealSense as hand-tracking devices for sign language recognition was explored using the American Sign Language fingerspelling alphabet.
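
The "100% accuracy on selected signs" result implies a per-sign evaluation. The following hedged Python sketch shows how per-sign accuracy can be read off a confusion matrix; the letters, features, and labels below are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC

letters = list("ABCDE")                    # small subset of the ASL alphabet
rng = np.random.default_rng(1)
X = rng.normal(size=(250, 20))             # placeholder hand-tracking features
y = rng.integers(0, len(letters), size=250)

clf = SVC().fit(X[:200], y[:200])          # train on the first 200 samples
cm = confusion_matrix(y[200:], clf.predict(X[200:]),
                      labels=list(range(len(letters))))
for i, letter in enumerate(letters):
    total = cm[i].sum()                    # how often this letter appeared
    acc = cm[i, i] / total if total else float("nan")
    print(f"sign {letter}: {acc:.0%} ({cm[i, i]}/{total})")
```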


Advances in Design for Inclusion (pp 463-473). Springer, Cham | 2016

Web Accessibility for People with Reduced Mobility: A Case Study Using Eye Tracking

Emmanuel Arias; Gustavo López; Luis Quesada; Luis A. Guerrero

Traditional web interfaces often rely on keyboard and mouse input. This forces people with reduced mobility to adapt or not use the applications at all. This paper proposes a prototype to increase web accessibility for people with reduced mobility. Our prototype enables eye-gaze-based interaction between the user and a web browser displaying a website compliant with the Web Content Accessibility Guidelines proposed by the World Wide Web Consortium. We implemented a plug-in that adds functionality for page navigation, cursor control, and text input.
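
One primitive such a plug-in needs is deciding when a gaze fixation counts as a click. Below is a small, self-contained Python sketch of dwell-based selection; the sample format, coordinates, and one-second threshold are illustrative assumptions, since the plug-in's source is not included here.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float
    y: float
    t: float          # timestamp in seconds

def dwell_click(samples, target, dwell_s=1.0):
    """Return True if gaze stays inside `target` (x0, y0, x1, y1) for dwell_s seconds."""
    x0, y0, x1, y1 = target
    start = None
    for s in samples:
        inside = x0 <= s.x <= x1 and y0 <= s.y <= y1
        if inside:
            start = s.t if start is None else start
            if s.t - start >= dwell_s:
                return True
        else:
            start = None   # fixation broken, reset the dwell timer
    return False

# Example: gaze held on a 100x40 px button for ~1.2 s triggers a click.
button = (300, 200, 400, 240)
stream = [GazeSample(350, 220, 0.1 * i) for i in range(13)]
print(dwell_click(stream, button))   # True
```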


International Workshop on Ambient Assisted Living | 2015

A Gesture-Based Interaction Approach for Manipulating Augmented Objects Using Leap Motion

Gustavo López; Luis Quesada; Luis A. Guerrero

Ambient Intelligence and Ubiquitous Computing are moving the world toward a reality where almost every object interacts with the environment, either via sensors or actuators, and users must learn how to interact with such systems. This paper presents a gesture-based interaction approach for manipulating such objects. We developed a prototype using a Leap Motion controller as a hand-tracking device and a Support Vector Machine as a classifier to distinguish between gestures. Our system was evaluated by 12 users with over 10 commands. We also present a review of gesture-based interaction and compare other proposals with ours.
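
Once the classifier labels a gesture, it still has to be mapped to an action on an augmented object. The Python sketch below shows one plausible dispatch layer; the gesture names and actions are hypothetical, not the paper's actual command set.

```python
from typing import Callable, Dict

def make_dispatcher(commands: Dict[str, Callable[[], None]]):
    """Map classifier output (a gesture label) to a command on an augmented object."""
    def dispatch(gesture_label: str) -> None:
        action = commands.get(gesture_label)
        if action is None:
            print(f"unrecognized gesture: {gesture_label}")
        else:
            action()
    return dispatch

dispatch = make_dispatcher({
    "swipe_left":  lambda: print("lamp: off"),       # hypothetical commands
    "swipe_right": lambda: print("lamp: on"),
    "circle":      lambda: print("thermostat: +1"),
})
dispatch("circle")    # -> thermostat: +1
dispatch("pinch")     # -> unrecognized gesture: pinch
```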


Ubiquitous Computing | 2017

Multiplatform Career Guidance System Using IBM Watson, Google Home and Telegram

Daniel Calvo; Luis Quesada; Gustavo López; Luis A. Guerrero

Even with several tests available to provide clarity in choosing a career path, the decision remains a tough one. Most of the available tests are monotonous, making it tedious to complete them, or are just plain boring. In this paper, however, we present a new and different approach to career guidance systems. We use Google Home as a speech-based interface and Telegram as a text-based interface to generate a conversation between users and a career-guidance bot. The idea is to provide an easy and friendly interface with an interactive user experience while gathering the data required to provide career guidance. To evaluate the system, we used the scenario of the University of Costa Rica's Computer Science and Informatics Department, where students must decide between three possible emphases: Software Engineering, Information Technologies, and Computer Science. A usability and user experience evaluation of the system was performed with the participation of 72 freshmen.
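
To make the conversational flow concrete, here is a toy Python sketch of one way such a bot could score answers toward the three emphases named in the abstract. The questions and the scoring scheme are invented for illustration; only the emphasis names come from the paper.

```python
# Hypothetical question bank: each answer adds weight to one emphasis.
QUESTIONS = [
    ("Do you enjoy building software products end to end?", "Software Engineering"),
    ("Do you like configuring networks and infrastructure?", "Information Technologies"),
    ("Do you enjoy algorithms and mathematical proofs?", "Computer Science"),
]

def recommend(answers):
    """answers: one boolean per question, in order."""
    scores = {"Software Engineering": 0,
              "Information Technologies": 0,
              "Computer Science": 0}
    for (_question, emphasis), liked in zip(QUESTIONS, answers):
        scores[emphasis] += int(liked)
    return max(scores, key=scores.get)   # highest-scoring emphasis wins

print(recommend([False, False, True]))   # -> Computer Science
```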


International Conference on Health Informatics | 2017

A Model Proposal for Augmented Reality Game Creation to Incentivize Physical Activity

José Antonio Cicció; Luis Quesada



International Conference on Applied Human Factors and Ergonomics | 2017

Framework for Creating Audio Games for Intelligent Personal Assistants

José Antonio Cicció; Luis Quesada

Intelligent Personal Assistants (IPAs) have experienced significant market growth and, consequently, increased development, as more people become interested in devices that run this software. This opens possibilities for developing new games that people may be interested in. This article presents a framework proposal for creating audio games on IPA-enabled devices. In order to evaluate the framework, a prototype was designed and presented to 30 participants. The results obtained indicated that 97.6% of the interviewees were attracted to the idea of playing a game using an IPA.
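
The abstract does not detail the framework, but an audio game for an IPA is naturally modeled as a graph of scenes with spoken prompts and voice-driven transitions. The following Python sketch shows that structure under stated assumptions; the scene format is hypothetical, not the paper's design.

```python
# Hypothetical scene graph: each scene has a prompt the IPA speaks and
# voice choices leading to the next scene.
SCENES = {
    "start": {"prompt": "You wake in a dark cave. Go left or right?",
              "choices": {"left": "river", "right": "exit"}},
    "river": {"prompt": "You hear running water. Swim or go back?",
              "choices": {"swim": "exit", "back": "start"}},
    "exit":  {"prompt": "You see daylight. You win!",
              "choices": {}},
}

def step(scene_id: str, utterance: str) -> str:
    """Advance the game given the player's recognized utterance."""
    choices = SCENES[scene_id]["choices"]
    return choices.get(utterance.lower(), scene_id)   # stay put on unknown input

state = "start"
for said in ["left", "swim"]:
    state = step(state, said)
    print(SCENES[state]["prompt"])
```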


International Conference on Applied Human Factors and Ergonomics | 2017

Developing a Proxy Service to Bring Naturality to Amazon’s Personal Assistant “Alexa”

Luis Carvajal; Luis Quesada; Gustavo López; Jose A. Brenes

Amazon's Alexa is an intelligent personal assistant developed to be used jointly with a Bluetooth speaker and microphone hardware called Amazon Echo. Even though Alexa is supposed to be a natural user interface, its use is not very natural: to use Alexa, a user must follow a very rigid and structured way of providing commands for the system to achieve its goal. In this paper, we propose a proxy service called "Plis". The service was developed as a Skill for the Amazon Echo, so a user can say "Alexa, plis" and our functionality starts its job. Our skill determines what the user is asking for from the natural way in which they speak. With that information, we create a query that is then forwarded to other skills that already provide the required functionality.
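
A minimal sketch of the proxy idea, assuming rule-based rewriting: free-form phrasing captured after "Alexa, plis" is matched against patterns and rewritten into the rigid command an existing skill expects. The patterns and templates below are invented for illustration; the real "Plis" rules are not public.

```python
import re
from typing import Optional

REWRITES = [
    # (pattern over natural phrasing, rigid downstream command template)
    (re.compile(r"(?:can you )?play (?:some )?(.+)", re.IGNORECASE),
     "ask music player to play {0}"),
    (re.compile(r"what(?:'s| is) the weather(?: like)? in (.+)", re.IGNORECASE),
     "ask weather skill for the forecast in {0}"),
]

def to_structured(utterance: str) -> Optional[str]:
    """Rewrite a free-form request into a rigid skill invocation."""
    for pattern, template in REWRITES:
        match = pattern.fullmatch(utterance.strip())
        if match:
            return template.format(*match.groups())
    return None   # no known skill handles this request

print(to_structured("can you play some jazz"))
# -> ask music player to play jazz
```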


Ubiquitous Computing | 2016

Sign Language Recognition Model Combining Non-manual Markers and Handshapes

Luis Quesada; Gabriela Marín; Luis A. Guerrero

People with disabilities have fewer opportunities, and technological developments should be used to help them have more. In this paper we present partial results of a research project that aims to help people with disabilities, specifically deaf and hard of hearing people. We present a sign language recognition model that takes advantage of natural user interfaces (NUIs) and a classification algorithm (Support Vector Machines). Moreover, we combine handshapes (signs) and non-manual markers (associated with emotions and face gestures) in the recognition process to enhance the recognition of sign language expressivity. Additionally, a representation of non-manual markers is proposed. A model evaluation is also reported.
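
A minimal sketch of the fusion step, assuming simple feature concatenation: handshape features and non-manual-marker features are stacked into one vector before SVM classification. Dimensions and data are placeholders; the paper's exact representation is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
hand = rng.normal(size=(120, 15))    # placeholder handshape features
face = rng.normal(size=(120, 6))     # placeholder non-manual markers
X = np.hstack([hand, face])          # fused feature representation
y = rng.integers(0, 3, size=120)     # placeholder sign labels

clf = SVC().fit(X[:100], y[:100])    # train on the first 100 samples
print("held-out accuracy on fused features:", clf.score(X[100:], y[100:]))
```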

Collaboration

Dive into Luis Quesada's collaborations.

Top Co-Authors

Gustavo López (University of Costa Rica)
Daniel Calvo (University of Costa Rica)
Emmanuel Arias (University of Costa Rica)
Jose A. Brenes (University of Costa Rica)
Luis Carvajal (University of Costa Rica)
Adrian Lara (University of Nebraska–Lincoln)