Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luis A. Guerrero is active.

Publication


Featured research published by Luis A. Guerrero.


Sensors | 2012

An indoor navigation system for the visually impaired.

Luis A. Guerrero; Francisco Vasquez; Sergio F. Ochoa

Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some have proven useful in real scenarios, they involve considerable deployment effort or rely on artifacts that are not natural for blind users. This paper presents an indoor navigation system designed with usability as the quality requirement to be maximized. The solution identifies a person's position and calculates the velocity and direction of their movements. Using this information, the system determines the user's trajectory, locates possible obstacles along that route, and offers navigation information to the user. The solution has been evaluated in two experimental scenarios. Although the results are not yet sufficient to support strong conclusions, they indicate that the system is suitable for guiding visually impaired people through an unknown built environment.
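The abstract describes deriving velocity and direction from the user's tracked position. A minimal sketch of that step, assuming 2-D position fixes in meters and a known sampling interval (the function name and the heading convention below are illustrative, not taken from the paper):

```python
import math

def velocity_and_heading(p1, p2, dt):
    """Estimate speed (m/s) and direction of movement from two
    consecutive position fixes p1, p2 taken dt seconds apart.
    The heading is returned in degrees, measured counterclockwise
    from the positive x-axis and normalized to [0, 360)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    speed = math.hypot(dx, dy) / dt
    heading = math.degrees(math.atan2(dy, dx)) % 360
    return speed, heading
```

A trajectory estimate would then smooth these per-sample values over a short window before matching them against the map of known obstacles.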


Interaction Design and Children | 2015

Kiteracy: a kit of tangible objects to strengthen literacy skills in children with Down syndrome

Janio Jadán-Guerrero; Javier Jaen; María A. Carpio; Luis A. Guerrero

Kiteracy is an educational kit designed to improve the literacy process of children with Down syndrome by enabling higher levels of interaction. The kit is based on two Spanish literacy methods: global and phonics. In this work, we present a qualitative study based on video-recorded sessions with twelve children from a Down syndrome institution. The study analyzes three forms of interaction: cardboard, multi-touch, and tangible. The tasks carried out by special education teachers and children in the experimental sessions involved working in pairs (teacher and child) and autonomous self-learning (child only). Across the sessions, we identified situations in which the teacher took control in the cardboard version; in the multi-touch version, teacher and child shared control; and in the tangible version, the child took control. In the self-learning sessions, we observed that multi-touch and tangible interaction seemed to offer an enjoyable experience for the children. Surveys and interviews with teachers revealed that tangible objects offered greater adaptability for creating playful reading strategies.


International Journal of Human-Computer Interaction | 2014

Human–Objects Interaction: A Framework for Designing, Developing and Evaluating Augmented Objects

Gustavo López; Mariana López; Luis A. Guerrero; José Bravo

The processes to design, develop, and evaluate augmented objects are complex and should adhere to a software engineering methodology with a user-centered approach. This article presents a framework for creating augmented objects focused on the final users' interaction with these objects. The article applies the framework in three case studies: an augmented Post-it note for important e-mail notifications, augmented pajamas for capturing infants' vital signs, and an augmented door that can capture and deliver messages when the user is out of the office.


Advances in Human Factors and Systems Interaction (AHFE 2017), Advances in Intelligent Systems and Computing, vol. 592, Springer, Cham | 2017

Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces

Gustavo López; Luis Quesada; Luis A. Guerrero

Natural User Interfaces (NUIs) are meant to be used by humans in an intuitive way. However, the industry's rush to deploy speech-based NUIs has had a large impact on the naturalness of such interfaces. This paper presents a usability test of the most prominent and internationally used speech-based NUIs (Alexa, Siri, Cortana, and Google Assistant). We also compared the services each one provides, considering access to music services, agenda, news, weather, to-do lists, and maps or directions, among others. The test was designed by two Human-Computer Interaction experts and executed by eight participants. Results show that even though many services are available, much remains to be done to improve the usability of these systems, especially in moving away from the traditional model of computer use (applications that require parameters to function) and closer to real NUIs.


Sensors | 2015

Creating TUIs Using RFID Sensors—A Case Study Based on the Literacy Process of Children with Down Syndrome

Janio Jadán-Guerrero; Luis A. Guerrero; Gustavo López; Doris Cáliz; José Bravo

Teaching children with intellectual disabilities is a big challenge for most parents and educators. Special education teachers use learning strategies to develop and enhance motivation for complex learning tasks. Literacy acquisition is an essential, life-long skill for a child with intellectual disabilities. In this context, technology can support specific strategies that help children learn to read. This paper introduces a Tangible User Interface (TUI) system based on Radio Frequency Identification (RFID) technology to support literacy for children with Down syndrome. Our proposed system focuses on embedding RFID tags in 3D-printed objects and low-cost toys. The paper describes our experience covering the tags with different materials and the problems that material choice and distance pose for radio wave propagation. The results of a preliminary evaluation in a special education institution showed that the system helps improve the interaction between teachers and children. The use of a TUI seems to provide a physical, sensory experience that helps develop literacy skills in children with Down syndrome.
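At its core, a TUI of this kind needs a table binding each scanned RFID tag to the grapheme its tangible object represents. A minimal sketch of that mapping, with invented tag UIDs and letters (the real kit binds tags embedded in 3D-printed objects to Spanish graphemes):

```python
# Hypothetical mapping from RFID tag UIDs (read from tags embedded in
# 3D-printed letters or toys) to the grapheme each object represents.
TAG_TO_LETTER = {
    "04:A3:2F:1B": "m",
    "04:7C:91:E0": "a",
}

def read_word(tag_uids):
    """Translate the sequence of scanned tag UIDs into the word the
    child assembled; unknown tags are shown as '?'."""
    return "".join(TAG_TO_LETTER.get(uid, "?") for uid in tag_uids)
```

The reported material and distance problems matter exactly here: a tag that fails to read produces a gap in the sequence rather than the intended grapheme.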


Future Generation Computer Systems | 2014

Clairvoyance: A framework to integrate shared displays and mobile computing devices

Christian Berkhoff; Sergio F. Ochoa; José A. Pino; Jesús Favela; Jonice Oliveira; Luis A. Guerrero

Supporting formal and informal meetings with digital information and ubiquitous software systems is becoming increasingly necessary. These meetings require that the integration of participating devices and the information flow among them be as seamless as possible, to avoid jeopardizing the natural interactions among participants. To help address this challenge, this article presents a framework that enables device integration and smooth information flow. The framework, named Clairvoyance, integrates mobile computing devices and large-screen TVs through a mobile ad hoc network, easing the implementation of shared displays for formal and informal meetings. Clairvoyance provides a set of services through an API, which can be used to develop ubiquitous applications that support meetings in particular scenarios. A preliminary evaluation of the framework considered its use in a ubiquitous system that supports social meetings among friends or relatives. According to the developers, the framework was easy to use and provided all the services required for such an application. The resulting system was then used by end-users in simulated meetings. The evaluation results indicate that the Clairvoyance services were suitable for supporting informal meetings, and that device integration and information flow were transparent to the end-users.


Ubiquitous Computing | 2015

Sign Language Recognition Using Leap Motion

Luis Quesada; Gustavo López; Luis A. Guerrero

Several million people around the world use signs as their main means of communication. Advances in technologies to recognize such signs will make computer-supported interpretation of sign languages possible. There are more than 137 different sign languages around the world; therefore, a system that interprets those languages could be beneficial to all, including the Deaf community. This paper presents a sign recognition system based on a hand-tracking device called Leap Motion. The system uses a Support Vector Machine for sign classification. We performed three different evaluations of our system with more than 24 people.
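The abstract names a Support Vector Machine over Leap Motion hand features. A self-contained sketch of the idea, reduced to a binary linear SVM trained by stochastic sub-gradient descent on the hinge loss (the real system is multi-class and uses richer hand-tracking features; the toy feature vectors and hyperparameters below are illustrative):

```python
import random

def train_linear_svm(X, y, epochs=200, lr=0.1, lam=0.01):
    """Train a linear SVM on feature vectors X with labels y in {-1, +1}
    by stochastic sub-gradient descent on the regularized hinge loss."""
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    rng = random.Random(0)  # fixed seed for reproducibility
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:
                # point violates the margin: step toward it, shrink w
                w = [wj + lr * (y[i] * xj - lam * wj)
                     for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:
                # only the regularization term contributes
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Classify a feature vector by the sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

In the paper's setting, each x would be a vector of Leap Motion hand measurements (e.g., finger positions and orientations) and each class one sign; a one-vs-rest set of such classifiers extends this to many signs.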


Ubiquitous Computing | 2014

Notifications for Collaborative Documents Editing

Gustavo López; Luis A. Guerrero

In a collaborative writing session, one of the most important activities is notifying all collaborators about changes in the documents (data awareness). In this paper, we propose the use of an augmented object as the mechanism for notifying changes in shared documents. In this way, collaborators can be aware of modifications even when they are not in front of the computer. A prototype was implemented and evaluated. The augmented object prototype can be used with the Google Docs suite, allowing collaborators to work in a distributed and asynchronous way.


Archive | 2016

Ubiquitous Notification Mechanism to Provide User Awareness

Gustavo López; Luis A. Guerrero

Awareness can be defined as the knowledge that a user has of a particular activity, either individual or collaborative. Good awareness mechanisms provide information to users at the right time, so that they know what they need to do before they are required to do it. Notification mechanisms are a key factor in providing awareness, and ubiquitous technologies can change the paradigm in which notifications are delivered to users. This paper describes the concept and characteristics of awareness, and how researchers have applied different notification mechanisms to provide it. With the lessons learned from four project implementations, we propose a service-based, plug-and-play framework that models different notification mechanisms that can be used to provide user awareness.
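A service-based, plug-and-play framework of the kind described can be sketched as an observer-style registry in which each notification mechanism implements one delivery interface (class and method names here are illustrative, not the authors' API):

```python
class NotificationChannel:
    """Interface every pluggable notification mechanism implements
    (e.g., an augmented object, an e-mail sender, a screen pop-up)."""
    def deliver(self, message):
        raise NotImplementedError

class ConsoleChannel(NotificationChannel):
    """Trivial channel that records delivered messages."""
    def __init__(self):
        self.log = []
    def deliver(self, message):
        self.log.append(message)

class AwarenessService:
    """Routes awareness events to whichever channels are plugged in;
    new mechanisms are added by registering, not by changing the service."""
    def __init__(self):
        self.channels = []
    def register(self, channel):
        self.channels.append(channel)
    def notify(self, message):
        for ch in self.channels:
            ch.deliver(message)
```

The plug-and-play property comes from the single `deliver` interface: swapping an on-screen notifier for an augmented object touches only the registered channel, not the service.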


Mobile Information Systems | 2016

A Mobile Application That Allows Children in the Early Childhood to Program Robots

Kryscia Ramírez-Benavides; Gustavo López; Luis A. Guerrero

Children born in the Information Age are digital natives; this characteristic should be exploited to improve the learning process through the use of technology. This paper addresses the design, construction, and evaluation of TITIBOTS, a programming assistance tool for mobile devices that allows children in early childhood to create programs and execute them using robots. We present the results of using TITIBOTS in different scenarios with children between 4 and 6 years old. The insights obtained in developing and evaluating the tool could be useful when creating applications for children in early childhood. The results were promising: children liked the application and were willing to continue using it to program robots to solve specific tasks, developing 21st-century skills.

Collaboration


Dive into Luis A. Guerrero's collaborations.

Top Co-Authors

Gustavo López
University of Costa Rica

Luis Quesada
University of Costa Rica

Daniel Calvo
University of Costa Rica

Emmanuel Arias
University of Costa Rica