Raquel Viciana-Abad
University of Jaén
Publications
Featured research published by Raquel Viciana-Abad.
IEEE Transactions on Computational Intelligence and AI in Games | 2013
Jozef Legény; Raquel Viciana-Abad; Anatole Lécuyer
Brain-computer interfaces (BCIs) are becoming more available to the general public and have already been used to control applications such as computer games. One disadvantage is that they are not completely reliable. To increase BCI performance, adjustments can be made at a low level, such as in signal processing, as well as at a high level, such as modifying the controller paradigm. In this study, we explore a novel, context-dependent approach for a steady-state visual-evoked potential (SSVEP)-based BCI controller. This controller uses two kinds of behaviour alternation: commands can be added and removed when their use is irrelevant to the context, and the actions resulting from their activation can be weighted by the likelihood of the user's actual intention. The controller has been integrated within a BCI computer game, and its influence on performance and mental workload has been assessed through a pilot experiment. Preliminary results show a workload reduction and a performance improvement with the context-dependent controller, while engagement levels remain unchanged.
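The two behaviour alternations described above can be sketched as a simple decision rule; this is a minimal illustration, not the paper's implementation, and all names, thresholds and data layouts here are assumptions:

```python
import numpy as np

def context_weighted_command(scores, context_mask, context_weights):
    """Pick an SSVEP command under a context-dependent controller (sketch).

    scores          : raw SSVEP detection scores, one per command
    context_mask    : False removes commands irrelevant in the current context
    context_weights : scales each command by how likely the user intends it
    """
    weighted = np.asarray(scores, dtype=float) * np.asarray(context_weights, dtype=float)
    weighted[~np.asarray(context_mask)] = -np.inf  # removed commands never win
    return int(np.argmax(weighted))

# Command 1 has the highest raw score but is disabled by context,
# so the weighted choice falls to command 2.
print(context_weighted_command([0.4, 0.9, 0.7], [True, False, True], [1.0, 1.0, 1.2]))
```

Masking and weighting are kept separate so each alternation can be enabled on its own, mirroring the two behaviours compared in the study.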
Sensors | 2014
Raquel Viciana-Abad; Rebeca Marfil; José Manuel Pérez-Lorenzo; Juan Pedro Bandera; Adrián Romero-Garcés; Pedro Reche-Lopez
One of the main issues in the field of social robotics is endowing robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues from multiple sensors to localize a person. However, most of these fusion mechanisms have been used in fixed systems, such as those deployed in video-conference rooms, and thus may run into difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, there is a lack of evaluations, in real scenarios, of the benefits of audio-visual attention mechanisms compared with audio-only or visual-only approaches. Most tests have been conducted in controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information through Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. The performance of this system is evaluated and compared while taking into account the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interaction framework.
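The core of such multimodal fusion can be sketched as a naive-Bayes combination of per-angle likelihoods from each modality; this is a generic illustration of the idea (assuming the modalities are conditionally independent given the speaker's position), not the paper's actual system:

```python
import numpy as np

def fuse_audio_visual(p_audio, p_visual, prior):
    """Fuse per-angle likelihoods from audio and vision with a prior,
    returning a normalized posterior over candidate azimuth bins."""
    posterior = prior * p_audio * p_visual
    return posterior / posterior.sum()

# Five candidate azimuth bins: audio gives a broad likelihood,
# vision a sharper one; fusion concentrates the posterior.
prior    = np.full(5, 0.2)
p_audio  = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
p_visual = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
post = fuse_audio_visual(p_audio, p_visual, prior)
print(int(np.argmax(post)))  # bin 2, where both modalities agree
```

The benefit over a unimodal system shows up when one modality degrades (e.g. the person leaves the camera's field of view): the product form lets the remaining modality still shape the posterior.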
Multimedia Tools and Applications | 2011
Raquel Viciana-Abad; Arcadio Reyes-Lecuona; Matthieu Poyade; José Escolano
Virtual reality simulations are generally understood to have a high computational cost; hence, they can rarely eliminate all the incoherence among the cross-modal sensory outputs they provide. The main research approaches to date have consisted in technically reducing possible mismatches, but minimal research has been conducted to analyse their influence on human capabilities. The objective of this study is therefore to give virtual reality designers further insight into the negative influence of simulation lags, together with relevant design implications. To show this clearly, we have investigated the importance of coherent sensory feedback by incorporating time delays and spatial misalignments in the feedback the simulation provides in response to participants' actions, mimicking computationally expensive environments. We have also evaluated these misalignments under two typical interaction setups. In particular, the influence of sensory mismatches has been assessed on human factors such as the sense of presence, task performance and delay perception. Our experimental results indicate that the closer the interaction conditions are to real configurations, the higher the accuracy requirements on the sensory feedback. The implications of this study give designers guidelines for prioritising the reduction of mismatches in the sensory cues provided, depending on the simulation's goals.
Archive | 2009
Matthieu Poyade; Arcadio Reyes-Lecuona; Raquel Viciana-Abad
In this chapter, an experimental study is presented that evaluates the importance of binocular disparity in depth perception within a Virtual Environment (VE), which is assumed to be critical in many manipulation tasks. Two assumptions are made in this work: size cues strongly contaminate depth perception mechanisms, and binocular disparity optimizes depth perception for manipulation tasks in a VE. The results outline size cues as a possible cause of depth perception degradation and binocular disparity as an important factor in depth perception, whose influence is altered by position within the VE.
international conference on human-computer interaction | 2009
Matthieu Poyade; Arcadio Reyes-Lecuona; Simo-Pekka Leino; Sauli Kiviranta; Raquel Viciana-Abad; Salla Lind
Haptics is an outstanding technology for providing three-dimensional interaction within Virtual Environments (VEs). Nevertheless, many software solutions are not fully prepared to support haptics. This paper presents a user-friendly integration of Sensable Phantom haptic interfaces into the interactive VE authoring platform Virtools 4.0. The haptic implementation was realized using the Haptic Library API (HLAPI) from OpenHaptics toolkit 2.0, which provides highly satisfactory custom force effects. Integrating Phantom interaction at the end-user development level supports logical interactive VE authoring under Virtools. The haptic implementation was qualitatively assessed in a manual maintenance case, a welding task, as part of the Finnish national project VIRVO. The manipulation enhancements provided by integrating Phantom interaction in Virtools suggest many further improvements for more complicated industrial pilot experiments as part of the European Commission-funded project ManuVAR.
Multimedia Tools and Applications | 2014
Raquel Viciana-Abad; Arcadio Reyes-Lecuona; Alejandro Rosa-Pujazón; José Manuel Pérez-Lorenzo
For some applications based on virtual reality technology, presence and task performance are important factors for validating the experience. Different approaches have been adopted to analyse the extent to which certain aspects of a computer-generated environment may enhance these factors, but mainly in 2D graphical user interfaces. This study explores the influence of different sensory modalities on performance and on the sense of presence experienced within a 3D environment. In particular, we have evaluated visual, auditory and active haptic feedback for indicating the selection of virtual objects. The effect of spatial alignment between the proprioceptive and visual workspaces (co-location) has also been analysed. An experiment was conducted to evaluate the influence of these factors in a controlled 3D environment based on a virtual version of the Simon game. The main conclusions indicate that co-location must be considered in order to determine the sensory needs during interaction within a virtual environment. The study also provides further evidence that the haptic sensory modality influences presence to a greater extent, and that auditory cues can reduce selection times. The conclusions provide initial guidelines that will help designers devise better selection techniques for more complex environments, such as VR-based training simulators, by highlighting optimal configurations of sensory feedback.
International Journal of Innovation and Learning | 2012
Raquel Viciana-Abad; Jose Enrique Munoz-Exposito; José Manuel Pérez-Lorenzo; S. García-Galán; Fernando Parra-Rodríguez
The process of adapting methodologies to the European Credit Transfer System suffers from a lack of practical evaluations within the engineering field. One of the main competencies in telematics engineering studies is the development of skills related to acting as a technical consultant. This competency has traditionally been developed by publishing additional material through learning management systems; the approach followed in this study, however, has promoted its development through the creation of practical guides within a wiki. The evaluation of this activity with students of different courses is presented herein, providing guidelines for its use as a support system for autonomous learning.
Engineering Applications of Artificial Intelligence | 2018
Pedro Reche-Lopez; José Manuel Pérez-Lorenzo; F. Rivas; Raquel Viciana-Abad
In this work, a method for the unsupervised lateral localization of simultaneous sound sources is presented. Following a binaural approach, the kurtosis-driven split-EM algorithm (KDS-EM) implemented is able to estimate the direction of arrival of the relevant sound sources without knowing their number a priori. Information about the localization is integrated over an observation period to serve as an auditory memory in the context of social robotics. Experiments have been conducted using two observation periods: a shorter one, to analyse performance at a reactive level, and a longer one, which allows analysing its contribution as an input to the process of building auditory models of the surroundings that drive a more deliberative behaviour. The system has been tested in real, reverberant environments, achieving good performance thanks to an over-modeling process that is able to isolate the locations of the relevant sources from adverse acoustic effects such as reverberation.
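The kurtosis-driven split test at the heart of such an algorithm can be illustrated in isolation: a mixture component whose assigned samples deviate strongly from Gaussianity probably covers more than one source and should be split. This is a generic sketch of that idea, not the KDS-EM implementation, and the threshold value is an assumption:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for Gaussian data, negative for flat
    bimodal data such as two merged directions of arrival."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d ** 4) / np.mean(d ** 2) ** 2 - 3.0

def should_split(samples, threshold=0.5):
    """Decide whether a component's samples look non-Gaussian enough
    (|excess kurtosis| above a threshold) to warrant splitting it in two."""
    return abs(excess_kurtosis(samples)) > threshold

rng = np.random.default_rng(0)
one_source  = rng.normal(0.0, 5.0, 4000)                 # single DOA: Gaussian-like
two_sources = np.concatenate([rng.normal(-30.0, 5.0, 2000),
                              rng.normal(30.0, 5.0, 2000)])  # two merged DOAs
print(should_split(one_source), should_split(two_sources))
```

Inside an EM loop, this test would be applied to each component after convergence; components that trigger it are replaced by two offset copies and EM is rerun, which is how the source count can grow without being fixed a priori.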
Robot | 2017
Antonio Bandera; Juan Pedro Bandera; Pablo Bustos; Fernando Fernández; Ángel García-Olaya; Javier García-Polo; Ismael García-Varea; Luis J. Manso; Rebeca Marfil; Jesus Martínez-Gómez; Pedro Núñez; José Manuel Pérez-Lorenzo; Pedro Reche-Lopez; Cristina Romero-González; Raquel Viciana-Abad
The goal of the LifeBots project is the study and development of long-life mechanisms that facilitate and improve the integration of robotic platforms in smart homes to support elderly and disabled people. Specifically, the system aims to design, build and validate an assistive ecosystem formed by a person living in a smart home with a social robot as her main interface to a gentler habitat. Achieving this goal requires the use and integration of different technologies and research areas, but also the development of the mechanisms in charge of providing a unified, proactive response to the user's needs. This paper describes some of the mechanisms implemented within the cognitive robotics architecture CORTEX, which integrates deliberative and reactive agents through a common understanding and internalization of outer reality, materialized in a shared representation derived from a formal graph grammar.
Applied Acoustics | 2012
José Manuel Pérez-Lorenzo; Raquel Viciana-Abad; Pedro Reche-Lopez; F. Rivas; José Escolano