Publication


Featured research published by Julián Flores.


PLOS ONE | 2012

Virtual Reality as a Tool for Evaluation of Repetitive Rhythmic Movements in the Elderly and Parkinson's Disease Patients

Pablo Arias; Verónica Robles-García; Gabriel Sanmartín; Julián Flores; Javier Cudeiro

This work presents an immersive Virtual Reality (VR) system to evaluate, and potentially treat, the alterations in rhythmic hand movements seen in Parkinson's disease (PD) and the elderly (EC), by comparison with healthy young controls (YC). The system integrates the subjects into a VR environment by means of a Head Mounted Display, such that subjects perceive themselves in a virtual world consisting of a table within a room. In this experiment, subjects are presented in first-person perspective, so that an avatar reproduces the finger tapping movements they perform. The task, known as the finger tapping test (FT), was performed by all three subject groups, PD, EC and YC. FT was carried out by each subject on two different days (sessions), one week apart. In each session all subjects performed FT in the real world (FTREAL) and in VR (FTVR); each mode was repeated three times in randomized order. During FT both the tapping frequency and the coefficient of variation of the inter-tap interval were registered. FTVR was a valid test to detect differences in rhythm formation between the three groups. Intra-class correlation coefficients (ICC) and mean differences between days for FTVR (for each group) showed reliable results. Finally, the analysis of ICC and mean differences between FTVR and FTREAL, for each variable and group, also showed high reliability. This shows that FT evaluation in VR environments is a valid alternative to the real world, as the VR evaluation did not distort movement execution and detects alterations in rhythm formation. These results support the use of VR as a promising tool to study alterations and the control of movement in different subject groups in unusual environments, such as during fMRI or other imaging studies.
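
The two movement variables registered during FT, tapping frequency and the coefficient of variation of the inter-tap interval, can be computed directly from tap timestamps. Below is a minimal Python sketch of that computation (the function name and example data are illustrative, not taken from the study):

```python
import numpy as np

def tapping_metrics(tap_times_s):
    """Tapping frequency (Hz) and coefficient of variation of the
    inter-tap interval, from sorted tap timestamps in seconds."""
    itis = np.diff(np.asarray(tap_times_s, dtype=float))  # inter-tap intervals
    freq = 1.0 / itis.mean()                              # mean tapping rate
    cv = itis.std(ddof=1) / itis.mean()                   # rhythm variability
    return freq, cv

# Example: taps roughly every 0.5 s -> about 2 Hz with a small CV
freq, cv = tapping_metrics([0.00, 0.49, 1.01, 1.50, 2.02, 2.51])
print(f"frequency ~ {freq:.2f} Hz, CV ~ {cv:.3f}")
```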


Parkinsonism & Related Disorders | 2016

Effects of movement imitation training in Parkinson's disease: A virtual reality pilot study

Verónica Robles-García; Yoanna Corral-Bergantiños; Nelson Espinosa; Carlos García-Sancho; Gabriel Sanmartín; Julián Flores; Javier Cudeiro; Pablo Arias

BACKGROUND: Hypometria is a clinical motor sign in Parkinson's disease. Its origin likely emerges from basal ganglia dysfunction, leading to impaired control of inhibitory intracortical motor circuits. Some neurorehabilitation approaches include movement imitation training; beyond the effects of motor practice, there may be a benefit due to observation and imitation of unaltered movement patterns. In this sense, virtual reality facilitates the process by customizing the motor patterns to be observed and imitated.

OBJECTIVE: To evaluate the effect of a motor-imitation therapy focused on hypometria in Parkinson's disease using virtual reality.

METHODS: We carried out a randomized controlled pilot study. Sixteen patients were randomly assigned to experimental and control groups. Both groups underwent 4 weeks of training based on finger tapping with the dominant hand, in which imitation was the differential factor (only the experimental group imitated). We evaluated self-paced movement features and corticospinal excitability (recruitment curves and silent periods in both hemispheres) before, immediately after, and two weeks after the training period.

RESULTS: Movement amplitude increased significantly after the therapy in the experimental group for both the trained and untrained hands. Motor thresholds and silent periods evaluated with transcranial magnetic stimulation were modified differently by training in the two groups, although the changes in the input-output recruitment were similar.

CONCLUSIONS: This pilot study suggests that movement imitation therapy enhances the effect of motor practice in patients with Parkinson's disease; imitation training might be helpful for reducing hypometria in these patients. These results must be clarified in future larger trials.


Computers & Geosciences | 2013

GeoDADIS: A framework for the development of geographic data acquisition and dissemination servers

Sebastián Villarroya; José Ramon Rios Viqueira; José Manuel Cotos; Julián Flores

Homogeneous access to sensor data in data monitoring and analysis applications is gaining much interest nowadays. To tackle this problem from an application-independent perspective, the present paper discusses the design and implementation of GeoDADIS, a framework for the development of data acquisition and dissemination servers. Such servers are commonly used in monitoring applications, as they act as gateways between the decision support and data visualization technologies used in application development and the heterogeneous collection of protocols and interfaces available in the industrial domain for sensor data access. To achieve this, the architecture of GeoDADIS consists of: (i) a bottommost data acquisition layer that communicates with sensors, (ii) a middle kernel layer that provides general-purpose functionality related to data management and system control, and (iii) a topmost external interaction layer that enables access from applications. The framework's design makes extensive use of the adapter (wrapper) design pattern to ease the incorporation of new data acquisition channels in the data acquisition layer and of new data and remote control services in the external interaction layer. This makes GeoDADIS a very flexible, general-purpose tool with broad application in many data monitoring domains.
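
As a rough sketch of the adapter-based layering described above (all class and method names below are hypothetical, not GeoDADIS APIs), a new acquisition channel only needs to wrap its protocol-specific client behind a common interface that the kernel layer polls:

```python
from abc import ABC, abstractmethod

class AcquisitionChannel(ABC):
    """Hypothetical common interface of the data acquisition layer."""
    @abstractmethod
    def read(self) -> dict:
        """Return one sensor observation as a plain dictionary."""

class FakeModbusClient:
    """Stand-in for a vendor protocol client so the sketch runs on its own."""
    def read_register(self, address):
        return 215  # raw value, scaled by 10

class ModbusTemperatureAdapter(AcquisitionChannel):
    """Adapter (wrapper) hiding a protocol-specific client behind the interface."""
    def __init__(self, client):
        self.client = client
    def read(self) -> dict:
        raw = self.client.read_register(0)
        return {"sensor": "temp-1", "value": raw / 10.0, "unit": "C"}

class Kernel:
    """Middle layer: polls every registered channel, independently of protocol."""
    def __init__(self, channels):
        self.channels = channels
    def poll(self):
        return [channel.read() for channel in self.channels]

kernel = Kernel([ModbusTemperatureAdapter(FakeModbusClient())])
print(kernel.poll())   # [{'sensor': 'temp-1', 'value': 21.5, 'unit': 'C'}]
```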


Articulated Motion and Deformable Objects | 2012

Motion capture for clinical purposes, an approach using PrimeSense sensors

Gabriel Sanmartín; Julián Flores; Pablo Arias; Javier Cudeiro; Roi Méndez

Virtual Reality (VR) is the computer recreation of simulated environments that create in the user a sense of physical presence in them. VR has the advantages of being highly flexible and controllable, allowing experts to generate optimal conditions for any given test and to isolate any desired variable in the course of an experiment. An important characteristic of VR is that it allows interaction within the virtual world. Motion capture is one of the most popular technologies for this, because it contributes to creating in the subject the required sense of presence. There are several methods to incorporate these techniques into a VR system; the challenge is that they must not be too invasive. We propose a method using PrimeSense sensors and several well-known computer vision techniques to build a low-cost mocap system that has proven to be valid for clinical needs, in its application as a support therapy for Parkinson's disease (PD) patients.
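
As a hedged illustration of the kind of low-cost depth processing such a system relies on (a simplified stand-in, not the authors' pipeline), the sketch below locates the region closest to a PrimeSense-class depth camera, a common first step before tracking a hand:

```python
import numpy as np

def nearest_blob_centroid(depth_mm, band_mm=100):
    """Pixel centroid of the region closest to the camera.

    depth_mm: 2-D array of depth readings in millimetres (0 = no reading).
    band_mm:  depth tolerance around the nearest valid pixel.
    """
    valid = depth_mm > 0
    if not valid.any():
        return None
    nearest = depth_mm[valid].min()
    mask = valid & (depth_mm <= nearest + band_mm)   # pixels in the front band
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())        # (x, y) image coordinates

# Synthetic 4x4 frame: a "hand" at ~800 mm in front of a ~2000 mm background
frame = np.full((4, 4), 2000)
frame[1:3, 1:3] = 800
print(nearest_blob_centroid(frame))   # -> (1.5, 1.5)
```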


International Conference on Human Computer Interaction | 2016

Preliminary evaluation of the Kinect V2 sensor for its use in virtual TV sets with natural interaction

Roi Méndez; Julián Flores; Enrique Castelló; Rubén Arenas

A virtual TV set combines the real and virtual worlds to obtain an image in which the real elements give a sense of presence in a computer-generated environment. One of the key elements for a credible mix is the interaction between the actors and the virtual world. In this paper, a preliminary study analyzing the viability of the Microsoft Kinect V2 sensor for natural gesture detection in virtual TV sets is presented. We propose that the system learn, through artificial intelligence techniques, a series of natural gestures from the presenter. This should allow actors to create their own interaction with the environment, reducing the training time needed before recording a TV show. Eight users with different levels of experience in virtual environments created a series of intuitive natural gestures (using the Kinect V2) that were used to evaluate the suitability of the sensor for this purpose, showing promising results.
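
As an illustration only (the paper does not disclose the learning technique, so this is a deliberately simple stand-in), gestures recorded as fixed-length Kinect joint trajectories could be recognized with a nearest-centroid rule:

```python
import numpy as np

class GestureClassifier:
    """Toy nearest-centroid recognizer over fixed-length joint trajectories.
    Each sample has shape (frames, joints * 3), already resampled to a fixed
    number of frames; this is a simplification for illustration."""

    def __init__(self):
        self.centroids = {}   # gesture name -> mean trajectory

    def fit(self, samples_by_gesture):
        for name, samples in samples_by_gesture.items():
            self.centroids[name] = np.mean(np.stack(samples), axis=0)

    def predict(self, trajectory):
        distances = {name: np.linalg.norm(trajectory - centroid)
                     for name, centroid in self.centroids.items()}
        return min(distances, key=distances.get)

# Synthetic example with 2-frame, single-joint trajectories
clf = GestureClassifier()
clf.fit({"swipe": [np.array([[0.0, 0, 0], [1, 0, 0]])],
         "raise": [np.array([[0.0, 0, 0], [0, 1, 0]])]})
print(clf.predict(np.array([[0.0, 0, 0], [0.9, 0.1, 0]])))   # -> swipe
```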


International Conference on Human Computer Interaction | 2018

Multimodal User Interaction for GIS Applications (MUI_GIS)

Zaid Mustafa; Julián Flores; José Manuel Cotos

Traditional communication methods for GIS applications limit usability and slow down interaction when conventional devices such as a mouse and keyboard are used. In contrast, the human voice and motor system can naturally produce actions through speech or three-dimensional gestures. This paper presents a new human-computer interaction tool for GIS applications called MUI_GIS (Multimodal User Interaction for GIS), which uses the synergistic operation of several kinds of input devices to allow natural interaction. The information from the different input devices determines the actions/commands of interaction with the software. The tool targets two kinds of users, general and expert, who can perform basic operations or advanced analysis of the stored data, such as virtual archaeological geo-information, and acquire knowledge with a computer without cost or the trouble of traveling. The tool's features thus help foster a new generation that is aware of the value of geographic information systems, which is positively reflected in their communities. In addition, this will facilitate the application of GIS in the education domain and help users understand the structure of 3-D visual exploration.
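
A hedged sketch of the fusion idea, in Python: the spoken verb supplies the action while a pointing gesture supplies the spatial argument. The event types and the small command set below are illustrative, not MUI_GIS internals.

```python
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    verb: str                 # e.g. "zoom", "select"

@dataclass
class GestureEvent:
    kind: str                 # e.g. "point"
    map_coords: tuple         # (longitude, latitude) hit by the pointing ray

def fuse(speech: SpeechEvent, gesture: GestureEvent) -> dict:
    """Combine the spoken verb with the gestured location into one GIS command."""
    if speech.verb == "zoom" and gesture.kind == "point":
        return {"command": "zoom_to", "center": gesture.map_coords, "factor": 2}
    if speech.verb == "select" and gesture.kind == "point":
        return {"command": "select_feature", "at": gesture.map_coords}
    return {"command": "noop"}

print(fuse(SpeechEvent("zoom"), GestureEvent("point", (-8.54, 42.88))))
```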


International Conference on Human Computer Interaction | 2018

Increasing executive capacities through the use of interactive tools based on gestures: Case study

Tareq Alzubi; Raquel Fernández; Julián Flores; Montserrat Duran; Manuel Cotos

Tareq Alzubi, Raquel Fernández, Julián Flores, Montserrat Duran, and Manuel Cotos. 2018. Increasing executive capacities through the use of interactive tools based on gestures: Case study. In Interacción 2018: XIX International Conference on Human Computer Interaction, September 12-14, 2018, Palma, Spain. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3233824.3233826


Multimedia Tools and Applications | 2018

GeoHbbTV: A framework for the development and evaluation of geographic interactive TV contents

David Luaces; José Ramon Rios Viqueira; Pablo Gamallo; David Mera; Julián Flores

Synchronizing TV contents with applications is a topic that has gained much interest in recent years. Reaching viewers through various channels (TV, web, mobile devices, etc.) has been shown to be a means of increasing the audience. In this context, the hybrid TV standard HbbTV (Hybrid Broadcast Broadband TV) synchronizes the broadcast of video and audio with applications that may be delivered through either the broadcast channel or a broadband network. Thus, HbbTV applications may be developed to provide contextual information for broadcast TV shows and advertisements. This paper reports on the integration of the automatic generation of the geographic focus of text content with interactive TV. In particular, it describes a framework for the incorporation of geographic context into TV shows and its visualization through HbbTV. To achieve this, geographic named entities are first extracted from the available subtitles, and then the spatial extent of those entities is used to produce context maps. An evaluation strategy has been devised and used to test alternative prototype implementations for TV newscasts in Spanish. Finally, to go beyond the initial solution proposed, some challenges for future research are also discussed.
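
A minimal sketch of the subtitles-to-geographic-focus step, assuming a spaCy Spanish pipeline as the named-entity recognizer and a placeholder gazetteer lookup; the paper does not state which NER toolkit or gazetteer was actually used.

```python
import spacy

nlp = spacy.load("es_core_news_sm")   # assumed Spanish NER model

def geographic_entities(subtitle_text):
    """Location-type named entities found in a subtitle fragment."""
    doc = nlp(subtitle_text)
    return [ent.text for ent in doc.ents if ent.label_ == "LOC"]

def context_map_extent(places, gazetteer):
    """Hypothetical step: resolve place names to bounding boxes and merge them
    into a single extent for the context map."""
    boxes = [gazetteer[p] for p in places if p in gazetteer]
    if not boxes:
        return None
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

print(geographic_entities("Fuertes lluvias en Galicia y el norte de Portugal."))
# e.g. ['Galicia', 'Portugal'], depending on the model
```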


Multimedia Tools and Applications | 2018

New distributed virtual TV set architecture for a synergistic operation of sensors and improved interaction between real and virtual worlds

Roi Méndez; Julián Flores; Enrique Castelló; José Ramon Rios Viqueira

A virtual TV set is a studio that is able to combine recorded actors and objects with computer-generated virtual environments in real time. In order to achieve this combination seamlessly, in an ideal configuration, elements such as cameras, objects and people should be tracked so that all their actions on the stage have a corresponding effect in the virtual world. However, in current professional virtual TV sets the tracking possibilities are quite limited because of the hardware and software architecture used, which has not evolved significantly since the first prototypes presented in the nineties. This traditional architecture tends to be rigid, including just one monolithic tracking system and low levels of interactivity. In this paper, a new distributed, flexible and scalable hardware and software architecture that allows the inclusion of multiple kinds of devices in parallel is introduced. It breaks with the traditional structure of virtual TV sets, opening the technology to easier inclusion of new devices without the need to update the proprietary software of the set, thus facilitating its future evolution. The design, implementation and testing of this architecture, through the adaptation of a traditional virtual TV set, are presented. The tests were carried out by including modern devices (in our case OptiTrack infrared cameras, Microsoft Kinect V2 and Leap Motion) that, through synergistic operation, allow the system to overcome some traditional drawbacks of this technology, such as free tracking of multiple objects and cameras, natural interaction for the presenter, and automatic distance keying.
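
To illustrate the kind of device-agnostic integration this implies (the message schema below is a hypothetical sketch, not the paper's protocol), each tracker can publish poses in a common format that the rendering side consumes without knowing which sensor produced them:

```python
import json
import time

def tracking_message(device_id, target, position, rotation):
    """Serialize one tracking update in a device-independent JSON schema."""
    return json.dumps({
        "device": device_id,       # e.g. "kinect-v2-01", "optitrack-03"
        "target": target,          # e.g. "camera-1", "presenter-hand-right"
        "position": position,      # metres, studio coordinate frame
        "rotation": rotation,      # quaternion (x, y, z, w)
        "timestamp": time.time(),
    })

class SceneState:
    """Rendering-side consumer: keeps the latest pose per tracked target,
    regardless of which sensor reported it."""
    def __init__(self):
        self.poses = {}
    def ingest(self, raw):
        message = json.loads(raw)
        self.poses[message["target"]] = (message["position"], message["rotation"])

state = SceneState()
state.ingest(tracking_message("kinect-v2-01", "presenter-head",
                              [0.2, 1.7, 3.1], [0, 0, 0, 1]))
print(state.poses["presenter-head"])
```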


Universal Access in The Information Society | 2017

Natural interaction in virtual TV sets through the synergistic operation of low-cost sensors

Roi Méndez; Julián Flores; Enrique Castelló; José Ramon Rios Viqueira

A virtual TV set combines images from the real world with a virtual environment in order to obtain images that give the impression of the real elements, such as actors or physical objects, being present in a computer-generated scene. Thus, the audience has the feeling that the talent is present in a place where they are not. One of the most relevant aspects for obtaining a good sense of presence on stage is the capability of the actors to interact, in real time, with the virtual world. To make this possible, it is necessary to track the bodies of the presenters and detect their gestures so that they can modify the synthetic environment in a natural way. In this paper, a study that analyzes the feasibility of the synergistic use of several sensors to improve the interaction of the actors with the scene, mainly focusing on natural gesture detection, is presented. A new workflow is proposed, based on the system learning natural gestures through artificial intelligence techniques in order to use them during live broadcasts. Using this approach in the pre-production process, the actors are able to create their own custom paradigm of interaction with the virtual environment, thus increasing the naturalness of their behavior during live broadcasts and reducing the training time needed for new productions.
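
One simple way to picture the synergistic use of several sensors (a simplification for illustration, not the fusion described in the paper) is a confidence-weighted average of the same joint as reported by each device:

```python
import numpy as np

def fuse_joint(estimates):
    """Confidence-weighted average of one joint's position.

    estimates: list of (position_xyz, confidence) pairs, one per sensor.
    """
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([c for _, c in estimates], dtype=float)
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()

# The Kinect reports the hand with moderate confidence, the Leap Motion with high
kinect = ([0.30, 1.10, 2.00], 0.6)
leap = ([0.28, 1.12, 1.98], 0.9)
print(fuse_joint([kinect, leap]))   # biased towards the Leap Motion estimate
```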

Collaboration


Dive into Julián Flores's collaboration network.

Top Co-Authors

Roi Méndez (University of Santiago de Compostela)
Gabriel Sanmartín (University of Santiago de Compostela)
José Ramon Rios Viqueira (University of Santiago de Compostela)
Antonio Otero (University of Santiago de Compostela)
Enrique Castelló (University of Santiago de Compostela)
Pablo Arias (University of A Coruña)
José Manuel Cotos (University of Santiago de Compostela)
Rubén Arenas (University of Santiago de Compostela)
David Mera (University of Santiago de Compostela)