Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gorka Azkune is active.

Publication


Featured research published by Gorka Azkune.


Sensors | 2015

Combining Users’ Activity Survey and Simulators to Evaluate Human Activity Recognition Systems

Gorka Azkune; Aitor Almeida; Diego López-de-Ipiña; Liming Chen

Evaluating human activity recognition systems usually implies following expensive and time-consuming methodologies, where experiments with humans are run, with the consequent ethical and legal issues. We propose a novel evaluation methodology to overcome these problems, based on user surveys and a synthetic dataset generator tool. Surveys capture how different users perform activities of daily living, while the synthetic dataset generator is used to create properly labelled activity datasets modelled with the information extracted from the surveys. Important aspects, such as sensor noise, varying time lapses and erratic user behaviour, can also be simulated using the tool. The proposed methodology is shown to have very important advantages that allow researchers to carry out their work more efficiently. To evaluate the approach, a synthetic dataset generated following the proposed methodology is compared to a real dataset by computing the similarity between sensor occurrence frequencies, and the similarity between the two datasets is found to be significant.
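The abstract above validates a synthetic dataset against a real one by comparing sensor occurrence frequencies. A minimal sketch of such a comparison, assuming cosine similarity over normalised sensor counts (the paper does not specify the exact metric, so that choice and the sensor names are illustrative assumptions):

```python
from collections import Counter

def sensor_frequencies(events):
    """Normalise sensor occurrence counts into a frequency distribution."""
    counts = Counter(events)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def cosine_similarity(freq_a, freq_b):
    """Cosine similarity between two sensor frequency distributions."""
    sensors = set(freq_a) | set(freq_b)
    dot = sum(freq_a.get(s, 0.0) * freq_b.get(s, 0.0) for s in sensors)
    norm_a = sum(v * v for v in freq_a.values()) ** 0.5
    norm_b = sum(v * v for v in freq_b.values()) ** 0.5
    return dot / (norm_a * norm_b)

# Toy sensor streams standing in for a real and a synthetic dataset.
real = ["kettle", "fridge", "kettle", "tap", "fridge"]
synthetic = ["kettle", "fridge", "tap", "kettle", "tap"]
score = cosine_similarity(sensor_frequencies(real), sensor_frequencies(synthetic))
```

A score near 1.0 indicates that both datasets activate sensors with similar relative frequencies.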


IEEE Conference on Intelligent Systems | 2015

A Knowledge-Driven Tool for Automatic Activity Dataset Annotation

Gorka Azkune; Aitor Almeida; Diego López-de-Ipiña; Liming Chen

Human activity recognition has become a very important research topic, due to its multiple applications in areas such as pervasive computing, surveillance, context-aware computing, ambient assisted living and social robotics. For activity recognition approaches to be properly developed and tested, annotated datasets are a key resource. However, few research works deal with activity annotation methods. In this paper, we describe a knowledge-driven approach to annotate activity datasets automatically. Minimal activity models have to be provided to the tool, which uses a novel algorithm to annotate datasets. Minimal activity models specify action patterns. Those actions are directly linked to sensor activations, which can appear in the dataset in varied orders and with interleaved actions that are not in the pattern itself. The presented algorithm finds those patterns and annotates activities accordingly. The obtained results confirm the reliability and robustness of the approach across several experiments involving noisy and changing activity executions.
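The core idea above is matching action patterns whose actions may appear in varied orders, interleaved with unrelated actions. A simplified sketch of that matching step, assuming a minimal activity model is just a set of required actions that must all occur within a sliding window (a deliberate simplification of the paper's algorithm; the model and action names are hypothetical):

```python
def annotate(actions, models, window=5):
    """Label actions belonging to a model whose required actions all
    occur within a sliding window, in any order, possibly interleaved."""
    labels = [None] * len(actions)
    for start in range(len(actions)):
        seen = set(actions[start:start + window])
        for activity, required in models.items():
            if required <= seen:  # all required actions present in window
                for i in range(start, min(start + window, len(actions))):
                    if actions[i] in required and labels[i] is None:
                        labels[i] = activity
    return labels

models = {"MakeCoffee": {"use_kettle", "open_cupboard", "use_mug"}}
stream = ["use_kettle", "open_fridge", "open_cupboard", "use_mug", "watch_tv"]
labels = annotate(stream, models)
```

Note how the interleaved `open_fridge` action is left unlabelled while the pattern's own actions are annotated.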


Pervasive and Mobile Computing | 2017

MASSHA: An agent-based approach for human activity simulation in intelligent environments

Oihane Kamara-Esteban; Gorka Azkune; Ander Pijoan; Cruz E. Borges; Ainhoa Alonso-Vicario; Diego López-de-Ipiña

Human activity recognition has the potential to become a real enabler for ambient assisted living technologies. Research in this area demands the execution of complex experiments involving humans interacting with intelligent environments in order to generate meaningful datasets, both for development and validation. Running such experiments is generally expensive and troublesome, slowing down the research process. This paper presents MASSHA, an agent-based simulator for emulating human activities within intelligent environments. Specifically, MASSHA models the behaviour of the occupants of a sensorised environment from both a single-user and a multiple-user point of view. The accuracy of MASSHA is tested through a sound validation methodology, providing examples of application with three real human activity datasets and comparing these to the activity datasets produced by the simulator. Results show that MASSHA can reproduce behaviour patterns similar to those registered in the real datasets, achieving an overall accuracy of 93.52% and 88.10% in frequency and 98.27% and 99.09% in duration for the single-user scenario datasets, and 99.30% and 88.25% in frequency and duration for the multiple-user scenario.
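The validation above compares activity frequencies and durations between simulated and real datasets. A minimal sketch of one plausible per-activity agreement metric (the paper's exact accuracy formula is not given here, so this min/max ratio and the activity names are assumptions):

```python
def agreement(real, simulated):
    """Per-activity agreement as a min/max ratio, averaged over
    activities; 1.0 means the two datasets match exactly."""
    activities = set(real) | set(simulated)
    ratios = []
    for a in activities:
        r, s = real.get(a, 0), simulated.get(a, 0)
        ratios.append(min(r, s) / max(r, s) if max(r, s) else 1.0)
    return sum(ratios) / len(ratios)

# Toy daily activity counts for a real and a simulated dataset.
real_freq = {"sleep": 7, "cook": 3, "watch_tv": 4}
sim_freq = {"sleep": 7, "cook": 2, "watch_tv": 4}
freq_acc = agreement(real_freq, sim_freq)
```

The same function could be applied to total durations per activity to obtain a duration agreement score.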


Ubiquitous Computing | 2014

A Hybrid Evaluation Methodology for Human Activity Recognition Systems

Gorka Azkune; Aitor Almeida; Diego López-de-Ipiña; Liming Luke Chen

Evaluating human activity recognition systems usually implies following expensive and time-consuming methodologies, where experiments with humans are run, with the consequent ethical and legal issues. We propose a hybrid evaluation methodology to overcome these problems. Central to the hybrid methodology are user surveys and a synthetic dataset generator tool. Surveys capture how different users perform activities of daily living, while the synthetic dataset generator is used to create properly labelled activity datasets modelled with the information extracted from the surveys. Sensor noise, varying time lapses and erratic user behaviour can also be simulated using the tool. The hybrid methodology is shown to have very important advantages that allow researchers to carry out their work more efficiently.
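The generator described above simulates sensor noise, varying time lapses and erratic user behaviour. A sketch of what such perturbations could look like on a timestamped sensor stream; the function name, parameters and probabilities are illustrative assumptions, not the tool's actual interface:

```python
import random

def perturb(events, drop_p=0.05, jitter_s=30.0, swap_p=0.02, seed=7):
    """Inject sensor noise (dropped readings), varying time lapses
    (timestamp jitter) and erratic behaviour (locally reordered actions)."""
    rng = random.Random(seed)
    noisy = []
    for t, sensor in events:
        if rng.random() < drop_p:              # sensor noise: missed activation
            continue
        t += rng.uniform(-jitter_s, jitter_s)  # varying time lapse
        noisy.append([t, sensor])
    for i in range(len(noisy) - 1):            # erratic behaviour: swap order
        if rng.random() < swap_p:
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
    return noisy

# One simulated reading per minute from three rotating sensors.
events = [(60.0 * i, f"sensor_{i % 3}") for i in range(20)]
noisy = perturb(events)
```

Fixing the seed keeps the perturbed dataset reproducible across evaluation runs.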


Wireless Communications and Mobile Computing | 2017

Vision-Based Fall Detection with Convolutional Neural Networks

Adrián Núñez-Marcos; Gorka Azkune; Ignacio Arganda-Carreras

One of the biggest challenges in modern societies is the improvement of healthy aging and the support of older persons in their daily activities. In particular, given its social and economic impact, the automatic detection of falls has attracted considerable attention in the computer vision and pattern recognition communities. Although approaches based on wearable sensors have provided high detection rates, some potential users are reluctant to wear them, and thus their use is not yet normalized. As a consequence, alternative approaches such as vision-based methods have emerged. We firmly believe that the emergence of the Smart Environments and Internet of Things paradigms, together with the increasing number of cameras in our daily environment, forms an optimal context for vision-based systems. Consequently, here we propose a vision-based solution using Convolutional Neural Networks to decide if a sequence of frames contains a person falling. To model the video motion and make the system scenario-independent, we use optical flow images as input to the networks, followed by a novel three-step training phase. Furthermore, our method is evaluated on three public datasets, achieving state-of-the-art results in all three.
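The method above feeds stacks of optical flow images into a CNN. A minimal sketch of only the input-stacking step, using simple frame differencing as a crude stand-in for optical flow (the paper computes real optical flow and classifies the stacks with a CNN; the stack size and shapes here are assumptions):

```python
import numpy as np

def motion_stacks(frames, stack_size=10):
    """Build stacked motion inputs from a grayscale video.

    Frame differences approximate the motion channel; the real system
    would use optical flow images instead.
    """
    diffs = [np.abs(frames[i + 1] - frames[i]) for i in range(len(frames) - 1)]
    stacks = []
    for i in range(len(diffs) - stack_size + 1):
        stacks.append(np.stack(diffs[i:i + stack_size], axis=0))  # (stack, H, W)
    return stacks

# Dummy 15-frame grayscale video of 32x32 pixels.
frames = [np.full((32, 32), float(i)) for i in range(15)]
stacks = motion_stacks(frames)
```

Each stack would then be one training or inference sample for the fall/no-fall classifier.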


Sensors | 2013

Semantic Framework for Social Robot Self-Configuration

Gorka Azkune; Pablo Orduña; Xabier Laiseca; Eduardo Castillejo; Diego López-de-Ipiña; Miguel Loitxate; Jon Azpiazu

Healthcare environments, like many other real-world environments, present many changing and unpredictable situations. In order to use a social robot in such an environment, the robot has to be prepared to deal with all of these changing situations. This paper presents a robot self-configuration approach to suitably overcome these problems. The approach is based on the integration of a semantic framework, where a reasoner can make decisions about the configuration of robot services and resources. An ontology has been designed to model the robot and the relevant context information. Besides, rules are used to encode human knowledge and serve as policies for the reasoner. The approach has been successfully implemented on a mobile robot, which proved more capable of handling situations that were not pre-designed.
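The paper's reasoner applies rules over an ontology to reconfigure robot services. A compact Python stand-in for that policy step (the real system uses a semantic framework with an ontology and reasoner; this plain rule table, and all context keys and configurations in it, are hypothetical):

```python
def configure(context, rules):
    """Apply the first rule whose condition matches the current context.

    Rules encode human knowledge as (condition, configuration) policies,
    mirroring the role of the reasoner's rule base.
    """
    for condition, config in rules:
        if all(context.get(k) == v for k, v in condition.items()):
            return config
    return {"mode": "default"}

rules = [
    ({"battery": "low"}, {"mode": "docking", "speech": "off"}),
    ({"person_nearby": True}, {"mode": "interaction", "speech": "on"}),
]
config = configure({"battery": "low", "person_nearby": True}, rules)
```

Rule order acts as priority: with a low battery, docking wins even when a person is nearby.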


Conference Towards Autonomous Robotic Systems | 2011

A navigation system for a high-speed professional cleaning robot

Gorka Azkune; Mikel Astiz; Urko Esnaola; Unai Antero; Jose Vicente Sogorb; Antonio Alonso

This paper describes an approach to automating professional floor-cleaning tasks based on a commercial platform. The described navigation system works in indoor environments, requires no extra infrastructure and needs no previous knowledge of the environment. A teach-and-reproduce strategy has been adopted for this purpose. During teaching, the robot maps its environment and the cleaning path. During reproduction, the robot uses a new motion planning algorithm to follow the taught path while suitably avoiding obstacles. The new motion planning algorithm is needed due to the special platform and operational requirements. The system presented here is focused on achieving human-comparable performance and safety.
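The teach-and-reproduce strategy above can be sketched in two steps: record a waypoint path while the operator drives, then follow it while dropping waypoints an obstacle covers. A heavily simplified illustration (the actual system builds a map and runs a motion planner; the waypoint spacing and obstacle handling here are assumptions):

```python
import math

def teach(poses, min_gap=0.5):
    """Record a cleaning path as waypoints spaced at least min_gap apart."""
    path = [poses[0]]
    for p in poses[1:]:
        if math.dist(p, path[-1]) >= min_gap:
            path.append(p)
    return path

def reproduce(path, blocked):
    """Follow the taught path, skipping waypoints an obstacle covers."""
    return [wp for wp in path if wp not in blocked]

# Poses logged while the operator drives the robot along the cleaning route.
poses = [(0, 0), (0.1, 0), (0.6, 0), (1.2, 0), (1.3, 0), (2.0, 0)]
path = teach(poses)
plan = reproduce(path, blocked={(1.2, 0)})
```

A real planner would detour around the blocked waypoint rather than skip it, but the two-phase structure is the same.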


Ubiquitous Computing | 2017

Inter-activity Behaviour Modelling Using Long Short-Term Memory Networks

Aitor Almeida; Gorka Azkune

As the average age of the urban population increases, cities must adapt to improve the quality of life of their citizens. The City4Age H2020 project is working on the early detection of risks related to Mild Cognitive Impairment and Frailty and on providing meaningful interventions that prevent those risks. As part of the risk detection process, we have developed a multilevel conceptual model that describes user behaviour using actions, activities, intra-activity behaviour and inter-activity behaviour. Using that conceptual model, we have created a deep learning architecture based on Long Short-Term Memory networks that models the inter-activity behaviour. The presented architecture offers a probabilistic model that allows predicting the user's next actions and identifying anomalous user behaviours.
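The probabilistic idea above is to predict the next action and flag low-probability transitions as anomalous. The paper does this with an LSTM; as a compact stand-in for illustration only, a bigram transition model captures the same predict-and-flag logic (the action names and threshold are hypothetical):

```python
from collections import Counter, defaultdict

def fit_bigram(sequences):
    """Estimate P(next action | current action) from action sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def is_anomalous(model, current, nxt, threshold=0.2):
    """Flag a transition whose predicted probability is below threshold."""
    return model.get(current, {}).get(nxt, 0.0) < threshold

# Nine typical mornings and one unusual one.
seqs = [["wake", "toilet", "kitchen", "breakfast"]] * 9 + [["wake", "kitchen"]]
model = fit_bigram(seqs)
anomaly = is_anomalous(model, "wake", "kitchen")
```

An LSTM replaces the bigram table with a learned state, letting much longer action histories condition the next-action distribution.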


2017 IEEE 3rd International Forum on Research and Technologies for Society and Industry (RTSI) | 2017

Activity recognition approaches for smart cities: The City4Age use case

Aitor Almeida; Gorka Azkune

Activity Recognition is an important ingredient that allows the interpretation of elementary data. Understanding which activity is going on allows framing an elementary action (e.g. “a movement”) in a proper context. This paper presents an activity recognition system designed to work in urban scenarios, which impose several restrictions: the unfeasibility of having enough annotated datasets, the heterogeneous sensor infrastructures and the presence of very different individuals. The main idea of our system is to combine knowledge- and data-driven techniques, to build a hybrid and scalable activity recognition system for smart cities.


international workshop on ambient assisted living | 2012

Semantic based self-configuration approach for social robots in health care environments

Gorka Azkune; Pablo Orduña; Xabier Laiseca; Diego López-de-Ipiña; Miguel Loitxate

Health care environments, like many other real-world environments, present many changing and unpredictable situations. In order to use a social robot in such an environment, the robot has to be prepared to deal with all of these changing situations. This paper presents a robot self-configuration approach to suitably overcome these problems. The approach is based on the integration of a semantic framework, where a reasoner can make decisions about the configuration of robot services and resources. An ontology has been designed to model the robot and the relevant context information. Besides, rules are used to encode human knowledge and serve as policies for the reasoner. The approach has been successfully implemented on a mobile robot, which proved more capable of handling situations that were not pre-designed.

Collaboration


Dive into Gorka Azkune's collaborations.

Top Co-Authors

Liming Chen
De Montfort University

Jose Manuel Lopez-Guede
University of the Basque Country